Nov 8 00:02:30.194922 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Nov 8 00:02:30.194945 kernel: Linux version 6.6.113-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Fri Nov 7 22:41:39 -00 2025
Nov 8 00:02:30.194953 kernel: KASLR enabled
Nov 8 00:02:30.194960 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '')
Nov 8 00:02:30.194967 kernel: printk: bootconsole [pl11] enabled
Nov 8 00:02:30.194973 kernel: efi: EFI v2.7 by EDK II
Nov 8 00:02:30.194980 kernel: efi: ACPI 2.0=0x3fd5f018 SMBIOS=0x3e580000 SMBIOS 3.0=0x3e560000 MEMATTR=0x3f215018 RNG=0x3fd5f998 MEMRESERVE=0x3e44ee18
Nov 8 00:02:30.194986 kernel: random: crng init done
Nov 8 00:02:30.194992 kernel: ACPI: Early table checksum verification disabled
Nov 8 00:02:30.194998 kernel: ACPI: RSDP 0x000000003FD5F018 000024 (v02 VRTUAL)
Nov 8 00:02:30.195005 kernel: ACPI: XSDT 0x000000003FD5FF18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Nov 8 00:02:30.195011 kernel: ACPI: FACP 0x000000003FD5FC18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Nov 8 00:02:30.195018 kernel: ACPI: DSDT 0x000000003FD41018 01DFCD (v02 MSFTVM DSDT01 00000001 INTL 20230628)
Nov 8 00:02:30.195024 kernel: ACPI: DBG2 0x000000003FD5FB18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001)
Nov 8 00:02:30.195032 kernel: ACPI: GTDT 0x000000003FD5FD98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Nov 8 00:02:30.195038 kernel: ACPI: OEM0 0x000000003FD5F098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Nov 8 00:02:30.195045 kernel: ACPI: SPCR 0x000000003FD5FA98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Nov 8 00:02:30.195053 kernel: ACPI: APIC 0x000000003FD5F818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Nov 8 00:02:30.195059 kernel: ACPI: SRAT 0x000000003FD5F198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001)
Nov 8 00:02:30.195065 kernel: ACPI: PPTT 0x000000003FD5F418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000)
Nov 8 00:02:30.195072 kernel: ACPI: BGRT 0x000000003FD5FE98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Nov 8 00:02:30.195078 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200
Nov 8 00:02:30.195085 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff]
Nov 8 00:02:30.195091 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff]
Nov 8 00:02:30.195097 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff]
Nov 8 00:02:30.195104 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff]
Nov 8 00:02:30.195110 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff]
Nov 8 00:02:30.195117 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff]
Nov 8 00:02:30.195125 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff]
Nov 8 00:02:30.195131 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff]
Nov 8 00:02:30.195137 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff]
Nov 8 00:02:30.195144 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff]
Nov 8 00:02:30.195150 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff]
Nov 8 00:02:30.195156 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff]
Nov 8 00:02:30.195163 kernel: NUMA: NODE_DATA [mem 0x1bf7ef800-0x1bf7f4fff]
Nov 8 00:02:30.195169 kernel: Zone ranges:
Nov 8 00:02:30.195175 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff]
Nov 8 00:02:30.195182 kernel: DMA32 empty
Nov 8 00:02:30.195188 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff]
Nov 8 00:02:30.195195 kernel: Movable zone start for each node
Nov 8 00:02:30.195205 kernel: Early memory node ranges
Nov 8 00:02:30.195212 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff]
Nov 8 00:02:30.195219 kernel: node 0: [mem 0x0000000000824000-0x000000003e54ffff]
Nov 8 00:02:30.195226 kernel: node 0: [mem 0x000000003e550000-0x000000003e87ffff]
Nov 8 00:02:30.195232 kernel: node 0: [mem 0x000000003e880000-0x000000003fc7ffff]
Nov 8 00:02:30.195240 kernel: node 0: [mem 0x000000003fc80000-0x000000003fcfffff]
Nov 8 00:02:30.195247 kernel: node 0: [mem 0x000000003fd00000-0x000000003fffffff]
Nov 8 00:02:30.195254 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff]
Nov 8 00:02:30.195261 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff]
Nov 8 00:02:30.195268 kernel: On node 0, zone DMA: 36 pages in unavailable ranges
Nov 8 00:02:30.195275 kernel: psci: probing for conduit method from ACPI.
Nov 8 00:02:30.195281 kernel: psci: PSCIv1.1 detected in firmware.
Nov 8 00:02:30.195288 kernel: psci: Using standard PSCI v0.2 function IDs
Nov 8 00:02:30.195295 kernel: psci: MIGRATE_INFO_TYPE not supported.
Nov 8 00:02:30.195302 kernel: psci: SMC Calling Convention v1.4
Nov 8 00:02:30.195309 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0
Nov 8 00:02:30.195315 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0
Nov 8 00:02:30.195324 kernel: percpu: Embedded 31 pages/cpu s86120 r8192 d32664 u126976
Nov 8 00:02:30.195331 kernel: pcpu-alloc: s86120 r8192 d32664 u126976 alloc=31*4096
Nov 8 00:02:30.195338 kernel: pcpu-alloc: [0] 0 [0] 1
Nov 8 00:02:30.195344 kernel: Detected PIPT I-cache on CPU0
Nov 8 00:02:30.195351 kernel: CPU features: detected: GIC system register CPU interface
Nov 8 00:02:30.195358 kernel: CPU features: detected: Hardware dirty bit management
Nov 8 00:02:30.195365 kernel: CPU features: detected: Spectre-BHB
Nov 8 00:02:30.195372 kernel: CPU features: kernel page table isolation forced ON by KASLR
Nov 8 00:02:30.195378 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Nov 8 00:02:30.195385 kernel: CPU features: detected: ARM erratum 1418040
Nov 8 00:02:30.195392 kernel: CPU features: detected: ARM erratum 1542419 (kernel portion)
Nov 8 00:02:30.195400 kernel: CPU features: detected: SSBS not fully self-synchronizing
Nov 8 00:02:30.197447 kernel: alternatives: applying boot alternatives
Nov 8 00:02:30.197458 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=653fdcb8a67e255793a721f32d76976d3ed6223b235b7c618cf75e5edffbdb68
Nov 8 00:02:30.197466 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Nov 8 00:02:30.197473 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Nov 8 00:02:30.197481 kernel: Fallback order for Node 0: 0
Nov 8 00:02:30.197488 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1032156
Nov 8 00:02:30.197495 kernel: Policy zone: Normal
Nov 8 00:02:30.197502 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Nov 8 00:02:30.197509 kernel: software IO TLB: area num 2.
Nov 8 00:02:30.197517 kernel: software IO TLB: mapped [mem 0x000000003a44e000-0x000000003e44e000] (64MB)
Nov 8 00:02:30.197532 kernel: Memory: 3982628K/4194160K available (10304K kernel code, 2180K rwdata, 8112K rodata, 39424K init, 897K bss, 211532K reserved, 0K cma-reserved)
Nov 8 00:02:30.197540 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Nov 8 00:02:30.197548 kernel: rcu: Preemptible hierarchical RCU implementation.
Nov 8 00:02:30.197557 kernel: rcu: RCU event tracing is enabled.
Nov 8 00:02:30.197565 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Nov 8 00:02:30.197574 kernel: Trampoline variant of Tasks RCU enabled.
Nov 8 00:02:30.197582 kernel: Tracing variant of Tasks RCU enabled.
Nov 8 00:02:30.197590 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Nov 8 00:02:30.197598 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Nov 8 00:02:30.197606 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Nov 8 00:02:30.197614 kernel: GICv3: 960 SPIs implemented
Nov 8 00:02:30.197624 kernel: GICv3: 0 Extended SPIs implemented
Nov 8 00:02:30.197632 kernel: Root IRQ handler: gic_handle_irq
Nov 8 00:02:30.197640 kernel: GICv3: GICv3 features: 16 PPIs, RSS
Nov 8 00:02:30.197648 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000
Nov 8 00:02:30.197656 kernel: ITS: No ITS available, not enabling LPIs
Nov 8 00:02:30.197665 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Nov 8 00:02:30.197673 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Nov 8 00:02:30.197680 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Nov 8 00:02:30.197687 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Nov 8 00:02:30.197694 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Nov 8 00:02:30.197702 kernel: Console: colour dummy device 80x25
Nov 8 00:02:30.197711 kernel: printk: console [tty1] enabled
Nov 8 00:02:30.197718 kernel: ACPI: Core revision 20230628
Nov 8 00:02:30.197727 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Nov 8 00:02:30.197735 kernel: pid_max: default: 32768 minimum: 301
Nov 8 00:02:30.197743 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Nov 8 00:02:30.197752 kernel: landlock: Up and running.
Nov 8 00:02:30.197760 kernel: SELinux: Initializing.
Nov 8 00:02:30.197768 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Nov 8 00:02:30.197777 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Nov 8 00:02:30.197787 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Nov 8 00:02:30.197796 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Nov 8 00:02:30.197805 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0x100000e, misc 0x31e1
Nov 8 00:02:30.197812 kernel: Hyper-V: Host Build 10.0.26100.1382-1-0
Nov 8 00:02:30.197819 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Nov 8 00:02:30.197826 kernel: rcu: Hierarchical SRCU implementation.
Nov 8 00:02:30.197834 kernel: rcu: Max phase no-delay instances is 400.
Nov 8 00:02:30.197841 kernel: Remapping and enabling EFI services.
Nov 8 00:02:30.197855 kernel: smp: Bringing up secondary CPUs ...
Nov 8 00:02:30.197862 kernel: Detected PIPT I-cache on CPU1
Nov 8 00:02:30.197869 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000
Nov 8 00:02:30.197877 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Nov 8 00:02:30.197886 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Nov 8 00:02:30.197893 kernel: smp: Brought up 1 node, 2 CPUs
Nov 8 00:02:30.197901 kernel: SMP: Total of 2 processors activated.
Nov 8 00:02:30.197908 kernel: CPU features: detected: 32-bit EL0 Support
Nov 8 00:02:30.197916 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence
Nov 8 00:02:30.197925 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Nov 8 00:02:30.197933 kernel: CPU features: detected: CRC32 instructions
Nov 8 00:02:30.197941 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Nov 8 00:02:30.197948 kernel: CPU features: detected: LSE atomic instructions
Nov 8 00:02:30.197956 kernel: CPU features: detected: Privileged Access Never
Nov 8 00:02:30.197963 kernel: CPU: All CPU(s) started at EL1
Nov 8 00:02:30.197970 kernel: alternatives: applying system-wide alternatives
Nov 8 00:02:30.197978 kernel: devtmpfs: initialized
Nov 8 00:02:30.197985 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Nov 8 00:02:30.197994 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Nov 8 00:02:30.198002 kernel: pinctrl core: initialized pinctrl subsystem
Nov 8 00:02:30.198009 kernel: SMBIOS 3.1.0 present.
Nov 8 00:02:30.198017 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/28/2024
Nov 8 00:02:30.198025 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Nov 8 00:02:30.198032 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Nov 8 00:02:30.198040 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Nov 8 00:02:30.198047 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Nov 8 00:02:30.198055 kernel: audit: initializing netlink subsys (disabled)
Nov 8 00:02:30.198064 kernel: audit: type=2000 audit(0.047:1): state=initialized audit_enabled=0 res=1
Nov 8 00:02:30.198072 kernel: thermal_sys: Registered thermal governor 'step_wise'
Nov 8 00:02:30.198079 kernel: cpuidle: using governor menu
Nov 8 00:02:30.198086 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Nov 8 00:02:30.198094 kernel: ASID allocator initialised with 32768 entries
Nov 8 00:02:30.198101 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Nov 8 00:02:30.198109 kernel: Serial: AMBA PL011 UART driver
Nov 8 00:02:30.198116 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Nov 8 00:02:30.198124 kernel: Modules: 0 pages in range for non-PLT usage
Nov 8 00:02:30.198133 kernel: Modules: 509008 pages in range for PLT usage
Nov 8 00:02:30.198140 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Nov 8 00:02:30.198148 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Nov 8 00:02:30.198155 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Nov 8 00:02:30.198163 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Nov 8 00:02:30.198170 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Nov 8 00:02:30.198177 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Nov 8 00:02:30.198185 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Nov 8 00:02:30.198192 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Nov 8 00:02:30.198201 kernel: ACPI: Added _OSI(Module Device)
Nov 8 00:02:30.198209 kernel: ACPI: Added _OSI(Processor Device)
Nov 8 00:02:30.198216 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Nov 8 00:02:30.198223 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Nov 8 00:02:30.198231 kernel: ACPI: Interpreter enabled
Nov 8 00:02:30.198238 kernel: ACPI: Using GIC for interrupt routing
Nov 8 00:02:30.198246 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA
Nov 8 00:02:30.198253 kernel: printk: console [ttyAMA0] enabled
Nov 8 00:02:30.198260 kernel: printk: bootconsole [pl11] disabled
Nov 8 00:02:30.198270 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA
Nov 8 00:02:30.198277 kernel: iommu: Default domain type: Translated
Nov 8 00:02:30.198284 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Nov 8 00:02:30.198292 kernel: efivars: Registered efivars operations
Nov 8 00:02:30.198299 kernel: vgaarb: loaded
Nov 8 00:02:30.198309 kernel: clocksource: Switched to clocksource arch_sys_counter
Nov 8 00:02:30.198318 kernel: VFS: Disk quotas dquot_6.6.0
Nov 8 00:02:30.198325 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Nov 8 00:02:30.198332 kernel: pnp: PnP ACPI init
Nov 8 00:02:30.198342 kernel: pnp: PnP ACPI: found 0 devices
Nov 8 00:02:30.198350 kernel: NET: Registered PF_INET protocol family
Nov 8 00:02:30.198359 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Nov 8 00:02:30.198367 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Nov 8 00:02:30.198375 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Nov 8 00:02:30.198382 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Nov 8 00:02:30.198390 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Nov 8 00:02:30.198399 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Nov 8 00:02:30.198412 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Nov 8 00:02:30.198422 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Nov 8 00:02:30.198430 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Nov 8 00:02:30.198438 kernel: PCI: CLS 0 bytes, default 64
Nov 8 00:02:30.198446 kernel: kvm [1]: HYP mode not available
Nov 8 00:02:30.198453 kernel: Initialise system trusted keyrings
Nov 8 00:02:30.198460 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Nov 8 00:02:30.198468 kernel: Key type asymmetric registered
Nov 8 00:02:30.198476 kernel: Asymmetric key parser 'x509' registered
Nov 8 00:02:30.198483 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Nov 8 00:02:30.198492 kernel: io scheduler mq-deadline registered
Nov 8 00:02:30.198499 kernel: io scheduler kyber registered
Nov 8 00:02:30.198507 kernel: io scheduler bfq registered
Nov 8 00:02:30.198514 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Nov 8 00:02:30.198522 kernel: thunder_xcv, ver 1.0
Nov 8 00:02:30.198530 kernel: thunder_bgx, ver 1.0
Nov 8 00:02:30.198537 kernel: nicpf, ver 1.0
Nov 8 00:02:30.198545 kernel: nicvf, ver 1.0
Nov 8 00:02:30.198683 kernel: rtc-efi rtc-efi.0: registered as rtc0
Nov 8 00:02:30.198766 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-11-08T00:02:29 UTC (1762560149)
Nov 8 00:02:30.198777 kernel: efifb: probing for efifb
Nov 8 00:02:30.198787 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Nov 8 00:02:30.198795 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Nov 8 00:02:30.198802 kernel: efifb: scrolling: redraw
Nov 8 00:02:30.198809 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Nov 8 00:02:30.198817 kernel: Console: switching to colour frame buffer device 128x48
Nov 8 00:02:30.198826 kernel: fb0: EFI VGA frame buffer device
Nov 8 00:02:30.198835 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping ....
Nov 8 00:02:30.198843 kernel: hid: raw HID events driver (C) Jiri Kosina
Nov 8 00:02:30.198851 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 6 counters available
Nov 8 00:02:30.198858 kernel: watchdog: Delayed init of the lockup detector failed: -19
Nov 8 00:02:30.198866 kernel: watchdog: Hard watchdog permanently disabled
Nov 8 00:02:30.198874 kernel: NET: Registered PF_INET6 protocol family
Nov 8 00:02:30.198881 kernel: Segment Routing with IPv6
Nov 8 00:02:30.198888 kernel: In-situ OAM (IOAM) with IPv6
Nov 8 00:02:30.198895 kernel: NET: Registered PF_PACKET protocol family
Nov 8 00:02:30.198904 kernel: Key type dns_resolver registered
Nov 8 00:02:30.198913 kernel: registered taskstats version 1
Nov 8 00:02:30.198920 kernel: Loading compiled-in X.509 certificates
Nov 8 00:02:30.198928 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.113-flatcar: e35af6a719ba4c60f9d6788b11f5e5836ebf73b5'
Nov 8 00:02:30.198935 kernel: Key type .fscrypt registered
Nov 8 00:02:30.198942 kernel: Key type fscrypt-provisioning registered
Nov 8 00:02:30.198951 kernel: ima: No TPM chip found, activating TPM-bypass!
Nov 8 00:02:30.198958 kernel: ima: Allocated hash algorithm: sha1
Nov 8 00:02:30.198966 kernel: ima: No architecture policies found
Nov 8 00:02:30.198975 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Nov 8 00:02:30.198982 kernel: clk: Disabling unused clocks
Nov 8 00:02:30.198991 kernel: Freeing unused kernel memory: 39424K
Nov 8 00:02:30.198999 kernel: Run /init as init process
Nov 8 00:02:30.199007 kernel: with arguments:
Nov 8 00:02:30.199014 kernel: /init
Nov 8 00:02:30.199021 kernel: with environment:
Nov 8 00:02:30.199029 kernel: HOME=/
Nov 8 00:02:30.199036 kernel: TERM=linux
Nov 8 00:02:30.199046 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Nov 8 00:02:30.199057 systemd[1]: Detected virtualization microsoft.
Nov 8 00:02:30.199066 systemd[1]: Detected architecture arm64.
Nov 8 00:02:30.199074 systemd[1]: Running in initrd.
Nov 8 00:02:30.199082 systemd[1]: No hostname configured, using default hostname.
Nov 8 00:02:30.199089 systemd[1]: Hostname set to .
Nov 8 00:02:30.199098 systemd[1]: Initializing machine ID from random generator.
Nov 8 00:02:30.199109 systemd[1]: Queued start job for default target initrd.target.
Nov 8 00:02:30.199117 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 8 00:02:30.199125 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 8 00:02:30.199134 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Nov 8 00:02:30.199142 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Nov 8 00:02:30.199151 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Nov 8 00:02:30.199160 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Nov 8 00:02:30.199169 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Nov 8 00:02:30.199179 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Nov 8 00:02:30.199187 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 8 00:02:30.199197 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Nov 8 00:02:30.199205 systemd[1]: Reached target paths.target - Path Units.
Nov 8 00:02:30.199212 systemd[1]: Reached target slices.target - Slice Units.
Nov 8 00:02:30.199221 systemd[1]: Reached target swap.target - Swaps.
Nov 8 00:02:30.199228 systemd[1]: Reached target timers.target - Timer Units.
Nov 8 00:02:30.199237 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Nov 8 00:02:30.199247 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Nov 8 00:02:30.199255 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Nov 8 00:02:30.199263 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Nov 8 00:02:30.199271 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Nov 8 00:02:30.199280 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Nov 8 00:02:30.199289 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 8 00:02:30.199297 systemd[1]: Reached target sockets.target - Socket Units.
Nov 8 00:02:30.199305 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Nov 8 00:02:30.199314 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Nov 8 00:02:30.199323 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Nov 8 00:02:30.199331 systemd[1]: Starting systemd-fsck-usr.service...
Nov 8 00:02:30.199339 systemd[1]: Starting systemd-journald.service - Journal Service...
Nov 8 00:02:30.199347 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Nov 8 00:02:30.199374 systemd-journald[217]: Collecting audit messages is disabled.
Nov 8 00:02:30.199397 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 8 00:02:30.201446 systemd-journald[217]: Journal started
Nov 8 00:02:30.201470 systemd-journald[217]: Runtime Journal (/run/log/journal/18958d42bea74a32a0937ab7e7d6d57f) is 8.0M, max 78.5M, 70.5M free.
Nov 8 00:02:30.206263 systemd-modules-load[218]: Inserted module 'overlay'
Nov 8 00:02:30.227070 systemd[1]: Started systemd-journald.service - Journal Service.
Nov 8 00:02:30.227093 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Nov 8 00:02:30.231902 kernel: Bridge firewalling registered
Nov 8 00:02:30.231719 systemd-modules-load[218]: Inserted module 'br_netfilter'
Nov 8 00:02:30.234895 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Nov 8 00:02:30.240822 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 8 00:02:30.246766 systemd[1]: Finished systemd-fsck-usr.service.
Nov 8 00:02:30.253910 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Nov 8 00:02:30.263017 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 8 00:02:30.286658 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 8 00:02:30.294573 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Nov 8 00:02:30.312292 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Nov 8 00:02:30.326026 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Nov 8 00:02:30.337705 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 8 00:02:30.349261 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Nov 8 00:02:30.358667 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Nov 8 00:02:30.364335 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 8 00:02:30.384676 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Nov 8 00:02:30.394042 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Nov 8 00:02:30.406604 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Nov 8 00:02:30.421990 dracut-cmdline[251]: dracut-dracut-053
Nov 8 00:02:30.426718 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 8 00:02:30.439236 dracut-cmdline[251]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=653fdcb8a67e255793a721f32d76976d3ed6223b235b7c618cf75e5edffbdb68
Nov 8 00:02:30.467985 systemd-resolved[254]: Positive Trust Anchors:
Nov 8 00:02:30.468004 systemd-resolved[254]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Nov 8 00:02:30.468037 systemd-resolved[254]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Nov 8 00:02:30.470841 systemd-resolved[254]: Defaulting to hostname 'linux'.
Nov 8 00:02:30.471988 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Nov 8 00:02:30.479158 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Nov 8 00:02:30.582425 kernel: SCSI subsystem initialized
Nov 8 00:02:30.590416 kernel: Loading iSCSI transport class v2.0-870.
Nov 8 00:02:30.599430 kernel: iscsi: registered transport (tcp)
Nov 8 00:02:30.616111 kernel: iscsi: registered transport (qla4xxx)
Nov 8 00:02:30.616188 kernel: QLogic iSCSI HBA Driver
Nov 8 00:02:30.655695 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Nov 8 00:02:30.670897 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Nov 8 00:02:30.699299 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Nov 8 00:02:30.699360 kernel: device-mapper: uevent: version 1.0.3
Nov 8 00:02:30.705424 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Nov 8 00:02:30.753431 kernel: raid6: neonx8 gen() 15791 MB/s
Nov 8 00:02:30.772412 kernel: raid6: neonx4 gen() 15685 MB/s
Nov 8 00:02:30.791410 kernel: raid6: neonx2 gen() 13224 MB/s
Nov 8 00:02:30.811415 kernel: raid6: neonx1 gen() 10489 MB/s
Nov 8 00:02:30.830410 kernel: raid6: int64x8 gen() 6977 MB/s
Nov 8 00:02:30.849414 kernel: raid6: int64x4 gen() 7366 MB/s
Nov 8 00:02:30.869410 kernel: raid6: int64x2 gen() 6146 MB/s
Nov 8 00:02:30.891252 kernel: raid6: int64x1 gen() 5071 MB/s
Nov 8 00:02:30.891263 kernel: raid6: using algorithm neonx8 gen() 15791 MB/s
Nov 8 00:02:30.914765 kernel: raid6: .... xor() 12043 MB/s, rmw enabled
Nov 8 00:02:30.914776 kernel: raid6: using neon recovery algorithm
Nov 8 00:02:30.924630 kernel: xor: measuring software checksum speed
Nov 8 00:02:30.924676 kernel: 8regs : 19788 MB/sec
Nov 8 00:02:30.927584 kernel: 32regs : 19381 MB/sec
Nov 8 00:02:30.931284 kernel: arm64_neon : 27052 MB/sec
Nov 8 00:02:30.934722 kernel: xor: using function: arm64_neon (27052 MB/sec)
Nov 8 00:02:30.984419 kernel: Btrfs loaded, zoned=no, fsverity=no
Nov 8 00:02:30.995522 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Nov 8 00:02:31.012570 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 8 00:02:31.031777 systemd-udevd[437]: Using default interface naming scheme 'v255'.
Nov 8 00:02:31.035913 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 8 00:02:31.051537 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Nov 8 00:02:31.077254 dracut-pre-trigger[449]: rd.md=0: removing MD RAID activation
Nov 8 00:02:31.107469 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Nov 8 00:02:31.123657 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Nov 8 00:02:31.165009 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 8 00:02:31.180611 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Nov 8 00:02:31.200313 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Nov 8 00:02:31.214642 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Nov 8 00:02:31.228146 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 8 00:02:31.239745 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Nov 8 00:02:31.257504 kernel: hv_vmbus: Vmbus version:5.3
Nov 8 00:02:31.259711 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Nov 8 00:02:31.290452 kernel: hv_vmbus: registering driver hyperv_keyboard
Nov 8 00:02:31.290515 kernel: pps_core: LinuxPPS API ver. 1 registered
Nov 8 00:02:31.290526 kernel: hv_vmbus: registering driver hid_hyperv
Nov 8 00:02:31.290536 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Nov 8 00:02:31.302429 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0
Nov 8 00:02:31.302486 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1
Nov 8 00:02:31.309128 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Nov 8 00:02:31.324892 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Nov 8 00:02:31.336896 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Nov 8 00:02:31.351227 kernel: hv_vmbus: registering driver hv_netvsc
Nov 8 00:02:31.351249 kernel: PTP clock support registered
Nov 8 00:02:31.351259 kernel: hv_vmbus: registering driver hv_storvsc
Nov 8 00:02:31.351268 kernel: scsi host0: storvsc_host_t
Nov 8 00:02:31.351435 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
Nov 8 00:02:31.337074 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 8 00:02:31.388869 kernel: scsi host1: storvsc_host_t
Nov 8 00:02:31.389063 kernel: hv_utils: Registering HyperV Utility Driver
Nov 8 00:02:31.389075 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 5
Nov 8 00:02:31.389178 kernel: hv_vmbus: registering driver hv_utils
Nov 8 00:02:31.389189 kernel: hv_utils: Heartbeat IC version 3.0
Nov 8 00:02:31.389198 kernel: hv_utils: Shutdown IC version 3.2
Nov 8 00:02:31.389207 kernel: hv_utils: TimeSync IC version 4.0
Nov 8 00:02:31.370486 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 8 00:02:31.377459 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 8 00:02:31.392917 systemd-journald[217]: Time jumped backwards, rotating.
Nov 8 00:02:31.377701 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 8 00:02:31.376102 systemd-resolved[254]: Clock change detected. Flushing caches.
Nov 8 00:02:31.376566 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Nov 8 00:02:31.415854 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 8 00:02:31.432726 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
Nov 8 00:02:31.432924 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Nov 8 00:02:31.433764 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
Nov 8 00:02:31.435527 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 8 00:02:31.445143 kernel: hv_netvsc 000d3afb-8f51-000d-3afb-8f51000d3afb eth0: VF slot 1 added
Nov 8 00:02:31.435646 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 8 00:02:31.465609 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 8 00:02:31.484015 kernel: hv_vmbus: registering driver hv_pci
Nov 8 00:02:31.484043 kernel: hv_pci 58e7c41a-6541-4a0e-97fc-29edea841802: PCI VMBus probing: Using version 0x10004
Nov 8 00:02:31.493253 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
Nov 8 00:02:31.493454 kernel: hv_pci 58e7c41a-6541-4a0e-97fc-29edea841802: PCI host bridge to bus 6541:00
Nov 8 00:02:31.493546 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
Nov 8 00:02:31.500543 kernel: pci_bus 6541:00: root bus resource [mem 0xfc0000000-0xfc00fffff window]
Nov 8 00:02:31.500707 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#154 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Nov 8 00:02:31.511006 kernel: pci_bus 6541:00: No busn resource found for root bus, will use [bus 00-ff]
Nov 8 00:02:31.511183 kernel: sd 0:0:0:0: [sda] Write Protect is off
Nov 8 00:02:31.522303 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00
Nov 8 00:02:31.522510 kernel: pci 6541:00:02.0: [15b3:1018] type 00 class 0x020000
Nov 8 00:02:31.522903 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
Nov 8 00:02:31.530538 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 8 00:02:31.557435 kernel: pci 6541:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref]
Nov 8 00:02:31.557472 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Nov 8 00:02:31.557481 kernel: pci 6541:00:02.0: enabling Extended Tags
Nov 8 00:02:31.557495 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Nov 8 00:02:31.561046 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 8 00:02:31.584792 kernel: pci 6541:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 6541:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link)
Nov 8 00:02:31.584851 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#149 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Nov 8 00:02:31.595559 kernel: pci_bus 6541:00: busn_res: [bus 00-ff] end is updated to 00
Nov 8 00:02:31.601212 kernel: pci 6541:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref]
Nov 8 00:02:31.619062 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 8 00:02:31.649387 kernel: mlx5_core 6541:00:02.0: enabling device (0000 -> 0002)
Nov 8 00:02:31.655785 kernel: mlx5_core 6541:00:02.0: firmware version: 16.30.5006
Nov 8 00:02:31.850422 kernel: hv_netvsc 000d3afb-8f51-000d-3afb-8f51000d3afb eth0: VF registering: eth1
Nov 8 00:02:31.850625 kernel: mlx5_core 6541:00:02.0 eth1: joined to eth0
Nov 8 00:02:31.856061 kernel: mlx5_core 6541:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic)
Nov 8 00:02:31.866819 kernel: mlx5_core 6541:00:02.0 enP25921s1: renamed from eth1
Nov 8 00:02:32.102315 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM.
Nov 8 00:02:32.116308 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by (udev-worker) (500)
Nov 8 00:02:32.121277 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Nov 8 00:02:32.148022 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT.
Nov 8 00:02:32.183764 kernel: BTRFS: device fsid 55a292e1-3824-4229-a9ae-952140d2698c devid 1 transid 37 /dev/sda3 scanned by (udev-worker) (493)
Nov 8 00:02:32.197218 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A.
Nov 8 00:02:32.202790 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A.
Nov 8 00:02:32.229930 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Nov 8 00:02:32.254773 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Nov 8 00:02:32.262765 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Nov 8 00:02:33.275004 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Nov 8 00:02:33.275059 disk-uuid[607]: The operation has completed successfully.
Nov 8 00:02:33.349467 systemd[1]: disk-uuid.service: Deactivated successfully.
Nov 8 00:02:33.351636 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Nov 8 00:02:33.369880 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Nov 8 00:02:33.380511 sh[720]: Success
Nov 8 00:02:33.411152 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Nov 8 00:02:33.795971 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Nov 8 00:02:33.817877 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Nov 8 00:02:33.825388 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Nov 8 00:02:33.851486 kernel: BTRFS info (device dm-0): first mount of filesystem 55a292e1-3824-4229-a9ae-952140d2698c
Nov 8 00:02:33.851546 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Nov 8 00:02:33.857351 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Nov 8 00:02:33.861536 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Nov 8 00:02:33.864895 kernel: BTRFS info (device dm-0): using free space tree
Nov 8 00:02:34.306547 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Nov 8 00:02:34.311207 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Nov 8 00:02:34.327002 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Nov 8 00:02:34.333300 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Nov 8 00:02:34.365846 kernel: BTRFS info (device sda6): first mount of filesystem 7afafbf9-edbd-49b5-ac90-6fc331f667e9
Nov 8 00:02:34.365896 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Nov 8 00:02:34.369375 kernel: BTRFS info (device sda6): using free space tree
Nov 8 00:02:34.435886 kernel: BTRFS info (device sda6): auto enabling async discard
Nov 8 00:02:34.438062 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Nov 8 00:02:34.460783 kernel: BTRFS info (device sda6): last unmount of filesystem 7afafbf9-edbd-49b5-ac90-6fc331f667e9
Nov 8 00:02:34.466952 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Nov 8 00:02:34.480424 systemd[1]: mnt-oem.mount: Deactivated successfully.
Nov 8 00:02:34.484533 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Nov 8 00:02:34.494962 systemd-networkd[901]: lo: Link UP
Nov 8 00:02:34.494966 systemd-networkd[901]: lo: Gained carrier
Nov 8 00:02:34.498737 systemd-networkd[901]: Enumeration completed
Nov 8 00:02:34.499686 systemd-networkd[901]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 8 00:02:34.499690 systemd-networkd[901]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Nov 8 00:02:34.514765 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Nov 8 00:02:34.524935 systemd[1]: Started systemd-networkd.service - Network Configuration.
Nov 8 00:02:34.530097 systemd[1]: Reached target network.target - Network.
Nov 8 00:02:34.588762 kernel: mlx5_core 6541:00:02.0 enP25921s1: Link up
Nov 8 00:02:34.623764 kernel: hv_netvsc 000d3afb-8f51-000d-3afb-8f51000d3afb eth0: Data path switched to VF: enP25921s1
Nov 8 00:02:34.624276 systemd-networkd[901]: enP25921s1: Link UP
Nov 8 00:02:34.624359 systemd-networkd[901]: eth0: Link UP
Nov 8 00:02:34.624474 systemd-networkd[901]: eth0: Gained carrier
Nov 8 00:02:34.624483 systemd-networkd[901]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 8 00:02:34.642743 systemd-networkd[901]: enP25921s1: Gained carrier
Nov 8 00:02:34.652787 systemd-networkd[901]: eth0: DHCPv4 address 10.200.20.44/24, gateway 10.200.20.1 acquired from 168.63.129.16
Nov 8 00:02:35.504227 ignition[905]: Ignition 2.19.0
Nov 8 00:02:35.504243 ignition[905]: Stage: fetch-offline
Nov 8 00:02:35.507793 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Nov 8 00:02:35.504281 ignition[905]: no configs at "/usr/lib/ignition/base.d"
Nov 8 00:02:35.504290 ignition[905]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Nov 8 00:02:35.504405 ignition[905]: parsed url from cmdline: ""
Nov 8 00:02:35.504408 ignition[905]: no config URL provided
Nov 8 00:02:35.504413 ignition[905]: reading system config file "/usr/lib/ignition/user.ign"
Nov 8 00:02:35.529096 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Nov 8 00:02:35.504421 ignition[905]: no config at "/usr/lib/ignition/user.ign"
Nov 8 00:02:35.504427 ignition[905]: failed to fetch config: resource requires networking
Nov 8 00:02:35.504643 ignition[905]: Ignition finished successfully
Nov 8 00:02:35.546692 ignition[913]: Ignition 2.19.0
Nov 8 00:02:35.546699 ignition[913]: Stage: fetch
Nov 8 00:02:35.546938 ignition[913]: no configs at "/usr/lib/ignition/base.d"
Nov 8 00:02:35.546948 ignition[913]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Nov 8 00:02:35.547057 ignition[913]: parsed url from cmdline: ""
Nov 8 00:02:35.547065 ignition[913]: no config URL provided
Nov 8 00:02:35.547070 ignition[913]: reading system config file "/usr/lib/ignition/user.ign"
Nov 8 00:02:35.547077 ignition[913]: no config at "/usr/lib/ignition/user.ign"
Nov 8 00:02:35.547102 ignition[913]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
Nov 8 00:02:35.664132 ignition[913]: GET result: OK
Nov 8 00:02:35.664194 ignition[913]: config has been read from IMDS userdata
Nov 8 00:02:35.664236 ignition[913]: parsing config with SHA512: 778753def7e635110c2853d57d86d743d488b9960bbc63423866d2b042e05316006b2dfb153f4ed0211e419daedd32157a2f62aaab67df9c62da2ac8ff165b06
Nov 8 00:02:35.668301 unknown[913]: fetched base config from "system"
Nov 8 00:02:35.668703 ignition[913]: fetch: fetch complete
Nov 8 00:02:35.668308 unknown[913]: fetched base config from "system"
Nov 8 00:02:35.668708 ignition[913]: fetch: fetch passed
Nov 8 00:02:35.668312 unknown[913]: fetched user config from "azure"
Nov 8 00:02:35.668769 ignition[913]: Ignition finished successfully
Nov 8 00:02:35.670790 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Nov 8 00:02:35.684984 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Nov 8 00:02:35.709875 ignition[919]: Ignition 2.19.0
Nov 8 00:02:35.709884 ignition[919]: Stage: kargs
Nov 8 00:02:35.714027 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Nov 8 00:02:35.710053 ignition[919]: no configs at "/usr/lib/ignition/base.d"
Nov 8 00:02:35.710061 ignition[919]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Nov 8 00:02:35.710986 ignition[919]: kargs: kargs passed
Nov 8 00:02:35.711035 ignition[919]: Ignition finished successfully
Nov 8 00:02:35.733923 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Nov 8 00:02:35.749288 ignition[925]: Ignition 2.19.0
Nov 8 00:02:35.751933 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Nov 8 00:02:35.749302 ignition[925]: Stage: disks
Nov 8 00:02:35.758987 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Nov 8 00:02:35.749556 ignition[925]: no configs at "/usr/lib/ignition/base.d"
Nov 8 00:02:35.767190 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Nov 8 00:02:35.749566 ignition[925]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Nov 8 00:02:35.774515 systemd[1]: Reached target local-fs.target - Local File Systems.
Nov 8 00:02:35.750498 ignition[925]: disks: disks passed
Nov 8 00:02:35.783134 systemd[1]: Reached target sysinit.target - System Initialization.
Nov 8 00:02:35.750542 ignition[925]: Ignition finished successfully
Nov 8 00:02:35.790845 systemd[1]: Reached target basic.target - Basic System.
Nov 8 00:02:35.811003 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Nov 8 00:02:35.877727 systemd-fsck[934]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks
Nov 8 00:02:35.883563 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Nov 8 00:02:35.897986 systemd[1]: Mounting sysroot.mount - /sysroot...
Nov 8 00:02:35.951767 kernel: EXT4-fs (sda9): mounted filesystem ba97f76e-2e9b-450a-8320-3c4b94a19632 r/w with ordered data mode. Quota mode: none.
Nov 8 00:02:35.953010 systemd[1]: Mounted sysroot.mount - /sysroot.
Nov 8 00:02:35.956769 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Nov 8 00:02:36.002840 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Nov 8 00:02:36.021828 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (945)
Nov 8 00:02:36.024900 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Nov 8 00:02:36.039353 kernel: BTRFS info (device sda6): first mount of filesystem 7afafbf9-edbd-49b5-ac90-6fc331f667e9
Nov 8 00:02:36.039373 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Nov 8 00:02:36.038695 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Nov 8 00:02:36.053373 kernel: BTRFS info (device sda6): using free space tree
Nov 8 00:02:36.053386 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Nov 8 00:02:36.053422 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Nov 8 00:02:36.059718 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Nov 8 00:02:36.081989 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Nov 8 00:02:36.095375 kernel: BTRFS info (device sda6): auto enabling async discard
Nov 8 00:02:36.096968 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Nov 8 00:02:36.406905 systemd-networkd[901]: eth0: Gained IPv6LL
Nov 8 00:02:36.752368 coreos-metadata[947]: Nov 08 00:02:36.752 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Nov 8 00:02:36.760807 coreos-metadata[947]: Nov 08 00:02:36.760 INFO Fetch successful
Nov 8 00:02:36.765141 coreos-metadata[947]: Nov 08 00:02:36.764 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
Nov 8 00:02:36.773868 coreos-metadata[947]: Nov 08 00:02:36.773 INFO Fetch successful
Nov 8 00:02:36.788045 coreos-metadata[947]: Nov 08 00:02:36.788 INFO wrote hostname ci-4081.3.6-n-32f19bad4d to /sysroot/etc/hostname
Nov 8 00:02:36.795449 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Nov 8 00:02:37.012448 initrd-setup-root[975]: cut: /sysroot/etc/passwd: No such file or directory
Nov 8 00:02:37.064026 initrd-setup-root[982]: cut: /sysroot/etc/group: No such file or directory
Nov 8 00:02:37.083903 initrd-setup-root[989]: cut: /sysroot/etc/shadow: No such file or directory
Nov 8 00:02:37.103564 initrd-setup-root[996]: cut: /sysroot/etc/gshadow: No such file or directory
Nov 8 00:02:38.215959 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Nov 8 00:02:38.227984 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Nov 8 00:02:38.238422 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Nov 8 00:02:38.251004 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Nov 8 00:02:38.260564 kernel: BTRFS info (device sda6): last unmount of filesystem 7afafbf9-edbd-49b5-ac90-6fc331f667e9
Nov 8 00:02:38.277867 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Nov 8 00:02:38.289777 ignition[1066]: INFO : Ignition 2.19.0
Nov 8 00:02:38.289777 ignition[1066]: INFO : Stage: mount
Nov 8 00:02:38.289777 ignition[1066]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 8 00:02:38.289777 ignition[1066]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Nov 8 00:02:38.310721 ignition[1066]: INFO : mount: mount passed
Nov 8 00:02:38.310721 ignition[1066]: INFO : Ignition finished successfully
Nov 8 00:02:38.296908 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Nov 8 00:02:38.320854 systemd[1]: Starting ignition-files.service - Ignition (files)...
Nov 8 00:02:38.333696 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Nov 8 00:02:38.361753 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by mount (1075)
Nov 8 00:02:38.361798 kernel: BTRFS info (device sda6): first mount of filesystem 7afafbf9-edbd-49b5-ac90-6fc331f667e9
Nov 8 00:02:38.366486 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Nov 8 00:02:38.369806 kernel: BTRFS info (device sda6): using free space tree
Nov 8 00:02:38.376782 kernel: BTRFS info (device sda6): auto enabling async discard
Nov 8 00:02:38.378008 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Nov 8 00:02:38.403410 ignition[1092]: INFO : Ignition 2.19.0
Nov 8 00:02:38.403410 ignition[1092]: INFO : Stage: files
Nov 8 00:02:38.403410 ignition[1092]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 8 00:02:38.403410 ignition[1092]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Nov 8 00:02:38.403410 ignition[1092]: DEBUG : files: compiled without relabeling support, skipping
Nov 8 00:02:38.423729 ignition[1092]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Nov 8 00:02:38.429701 ignition[1092]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Nov 8 00:02:38.527365 ignition[1092]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Nov 8 00:02:38.533433 ignition[1092]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Nov 8 00:02:38.533433 ignition[1092]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Nov 8 00:02:38.527734 unknown[1092]: wrote ssh authorized keys file for user: core
Nov 8 00:02:38.562923 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Nov 8 00:02:38.571439 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1
Nov 8 00:02:38.600859 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Nov 8 00:02:38.684331 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Nov 8 00:02:38.692669 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Nov 8 00:02:38.692669 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Nov 8 00:02:38.692669 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Nov 8 00:02:38.692669 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Nov 8 00:02:38.692669 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Nov 8 00:02:38.692669 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Nov 8 00:02:38.692669 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Nov 8 00:02:38.692669 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Nov 8 00:02:38.692669 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Nov 8 00:02:38.692669 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Nov 8 00:02:38.692669 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Nov 8 00:02:38.692669 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Nov 8 00:02:38.692669 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Nov 8 00:02:38.692669 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-arm64.raw: attempt #1
Nov 8 00:02:39.251861 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Nov 8 00:02:39.513538 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Nov 8 00:02:39.513538 ignition[1092]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Nov 8 00:02:39.559264 ignition[1092]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Nov 8 00:02:39.568907 ignition[1092]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Nov 8 00:02:39.568907 ignition[1092]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Nov 8 00:02:39.568907 ignition[1092]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Nov 8 00:02:39.568907 ignition[1092]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Nov 8 00:02:39.568907 ignition[1092]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Nov 8 00:02:39.568907 ignition[1092]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Nov 8 00:02:39.568907 ignition[1092]: INFO : files: files passed
Nov 8 00:02:39.568907 ignition[1092]: INFO : Ignition finished successfully
Nov 8 00:02:39.569528 systemd[1]: Finished ignition-files.service - Ignition (files).
Nov 8 00:02:39.594509 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Nov 8 00:02:39.605951 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Nov 8 00:02:39.653536 initrd-setup-root-after-ignition[1121]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Nov 8 00:02:39.653536 initrd-setup-root-after-ignition[1121]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Nov 8 00:02:39.625165 systemd[1]: ignition-quench.service: Deactivated successfully.
Nov 8 00:02:39.681934 initrd-setup-root-after-ignition[1125]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Nov 8 00:02:39.625302 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Nov 8 00:02:39.654853 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Nov 8 00:02:39.666470 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Nov 8 00:02:39.688533 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Nov 8 00:02:39.721827 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Nov 8 00:02:39.721998 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Nov 8 00:02:39.731619 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Nov 8 00:02:39.741133 systemd[1]: Reached target initrd.target - Initrd Default Target.
Nov 8 00:02:39.749612 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Nov 8 00:02:39.761994 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Nov 8 00:02:39.774811 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Nov 8 00:02:39.788115 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Nov 8 00:02:39.804164 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Nov 8 00:02:39.810044 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 8 00:02:39.820124 systemd[1]: Stopped target timers.target - Timer Units.
Nov 8 00:02:39.829262 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Nov 8 00:02:39.829448 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Nov 8 00:02:39.842031 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Nov 8 00:02:39.851789 systemd[1]: Stopped target basic.target - Basic System.
Nov 8 00:02:39.859828 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Nov 8 00:02:39.868049 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Nov 8 00:02:39.877582 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Nov 8 00:02:39.887309 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Nov 8 00:02:39.896551 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Nov 8 00:02:39.905865 systemd[1]: Stopped target sysinit.target - System Initialization.
Nov 8 00:02:39.915427 systemd[1]: Stopped target local-fs.target - Local File Systems.
Nov 8 00:02:39.923868 systemd[1]: Stopped target swap.target - Swaps.
Nov 8 00:02:39.931252 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Nov 8 00:02:39.931430 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Nov 8 00:02:39.942789 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Nov 8 00:02:39.951558 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 8 00:02:39.961083 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Nov 8 00:02:39.961192 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 8 00:02:39.971488 systemd[1]: dracut-initqueue.service: Deactivated successfully. Nov 8 00:02:39.971656 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Nov 8 00:02:39.985316 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Nov 8 00:02:39.985484 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 8 00:02:39.995007 systemd[1]: ignition-files.service: Deactivated successfully. Nov 8 00:02:39.995163 systemd[1]: Stopped ignition-files.service - Ignition (files). Nov 8 00:02:40.003381 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Nov 8 00:02:40.003527 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Nov 8 00:02:40.028846 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Nov 8 00:02:40.045103 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Nov 8 00:02:40.057038 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Nov 8 00:02:40.077282 ignition[1145]: INFO : Ignition 2.19.0 Nov 8 00:02:40.077282 ignition[1145]: INFO : Stage: umount Nov 8 00:02:40.077282 ignition[1145]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 8 00:02:40.077282 ignition[1145]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Nov 8 00:02:40.077282 ignition[1145]: INFO : umount: umount passed Nov 8 00:02:40.077282 ignition[1145]: INFO : Ignition finished successfully Nov 8 00:02:40.057249 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Nov 8 00:02:40.066779 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Nov 8 00:02:40.066945 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Nov 8 00:02:40.078249 systemd[1]: ignition-mount.service: Deactivated successfully. Nov 8 00:02:40.078355 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Nov 8 00:02:40.094736 systemd[1]: initrd-cleanup.service: Deactivated successfully. Nov 8 00:02:40.095085 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Nov 8 00:02:40.106353 systemd[1]: ignition-disks.service: Deactivated successfully. Nov 8 00:02:40.106442 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Nov 8 00:02:40.114203 systemd[1]: ignition-kargs.service: Deactivated successfully. Nov 8 00:02:40.114254 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Nov 8 00:02:40.122465 systemd[1]: ignition-fetch.service: Deactivated successfully. Nov 8 00:02:40.122517 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Nov 8 00:02:40.131693 systemd[1]: Stopped target network.target - Network. Nov 8 00:02:40.139765 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Nov 8 00:02:40.139856 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Nov 8 00:02:40.147156 systemd[1]: Stopped target paths.target - Path Units. Nov 8 00:02:40.150951 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. 
Nov 8 00:02:40.157780 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 8 00:02:40.169148 systemd[1]: Stopped target slices.target - Slice Units. Nov 8 00:02:40.177021 systemd[1]: Stopped target sockets.target - Socket Units. Nov 8 00:02:40.185841 systemd[1]: iscsid.socket: Deactivated successfully. Nov 8 00:02:40.185905 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Nov 8 00:02:40.194566 systemd[1]: iscsiuio.socket: Deactivated successfully. Nov 8 00:02:40.194619 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 8 00:02:40.202725 systemd[1]: ignition-setup.service: Deactivated successfully. Nov 8 00:02:40.202792 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Nov 8 00:02:40.212449 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Nov 8 00:02:40.212502 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Nov 8 00:02:40.217472 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Nov 8 00:02:40.226927 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Nov 8 00:02:40.236468 systemd[1]: sysroot-boot.mount: Deactivated successfully. Nov 8 00:02:40.242788 systemd-networkd[901]: eth0: DHCPv6 lease lost Nov 8 00:02:40.243496 systemd[1]: systemd-resolved.service: Deactivated successfully. Nov 8 00:02:40.243599 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Nov 8 00:02:40.250227 systemd[1]: systemd-networkd.service: Deactivated successfully. Nov 8 00:02:40.250370 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Nov 8 00:02:40.265146 systemd[1]: systemd-networkd.socket: Deactivated successfully. Nov 8 00:02:40.421985 kernel: hv_netvsc 000d3afb-8f51-000d-3afb-8f51000d3afb eth0: Data path switched from VF: enP25921s1 Nov 8 00:02:40.265209 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Nov 8 00:02:40.288947 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Nov 8 00:02:40.296044 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Nov 8 00:02:40.296111 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 8 00:02:40.306081 systemd[1]: systemd-sysctl.service: Deactivated successfully. Nov 8 00:02:40.306132 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Nov 8 00:02:40.314365 systemd[1]: systemd-modules-load.service: Deactivated successfully. Nov 8 00:02:40.314411 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Nov 8 00:02:40.322874 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Nov 8 00:02:40.322921 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 8 00:02:40.332418 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 8 00:02:40.382656 systemd[1]: systemd-udevd.service: Deactivated successfully. Nov 8 00:02:40.382843 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 8 00:02:40.393091 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Nov 8 00:02:40.393160 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Nov 8 00:02:40.408823 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Nov 8 00:02:40.408857 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. 
Nov 8 00:02:40.418112 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Nov 8 00:02:40.418167 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Nov 8 00:02:40.431148 systemd[1]: dracut-cmdline.service: Deactivated successfully. Nov 8 00:02:40.431232 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Nov 8 00:02:40.444493 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Nov 8 00:02:40.444556 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 8 00:02:40.473925 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Nov 8 00:02:40.482896 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Nov 8 00:02:40.482970 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 8 00:02:40.497192 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Nov 8 00:02:40.497251 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 8 00:02:40.512662 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Nov 8 00:02:40.512722 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Nov 8 00:02:40.523358 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 8 00:02:40.523414 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 8 00:02:40.528895 systemd[1]: sysroot-boot.service: Deactivated successfully. Nov 8 00:02:40.529006 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Nov 8 00:02:40.537270 systemd[1]: network-cleanup.service: Deactivated successfully. Nov 8 00:02:40.537352 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Nov 8 00:02:40.542080 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Nov 8 00:02:40.542152 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Nov 8 00:02:40.552027 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Nov 8 00:02:40.560580 systemd[1]: initrd-setup-root.service: Deactivated successfully. Nov 8 00:02:40.560671 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Nov 8 00:02:40.583242 systemd[1]: Starting initrd-switch-root.service - Switch Root... Nov 8 00:02:40.804728 systemd[1]: Switching root. 
Nov 8 00:02:40.902232 systemd-journald[217]: Journal stopped Nov 8 00:02:30.194922 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Nov 8 00:02:30.194945 kernel: Linux version 6.6.113-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Fri Nov 7 22:41:39 -00 2025 Nov 8 00:02:30.194953 kernel: KASLR enabled Nov 8 00:02:30.194960 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '') Nov 8 00:02:30.194967 kernel: printk: bootconsole [pl11] enabled Nov 8 00:02:30.194973 kernel: efi: EFI v2.7 by EDK II Nov 8 00:02:30.194980 kernel: efi: ACPI 2.0=0x3fd5f018 SMBIOS=0x3e580000 SMBIOS 3.0=0x3e560000 MEMATTR=0x3f215018 RNG=0x3fd5f998 MEMRESERVE=0x3e44ee18 Nov 8 00:02:30.194986 kernel: random: crng init done Nov 8 00:02:30.194992 kernel: ACPI: Early table checksum verification disabled Nov 8 00:02:30.194998 kernel: ACPI: RSDP 0x000000003FD5F018 000024 (v02 VRTUAL) Nov 8 00:02:30.195005 kernel: ACPI: XSDT 0x000000003FD5FF18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Nov 8 00:02:30.195011 kernel: ACPI: FACP 0x000000003FD5FC18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001) Nov 8 00:02:30.195018 kernel: ACPI: DSDT 0x000000003FD41018 01DFCD (v02 MSFTVM DSDT01 00000001 INTL 20230628) Nov 8 00:02:30.195024 kernel: ACPI: DBG2 0x000000003FD5FB18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001) Nov 8 00:02:30.195032 kernel: ACPI: GTDT 0x000000003FD5FD98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Nov 8 00:02:30.195038 kernel: ACPI: OEM0 0x000000003FD5F098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Nov 8 00:02:30.195045 kernel: ACPI: SPCR 0x000000003FD5FA98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Nov 8 00:02:30.195053 kernel: ACPI: APIC 0x000000003FD5F818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001) Nov 8 00:02:30.195059 kernel: ACPI: SRAT 0x000000003FD5F198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001) Nov 8 00:02:30.195065 kernel: ACPI: PPTT 0x000000003FD5F418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000) Nov 8 00:02:30.195072 kernel: ACPI: BGRT 0x000000003FD5FE98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Nov 8 00:02:30.195078 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200 Nov 8 00:02:30.195085 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] Nov 8 00:02:30.195091 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff] Nov 8 00:02:30.195097 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff] Nov 8 00:02:30.195104 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] Nov 8 00:02:30.195110 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] Nov 8 00:02:30.195117 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] Nov 8 00:02:30.195125 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] Nov 8 00:02:30.195131 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] Nov 8 00:02:30.195137 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] Nov 8 00:02:30.195144 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] Nov 8 00:02:30.195150 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] Nov 8 00:02:30.195156 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] Nov 8 00:02:30.195163 kernel: NUMA: NODE_DATA [mem 0x1bf7ef800-0x1bf7f4fff] Nov 8 00:02:30.195169 kernel: Zone ranges: Nov 8 00:02:30.195175 kernel: DMA [mem 
0x0000000000000000-0x00000000ffffffff] Nov 8 00:02:30.195182 kernel: DMA32 empty Nov 8 00:02:30.195188 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff] Nov 8 00:02:30.195195 kernel: Movable zone start for each node Nov 8 00:02:30.195205 kernel: Early memory node ranges Nov 8 00:02:30.195212 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff] Nov 8 00:02:30.195219 kernel: node 0: [mem 0x0000000000824000-0x000000003e54ffff] Nov 8 00:02:30.195226 kernel: node 0: [mem 0x000000003e550000-0x000000003e87ffff] Nov 8 00:02:30.195232 kernel: node 0: [mem 0x000000003e880000-0x000000003fc7ffff] Nov 8 00:02:30.195240 kernel: node 0: [mem 0x000000003fc80000-0x000000003fcfffff] Nov 8 00:02:30.195247 kernel: node 0: [mem 0x000000003fd00000-0x000000003fffffff] Nov 8 00:02:30.195254 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff] Nov 8 00:02:30.195261 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff] Nov 8 00:02:30.195268 kernel: On node 0, zone DMA: 36 pages in unavailable ranges Nov 8 00:02:30.195275 kernel: psci: probing for conduit method from ACPI. Nov 8 00:02:30.195281 kernel: psci: PSCIv1.1 detected in firmware. Nov 8 00:02:30.195288 kernel: psci: Using standard PSCI v0.2 function IDs Nov 8 00:02:30.195295 kernel: psci: MIGRATE_INFO_TYPE not supported. Nov 8 00:02:30.195302 kernel: psci: SMC Calling Convention v1.4 Nov 8 00:02:30.195309 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0 Nov 8 00:02:30.195315 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0 Nov 8 00:02:30.195324 kernel: percpu: Embedded 31 pages/cpu s86120 r8192 d32664 u126976 Nov 8 00:02:30.195331 kernel: pcpu-alloc: s86120 r8192 d32664 u126976 alloc=31*4096 Nov 8 00:02:30.195338 kernel: pcpu-alloc: [0] 0 [0] 1 Nov 8 00:02:30.195344 kernel: Detected PIPT I-cache on CPU0 Nov 8 00:02:30.195351 kernel: CPU features: detected: GIC system register CPU interface Nov 8 00:02:30.195358 kernel: CPU features: detected: Hardware dirty bit management Nov 8 00:02:30.195365 kernel: CPU features: detected: Spectre-BHB Nov 8 00:02:30.195372 kernel: CPU features: kernel page table isolation forced ON by KASLR Nov 8 00:02:30.195378 kernel: CPU features: detected: Kernel page table isolation (KPTI) Nov 8 00:02:30.195385 kernel: CPU features: detected: ARM erratum 1418040 Nov 8 00:02:30.195392 kernel: CPU features: detected: ARM erratum 1542419 (kernel portion) Nov 8 00:02:30.195400 kernel: CPU features: detected: SSBS not fully self-synchronizing Nov 8 00:02:30.197447 kernel: alternatives: applying boot alternatives Nov 8 00:02:30.197458 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=653fdcb8a67e255793a721f32d76976d3ed6223b235b7c618cf75e5edffbdb68 Nov 8 00:02:30.197466 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Nov 8 00:02:30.197473 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Nov 8 00:02:30.197481 kernel: Fallback order for Node 0: 0 Nov 8 00:02:30.197488 kernel: Built 1 zonelists, mobility grouping on. 
Total pages: 1032156 Nov 8 00:02:30.197495 kernel: Policy zone: Normal Nov 8 00:02:30.197502 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Nov 8 00:02:30.197509 kernel: software IO TLB: area num 2. Nov 8 00:02:30.197517 kernel: software IO TLB: mapped [mem 0x000000003a44e000-0x000000003e44e000] (64MB) Nov 8 00:02:30.197532 kernel: Memory: 3982628K/4194160K available (10304K kernel code, 2180K rwdata, 8112K rodata, 39424K init, 897K bss, 211532K reserved, 0K cma-reserved) Nov 8 00:02:30.197540 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Nov 8 00:02:30.197548 kernel: rcu: Preemptible hierarchical RCU implementation. Nov 8 00:02:30.197557 kernel: rcu: RCU event tracing is enabled. Nov 8 00:02:30.197565 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Nov 8 00:02:30.197574 kernel: Trampoline variant of Tasks RCU enabled. Nov 8 00:02:30.197582 kernel: Tracing variant of Tasks RCU enabled. Nov 8 00:02:30.197590 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Nov 8 00:02:30.197598 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Nov 8 00:02:30.197606 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Nov 8 00:02:30.197614 kernel: GICv3: 960 SPIs implemented Nov 8 00:02:30.197624 kernel: GICv3: 0 Extended SPIs implemented Nov 8 00:02:30.197632 kernel: Root IRQ handler: gic_handle_irq Nov 8 00:02:30.197640 kernel: GICv3: GICv3 features: 16 PPIs, RSS Nov 8 00:02:30.197648 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000 Nov 8 00:02:30.197656 kernel: ITS: No ITS available, not enabling LPIs Nov 8 00:02:30.197665 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Nov 8 00:02:30.197673 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Nov 8 00:02:30.197680 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Nov 8 00:02:30.197687 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Nov 8 00:02:30.197694 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Nov 8 00:02:30.197702 kernel: Console: colour dummy device 80x25 Nov 8 00:02:30.197711 kernel: printk: console [tty1] enabled Nov 8 00:02:30.197718 kernel: ACPI: Core revision 20230628 Nov 8 00:02:30.197727 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Nov 8 00:02:30.197735 kernel: pid_max: default: 32768 minimum: 301 Nov 8 00:02:30.197743 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Nov 8 00:02:30.197752 kernel: landlock: Up and running. Nov 8 00:02:30.197760 kernel: SELinux: Initializing. Nov 8 00:02:30.197768 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Nov 8 00:02:30.197777 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Nov 8 00:02:30.197787 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Nov 8 00:02:30.197796 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Nov 8 00:02:30.197805 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0x100000e, misc 0x31e1 Nov 8 00:02:30.197812 kernel: Hyper-V: Host Build 10.0.26100.1382-1-0 Nov 8 00:02:30.197819 kernel: Hyper-V: enabling crash_kexec_post_notifiers Nov 8 00:02:30.197826 kernel: rcu: Hierarchical SRCU implementation. 
Nov 8 00:02:30.197834 kernel: rcu: Max phase no-delay instances is 400. Nov 8 00:02:30.197841 kernel: Remapping and enabling EFI services. Nov 8 00:02:30.197855 kernel: smp: Bringing up secondary CPUs ... Nov 8 00:02:30.197862 kernel: Detected PIPT I-cache on CPU1 Nov 8 00:02:30.197869 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000 Nov 8 00:02:30.197877 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Nov 8 00:02:30.197886 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Nov 8 00:02:30.197893 kernel: smp: Brought up 1 node, 2 CPUs Nov 8 00:02:30.197901 kernel: SMP: Total of 2 processors activated. Nov 8 00:02:30.197908 kernel: CPU features: detected: 32-bit EL0 Support Nov 8 00:02:30.197916 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence Nov 8 00:02:30.197925 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Nov 8 00:02:30.197933 kernel: CPU features: detected: CRC32 instructions Nov 8 00:02:30.197941 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Nov 8 00:02:30.197948 kernel: CPU features: detected: LSE atomic instructions Nov 8 00:02:30.197956 kernel: CPU features: detected: Privileged Access Never Nov 8 00:02:30.197963 kernel: CPU: All CPU(s) started at EL1 Nov 8 00:02:30.197970 kernel: alternatives: applying system-wide alternatives Nov 8 00:02:30.197978 kernel: devtmpfs: initialized Nov 8 00:02:30.197985 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Nov 8 00:02:30.197994 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Nov 8 00:02:30.198002 kernel: pinctrl core: initialized pinctrl subsystem Nov 8 00:02:30.198009 kernel: SMBIOS 3.1.0 present. Nov 8 00:02:30.198017 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/28/2024 Nov 8 00:02:30.198025 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Nov 8 00:02:30.198032 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Nov 8 00:02:30.198040 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Nov 8 00:02:30.198047 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Nov 8 00:02:30.198055 kernel: audit: initializing netlink subsys (disabled) Nov 8 00:02:30.198064 kernel: audit: type=2000 audit(0.047:1): state=initialized audit_enabled=0 res=1 Nov 8 00:02:30.198072 kernel: thermal_sys: Registered thermal governor 'step_wise' Nov 8 00:02:30.198079 kernel: cpuidle: using governor menu Nov 8 00:02:30.198086 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. 
Nov 8 00:02:30.198094 kernel: ASID allocator initialised with 32768 entries Nov 8 00:02:30.198101 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Nov 8 00:02:30.198109 kernel: Serial: AMBA PL011 UART driver Nov 8 00:02:30.198116 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Nov 8 00:02:30.198124 kernel: Modules: 0 pages in range for non-PLT usage Nov 8 00:02:30.198133 kernel: Modules: 509008 pages in range for PLT usage Nov 8 00:02:30.198140 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Nov 8 00:02:30.198148 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Nov 8 00:02:30.198155 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Nov 8 00:02:30.198163 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Nov 8 00:02:30.198170 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Nov 8 00:02:30.198177 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Nov 8 00:02:30.198185 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Nov 8 00:02:30.198192 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Nov 8 00:02:30.198201 kernel: ACPI: Added _OSI(Module Device) Nov 8 00:02:30.198209 kernel: ACPI: Added _OSI(Processor Device) Nov 8 00:02:30.198216 kernel: ACPI: Added _OSI(Processor Aggregator Device) Nov 8 00:02:30.198223 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Nov 8 00:02:30.198231 kernel: ACPI: Interpreter enabled Nov 8 00:02:30.198238 kernel: ACPI: Using GIC for interrupt routing Nov 8 00:02:30.198246 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA Nov 8 00:02:30.198253 kernel: printk: console [ttyAMA0] enabled Nov 8 00:02:30.198260 kernel: printk: bootconsole [pl11] disabled Nov 8 00:02:30.198270 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA Nov 8 00:02:30.198277 kernel: iommu: Default domain type: Translated Nov 8 00:02:30.198284 kernel: iommu: DMA domain TLB invalidation policy: strict mode Nov 8 00:02:30.198292 kernel: efivars: Registered efivars operations Nov 8 00:02:30.198299 kernel: vgaarb: loaded Nov 8 00:02:30.198309 kernel: clocksource: Switched to clocksource arch_sys_counter Nov 8 00:02:30.198318 kernel: VFS: Disk quotas dquot_6.6.0 Nov 8 00:02:30.198325 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Nov 8 00:02:30.198332 kernel: pnp: PnP ACPI init Nov 8 00:02:30.198342 kernel: pnp: PnP ACPI: found 0 devices Nov 8 00:02:30.198350 kernel: NET: Registered PF_INET protocol family Nov 8 00:02:30.198359 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Nov 8 00:02:30.198367 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Nov 8 00:02:30.198375 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Nov 8 00:02:30.198382 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Nov 8 00:02:30.198390 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Nov 8 00:02:30.198399 kernel: TCP: Hash tables configured (established 32768 bind 32768) Nov 8 00:02:30.198412 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Nov 8 00:02:30.198422 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Nov 8 00:02:30.198430 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Nov 8 00:02:30.198438 kernel: PCI: CLS 0 bytes, default 64 
Nov 8 00:02:30.198446 kernel: kvm [1]: HYP mode not available Nov 8 00:02:30.198453 kernel: Initialise system trusted keyrings Nov 8 00:02:30.198460 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Nov 8 00:02:30.198468 kernel: Key type asymmetric registered Nov 8 00:02:30.198476 kernel: Asymmetric key parser 'x509' registered Nov 8 00:02:30.198483 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Nov 8 00:02:30.198492 kernel: io scheduler mq-deadline registered Nov 8 00:02:30.198499 kernel: io scheduler kyber registered Nov 8 00:02:30.198507 kernel: io scheduler bfq registered Nov 8 00:02:30.198514 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Nov 8 00:02:30.198522 kernel: thunder_xcv, ver 1.0 Nov 8 00:02:30.198530 kernel: thunder_bgx, ver 1.0 Nov 8 00:02:30.198537 kernel: nicpf, ver 1.0 Nov 8 00:02:30.198545 kernel: nicvf, ver 1.0 Nov 8 00:02:30.198683 kernel: rtc-efi rtc-efi.0: registered as rtc0 Nov 8 00:02:30.198766 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-11-08T00:02:29 UTC (1762560149) Nov 8 00:02:30.198777 kernel: efifb: probing for efifb Nov 8 00:02:30.198787 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Nov 8 00:02:30.198795 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Nov 8 00:02:30.198802 kernel: efifb: scrolling: redraw Nov 8 00:02:30.198809 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Nov 8 00:02:30.198817 kernel: Console: switching to colour frame buffer device 128x48 Nov 8 00:02:30.198826 kernel: fb0: EFI VGA frame buffer device Nov 8 00:02:30.198835 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping .... Nov 8 00:02:30.198843 kernel: hid: raw HID events driver (C) Jiri Kosina Nov 8 00:02:30.198851 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 6 counters available Nov 8 00:02:30.198858 kernel: watchdog: Delayed init of the lockup detector failed: -19 Nov 8 00:02:30.198866 kernel: watchdog: Hard watchdog permanently disabled Nov 8 00:02:30.198874 kernel: NET: Registered PF_INET6 protocol family Nov 8 00:02:30.198881 kernel: Segment Routing with IPv6 Nov 8 00:02:30.198888 kernel: In-situ OAM (IOAM) with IPv6 Nov 8 00:02:30.198895 kernel: NET: Registered PF_PACKET protocol family Nov 8 00:02:30.198904 kernel: Key type dns_resolver registered Nov 8 00:02:30.198913 kernel: registered taskstats version 1 Nov 8 00:02:30.198920 kernel: Loading compiled-in X.509 certificates Nov 8 00:02:30.198928 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.113-flatcar: e35af6a719ba4c60f9d6788b11f5e5836ebf73b5' Nov 8 00:02:30.198935 kernel: Key type .fscrypt registered Nov 8 00:02:30.198942 kernel: Key type fscrypt-provisioning registered Nov 8 00:02:30.198951 kernel: ima: No TPM chip found, activating TPM-bypass! 
Nov 8 00:02:30.198958 kernel: ima: Allocated hash algorithm: sha1 Nov 8 00:02:30.198966 kernel: ima: No architecture policies found Nov 8 00:02:30.198975 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Nov 8 00:02:30.198982 kernel: clk: Disabling unused clocks Nov 8 00:02:30.198991 kernel: Freeing unused kernel memory: 39424K Nov 8 00:02:30.198999 kernel: Run /init as init process Nov 8 00:02:30.199007 kernel: with arguments: Nov 8 00:02:30.199014 kernel: /init Nov 8 00:02:30.199021 kernel: with environment: Nov 8 00:02:30.199029 kernel: HOME=/ Nov 8 00:02:30.199036 kernel: TERM=linux Nov 8 00:02:30.199046 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Nov 8 00:02:30.199057 systemd[1]: Detected virtualization microsoft. Nov 8 00:02:30.199066 systemd[1]: Detected architecture arm64. Nov 8 00:02:30.199074 systemd[1]: Running in initrd. Nov 8 00:02:30.199082 systemd[1]: No hostname configured, using default hostname. Nov 8 00:02:30.199089 systemd[1]: Hostname set to . Nov 8 00:02:30.199098 systemd[1]: Initializing machine ID from random generator. Nov 8 00:02:30.199109 systemd[1]: Queued start job for default target initrd.target. Nov 8 00:02:30.199117 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 8 00:02:30.199125 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 8 00:02:30.199134 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Nov 8 00:02:30.199142 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Nov 8 00:02:30.199151 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Nov 8 00:02:30.199160 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Nov 8 00:02:30.199169 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Nov 8 00:02:30.199179 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Nov 8 00:02:30.199187 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 8 00:02:30.199197 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 8 00:02:30.199205 systemd[1]: Reached target paths.target - Path Units. Nov 8 00:02:30.199212 systemd[1]: Reached target slices.target - Slice Units. Nov 8 00:02:30.199221 systemd[1]: Reached target swap.target - Swaps. Nov 8 00:02:30.199228 systemd[1]: Reached target timers.target - Timer Units. Nov 8 00:02:30.199237 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Nov 8 00:02:30.199247 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 8 00:02:30.199255 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Nov 8 00:02:30.199263 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Nov 8 00:02:30.199271 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. 
Nov 8 00:02:30.199280 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 8 00:02:30.199289 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Nov 8 00:02:30.199297 systemd[1]: Reached target sockets.target - Socket Units. Nov 8 00:02:30.199305 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Nov 8 00:02:30.199314 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Nov 8 00:02:30.199323 systemd[1]: Finished network-cleanup.service - Network Cleanup. Nov 8 00:02:30.199331 systemd[1]: Starting systemd-fsck-usr.service... Nov 8 00:02:30.199339 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 8 00:02:30.199347 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Nov 8 00:02:30.199374 systemd-journald[217]: Collecting audit messages is disabled. Nov 8 00:02:30.199397 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 8 00:02:30.201446 systemd-journald[217]: Journal started Nov 8 00:02:30.201470 systemd-journald[217]: Runtime Journal (/run/log/journal/18958d42bea74a32a0937ab7e7d6d57f) is 8.0M, max 78.5M, 70.5M free. Nov 8 00:02:30.206263 systemd-modules-load[218]: Inserted module 'overlay' Nov 8 00:02:30.227070 systemd[1]: Started systemd-journald.service - Journal Service. Nov 8 00:02:30.227093 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Nov 8 00:02:30.231902 kernel: Bridge firewalling registered Nov 8 00:02:30.231719 systemd-modules-load[218]: Inserted module 'br_netfilter' Nov 8 00:02:30.234895 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Nov 8 00:02:30.240822 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Nov 8 00:02:30.246766 systemd[1]: Finished systemd-fsck-usr.service. Nov 8 00:02:30.253910 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 8 00:02:30.263017 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 8 00:02:30.286658 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Nov 8 00:02:30.294573 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 8 00:02:30.312292 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Nov 8 00:02:30.326026 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 8 00:02:30.337705 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 8 00:02:30.349261 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 8 00:02:30.358667 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 8 00:02:30.364335 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 8 00:02:30.384676 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Nov 8 00:02:30.394042 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Nov 8 00:02:30.406604 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 8 00:02:30.421990 dracut-cmdline[251]: dracut-dracut-053 Nov 8 00:02:30.426718 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Nov 8 00:02:30.439236 dracut-cmdline[251]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=653fdcb8a67e255793a721f32d76976d3ed6223b235b7c618cf75e5edffbdb68 Nov 8 00:02:30.467985 systemd-resolved[254]: Positive Trust Anchors: Nov 8 00:02:30.468004 systemd-resolved[254]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 8 00:02:30.468037 systemd-resolved[254]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 8 00:02:30.470841 systemd-resolved[254]: Defaulting to hostname 'linux'. Nov 8 00:02:30.471988 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 8 00:02:30.479158 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Nov 8 00:02:30.582425 kernel: SCSI subsystem initialized Nov 8 00:02:30.590416 kernel: Loading iSCSI transport class v2.0-870. Nov 8 00:02:30.599430 kernel: iscsi: registered transport (tcp) Nov 8 00:02:30.616111 kernel: iscsi: registered transport (qla4xxx) Nov 8 00:02:30.616188 kernel: QLogic iSCSI HBA Driver Nov 8 00:02:30.655695 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Nov 8 00:02:30.670897 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Nov 8 00:02:30.699299 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Nov 8 00:02:30.699360 kernel: device-mapper: uevent: version 1.0.3 Nov 8 00:02:30.705424 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Nov 8 00:02:30.753431 kernel: raid6: neonx8 gen() 15791 MB/s Nov 8 00:02:30.772412 kernel: raid6: neonx4 gen() 15685 MB/s Nov 8 00:02:30.791410 kernel: raid6: neonx2 gen() 13224 MB/s Nov 8 00:02:30.811415 kernel: raid6: neonx1 gen() 10489 MB/s Nov 8 00:02:30.830410 kernel: raid6: int64x8 gen() 6977 MB/s Nov 8 00:02:30.849414 kernel: raid6: int64x4 gen() 7366 MB/s Nov 8 00:02:30.869410 kernel: raid6: int64x2 gen() 6146 MB/s Nov 8 00:02:30.891252 kernel: raid6: int64x1 gen() 5071 MB/s Nov 8 00:02:30.891263 kernel: raid6: using algorithm neonx8 gen() 15791 MB/s Nov 8 00:02:30.914765 kernel: raid6: .... xor() 12043 MB/s, rmw enabled Nov 8 00:02:30.914776 kernel: raid6: using neon recovery algorithm Nov 8 00:02:30.924630 kernel: xor: measuring software checksum speed Nov 8 00:02:30.924676 kernel: 8regs : 19788 MB/sec Nov 8 00:02:30.927584 kernel: 32regs : 19381 MB/sec Nov 8 00:02:30.931284 kernel: arm64_neon : 27052 MB/sec Nov 8 00:02:30.934722 kernel: xor: using function: arm64_neon (27052 MB/sec) Nov 8 00:02:30.984419 kernel: Btrfs loaded, zoned=no, fsverity=no Nov 8 00:02:30.995522 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. 
Nov 8 00:02:31.012570 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 8 00:02:31.031777 systemd-udevd[437]: Using default interface naming scheme 'v255'. Nov 8 00:02:31.035913 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 8 00:02:31.051537 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Nov 8 00:02:31.077254 dracut-pre-trigger[449]: rd.md=0: removing MD RAID activation Nov 8 00:02:31.107469 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Nov 8 00:02:31.123657 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Nov 8 00:02:31.165009 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 8 00:02:31.180611 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Nov 8 00:02:31.200313 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Nov 8 00:02:31.214642 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Nov 8 00:02:31.228146 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 8 00:02:31.239745 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 8 00:02:31.257504 kernel: hv_vmbus: Vmbus version:5.3 Nov 8 00:02:31.259711 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Nov 8 00:02:31.290452 kernel: hv_vmbus: registering driver hyperv_keyboard Nov 8 00:02:31.290515 kernel: pps_core: LinuxPPS API ver. 1 registered Nov 8 00:02:31.290526 kernel: hv_vmbus: registering driver hid_hyperv Nov 8 00:02:31.290536 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Nov 8 00:02:31.302429 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0 Nov 8 00:02:31.302486 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1 Nov 8 00:02:31.309128 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Nov 8 00:02:31.324892 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Nov 8 00:02:31.336896 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Nov 8 00:02:31.351227 kernel: hv_vmbus: registering driver hv_netvsc Nov 8 00:02:31.351249 kernel: PTP clock support registered Nov 8 00:02:31.351259 kernel: hv_vmbus: registering driver hv_storvsc Nov 8 00:02:31.351268 kernel: scsi host0: storvsc_host_t Nov 8 00:02:31.351435 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Nov 8 00:02:31.337074 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 8 00:02:31.388869 kernel: scsi host1: storvsc_host_t Nov 8 00:02:31.389063 kernel: hv_utils: Registering HyperV Utility Driver Nov 8 00:02:31.389075 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 5 Nov 8 00:02:31.389178 kernel: hv_vmbus: registering driver hv_utils Nov 8 00:02:31.389189 kernel: hv_utils: Heartbeat IC version 3.0 Nov 8 00:02:31.389198 kernel: hv_utils: Shutdown IC version 3.2 Nov 8 00:02:31.389207 kernel: hv_utils: TimeSync IC version 4.0 Nov 8 00:02:31.370486 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Nov 8 00:02:31.377459 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
Nov 8 00:02:31.392917 systemd-journald[217]: Time jumped backwards, rotating. Nov 8 00:02:31.377701 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 8 00:02:31.376102 systemd-resolved[254]: Clock change detected. Flushing caches. Nov 8 00:02:31.376566 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Nov 8 00:02:31.415854 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 8 00:02:31.432726 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Nov 8 00:02:31.432924 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Nov 8 00:02:31.433764 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Nov 8 00:02:31.435527 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 8 00:02:31.445143 kernel: hv_netvsc 000d3afb-8f51-000d-3afb-8f51000d3afb eth0: VF slot 1 added Nov 8 00:02:31.435646 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 8 00:02:31.465609 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 8 00:02:31.484015 kernel: hv_vmbus: registering driver hv_pci Nov 8 00:02:31.484043 kernel: hv_pci 58e7c41a-6541-4a0e-97fc-29edea841802: PCI VMBus probing: Using version 0x10004 Nov 8 00:02:31.493253 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Nov 8 00:02:31.493454 kernel: hv_pci 58e7c41a-6541-4a0e-97fc-29edea841802: PCI host bridge to bus 6541:00 Nov 8 00:02:31.493546 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Nov 8 00:02:31.500543 kernel: pci_bus 6541:00: root bus resource [mem 0xfc0000000-0xfc00fffff window] Nov 8 00:02:31.500707 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#154 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Nov 8 00:02:31.511006 kernel: pci_bus 6541:00: No busn resource found for root bus, will use [bus 00-ff] Nov 8 00:02:31.511183 kernel: sd 0:0:0:0: [sda] Write Protect is off Nov 8 00:02:31.522303 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Nov 8 00:02:31.522510 kernel: pci 6541:00:02.0: [15b3:1018] type 00 class 0x020000 Nov 8 00:02:31.522903 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Nov 8 00:02:31.530538 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 8 00:02:31.557435 kernel: pci 6541:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref] Nov 8 00:02:31.557472 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Nov 8 00:02:31.557481 kernel: pci 6541:00:02.0: enabling Extended Tags Nov 8 00:02:31.557495 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Nov 8 00:02:31.561046 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Nov 8 00:02:31.584792 kernel: pci 6541:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 6541:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link) Nov 8 00:02:31.584851 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#149 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Nov 8 00:02:31.595559 kernel: pci_bus 6541:00: busn_res: [bus 00-ff] end is updated to 00 Nov 8 00:02:31.601212 kernel: pci 6541:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref] Nov 8 00:02:31.619062 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Nov 8 00:02:31.649387 kernel: mlx5_core 6541:00:02.0: enabling device (0000 -> 0002) Nov 8 00:02:31.655785 kernel: mlx5_core 6541:00:02.0: firmware version: 16.30.5006 Nov 8 00:02:31.850422 kernel: hv_netvsc 000d3afb-8f51-000d-3afb-8f51000d3afb eth0: VF registering: eth1 Nov 8 00:02:31.850625 kernel: mlx5_core 6541:00:02.0 eth1: joined to eth0 Nov 8 00:02:31.856061 kernel: mlx5_core 6541:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic) Nov 8 00:02:31.866819 kernel: mlx5_core 6541:00:02.0 enP25921s1: renamed from eth1 Nov 8 00:02:32.102315 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM. Nov 8 00:02:32.116308 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by (udev-worker) (500) Nov 8 00:02:32.121277 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Nov 8 00:02:32.148022 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT. Nov 8 00:02:32.183764 kernel: BTRFS: device fsid 55a292e1-3824-4229-a9ae-952140d2698c devid 1 transid 37 /dev/sda3 scanned by (udev-worker) (493) Nov 8 00:02:32.197218 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A. Nov 8 00:02:32.202790 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A. Nov 8 00:02:32.229930 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Nov 8 00:02:32.254773 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Nov 8 00:02:32.262765 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Nov 8 00:02:33.275004 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Nov 8 00:02:33.275059 disk-uuid[607]: The operation has completed successfully. Nov 8 00:02:33.349467 systemd[1]: disk-uuid.service: Deactivated successfully. Nov 8 00:02:33.351636 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Nov 8 00:02:33.369880 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Nov 8 00:02:33.380511 sh[720]: Success Nov 8 00:02:33.411152 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Nov 8 00:02:33.795971 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Nov 8 00:02:33.817877 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Nov 8 00:02:33.825388 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Nov 8 00:02:33.851486 kernel: BTRFS info (device dm-0): first mount of filesystem 55a292e1-3824-4229-a9ae-952140d2698c Nov 8 00:02:33.851546 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Nov 8 00:02:33.857351 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Nov 8 00:02:33.861536 kernel: BTRFS info (device dm-0): disabling log replay at mount time Nov 8 00:02:33.864895 kernel: BTRFS info (device dm-0): using free space tree Nov 8 00:02:34.306547 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Nov 8 00:02:34.311207 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Nov 8 00:02:34.327002 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Nov 8 00:02:34.333300 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
Nov 8 00:02:34.365846 kernel: BTRFS info (device sda6): first mount of filesystem 7afafbf9-edbd-49b5-ac90-6fc331f667e9 Nov 8 00:02:34.365896 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Nov 8 00:02:34.369375 kernel: BTRFS info (device sda6): using free space tree Nov 8 00:02:34.435886 kernel: BTRFS info (device sda6): auto enabling async discard Nov 8 00:02:34.438062 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 8 00:02:34.460783 kernel: BTRFS info (device sda6): last unmount of filesystem 7afafbf9-edbd-49b5-ac90-6fc331f667e9 Nov 8 00:02:34.466952 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 8 00:02:34.480424 systemd[1]: mnt-oem.mount: Deactivated successfully. Nov 8 00:02:34.484533 systemd[1]: Finished ignition-setup.service - Ignition (setup). Nov 8 00:02:34.494962 systemd-networkd[901]: lo: Link UP Nov 8 00:02:34.494966 systemd-networkd[901]: lo: Gained carrier Nov 8 00:02:34.498737 systemd-networkd[901]: Enumeration completed Nov 8 00:02:34.499686 systemd-networkd[901]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 8 00:02:34.499690 systemd-networkd[901]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 8 00:02:34.514765 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Nov 8 00:02:34.524935 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 8 00:02:34.530097 systemd[1]: Reached target network.target - Network. Nov 8 00:02:34.588762 kernel: mlx5_core 6541:00:02.0 enP25921s1: Link up Nov 8 00:02:34.623764 kernel: hv_netvsc 000d3afb-8f51-000d-3afb-8f51000d3afb eth0: Data path switched to VF: enP25921s1 Nov 8 00:02:34.624276 systemd-networkd[901]: enP25921s1: Link UP Nov 8 00:02:34.624359 systemd-networkd[901]: eth0: Link UP Nov 8 00:02:34.624474 systemd-networkd[901]: eth0: Gained carrier Nov 8 00:02:34.624483 systemd-networkd[901]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 8 00:02:34.642743 systemd-networkd[901]: enP25921s1: Gained carrier Nov 8 00:02:34.652787 systemd-networkd[901]: eth0: DHCPv4 address 10.200.20.44/24, gateway 10.200.20.1 acquired from 168.63.129.16 Nov 8 00:02:35.504227 ignition[905]: Ignition 2.19.0 Nov 8 00:02:35.504243 ignition[905]: Stage: fetch-offline Nov 8 00:02:35.507793 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Nov 8 00:02:35.504281 ignition[905]: no configs at "/usr/lib/ignition/base.d" Nov 8 00:02:35.504290 ignition[905]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Nov 8 00:02:35.504405 ignition[905]: parsed url from cmdline: "" Nov 8 00:02:35.504408 ignition[905]: no config URL provided Nov 8 00:02:35.504413 ignition[905]: reading system config file "/usr/lib/ignition/user.ign" Nov 8 00:02:35.529096 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Nov 8 00:02:35.504421 ignition[905]: no config at "/usr/lib/ignition/user.ign" Nov 8 00:02:35.504427 ignition[905]: failed to fetch config: resource requires networking Nov 8 00:02:35.504643 ignition[905]: Ignition finished successfully Nov 8 00:02:35.546692 ignition[913]: Ignition 2.19.0 Nov 8 00:02:35.546699 ignition[913]: Stage: fetch Nov 8 00:02:35.546938 ignition[913]: no configs at "/usr/lib/ignition/base.d" Nov 8 00:02:35.546948 ignition[913]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Nov 8 00:02:35.547057 ignition[913]: parsed url from cmdline: "" Nov 8 00:02:35.547065 ignition[913]: no config URL provided Nov 8 00:02:35.547070 ignition[913]: reading system config file "/usr/lib/ignition/user.ign" Nov 8 00:02:35.547077 ignition[913]: no config at "/usr/lib/ignition/user.ign" Nov 8 00:02:35.547102 ignition[913]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Nov 8 00:02:35.664132 ignition[913]: GET result: OK Nov 8 00:02:35.664194 ignition[913]: config has been read from IMDS userdata Nov 8 00:02:35.664236 ignition[913]: parsing config with SHA512: 778753def7e635110c2853d57d86d743d488b9960bbc63423866d2b042e05316006b2dfb153f4ed0211e419daedd32157a2f62aaab67df9c62da2ac8ff165b06 Nov 8 00:02:35.668301 unknown[913]: fetched base config from "system" Nov 8 00:02:35.668703 ignition[913]: fetch: fetch complete Nov 8 00:02:35.668308 unknown[913]: fetched base config from "system" Nov 8 00:02:35.668708 ignition[913]: fetch: fetch passed Nov 8 00:02:35.668312 unknown[913]: fetched user config from "azure" Nov 8 00:02:35.668769 ignition[913]: Ignition finished successfully Nov 8 00:02:35.670790 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Nov 8 00:02:35.684984 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Nov 8 00:02:35.709875 ignition[919]: Ignition 2.19.0 Nov 8 00:02:35.709884 ignition[919]: Stage: kargs Nov 8 00:02:35.714027 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Nov 8 00:02:35.710053 ignition[919]: no configs at "/usr/lib/ignition/base.d" Nov 8 00:02:35.710061 ignition[919]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Nov 8 00:02:35.710986 ignition[919]: kargs: kargs passed Nov 8 00:02:35.711035 ignition[919]: Ignition finished successfully Nov 8 00:02:35.733923 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Nov 8 00:02:35.749288 ignition[925]: Ignition 2.19.0 Nov 8 00:02:35.751933 systemd[1]: Finished ignition-disks.service - Ignition (disks). Nov 8 00:02:35.749302 ignition[925]: Stage: disks Nov 8 00:02:35.758987 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Nov 8 00:02:35.749556 ignition[925]: no configs at "/usr/lib/ignition/base.d" Nov 8 00:02:35.767190 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Nov 8 00:02:35.749566 ignition[925]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Nov 8 00:02:35.774515 systemd[1]: Reached target local-fs.target - Local File Systems. Nov 8 00:02:35.750498 ignition[925]: disks: disks passed Nov 8 00:02:35.783134 systemd[1]: Reached target sysinit.target - System Initialization. Nov 8 00:02:35.750542 ignition[925]: Ignition finished successfully Nov 8 00:02:35.790845 systemd[1]: Reached target basic.target - Basic System. Nov 8 00:02:35.811003 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... 
Nov 8 00:02:35.877727 systemd-fsck[934]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks Nov 8 00:02:35.883563 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Nov 8 00:02:35.897986 systemd[1]: Mounting sysroot.mount - /sysroot... Nov 8 00:02:35.951767 kernel: EXT4-fs (sda9): mounted filesystem ba97f76e-2e9b-450a-8320-3c4b94a19632 r/w with ordered data mode. Quota mode: none. Nov 8 00:02:35.953010 systemd[1]: Mounted sysroot.mount - /sysroot. Nov 8 00:02:35.956769 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Nov 8 00:02:36.002840 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Nov 8 00:02:36.021828 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (945) Nov 8 00:02:36.024900 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Nov 8 00:02:36.039353 kernel: BTRFS info (device sda6): first mount of filesystem 7afafbf9-edbd-49b5-ac90-6fc331f667e9 Nov 8 00:02:36.039373 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Nov 8 00:02:36.038695 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Nov 8 00:02:36.053373 kernel: BTRFS info (device sda6): using free space tree Nov 8 00:02:36.053386 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Nov 8 00:02:36.053422 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Nov 8 00:02:36.059718 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Nov 8 00:02:36.081989 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Nov 8 00:02:36.095375 kernel: BTRFS info (device sda6): auto enabling async discard Nov 8 00:02:36.096968 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Nov 8 00:02:36.406905 systemd-networkd[901]: eth0: Gained IPv6LL Nov 8 00:02:36.752368 coreos-metadata[947]: Nov 08 00:02:36.752 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Nov 8 00:02:36.760807 coreos-metadata[947]: Nov 08 00:02:36.760 INFO Fetch successful Nov 8 00:02:36.765141 coreos-metadata[947]: Nov 08 00:02:36.764 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Nov 8 00:02:36.773868 coreos-metadata[947]: Nov 08 00:02:36.773 INFO Fetch successful Nov 8 00:02:36.788045 coreos-metadata[947]: Nov 08 00:02:36.788 INFO wrote hostname ci-4081.3.6-n-32f19bad4d to /sysroot/etc/hostname Nov 8 00:02:36.795449 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Nov 8 00:02:37.012448 initrd-setup-root[975]: cut: /sysroot/etc/passwd: No such file or directory Nov 8 00:02:37.064026 initrd-setup-root[982]: cut: /sysroot/etc/group: No such file or directory Nov 8 00:02:37.083903 initrd-setup-root[989]: cut: /sysroot/etc/shadow: No such file or directory Nov 8 00:02:37.103564 initrd-setup-root[996]: cut: /sysroot/etc/gshadow: No such file or directory Nov 8 00:02:38.215959 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Nov 8 00:02:38.227984 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Nov 8 00:02:38.238422 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Nov 8 00:02:38.251004 systemd[1]: sysroot-oem.mount: Deactivated successfully. 
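flatcar-metadata-hostname.service above fetches the instance name from IMDS and writes it to /sysroot/etc/hostname. A rough equivalent, with the endpoint and target path taken from the log; the real agent is the coreos-metadata binary, so this is only a sketch:

    # Fetch the VM name from IMDS and persist it as the hostname.
    import urllib.request

    NAME_URL = ("http://169.254.169.254/metadata/instance/compute/name"
                "?api-version=2017-08-01&format=text")

    req = urllib.request.Request(NAME_URL, headers={"Metadata": "true"})
    with urllib.request.urlopen(req, timeout=5) as resp:
        hostname = resp.read().decode().strip()

    with open("/sysroot/etc/hostname", "w") as f:  # path taken from the log
        f.write(hostname + "\n")
    print("wrote hostname", hostname)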
Nov 8 00:02:38.260564 kernel: BTRFS info (device sda6): last unmount of filesystem 7afafbf9-edbd-49b5-ac90-6fc331f667e9 Nov 8 00:02:38.277867 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Nov 8 00:02:38.289777 ignition[1066]: INFO : Ignition 2.19.0 Nov 8 00:02:38.289777 ignition[1066]: INFO : Stage: mount Nov 8 00:02:38.289777 ignition[1066]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 8 00:02:38.289777 ignition[1066]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Nov 8 00:02:38.310721 ignition[1066]: INFO : mount: mount passed Nov 8 00:02:38.310721 ignition[1066]: INFO : Ignition finished successfully Nov 8 00:02:38.296908 systemd[1]: Finished ignition-mount.service - Ignition (mount). Nov 8 00:02:38.320854 systemd[1]: Starting ignition-files.service - Ignition (files)... Nov 8 00:02:38.333696 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Nov 8 00:02:38.361753 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by mount (1075) Nov 8 00:02:38.361798 kernel: BTRFS info (device sda6): first mount of filesystem 7afafbf9-edbd-49b5-ac90-6fc331f667e9 Nov 8 00:02:38.366486 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Nov 8 00:02:38.369806 kernel: BTRFS info (device sda6): using free space tree Nov 8 00:02:38.376782 kernel: BTRFS info (device sda6): auto enabling async discard Nov 8 00:02:38.378008 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Nov 8 00:02:38.403410 ignition[1092]: INFO : Ignition 2.19.0 Nov 8 00:02:38.403410 ignition[1092]: INFO : Stage: files Nov 8 00:02:38.403410 ignition[1092]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 8 00:02:38.403410 ignition[1092]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Nov 8 00:02:38.403410 ignition[1092]: DEBUG : files: compiled without relabeling support, skipping Nov 8 00:02:38.423729 ignition[1092]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Nov 8 00:02:38.429701 ignition[1092]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Nov 8 00:02:38.527365 ignition[1092]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Nov 8 00:02:38.533433 ignition[1092]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Nov 8 00:02:38.533433 ignition[1092]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Nov 8 00:02:38.527734 unknown[1092]: wrote ssh authorized keys file for user: core Nov 8 00:02:38.562923 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Nov 8 00:02:38.571439 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1 Nov 8 00:02:38.600859 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Nov 8 00:02:38.684331 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Nov 8 00:02:38.692669 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Nov 8 00:02:38.692669 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Nov 8 00:02:38.692669 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(5): 
[started] writing file "/sysroot/home/core/nginx.yaml" Nov 8 00:02:38.692669 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Nov 8 00:02:38.692669 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 8 00:02:38.692669 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 8 00:02:38.692669 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 8 00:02:38.692669 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 8 00:02:38.692669 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Nov 8 00:02:38.692669 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Nov 8 00:02:38.692669 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Nov 8 00:02:38.692669 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Nov 8 00:02:38.692669 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Nov 8 00:02:38.692669 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-arm64.raw: attempt #1 Nov 8 00:02:39.251861 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Nov 8 00:02:39.513538 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Nov 8 00:02:39.513538 ignition[1092]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Nov 8 00:02:39.559264 ignition[1092]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 8 00:02:39.568907 ignition[1092]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 8 00:02:39.568907 ignition[1092]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Nov 8 00:02:39.568907 ignition[1092]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Nov 8 00:02:39.568907 ignition[1092]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Nov 8 00:02:39.568907 ignition[1092]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Nov 8 00:02:39.568907 ignition[1092]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Nov 8 00:02:39.568907 ignition[1092]: INFO : files: files passed Nov 8 00:02:39.568907 ignition[1092]: INFO : Ignition finished successfully Nov 8 00:02:39.569528 systemd[1]: Finished ignition-files.service - Ignition 
(files). Nov 8 00:02:39.594509 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Nov 8 00:02:39.605951 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Nov 8 00:02:39.653536 initrd-setup-root-after-ignition[1121]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 8 00:02:39.653536 initrd-setup-root-after-ignition[1121]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Nov 8 00:02:39.625165 systemd[1]: ignition-quench.service: Deactivated successfully. Nov 8 00:02:39.681934 initrd-setup-root-after-ignition[1125]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 8 00:02:39.625302 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Nov 8 00:02:39.654853 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 8 00:02:39.666470 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Nov 8 00:02:39.688533 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Nov 8 00:02:39.721827 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Nov 8 00:02:39.721998 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Nov 8 00:02:39.731619 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Nov 8 00:02:39.741133 systemd[1]: Reached target initrd.target - Initrd Default Target. Nov 8 00:02:39.749612 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Nov 8 00:02:39.761994 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Nov 8 00:02:39.774811 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 8 00:02:39.788115 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Nov 8 00:02:39.804164 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Nov 8 00:02:39.810044 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 8 00:02:39.820124 systemd[1]: Stopped target timers.target - Timer Units. Nov 8 00:02:39.829262 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Nov 8 00:02:39.829448 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 8 00:02:39.842031 systemd[1]: Stopped target initrd.target - Initrd Default Target. Nov 8 00:02:39.851789 systemd[1]: Stopped target basic.target - Basic System. Nov 8 00:02:39.859828 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Nov 8 00:02:39.868049 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Nov 8 00:02:39.877582 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Nov 8 00:02:39.887309 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Nov 8 00:02:39.896551 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Nov 8 00:02:39.905865 systemd[1]: Stopped target sysinit.target - System Initialization. Nov 8 00:02:39.915427 systemd[1]: Stopped target local-fs.target - Local File Systems. Nov 8 00:02:39.923868 systemd[1]: Stopped target swap.target - Swaps. Nov 8 00:02:39.931252 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Nov 8 00:02:39.931430 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. 
Nov 8 00:02:39.942789 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Nov 8 00:02:39.951558 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 8 00:02:39.961083 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Nov 8 00:02:39.961192 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 8 00:02:39.971488 systemd[1]: dracut-initqueue.service: Deactivated successfully. Nov 8 00:02:39.971656 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Nov 8 00:02:39.985316 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Nov 8 00:02:39.985484 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 8 00:02:39.995007 systemd[1]: ignition-files.service: Deactivated successfully. Nov 8 00:02:39.995163 systemd[1]: Stopped ignition-files.service - Ignition (files). Nov 8 00:02:40.003381 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Nov 8 00:02:40.003527 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Nov 8 00:02:40.028846 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Nov 8 00:02:40.045103 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Nov 8 00:02:40.057038 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Nov 8 00:02:40.077282 ignition[1145]: INFO : Ignition 2.19.0 Nov 8 00:02:40.077282 ignition[1145]: INFO : Stage: umount Nov 8 00:02:40.077282 ignition[1145]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 8 00:02:40.077282 ignition[1145]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Nov 8 00:02:40.077282 ignition[1145]: INFO : umount: umount passed Nov 8 00:02:40.077282 ignition[1145]: INFO : Ignition finished successfully Nov 8 00:02:40.057249 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Nov 8 00:02:40.066779 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Nov 8 00:02:40.066945 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Nov 8 00:02:40.078249 systemd[1]: ignition-mount.service: Deactivated successfully. Nov 8 00:02:40.078355 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Nov 8 00:02:40.094736 systemd[1]: initrd-cleanup.service: Deactivated successfully. Nov 8 00:02:40.095085 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Nov 8 00:02:40.106353 systemd[1]: ignition-disks.service: Deactivated successfully. Nov 8 00:02:40.106442 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Nov 8 00:02:40.114203 systemd[1]: ignition-kargs.service: Deactivated successfully. Nov 8 00:02:40.114254 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Nov 8 00:02:40.122465 systemd[1]: ignition-fetch.service: Deactivated successfully. Nov 8 00:02:40.122517 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Nov 8 00:02:40.131693 systemd[1]: Stopped target network.target - Network. Nov 8 00:02:40.139765 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Nov 8 00:02:40.139856 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Nov 8 00:02:40.147156 systemd[1]: Stopped target paths.target - Path Units. Nov 8 00:02:40.150951 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. 
Nov 8 00:02:40.157780 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 8 00:02:40.169148 systemd[1]: Stopped target slices.target - Slice Units. Nov 8 00:02:40.177021 systemd[1]: Stopped target sockets.target - Socket Units. Nov 8 00:02:40.185841 systemd[1]: iscsid.socket: Deactivated successfully. Nov 8 00:02:40.185905 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Nov 8 00:02:40.194566 systemd[1]: iscsiuio.socket: Deactivated successfully. Nov 8 00:02:40.194619 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 8 00:02:40.202725 systemd[1]: ignition-setup.service: Deactivated successfully. Nov 8 00:02:40.202792 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Nov 8 00:02:40.212449 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Nov 8 00:02:40.212502 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Nov 8 00:02:40.217472 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Nov 8 00:02:40.226927 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Nov 8 00:02:40.236468 systemd[1]: sysroot-boot.mount: Deactivated successfully. Nov 8 00:02:40.242788 systemd-networkd[901]: eth0: DHCPv6 lease lost Nov 8 00:02:40.243496 systemd[1]: systemd-resolved.service: Deactivated successfully. Nov 8 00:02:40.243599 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Nov 8 00:02:40.250227 systemd[1]: systemd-networkd.service: Deactivated successfully. Nov 8 00:02:40.250370 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Nov 8 00:02:40.265146 systemd[1]: systemd-networkd.socket: Deactivated successfully. Nov 8 00:02:40.421985 kernel: hv_netvsc 000d3afb-8f51-000d-3afb-8f51000d3afb eth0: Data path switched from VF: enP25921s1 Nov 8 00:02:40.265209 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Nov 8 00:02:40.288947 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Nov 8 00:02:40.296044 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Nov 8 00:02:40.296111 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 8 00:02:40.306081 systemd[1]: systemd-sysctl.service: Deactivated successfully. Nov 8 00:02:40.306132 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Nov 8 00:02:40.314365 systemd[1]: systemd-modules-load.service: Deactivated successfully. Nov 8 00:02:40.314411 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Nov 8 00:02:40.322874 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Nov 8 00:02:40.322921 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 8 00:02:40.332418 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 8 00:02:40.382656 systemd[1]: systemd-udevd.service: Deactivated successfully. Nov 8 00:02:40.382843 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 8 00:02:40.393091 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Nov 8 00:02:40.393160 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Nov 8 00:02:40.408823 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Nov 8 00:02:40.408857 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. 
Nov 8 00:02:40.418112 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Nov 8 00:02:40.418167 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Nov 8 00:02:40.431148 systemd[1]: dracut-cmdline.service: Deactivated successfully. Nov 8 00:02:40.431232 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Nov 8 00:02:40.444493 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Nov 8 00:02:40.444556 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 8 00:02:40.473925 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Nov 8 00:02:40.482896 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Nov 8 00:02:40.482970 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 8 00:02:40.497192 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Nov 8 00:02:40.497251 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 8 00:02:40.512662 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Nov 8 00:02:40.512722 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Nov 8 00:02:40.523358 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 8 00:02:40.523414 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 8 00:02:40.528895 systemd[1]: sysroot-boot.service: Deactivated successfully. Nov 8 00:02:40.529006 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Nov 8 00:02:40.537270 systemd[1]: network-cleanup.service: Deactivated successfully. Nov 8 00:02:40.537352 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Nov 8 00:02:40.542080 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Nov 8 00:02:40.542152 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Nov 8 00:02:40.552027 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Nov 8 00:02:40.560580 systemd[1]: initrd-setup-root.service: Deactivated successfully. Nov 8 00:02:40.560671 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Nov 8 00:02:40.583242 systemd[1]: Starting initrd-switch-root.service - Switch Root... Nov 8 00:02:40.804728 systemd[1]: Switching root. Nov 8 00:02:40.902232 systemd-journald[217]: Journal stopped Nov 8 00:02:48.532209 systemd-journald[217]: Received SIGTERM from PID 1 (systemd). Nov 8 00:02:48.532237 kernel: SELinux: policy capability network_peer_controls=1 Nov 8 00:02:48.532249 kernel: SELinux: policy capability open_perms=1 Nov 8 00:02:48.532262 kernel: SELinux: policy capability extended_socket_class=1 Nov 8 00:02:48.532270 kernel: SELinux: policy capability always_check_network=0 Nov 8 00:02:48.532278 kernel: SELinux: policy capability cgroup_seclabel=1 Nov 8 00:02:48.532287 kernel: SELinux: policy capability nnp_nosuid_transition=1 Nov 8 00:02:48.532296 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Nov 8 00:02:48.532304 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Nov 8 00:02:48.532312 kernel: audit: type=1403 audit(1762560162.356:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Nov 8 00:02:48.532324 systemd[1]: Successfully loaded SELinux policy in 154.297ms. Nov 8 00:02:48.532334 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.245ms. 
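After switch-root, systemd loads the SELinux policy and the kernel logs its capability flags. Assuming selinuxfs is mounted at its standard location, /sys/fs/selinux, the resulting mode and the same capability flags can be read from userspace without libselinux; a sketch:

    # Inspect SELinux state via selinuxfs, matching the kernel lines above.
    from pathlib import Path

    selinuxfs = Path("/sys/fs/selinux")

    if (selinuxfs / "enforce").exists():
        enforcing = (selinuxfs / "enforce").read_text().strip() == "1"
        # Each policy capability is a file holding "0" or "1", e.g.
        # policy_capabilities/network_peer_controls -> "1".
        caps = {p.name: p.read_text().strip()
                for p in (selinuxfs / "policy_capabilities").iterdir()}
        print("enforcing:", enforcing)
        print("capabilities:", caps)
    else:
        print("SELinux not enabled")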
Nov 8 00:02:48.532344 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Nov 8 00:02:48.532353 systemd[1]: Detected virtualization microsoft. Nov 8 00:02:48.532363 systemd[1]: Detected architecture arm64. Nov 8 00:02:48.532373 systemd[1]: Detected first boot. Nov 8 00:02:48.532383 systemd[1]: Hostname set to <ci-4081.3.6-n-32f19bad4d>. Nov 8 00:02:48.532392 systemd[1]: Initializing machine ID from random generator. Nov 8 00:02:48.532401 zram_generator::config[1186]: No configuration found. Nov 8 00:02:48.532411 systemd[1]: Populated /etc with preset unit settings. Nov 8 00:02:48.532420 systemd[1]: initrd-switch-root.service: Deactivated successfully. Nov 8 00:02:48.532431 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Nov 8 00:02:48.532440 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Nov 8 00:02:48.532450 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Nov 8 00:02:48.532460 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Nov 8 00:02:48.532470 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Nov 8 00:02:48.532479 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Nov 8 00:02:48.532488 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Nov 8 00:02:48.532500 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Nov 8 00:02:48.532509 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Nov 8 00:02:48.532518 systemd[1]: Created slice user.slice - User and Session Slice. Nov 8 00:02:48.532528 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 8 00:02:48.532538 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 8 00:02:48.532547 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Nov 8 00:02:48.532556 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Nov 8 00:02:48.532566 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Nov 8 00:02:48.532575 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Nov 8 00:02:48.532586 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Nov 8 00:02:48.532596 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 8 00:02:48.532605 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Nov 8 00:02:48.532617 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Nov 8 00:02:48.532626 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Nov 8 00:02:48.532636 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Nov 8 00:02:48.532646 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 8 00:02:48.532657 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 8 00:02:48.532668 systemd[1]: Reached target slices.target - Slice Units.
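"Initializing machine ID from random generator" above is the first-boot creation of /etc/machine-id: 128 random bits rendered as 32 lowercase hexadecimal digits. A sketch of the same format (illustrative only, not systemd's implementation):

    # Generate an ID in /etc/machine-id format: 128 random bits as 32 hex chars.
    import uuid

    machine_id = uuid.uuid4().hex  # random UUID, formatted without dashes
    assert len(machine_id) == 32
    print(machine_id)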
Nov 8 00:02:48.532677 systemd[1]: Reached target swap.target - Swaps. Nov 8 00:02:48.532686 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Nov 8 00:02:48.532696 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Nov 8 00:02:48.532705 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Nov 8 00:02:48.532715 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 8 00:02:48.532726 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Nov 8 00:02:48.532736 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Nov 8 00:02:48.532746 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Nov 8 00:02:48.532768 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Nov 8 00:02:48.532779 systemd[1]: Mounting media.mount - External Media Directory... Nov 8 00:02:48.532789 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Nov 8 00:02:48.532800 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Nov 8 00:02:48.532810 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Nov 8 00:02:48.532820 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Nov 8 00:02:48.532831 systemd[1]: Reached target machines.target - Containers. Nov 8 00:02:48.532840 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Nov 8 00:02:48.532850 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 8 00:02:48.532860 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Nov 8 00:02:48.532870 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Nov 8 00:02:48.532881 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 8 00:02:48.532892 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 8 00:02:48.532901 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 8 00:02:48.532911 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Nov 8 00:02:48.532920 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 8 00:02:48.532931 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Nov 8 00:02:48.532941 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Nov 8 00:02:48.532951 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Nov 8 00:02:48.532960 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Nov 8 00:02:48.532971 systemd[1]: Stopped systemd-fsck-usr.service. Nov 8 00:02:48.532981 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 8 00:02:48.532990 kernel: fuse: init (API version 7.39) Nov 8 00:02:48.532999 kernel: loop: module loaded Nov 8 00:02:48.533008 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Nov 8 00:02:48.533018 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Nov 8 00:02:48.533028 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... 
Nov 8 00:02:48.533058 systemd-journald[1275]: Collecting audit messages is disabled. Nov 8 00:02:48.533080 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Nov 8 00:02:48.533090 systemd-journald[1275]: Journal started Nov 8 00:02:48.533111 systemd-journald[1275]: Runtime Journal (/run/log/journal/aac3ab35ad6a41379e6afcb2a6330121) is 8.0M, max 78.5M, 70.5M free. Nov 8 00:02:47.548542 systemd[1]: Queued start job for default target multi-user.target. Nov 8 00:02:47.721954 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Nov 8 00:02:47.722307 systemd[1]: systemd-journald.service: Deactivated successfully. Nov 8 00:02:47.722610 systemd[1]: systemd-journald.service: Consumed 2.387s CPU time. Nov 8 00:02:48.550877 systemd[1]: verity-setup.service: Deactivated successfully. Nov 8 00:02:48.550944 systemd[1]: Stopped verity-setup.service. Nov 8 00:02:48.564018 systemd[1]: Started systemd-journald.service - Journal Service. Nov 8 00:02:48.564788 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Nov 8 00:02:48.569549 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Nov 8 00:02:48.574330 systemd[1]: Mounted media.mount - External Media Directory. Nov 8 00:02:48.578601 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Nov 8 00:02:48.584104 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Nov 8 00:02:48.589145 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Nov 8 00:02:48.593860 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Nov 8 00:02:48.599444 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Nov 8 00:02:48.610770 kernel: ACPI: bus type drm_connector registered Nov 8 00:02:48.609350 systemd[1]: modprobe@configfs.service: Deactivated successfully. Nov 8 00:02:48.609486 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Nov 8 00:02:48.615142 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 8 00:02:48.616939 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 8 00:02:48.622357 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 8 00:02:48.622501 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 8 00:02:48.627256 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 8 00:02:48.627377 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 8 00:02:48.633262 systemd[1]: modprobe@fuse.service: Deactivated successfully. Nov 8 00:02:48.633389 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Nov 8 00:02:48.638380 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 8 00:02:48.638499 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 8 00:02:48.643595 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Nov 8 00:02:48.650351 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Nov 8 00:02:48.656277 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 8 00:02:48.661726 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 8 00:02:48.675561 systemd[1]: Reached target network-pre.target - Preparation for Network. Nov 8 00:02:48.684833 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... 
Nov 8 00:02:48.692920 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Nov 8 00:02:48.698154 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Nov 8 00:02:48.698264 systemd[1]: Reached target local-fs.target - Local File Systems. Nov 8 00:02:48.704134 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Nov 8 00:02:48.710837 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Nov 8 00:02:48.716949 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Nov 8 00:02:48.721559 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 8 00:02:48.736990 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Nov 8 00:02:48.743194 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Nov 8 00:02:48.748403 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 8 00:02:48.749680 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Nov 8 00:02:48.754654 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 8 00:02:48.756076 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 8 00:02:48.763966 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Nov 8 00:02:48.771962 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Nov 8 00:02:48.780884 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Nov 8 00:02:48.791460 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Nov 8 00:02:48.796726 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Nov 8 00:02:48.802460 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Nov 8 00:02:48.814435 udevadm[1323]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Nov 8 00:02:48.824408 systemd-journald[1275]: Time spent on flushing to /var/log/journal/aac3ab35ad6a41379e6afcb2a6330121 is 41.701ms for 899 entries. Nov 8 00:02:48.824408 systemd-journald[1275]: System Journal (/var/log/journal/aac3ab35ad6a41379e6afcb2a6330121) is 11.8M, max 2.6G, 2.6G free. Nov 8 00:02:48.920352 systemd-journald[1275]: Received client request to flush runtime journal. Nov 8 00:02:48.920398 kernel: loop0: detected capacity change from 0 to 211168 Nov 8 00:02:48.920432 systemd-journald[1275]: /var/log/journal/aac3ab35ad6a41379e6afcb2a6330121/system.journal: Realtime clock jumped backwards relative to last journal entry, rotating. Nov 8 00:02:48.920457 systemd-journald[1275]: Rotating system journal. Nov 8 00:02:48.920479 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Nov 8 00:02:48.824132 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Nov 8 00:02:48.835212 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Nov 8 00:02:48.851986 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... 
Nov 8 00:02:48.922233 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Nov 8 00:02:48.942520 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 8 00:02:48.957255 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Nov 8 00:02:48.958022 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Nov 8 00:02:48.989782 kernel: loop1: detected capacity change from 0 to 114432 Nov 8 00:02:49.097057 systemd-tmpfiles[1322]: ACLs are not supported, ignoring. Nov 8 00:02:49.097076 systemd-tmpfiles[1322]: ACLs are not supported, ignoring. Nov 8 00:02:49.101990 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 8 00:02:49.112948 systemd[1]: Starting systemd-sysusers.service - Create System Users... Nov 8 00:02:49.507938 kernel: loop2: detected capacity change from 0 to 31320 Nov 8 00:02:49.687182 systemd[1]: Finished systemd-sysusers.service - Create System Users. Nov 8 00:02:49.697680 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 8 00:02:49.715630 systemd-tmpfiles[1345]: ACLs are not supported, ignoring. Nov 8 00:02:49.715649 systemd-tmpfiles[1345]: ACLs are not supported, ignoring. Nov 8 00:02:49.719283 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 8 00:02:49.815690 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Nov 8 00:02:49.825975 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 8 00:02:49.852939 systemd-udevd[1349]: Using default interface naming scheme 'v255'. Nov 8 00:02:50.140770 kernel: loop3: detected capacity change from 0 to 114328 Nov 8 00:02:50.622778 kernel: loop4: detected capacity change from 0 to 211168 Nov 8 00:02:50.638778 kernel: loop5: detected capacity change from 0 to 114432 Nov 8 00:02:50.650930 kernel: loop6: detected capacity change from 0 to 31320 Nov 8 00:02:50.661795 kernel: loop7: detected capacity change from 0 to 114328 Nov 8 00:02:50.668796 (sd-merge)[1352]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'. Nov 8 00:02:50.669251 (sd-merge)[1352]: Merged extensions into '/usr'. Nov 8 00:02:50.673265 systemd[1]: Reloading requested from client PID 1320 ('systemd-sysext') (unit systemd-sysext.service)... Nov 8 00:02:50.673439 systemd[1]: Reloading... Nov 8 00:02:50.757837 zram_generator::config[1383]: No configuration found. Nov 8 00:02:50.885595 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 8 00:02:50.942858 systemd[1]: Reloading finished in 268 ms. Nov 8 00:02:50.973863 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Nov 8 00:02:50.980866 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 8 00:02:50.998858 systemd[1]: Starting ensure-sysext.service... Nov 8 00:02:51.005944 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 8 00:02:51.025640 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 8 00:02:51.051741 systemd[1]: Reloading requested from client PID 1443 ('systemctl') (unit ensure-sysext.service)... 
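The (sd-merge) lines show systemd-sysext overlaying the containerd-flatcar, docker-flatcar, kubernetes and oem-azure extension images onto /usr, after which units are reloaded. Assuming the standard sysext search paths, the candidate images, including the kubernetes.raw symlink the Ignition files stage created earlier, can be listed like this (a sketch, not sysext's own logic):

    # List extension images in the directories systemd-sysext searches.
    from pathlib import Path

    SEARCH_PATHS = ("/etc/extensions", "/run/extensions", "/var/lib/extensions")

    for d in map(Path, SEARCH_PATHS):
        if not d.is_dir():
            continue
        for entry in sorted(d.iterdir()):
            # e.g. /etc/extensions/kubernetes.raw ->
            #      /opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw
            print(f"{entry} -> {entry.resolve()}")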
Nov 8 00:02:51.051828 systemd[1]: Reloading... Nov 8 00:02:51.119343 systemd-tmpfiles[1452]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Nov 8 00:02:51.122127 systemd-tmpfiles[1452]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Nov 8 00:02:51.123667 systemd-tmpfiles[1452]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Nov 8 00:02:51.126112 systemd-tmpfiles[1452]: ACLs are not supported, ignoring. Nov 8 00:02:51.129896 systemd-tmpfiles[1452]: ACLs are not supported, ignoring. Nov 8 00:02:51.155368 zram_generator::config[1484]: No configuration found. Nov 8 00:02:51.192527 systemd-tmpfiles[1452]: Detected autofs mount point /boot during canonicalization of boot. Nov 8 00:02:51.192538 systemd-tmpfiles[1452]: Skipping /boot Nov 8 00:02:51.202409 systemd-tmpfiles[1452]: Detected autofs mount point /boot during canonicalization of boot. Nov 8 00:02:51.202541 systemd-tmpfiles[1452]: Skipping /boot Nov 8 00:02:51.259844 kernel: mousedev: PS/2 mouse device common for all mice Nov 8 00:02:51.290770 kernel: hv_vmbus: registering driver hv_balloon Nov 8 00:02:51.290864 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Nov 8 00:02:51.298513 kernel: hv_balloon: Memory hot add disabled on ARM64 Nov 8 00:02:51.338705 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 8 00:02:51.344543 kernel: hv_vmbus: registering driver hyperv_fb Nov 8 00:02:51.344632 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Nov 8 00:02:51.351457 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Nov 8 00:02:51.360767 kernel: Console: switching to colour dummy device 80x25 Nov 8 00:02:51.367355 kernel: Console: switching to colour frame buffer device 128x48 Nov 8 00:02:51.371768 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#131 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Nov 8 00:02:51.425540 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Nov 8 00:02:51.426013 systemd[1]: Reloading finished in 373 ms. Nov 8 00:02:51.445668 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 8 00:02:51.466989 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (1434) Nov 8 00:02:51.530797 systemd[1]: Finished ensure-sysext.service. Nov 8 00:02:51.540507 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Nov 8 00:02:51.553922 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Nov 8 00:02:51.612095 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Nov 8 00:02:51.617627 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 8 00:02:51.618839 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 8 00:02:51.625967 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 8 00:02:51.635987 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 8 00:02:51.645107 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
Nov 8 00:02:51.652524 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 8 00:02:51.658990 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Nov 8 00:02:51.667970 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Nov 8 00:02:51.682904 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Nov 8 00:02:51.687684 systemd[1]: Reached target time-set.target - System Time Set. Nov 8 00:02:51.702413 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Nov 8 00:02:51.709377 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Nov 8 00:02:51.715964 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 8 00:02:51.723942 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Nov 8 00:02:51.730195 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 8 00:02:51.731927 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 8 00:02:51.738022 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 8 00:02:51.738172 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 8 00:02:51.743207 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 8 00:02:51.743372 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 8 00:02:51.749532 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 8 00:02:51.749683 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 8 00:02:51.769038 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Nov 8 00:02:51.773932 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 8 00:02:51.774090 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 8 00:02:51.777800 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Nov 8 00:02:51.918568 systemd[1]: Started systemd-userdbd.service - User Database Manager. Nov 8 00:02:51.924384 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Nov 8 00:02:52.016314 lvm[1627]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Nov 8 00:02:52.309586 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Nov 8 00:02:52.315441 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 8 00:02:52.324908 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Nov 8 00:02:52.334246 lvm[1642]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Nov 8 00:02:52.373922 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Nov 8 00:02:52.386272 augenrules[1640]: No rules Nov 8 00:02:52.387737 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Nov 8 00:02:52.434358 systemd-resolved[1611]: Positive Trust Anchors: Nov 8 00:02:52.434391 systemd-resolved[1611]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 8 00:02:52.434429 systemd-resolved[1611]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 8 00:02:52.782511 systemd-resolved[1611]: Using system hostname 'ci-4081.3.6-n-32f19bad4d'. Nov 8 00:02:52.783899 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 8 00:02:52.789105 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Nov 8 00:02:52.814056 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Nov 8 00:02:53.411151 systemd-networkd[1447]: lo: Link UP Nov 8 00:02:53.411163 systemd-networkd[1447]: lo: Gained carrier Nov 8 00:02:53.413306 systemd-networkd[1447]: Enumeration completed Nov 8 00:02:53.413829 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 8 00:02:53.413917 systemd-networkd[1447]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 8 00:02:53.413920 systemd-networkd[1447]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 8 00:02:53.419512 systemd[1]: Reached target network.target - Network. Nov 8 00:02:53.428974 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Nov 8 00:02:53.474773 kernel: mlx5_core 6541:00:02.0 enP25921s1: Link up Nov 8 00:02:53.502770 kernel: hv_netvsc 000d3afb-8f51-000d-3afb-8f51000d3afb eth0: Data path switched to VF: enP25921s1 Nov 8 00:02:53.503189 systemd-networkd[1447]: enP25921s1: Link UP Nov 8 00:02:53.503322 systemd-networkd[1447]: eth0: Link UP Nov 8 00:02:53.503325 systemd-networkd[1447]: eth0: Gained carrier Nov 8 00:02:53.503340 systemd-networkd[1447]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 8 00:02:53.509187 systemd-networkd[1447]: enP25921s1: Gained carrier Nov 8 00:02:53.518794 systemd-networkd[1447]: eth0: DHCPv4 address 10.200.20.44/24, gateway 10.200.20.1 acquired from 168.63.129.16 Nov 8 00:02:55.350937 systemd-networkd[1447]: eth0: Gained IPv6LL Nov 8 00:02:55.354577 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Nov 8 00:02:55.360410 systemd[1]: Reached target network-online.target - Network is Online. Nov 8 00:02:57.972261 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 8 00:03:06.116430 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Nov 8 00:03:06.122474 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Nov 8 00:03:13.893794 ldconfig[1315]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Nov 8 00:03:14.160849 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. 
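The positive trust anchor above is the root zone's KSK-2017 DS record. A small decoder for its fields, using the IANA DNSSEC registry mappings; the record text is verbatim from the log:

    # Decode the DS record fields systemd-resolved reported.
    DS = (". IN DS 20326 8 2 "
          "e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d")

    _, _, _, key_tag, algorithm, digest_type, digest = DS.split()
    ALGORITHMS = {"8": "RSA/SHA-256", "13": "ECDSA P-256/SHA-256"}
    DIGEST_TYPES = {"1": "SHA-1", "2": "SHA-256"}

    print("key tag:    ", key_tag)                        # identifies the DNSKEY
    print("algorithm:  ", ALGORITHMS.get(algorithm))      # 8 -> RSA/SHA-256
    print("digest type:", DIGEST_TYPES.get(digest_type))  # 2 -> SHA-256
    print("digest:     ", digest)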
Nov 8 00:03:14.169979 systemd[1]: Starting systemd-update-done.service - Update is Completed... Nov 8 00:03:14.197904 systemd[1]: Finished systemd-update-done.service - Update is Completed. Nov 8 00:03:14.203169 systemd[1]: Reached target sysinit.target - System Initialization. Nov 8 00:03:14.208004 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Nov 8 00:03:14.213282 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Nov 8 00:03:14.219066 systemd[1]: Started logrotate.timer - Daily rotation of log files. Nov 8 00:03:14.223621 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Nov 8 00:03:14.229008 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Nov 8 00:03:14.234292 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Nov 8 00:03:14.234330 systemd[1]: Reached target paths.target - Path Units. Nov 8 00:03:14.238168 systemd[1]: Reached target timers.target - Timer Units. Nov 8 00:03:14.463670 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Nov 8 00:03:14.470087 systemd[1]: Starting docker.socket - Docker Socket for the API... Nov 8 00:03:14.503781 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Nov 8 00:03:14.508984 systemd[1]: Listening on docker.socket - Docker Socket for the API. Nov 8 00:03:14.514010 systemd[1]: Reached target sockets.target - Socket Units. Nov 8 00:03:14.518150 systemd[1]: Reached target basic.target - Basic System. Nov 8 00:03:14.522104 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Nov 8 00:03:14.522224 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Nov 8 00:03:14.546875 systemd[1]: Starting chronyd.service - NTP client/server... Nov 8 00:03:14.552900 systemd[1]: Starting containerd.service - containerd container runtime... Nov 8 00:03:14.569382 (chronyd)[1663]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS Nov 8 00:03:14.570901 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Nov 8 00:03:14.577348 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Nov 8 00:03:14.582915 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Nov 8 00:03:14.588440 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Nov 8 00:03:14.593120 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Nov 8 00:03:14.593268 systemd[1]: hv_fcopy_daemon.service - Hyper-V FCOPY daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_fcopy). Nov 8 00:03:14.594572 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon. Nov 8 00:03:14.599134 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss). Nov 8 00:03:14.601219 KVP[1671]: KVP starting; pid is:1671 Nov 8 00:03:14.601575 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 8 00:03:14.608952 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... 
Nov 8 00:03:14.614992 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Nov 8 00:03:14.622885 KVP[1671]: KVP LIC Version: 3.1 Nov 8 00:03:14.623028 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Nov 8 00:03:14.626903 kernel: hv_utils: KVP IC version 4.0 Nov 8 00:03:14.628937 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Nov 8 00:03:14.635984 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Nov 8 00:03:14.644968 systemd[1]: Starting systemd-logind.service - User Login Management... Nov 8 00:03:14.649603 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Nov 8 00:03:14.650112 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Nov 8 00:03:14.652063 systemd[1]: Starting update-engine.service - Update Engine... Nov 8 00:03:14.660887 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Nov 8 00:03:14.825616 jq[1680]: true Nov 8 00:03:14.826428 jq[1669]: false Nov 8 00:03:14.827788 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Nov 8 00:03:14.827987 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Nov 8 00:03:14.835172 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Nov 8 00:03:14.835822 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Nov 8 00:03:14.839328 jq[1691]: true Nov 8 00:03:14.861139 chronyd[1700]: chronyd version 4.5 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG) Nov 8 00:03:14.874363 systemd-logind[1678]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Nov 8 00:03:14.875737 systemd-logind[1678]: New seat seat0. Nov 8 00:03:14.876313 systemd[1]: Started systemd-logind.service - User Login Management. Nov 8 00:03:15.121302 systemd[1]: motdgen.service: Deactivated successfully. Nov 8 00:03:15.121890 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Nov 8 00:03:15.122064 (ntainerd)[1723]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Nov 8 00:03:15.344893 tar[1683]: linux-arm64/LICENSE Nov 8 00:03:15.345906 tar[1683]: linux-arm64/helm Nov 8 00:03:15.389931 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 00:03:15.398433 update_engine[1679]: I20251108 00:03:15.398327 1679 main.cc:92] Flatcar Update Engine starting Nov 8 00:03:15.405358 (kubelet)[1736]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 8 00:03:15.408211 chronyd[1700]: Timezone right/UTC failed leap second check, ignoring Nov 8 00:03:15.408915 chronyd[1700]: Loaded seccomp filter (level 2) Nov 8 00:03:15.413091 systemd[1]: Started chronyd.service - NTP client/server. 
Nov 8 00:03:15.425414 extend-filesystems[1670]: Found loop4 Nov 8 00:03:15.425414 extend-filesystems[1670]: Found loop5 Nov 8 00:03:15.425414 extend-filesystems[1670]: Found loop6 Nov 8 00:03:15.425414 extend-filesystems[1670]: Found loop7 Nov 8 00:03:15.425414 extend-filesystems[1670]: Found sda Nov 8 00:03:15.455716 extend-filesystems[1670]: Found sda1 Nov 8 00:03:15.455716 extend-filesystems[1670]: Found sda2 Nov 8 00:03:15.455716 extend-filesystems[1670]: Found sda3 Nov 8 00:03:15.455716 extend-filesystems[1670]: Found usr Nov 8 00:03:15.455716 extend-filesystems[1670]: Found sda4 Nov 8 00:03:15.455716 extend-filesystems[1670]: Found sda6 Nov 8 00:03:15.455716 extend-filesystems[1670]: Found sda7 Nov 8 00:03:15.455716 extend-filesystems[1670]: Found sda9 Nov 8 00:03:15.455716 extend-filesystems[1670]: Checking size of /dev/sda9 Nov 8 00:03:15.435586 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Nov 8 00:03:15.781609 tar[1683]: linux-arm64/README.md Nov 8 00:03:15.800793 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Nov 8 00:03:15.940734 kubelet[1736]: E1108 00:03:15.940677 1736 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 8 00:03:16.157819 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (1717) Nov 8 00:03:15.943955 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 8 00:03:16.158018 extend-filesystems[1670]: Old size kept for /dev/sda9 Nov 8 00:03:16.158018 extend-filesystems[1670]: Found sr0 Nov 8 00:03:15.944083 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 8 00:03:16.022092 systemd[1]: extend-filesystems.service: Deactivated successfully. Nov 8 00:03:16.022272 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Nov 8 00:03:17.296203 bash[1728]: Updated "/home/core/.ssh/authorized_keys" Nov 8 00:03:17.298921 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Nov 8 00:03:17.306625 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Nov 8 00:03:17.717509 dbus-daemon[1666]: [system] SELinux support is enabled Nov 8 00:03:17.718036 systemd[1]: Started dbus.service - D-Bus System Message Bus. Nov 8 00:03:17.727776 update_engine[1679]: I20251108 00:03:17.725241 1679 update_check_scheduler.cc:74] Next update check in 2m49s Nov 8 00:03:17.728591 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Nov 8 00:03:17.728743 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Nov 8 00:03:17.735636 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Nov 8 00:03:17.735657 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. 
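The kubelet failure above repeats throughout this log: /var/lib/kubelet/config.yaml does not exist until a tool such as kubeadm writes it, so the unit exits with status 1 and systemd reschedules it (the "restart counter" entries later in the log arrive roughly every ten seconds, consistent with a Restart=on-failure policy). A minimal sketch of the same failure shape, not kubelet's actual loader, just the wrapped-error pattern the journal line shows:

    package main

    import (
    	"fmt"
    	"os"
    )

    // loadKubeletConfig mirrors the failure seen in the log: if the file
    // is missing, the raw open error is wrapped with the config path so
    // the journal line carries the full story.
    func loadKubeletConfig(path string) ([]byte, error) {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return nil, fmt.Errorf("failed to load Kubelet config file %s, error: %w", path, err)
    	}
    	return data, nil
    }

    func main() {
    	if _, err := loadKubeletConfig("/var/lib/kubelet/config.yaml"); err != nil {
    		fmt.Fprintf(os.Stderr, "command failed: %v\n", err)
    		os.Exit(1) // systemd records status=1/FAILURE, as in the log
    	}
    }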
Nov 8 00:03:17.744005 dbus-daemon[1666]: [system] Successfully activated service 'org.freedesktop.systemd1' Nov 8 00:03:17.744394 systemd[1]: Started update-engine.service - Update Engine. Nov 8 00:03:17.758134 systemd[1]: Started locksmithd.service - Cluster reboot manager. Nov 8 00:03:17.977070 coreos-metadata[1665]: Nov 08 00:03:17.976 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Nov 8 00:03:17.980501 coreos-metadata[1665]: Nov 08 00:03:17.980 INFO Fetch successful Nov 8 00:03:17.980501 coreos-metadata[1665]: Nov 08 00:03:17.980 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 Nov 8 00:03:17.985090 coreos-metadata[1665]: Nov 08 00:03:17.985 INFO Fetch successful Nov 8 00:03:17.985590 coreos-metadata[1665]: Nov 08 00:03:17.985 INFO Fetching http://168.63.129.16/machine/27f3523c-81f9-478c-84e9-e7b321f8fa48/0ea28c3d%2D7c5a%2D44d0%2Db8bf%2D5973984a7cd4.%5Fci%2D4081.3.6%2Dn%2D32f19bad4d?comp=config&type=sharedConfig&incarnation=1: Attempt #1 Nov 8 00:03:17.987639 coreos-metadata[1665]: Nov 08 00:03:17.987 INFO Fetch successful Nov 8 00:03:17.987790 coreos-metadata[1665]: Nov 08 00:03:17.987 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 Nov 8 00:03:17.998884 coreos-metadata[1665]: Nov 08 00:03:17.998 INFO Fetch successful Nov 8 00:03:18.023815 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Nov 8 00:03:18.029127 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Nov 8 00:03:18.615287 sshd_keygen[1729]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Nov 8 00:03:18.624919 locksmithd[1791]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Nov 8 00:03:18.637526 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Nov 8 00:03:18.648022 systemd[1]: Starting issuegen.service - Generate /run/issue... Nov 8 00:03:18.656190 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... Nov 8 00:03:18.667616 systemd[1]: issuegen.service: Deactivated successfully. Nov 8 00:03:18.677191 systemd[1]: Finished issuegen.service - Generate /run/issue. Nov 8 00:03:18.696430 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Nov 8 00:03:18.708117 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. Nov 8 00:03:18.755380 containerd[1723]: time="2025-11-08T00:03:18.755255040Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Nov 8 00:03:18.779397 containerd[1723]: time="2025-11-08T00:03:18.779306680Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Nov 8 00:03:18.780807 containerd[1723]: time="2025-11-08T00:03:18.780767720Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.113-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Nov 8 00:03:18.780807 containerd[1723]: time="2025-11-08T00:03:18.780805560Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Nov 8 00:03:18.780867 containerd[1723]: time="2025-11-08T00:03:18.780821720Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." 
type=io.containerd.internal.v1 Nov 8 00:03:18.781022 containerd[1723]: time="2025-11-08T00:03:18.780997440Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Nov 8 00:03:18.781052 containerd[1723]: time="2025-11-08T00:03:18.781024200Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Nov 8 00:03:18.781110 containerd[1723]: time="2025-11-08T00:03:18.781090760Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Nov 8 00:03:18.781131 containerd[1723]: time="2025-11-08T00:03:18.781109800Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Nov 8 00:03:18.781294 containerd[1723]: time="2025-11-08T00:03:18.781271120Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Nov 8 00:03:18.781316 containerd[1723]: time="2025-11-08T00:03:18.781292800Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Nov 8 00:03:18.781316 containerd[1723]: time="2025-11-08T00:03:18.781306160Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Nov 8 00:03:18.781352 containerd[1723]: time="2025-11-08T00:03:18.781316480Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Nov 8 00:03:18.781408 containerd[1723]: time="2025-11-08T00:03:18.781390600Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Nov 8 00:03:18.781601 containerd[1723]: time="2025-11-08T00:03:18.781580520Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Nov 8 00:03:18.781703 containerd[1723]: time="2025-11-08T00:03:18.781683800Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Nov 8 00:03:18.781726 containerd[1723]: time="2025-11-08T00:03:18.781703640Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Nov 8 00:03:18.781814 containerd[1723]: time="2025-11-08T00:03:18.781796000Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Nov 8 00:03:18.781860 containerd[1723]: time="2025-11-08T00:03:18.781845280Z" level=info msg="metadata content store policy set" policy=shared Nov 8 00:03:18.813528 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Nov 8 00:03:18.828091 systemd[1]: Started getty@tty1.service - Getty on tty1. Nov 8 00:03:18.833932 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Nov 8 00:03:18.846091 systemd[1]: Reached target getty.target - Login Prompts. Nov 8 00:03:19.000834 containerd[1723]: time="2025-11-08T00:03:19.000769720Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." 
type=io.containerd.gc.v1 Nov 8 00:03:19.000834 containerd[1723]: time="2025-11-08T00:03:19.000836640Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Nov 8 00:03:19.001056 containerd[1723]: time="2025-11-08T00:03:19.000853600Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Nov 8 00:03:19.001056 containerd[1723]: time="2025-11-08T00:03:19.000871680Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Nov 8 00:03:19.001056 containerd[1723]: time="2025-11-08T00:03:19.000885840Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Nov 8 00:03:19.001113 containerd[1723]: time="2025-11-08T00:03:19.001056920Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Nov 8 00:03:19.001305 containerd[1723]: time="2025-11-08T00:03:19.001285040Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Nov 8 00:03:19.001418 containerd[1723]: time="2025-11-08T00:03:19.001397600Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Nov 8 00:03:19.001457 containerd[1723]: time="2025-11-08T00:03:19.001419560Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Nov 8 00:03:19.001457 containerd[1723]: time="2025-11-08T00:03:19.001435360Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Nov 8 00:03:19.001457 containerd[1723]: time="2025-11-08T00:03:19.001449000Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Nov 8 00:03:19.001515 containerd[1723]: time="2025-11-08T00:03:19.001461760Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Nov 8 00:03:19.001515 containerd[1723]: time="2025-11-08T00:03:19.001476320Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Nov 8 00:03:19.001515 containerd[1723]: time="2025-11-08T00:03:19.001489760Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Nov 8 00:03:19.001515 containerd[1723]: time="2025-11-08T00:03:19.001504000Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Nov 8 00:03:19.001585 containerd[1723]: time="2025-11-08T00:03:19.001516640Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Nov 8 00:03:19.001585 containerd[1723]: time="2025-11-08T00:03:19.001531120Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Nov 8 00:03:19.001585 containerd[1723]: time="2025-11-08T00:03:19.001544280Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Nov 8 00:03:19.001585 containerd[1723]: time="2025-11-08T00:03:19.001564160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Nov 8 00:03:19.001585 containerd[1723]: time="2025-11-08T00:03:19.001577920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." 
type=io.containerd.grpc.v1 Nov 8 00:03:19.001714 containerd[1723]: time="2025-11-08T00:03:19.001590960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Nov 8 00:03:19.001714 containerd[1723]: time="2025-11-08T00:03:19.001605120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Nov 8 00:03:19.001714 containerd[1723]: time="2025-11-08T00:03:19.001617080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Nov 8 00:03:19.001714 containerd[1723]: time="2025-11-08T00:03:19.001630400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Nov 8 00:03:19.001714 containerd[1723]: time="2025-11-08T00:03:19.001642800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Nov 8 00:03:19.001714 containerd[1723]: time="2025-11-08T00:03:19.001661520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Nov 8 00:03:19.001714 containerd[1723]: time="2025-11-08T00:03:19.001675840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Nov 8 00:03:19.001714 containerd[1723]: time="2025-11-08T00:03:19.001690560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Nov 8 00:03:19.001714 containerd[1723]: time="2025-11-08T00:03:19.001702680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Nov 8 00:03:19.001714 containerd[1723]: time="2025-11-08T00:03:19.001715880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Nov 8 00:03:19.001948 containerd[1723]: time="2025-11-08T00:03:19.001728720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Nov 8 00:03:19.001948 containerd[1723]: time="2025-11-08T00:03:19.001744760Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Nov 8 00:03:19.001948 containerd[1723]: time="2025-11-08T00:03:19.001783280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Nov 8 00:03:19.001948 containerd[1723]: time="2025-11-08T00:03:19.001795800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Nov 8 00:03:19.001948 containerd[1723]: time="2025-11-08T00:03:19.001806280Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Nov 8 00:03:19.001948 containerd[1723]: time="2025-11-08T00:03:19.001889000Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Nov 8 00:03:19.001948 containerd[1723]: time="2025-11-08T00:03:19.001908760Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Nov 8 00:03:19.001948 containerd[1723]: time="2025-11-08T00:03:19.001920480Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Nov 8 00:03:19.001948 containerd[1723]: time="2025-11-08T00:03:19.001936120Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." 
error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Nov 8 00:03:19.001948 containerd[1723]: time="2025-11-08T00:03:19.001947240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Nov 8 00:03:19.001948 containerd[1723]: time="2025-11-08T00:03:19.001963760Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Nov 8 00:03:19.001948 containerd[1723]: time="2025-11-08T00:03:19.001973960Z" level=info msg="NRI interface is disabled by configuration." Nov 8 00:03:19.001948 containerd[1723]: time="2025-11-08T00:03:19.001984760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Nov 8 00:03:19.003432 containerd[1723]: time="2025-11-08T00:03:19.002434840Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Nov 8 00:03:19.003432 containerd[1723]: time="2025-11-08T00:03:19.002602280Z" level=info msg="Connect containerd service" Nov 8 00:03:19.003432 containerd[1723]: time="2025-11-08T00:03:19.002659520Z" level=info msg="using legacy CRI 
server" Nov 8 00:03:19.003432 containerd[1723]: time="2025-11-08T00:03:19.002669120Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Nov 8 00:03:19.003432 containerd[1723]: time="2025-11-08T00:03:19.002796600Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Nov 8 00:03:19.004085 containerd[1723]: time="2025-11-08T00:03:19.004038720Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 8 00:03:19.004529 containerd[1723]: time="2025-11-08T00:03:19.004228080Z" level=info msg="Start subscribing containerd event" Nov 8 00:03:19.004529 containerd[1723]: time="2025-11-08T00:03:19.004294000Z" level=info msg="Start recovering state" Nov 8 00:03:19.004529 containerd[1723]: time="2025-11-08T00:03:19.004371960Z" level=info msg="Start event monitor" Nov 8 00:03:19.004529 containerd[1723]: time="2025-11-08T00:03:19.004385760Z" level=info msg="Start snapshots syncer" Nov 8 00:03:19.004529 containerd[1723]: time="2025-11-08T00:03:19.004395120Z" level=info msg="Start cni network conf syncer for default" Nov 8 00:03:19.004529 containerd[1723]: time="2025-11-08T00:03:19.004402440Z" level=info msg="Start streaming server" Nov 8 00:03:19.004529 containerd[1723]: time="2025-11-08T00:03:19.004372680Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Nov 8 00:03:19.004716 containerd[1723]: time="2025-11-08T00:03:19.004554880Z" level=info msg=serving... address=/run/containerd/containerd.sock Nov 8 00:03:19.004716 containerd[1723]: time="2025-11-08T00:03:19.004609040Z" level=info msg="containerd successfully booted in 0.252383s" Nov 8 00:03:19.004873 systemd[1]: Started containerd.service - containerd container runtime. Nov 8 00:03:19.010207 systemd[1]: Reached target multi-user.target - Multi-User System. Nov 8 00:03:19.019915 systemd[1]: Startup finished in 623ms (kernel) + 12.499s (initrd) + 36.816s (userspace) = 49.940s. Nov 8 00:03:22.661496 login[1827]: pam_lastlog(login:session): file /var/log/lastlog is locked/write, retrying Nov 8 00:03:22.662942 login[1828]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:03:22.670327 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Nov 8 00:03:22.679008 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Nov 8 00:03:22.681423 systemd-logind[1678]: New session 2 of user core. Nov 8 00:03:22.720244 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Nov 8 00:03:22.726019 systemd[1]: Starting user@500.service - User Manager for UID 500... Nov 8 00:03:22.758492 (systemd)[1836]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Nov 8 00:03:23.661951 login[1827]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:03:23.669081 systemd-logind[1678]: New session 1 of user core. Nov 8 00:03:24.044664 systemd[1836]: Queued start job for default target default.target. Nov 8 00:03:24.055075 systemd[1836]: Created slice app.slice - User Application Slice. Nov 8 00:03:24.055099 systemd[1836]: Reached target paths.target - Paths. Nov 8 00:03:24.055110 systemd[1836]: Reached target timers.target - Timers. 
Nov 8 00:03:24.056327 systemd[1836]: Starting dbus.socket - D-Bus User Message Bus Socket... Nov 8 00:03:24.068331 systemd[1836]: Listening on dbus.socket - D-Bus User Message Bus Socket. Nov 8 00:03:24.068437 systemd[1836]: Reached target sockets.target - Sockets. Nov 8 00:03:24.068450 systemd[1836]: Reached target basic.target - Basic System. Nov 8 00:03:24.068494 systemd[1836]: Reached target default.target - Main User Target. Nov 8 00:03:24.068519 systemd[1836]: Startup finished in 1.303s. Nov 8 00:03:24.068600 systemd[1]: Started user@500.service - User Manager for UID 500. Nov 8 00:03:24.076301 systemd[1]: Started session-1.scope - Session 1 of User core. Nov 8 00:03:24.077235 systemd[1]: Started session-2.scope - Session 2 of User core. Nov 8 00:03:26.147330 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Nov 8 00:03:26.160037 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 8 00:03:26.943800 waagent[1821]: 2025-11-08T00:03:26.943229Z INFO Daemon Daemon Azure Linux Agent Version: 2.9.1.1 Nov 8 00:03:26.947923 waagent[1821]: 2025-11-08T00:03:26.947852Z INFO Daemon Daemon OS: flatcar 4081.3.6 Nov 8 00:03:26.951555 waagent[1821]: 2025-11-08T00:03:26.951479Z INFO Daemon Daemon Python: 3.11.9 Nov 8 00:03:26.955404 waagent[1821]: 2025-11-08T00:03:26.955349Z INFO Daemon Daemon Run daemon Nov 8 00:03:26.958868 waagent[1821]: 2025-11-08T00:03:26.958795Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4081.3.6' Nov 8 00:03:26.965932 waagent[1821]: 2025-11-08T00:03:26.965866Z INFO Daemon Daemon Using waagent for provisioning Nov 8 00:03:26.971785 waagent[1821]: 2025-11-08T00:03:26.970619Z INFO Daemon Daemon Activate resource disk Nov 8 00:03:26.974684 waagent[1821]: 2025-11-08T00:03:26.974610Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Nov 8 00:03:26.984341 waagent[1821]: 2025-11-08T00:03:26.984272Z INFO Daemon Daemon Found device: None Nov 8 00:03:26.988029 waagent[1821]: 2025-11-08T00:03:26.987971Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Nov 8 00:03:26.995114 waagent[1821]: 2025-11-08T00:03:26.995065Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Nov 8 00:03:27.005831 waagent[1821]: 2025-11-08T00:03:27.005741Z INFO Daemon Daemon Clean protocol and wireserver endpoint Nov 8 00:03:27.010152 waagent[1821]: 2025-11-08T00:03:27.010109Z INFO Daemon Daemon Running default provisioning handler Nov 8 00:03:27.021315 waagent[1821]: 2025-11-08T00:03:27.021240Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. Nov 8 00:03:27.035420 waagent[1821]: 2025-11-08T00:03:27.035344Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Nov 8 00:03:27.043333 waagent[1821]: 2025-11-08T00:03:27.043266Z INFO Daemon Daemon cloud-init is enabled: False Nov 8 00:03:27.047622 waagent[1821]: 2025-11-08T00:03:27.047572Z INFO Daemon Daemon Copying ovf-env.xml Nov 8 00:03:27.765680 waagent[1821]: 2025-11-08T00:03:27.764328Z INFO Daemon Daemon Successfully mounted dvd Nov 8 00:03:27.869690 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. 
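coreos-metadata earlier and waagent below both bootstrap against the fixed Azure WireServer address 168.63.129.16, starting with a plain GET of /?comp=versions. A minimal sketch of that first probe, assuming the versions endpoint needs no extra headers (waagent's later goal-state calls add x-ms-* headers):

    package main

    import (
    	"fmt"
    	"io"
    	"log"
    	"net/http"
    	"time"
    )

    func main() {
    	// Fixed WireServer endpoint used by the Azure provisioning flow,
    	// as seen in the coreos-metadata and waagent entries.
    	c := &http.Client{Timeout: 5 * time.Second}
    	resp, err := c.Get("http://168.63.129.16/?comp=versions")
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer resp.Body.Close()

    	body, err := io.ReadAll(resp.Body)
    	if err != nil {
    		log.Fatal(err)
    	}
    	// The response is a small XML document listing supported protocol
    	// versions (the log settles on 2012-11-30).
    	fmt.Printf("%s\n", body)
    }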
Nov 8 00:03:27.874286 waagent[1821]: 2025-11-08T00:03:27.874207Z INFO Daemon Daemon Detect protocol endpoint Nov 8 00:03:27.878120 waagent[1821]: 2025-11-08T00:03:27.878070Z INFO Daemon Daemon Clean protocol and wireserver endpoint Nov 8 00:03:27.882439 waagent[1821]: 2025-11-08T00:03:27.882385Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler Nov 8 00:03:27.887993 waagent[1821]: 2025-11-08T00:03:27.887944Z INFO Daemon Daemon Test for route to 168.63.129.16 Nov 8 00:03:27.892038 waagent[1821]: 2025-11-08T00:03:27.891991Z INFO Daemon Daemon Route to 168.63.129.16 exists Nov 8 00:03:27.896357 waagent[1821]: 2025-11-08T00:03:27.895876Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Nov 8 00:03:28.273175 waagent[1821]: 2025-11-08T00:03:28.273119Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Nov 8 00:03:28.278515 waagent[1821]: 2025-11-08T00:03:28.278483Z INFO Daemon Daemon Wire protocol version:2012-11-30 Nov 8 00:03:28.282644 waagent[1821]: 2025-11-08T00:03:28.282592Z INFO Daemon Daemon Server preferred version:2015-04-05 Nov 8 00:03:28.725629 waagent[1821]: 2025-11-08T00:03:28.725524Z INFO Daemon Daemon Initializing goal state during protocol detection Nov 8 00:03:28.730973 waagent[1821]: 2025-11-08T00:03:28.730901Z INFO Daemon Daemon Forcing an update of the goal state. Nov 8 00:03:28.739091 waagent[1821]: 2025-11-08T00:03:28.739043Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Nov 8 00:03:28.771041 waagent[1821]: 2025-11-08T00:03:28.770988Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.177 Nov 8 00:03:28.776199 waagent[1821]: 2025-11-08T00:03:28.776142Z INFO Daemon Nov 8 00:03:28.778699 waagent[1821]: 2025-11-08T00:03:28.778649Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: a6f4c70c-933b-403e-8a0c-6a496eae9ad4 eTag: 8976694897279027189 source: Fabric] Nov 8 00:03:28.788155 waagent[1821]: 2025-11-08T00:03:28.788105Z INFO Daemon The vmSettings originated via Fabric; will ignore them. Nov 8 00:03:28.793813 waagent[1821]: 2025-11-08T00:03:28.793747Z INFO Daemon Nov 8 00:03:28.796089 waagent[1821]: 2025-11-08T00:03:28.796045Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Nov 8 00:03:28.806170 waagent[1821]: 2025-11-08T00:03:28.806126Z INFO Daemon Daemon Downloading artifacts profile blob Nov 8 00:03:28.879813 waagent[1821]: 2025-11-08T00:03:28.878856Z INFO Daemon Downloaded certificate {'thumbprint': '8C8AE62AFBEE6B5B07D5CD43835569C15F494052', 'hasPrivateKey': True} Nov 8 00:03:28.887050 waagent[1821]: 2025-11-08T00:03:28.886997Z INFO Daemon Fetch goal state completed Nov 8 00:03:28.897909 waagent[1821]: 2025-11-08T00:03:28.897863Z INFO Daemon Daemon Starting provisioning Nov 8 00:03:28.902121 waagent[1821]: 2025-11-08T00:03:28.902065Z INFO Daemon Daemon Handle ovf-env.xml. Nov 8 00:03:28.906017 waagent[1821]: 2025-11-08T00:03:28.905976Z INFO Daemon Daemon Set hostname [ci-4081.3.6-n-32f19bad4d] Nov 8 00:03:29.470737 waagent[1821]: 2025-11-08T00:03:29.470600Z INFO Daemon Daemon Publish hostname [ci-4081.3.6-n-32f19bad4d] Nov 8 00:03:30.059782 waagent[1821]: 2025-11-08T00:03:30.056128Z INFO Daemon Daemon Examine /proc/net/route for primary interface Nov 8 00:03:30.061159 waagent[1821]: 2025-11-08T00:03:30.061102Z INFO Daemon Daemon Primary interface is [eth0] Nov 8 00:03:31.837199 systemd-networkd[1447]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Nov 8 00:03:31.837210 systemd-networkd[1447]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 8 00:03:31.837237 systemd-networkd[1447]: eth0: DHCP lease lost Nov 8 00:03:31.838569 waagent[1821]: 2025-11-08T00:03:31.838473Z INFO Daemon Daemon Create user account if not exists Nov 8 00:03:31.843445 waagent[1821]: 2025-11-08T00:03:31.843385Z INFO Daemon Daemon User core already exists, skip useradd Nov 8 00:03:31.849116 waagent[1821]: 2025-11-08T00:03:31.849040Z INFO Daemon Daemon Configure sudoer Nov 8 00:03:31.849201 systemd-networkd[1447]: eth0: DHCPv6 lease lost Nov 8 00:03:31.854400 waagent[1821]: 2025-11-08T00:03:31.854328Z INFO Daemon Daemon Configure sshd Nov 8 00:03:31.858359 waagent[1821]: 2025-11-08T00:03:31.858294Z INFO Daemon Daemon Disabled SSH password-based authentication methods. Configured SSH client probing to keep connections alive. Nov 8 00:03:31.869241 waagent[1821]: 2025-11-08T00:03:31.869156Z INFO Daemon Daemon Deploy ssh public key. Nov 8 00:03:31.880804 systemd-networkd[1447]: eth0: DHCPv4 address 10.200.20.44/24, gateway 10.200.20.1 acquired from 168.63.129.16 Nov 8 00:03:32.037684 waagent[1821]: 2025-11-08T00:03:32.037599Z INFO Daemon Daemon Provisioning complete Nov 8 00:03:32.054919 waagent[1821]: 2025-11-08T00:03:32.054868Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Nov 8 00:03:32.066574 waagent[1821]: 2025-11-08T00:03:32.062132Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. Nov 8 00:03:32.071087 waagent[1821]: 2025-11-08T00:03:32.071024Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.9.1.1 is the most current agent Nov 8 00:03:32.122824 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 00:03:32.133047 (kubelet)[1898]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 8 00:03:32.176365 kubelet[1898]: E1108 00:03:32.176294 1898 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 8 00:03:32.180077 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 8 00:03:32.180359 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Nov 8 00:03:32.225160 waagent[1892]: 2025-11-08T00:03:32.225069Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1) Nov 8 00:03:32.225526 waagent[1892]: 2025-11-08T00:03:32.225246Z INFO ExtHandler ExtHandler OS: flatcar 4081.3.6 Nov 8 00:03:32.225526 waagent[1892]: 2025-11-08T00:03:32.225306Z INFO ExtHandler ExtHandler Python: 3.11.9 Nov 8 00:03:32.315144 waagent[1892]: 2025-11-08T00:03:32.315046Z INFO ExtHandler ExtHandler Distro: flatcar-4081.3.6; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.9; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Nov 8 00:03:32.315334 waagent[1892]: 2025-11-08T00:03:32.315297Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Nov 8 00:03:32.315403 waagent[1892]: 2025-11-08T00:03:32.315373Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Nov 8 00:03:32.324060 waagent[1892]: 2025-11-08T00:03:32.323978Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Nov 8 00:03:32.334283 waagent[1892]: 2025-11-08T00:03:32.334203Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.177 Nov 8 00:03:32.334788 waagent[1892]: 2025-11-08T00:03:32.334733Z INFO ExtHandler Nov 8 00:03:32.334867 waagent[1892]: 2025-11-08T00:03:32.334834Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: e1c1bea0-059a-489d-b043-9fe274cd7e2b eTag: 8976694897279027189 source: Fabric] Nov 8 00:03:32.335178 waagent[1892]: 2025-11-08T00:03:32.335136Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. Nov 8 00:03:32.335791 waagent[1892]: 2025-11-08T00:03:32.335730Z INFO ExtHandler Nov 8 00:03:32.335862 waagent[1892]: 2025-11-08T00:03:32.335832Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Nov 8 00:03:32.339953 waagent[1892]: 2025-11-08T00:03:32.339918Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Nov 8 00:03:32.417238 waagent[1892]: 2025-11-08T00:03:32.415697Z INFO ExtHandler Downloaded certificate {'thumbprint': '8C8AE62AFBEE6B5B07D5CD43835569C15F494052', 'hasPrivateKey': True} Nov 8 00:03:32.417238 waagent[1892]: 2025-11-08T00:03:32.416324Z INFO ExtHandler Fetch goal state completed Nov 8 00:03:32.431786 waagent[1892]: 2025-11-08T00:03:32.431286Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 1892 Nov 8 00:03:32.431786 waagent[1892]: 2025-11-08T00:03:32.431451Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Nov 8 00:03:32.433124 waagent[1892]: 2025-11-08T00:03:32.433076Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '4081.3.6', '', 'Flatcar Container Linux by Kinvolk'] Nov 8 00:03:32.433486 waagent[1892]: 2025-11-08T00:03:32.433451Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Nov 8 00:03:32.498245 waagent[1892]: 2025-11-08T00:03:32.498199Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Nov 8 00:03:32.498442 waagent[1892]: 2025-11-08T00:03:32.498403Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Nov 8 00:03:32.505251 waagent[1892]: 2025-11-08T00:03:32.505208Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Nov 8 00:03:32.511994 systemd[1]: Reloading requested from client PID 1917 ('systemctl') (unit waagent.service)... 
Nov 8 00:03:32.512008 systemd[1]: Reloading... Nov 8 00:03:32.597781 zram_generator::config[1958]: No configuration found. Nov 8 00:03:32.697461 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 8 00:03:32.776562 systemd[1]: Reloading finished in 264 ms. Nov 8 00:03:32.803287 waagent[1892]: 2025-11-08T00:03:32.802890Z INFO ExtHandler ExtHandler Executing systemctl daemon-reload for setting up waagent-network-setup.service Nov 8 00:03:32.810122 systemd[1]: Reloading requested from client PID 2009 ('systemctl') (unit waagent.service)... Nov 8 00:03:32.810143 systemd[1]: Reloading... Nov 8 00:03:32.904790 zram_generator::config[2046]: No configuration found. Nov 8 00:03:32.999143 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 8 00:03:33.074788 systemd[1]: Reloading finished in 264 ms. Nov 8 00:03:33.100611 waagent[1892]: 2025-11-08T00:03:33.099820Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Nov 8 00:03:33.100611 waagent[1892]: 2025-11-08T00:03:33.099999Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Nov 8 00:03:33.639792 waagent[1892]: 2025-11-08T00:03:33.639656Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Nov 8 00:03:33.640339 waagent[1892]: 2025-11-08T00:03:33.640287Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True] Nov 8 00:03:33.641160 waagent[1892]: 2025-11-08T00:03:33.641078Z INFO ExtHandler ExtHandler Starting env monitor service. Nov 8 00:03:33.641434 waagent[1892]: 2025-11-08T00:03:33.641216Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Nov 8 00:03:33.641651 waagent[1892]: 2025-11-08T00:03:33.641612Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Nov 8 00:03:33.641772 waagent[1892]: 2025-11-08T00:03:33.641707Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Nov 8 00:03:33.642290 waagent[1892]: 2025-11-08T00:03:33.642241Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Nov 8 00:03:33.642428 waagent[1892]: 2025-11-08T00:03:33.642395Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Nov 8 00:03:33.642510 waagent[1892]: 2025-11-08T00:03:33.642477Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Nov 8 00:03:33.642651 waagent[1892]: 2025-11-08T00:03:33.642617Z INFO EnvHandler ExtHandler Configure routes Nov 8 00:03:33.642709 waagent[1892]: 2025-11-08T00:03:33.642682Z INFO EnvHandler ExtHandler Gateway:None Nov 8 00:03:33.642788 waagent[1892]: 2025-11-08T00:03:33.642732Z INFO EnvHandler ExtHandler Routes:None Nov 8 00:03:33.643663 waagent[1892]: 2025-11-08T00:03:33.643445Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Nov 8 00:03:33.643899 waagent[1892]: 2025-11-08T00:03:33.643848Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Nov 8 00:03:33.644008 waagent[1892]: 2025-11-08T00:03:33.643976Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. 
This indicates how often the agent checks for new goal states and reports status. Nov 8 00:03:33.644611 waagent[1892]: 2025-11-08T00:03:33.644565Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Nov 8 00:03:33.645356 waagent[1892]: 2025-11-08T00:03:33.645292Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Nov 8 00:03:33.647058 waagent[1892]: 2025-11-08T00:03:33.646976Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Nov 8 00:03:33.647058 waagent[1892]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Nov 8 00:03:33.647058 waagent[1892]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0 Nov 8 00:03:33.647058 waagent[1892]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Nov 8 00:03:33.647058 waagent[1892]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Nov 8 00:03:33.647058 waagent[1892]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Nov 8 00:03:33.647058 waagent[1892]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Nov 8 00:03:33.657181 waagent[1892]: 2025-11-08T00:03:33.657124Z INFO ExtHandler ExtHandler Nov 8 00:03:33.658778 waagent[1892]: 2025-11-08T00:03:33.657418Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 35115134-25ef-463b-86ff-e51f249d848b correlation 6b9af3f0-ec5f-407e-a396-b4d9c5c3f2fa created: 2025-11-08T00:01:45.298851Z] Nov 8 00:03:33.658778 waagent[1892]: 2025-11-08T00:03:33.657860Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. Nov 8 00:03:33.658778 waagent[1892]: 2025-11-08T00:03:33.658440Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 1 ms] Nov 8 00:03:33.695104 waagent[1892]: 2025-11-08T00:03:33.695027Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: CAB57CFD-6CAB-4781-835A-539B2A6FC7F6;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 0] Nov 8 00:03:33.744409 waagent[1892]: 2025-11-08T00:03:33.744319Z INFO MonitorHandler ExtHandler Network interfaces: Nov 8 00:03:33.744409 waagent[1892]: Executing ['ip', '-a', '-o', 'link']: Nov 8 00:03:33.744409 waagent[1892]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Nov 8 00:03:33.744409 waagent[1892]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:fb:8f:51 brd ff:ff:ff:ff:ff:ff Nov 8 00:03:33.744409 waagent[1892]: 3: enP25921s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:fb:8f:51 brd ff:ff:ff:ff:ff:ff\ altname enP25921p0s2 Nov 8 00:03:33.744409 waagent[1892]: Executing ['ip', '-4', '-a', '-o', 'address']: Nov 8 00:03:33.744409 waagent[1892]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Nov 8 00:03:33.744409 waagent[1892]: 2: eth0 inet 10.200.20.44/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever Nov 8 00:03:33.744409 waagent[1892]: Executing ['ip', '-6', '-a', '-o', 'address']: Nov 8 00:03:33.744409 waagent[1892]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Nov 8 00:03:33.744409 waagent[1892]: 2: eth0 inet6 fe80::20d:3aff:fefb:8f51/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Nov 8 00:03:33.806863 waagent[1892]: 
2025-11-08T00:03:33.806656Z INFO EnvHandler ExtHandler Successfully added Azure fabric firewall rules. Current Firewall rules: Nov 8 00:03:33.806863 waagent[1892]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Nov 8 00:03:33.806863 waagent[1892]: pkts bytes target prot opt in out source destination Nov 8 00:03:33.806863 waagent[1892]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Nov 8 00:03:33.806863 waagent[1892]: pkts bytes target prot opt in out source destination Nov 8 00:03:33.806863 waagent[1892]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Nov 8 00:03:33.806863 waagent[1892]: pkts bytes target prot opt in out source destination Nov 8 00:03:33.806863 waagent[1892]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Nov 8 00:03:33.806863 waagent[1892]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Nov 8 00:03:33.806863 waagent[1892]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Nov 8 00:03:33.809817 waagent[1892]: 2025-11-08T00:03:33.809755Z INFO EnvHandler ExtHandler Current Firewall rules: Nov 8 00:03:33.809817 waagent[1892]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Nov 8 00:03:33.809817 waagent[1892]: pkts bytes target prot opt in out source destination Nov 8 00:03:33.809817 waagent[1892]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Nov 8 00:03:33.809817 waagent[1892]: pkts bytes target prot opt in out source destination Nov 8 00:03:33.809817 waagent[1892]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Nov 8 00:03:33.809817 waagent[1892]: pkts bytes target prot opt in out source destination Nov 8 00:03:33.809817 waagent[1892]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Nov 8 00:03:33.809817 waagent[1892]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Nov 8 00:03:33.809817 waagent[1892]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Nov 8 00:03:33.810073 waagent[1892]: 2025-11-08T00:03:33.810037Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Nov 8 00:03:39.193856 chronyd[1700]: Selected source PHC0 Nov 8 00:03:39.436892 kernel: hv_balloon: Max. dynamic memory size: 4096 MB Nov 8 00:03:42.397265 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Nov 8 00:03:42.406955 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 8 00:03:42.860352 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 00:03:42.864664 (kubelet)[2136]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 8 00:03:42.902309 kubelet[2136]: E1108 00:03:42.902256 2136 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 8 00:03:42.905079 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 8 00:03:42.905350 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 8 00:03:53.147429 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Nov 8 00:03:53.156988 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 8 00:03:53.611564 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
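The routing table waagent dumps above is read straight from /proc/net/route, where Destination, Gateway, and Mask are 32-bit values printed as hex in host (little-endian) byte order: 0114C80A decodes to 10.200.20.1, the gateway acquired over DHCP earlier in the log. A short sketch of the decoding:

    package main

    import (
    	"fmt"
    	"net"
    	"strconv"
    )

    // parseProcRouteAddr decodes one /proc/net/route address field, a
    // 32-bit value printed as hex in host (little-endian) byte order.
    func parseProcRouteAddr(field string) (net.IP, error) {
    	v, err := strconv.ParseUint(field, 16, 32)
    	if err != nil {
    		return nil, err
    	}
    	// The lowest byte is the first octet of the dotted-quad address.
    	return net.IPv4(byte(v), byte(v>>8), byte(v>>16), byte(v>>24)), nil
    }

    func main() {
    	// Fields taken from the waagent routing-table dump above.
    	for _, f := range []string{"00000000", "0014C80A", "0114C80A", "00FFFFFF"} {
    		ip, err := parseProcRouteAddr(f)
    		if err != nil {
    			fmt.Println(err)
    			continue
    		}
    		fmt.Printf("%s -> %s\n", f, ip) // e.g. 0114C80A -> 10.200.20.1
    	}
    }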
Nov 8 00:03:53.622036 (kubelet)[2151]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 8 00:03:53.656439 kubelet[2151]: E1108 00:03:53.656359 2151 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 8 00:03:53.659425 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 8 00:03:53.659570 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 8 00:04:02.352548 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Nov 8 00:04:02.361059 systemd[1]: Started sshd@0-10.200.20.44:22-10.200.16.10:46706.service - OpenSSH per-connection server daemon (10.200.16.10:46706). Nov 8 00:04:02.961710 sshd[2159]: Accepted publickey for core from 10.200.16.10 port 46706 ssh2: RSA SHA256:zvC4izVp6tNI33lG/DNWDshKRYTAPzAA5sOYQjrKueA Nov 8 00:04:02.963172 sshd[2159]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:04:02.967164 systemd-logind[1678]: New session 3 of user core. Nov 8 00:04:02.977995 systemd[1]: Started session-3.scope - Session 3 of User core. Nov 8 00:04:03.080122 update_engine[1679]: I20251108 00:04:03.080060 1679 update_attempter.cc:509] Updating boot flags... Nov 8 00:04:03.150792 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (2174) Nov 8 00:04:03.398057 systemd[1]: Started sshd@1-10.200.20.44:22-10.200.16.10:46710.service - OpenSSH per-connection server daemon (10.200.16.10:46710). Nov 8 00:04:03.800898 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Nov 8 00:04:03.810683 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 8 00:04:03.887784 sshd[2203]: Accepted publickey for core from 10.200.16.10 port 46710 ssh2: RSA SHA256:zvC4izVp6tNI33lG/DNWDshKRYTAPzAA5sOYQjrKueA Nov 8 00:04:03.889364 sshd[2203]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:04:03.893823 systemd-logind[1678]: New session 4 of user core. Nov 8 00:04:03.901953 systemd[1]: Started session-4.scope - Session 4 of User core. Nov 8 00:04:03.940985 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 00:04:03.945510 (kubelet)[2214]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 8 00:04:04.030590 kubelet[2214]: E1108 00:04:04.030524 2214 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 8 00:04:04.032675 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 8 00:04:04.032834 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 8 00:04:04.256014 sshd[2203]: pam_unix(sshd:session): session closed for user core Nov 8 00:04:04.259438 systemd-logind[1678]: Session 4 logged out. Waiting for processes to exit. Nov 8 00:04:04.260495 systemd[1]: sshd@1-10.200.20.44:22-10.200.16.10:46710.service: Deactivated successfully. 
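The "Accepted publickey ... SHA256:zvC4..." entries above use OpenSSH's modern fingerprint format: the unpadded base64 of a SHA-256 digest over the wire-format key blob, i.e. over the decoded second field of an authorized_keys line. A sketch of the computation; the key material below is a placeholder, not the key accepted in this log:

    package main

    import (
    	"crypto/sha256"
    	"encoding/base64"
    	"fmt"
    	"log"
    	"strings"
    )

    // fingerprint computes the OpenSSH-style SHA256 fingerprint: the
    // unpadded base64 of a SHA-256 over the wire-format key blob, which
    // authorized_keys stores base64-encoded in its second field.
    func fingerprint(authorizedKeysLine string) (string, error) {
    	fields := strings.Fields(authorizedKeysLine)
    	if len(fields) < 2 {
    		return "", fmt.Errorf("malformed authorized_keys line")
    	}
    	blob, err := base64.StdEncoding.DecodeString(fields[1])
    	if err != nil {
    		return "", err
    	}
    	sum := sha256.Sum256(blob)
    	return "SHA256:" + base64.RawStdEncoding.EncodeToString(sum[:]), nil
    }

    func main() {
    	// Placeholder key blob, not the one accepted in the log above.
    	line := "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIB8Ht8Z3j4yDWPBzQyvGPhNPVCj1n937UVKnSVTpxNxc user@example"
    	fp, err := fingerprint(line)
    	if err != nil {
    		log.Fatal(err)
    	}
    	fmt.Println(fp)
    }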
Nov 8 00:04:04.262536 systemd[1]: session-4.scope: Deactivated successfully.
Nov 8 00:04:04.263792 systemd-logind[1678]: Removed session 4.
Nov 8 00:04:04.344067 systemd[1]: Started sshd@2-10.200.20.44:22-10.200.16.10:46718.service - OpenSSH per-connection server daemon (10.200.16.10:46718).
Nov 8 00:04:04.832845 sshd[2225]: Accepted publickey for core from 10.200.16.10 port 46718 ssh2: RSA SHA256:zvC4izVp6tNI33lG/DNWDshKRYTAPzAA5sOYQjrKueA
Nov 8 00:04:04.834225 sshd[2225]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 8 00:04:04.838944 systemd-logind[1678]: New session 5 of user core.
Nov 8 00:04:04.845960 systemd[1]: Started session-5.scope - Session 5 of User core.
Nov 8 00:04:05.196567 sshd[2225]: pam_unix(sshd:session): session closed for user core
Nov 8 00:04:05.199499 systemd-logind[1678]: Session 5 logged out. Waiting for processes to exit.
Nov 8 00:04:05.199677 systemd[1]: sshd@2-10.200.20.44:22-10.200.16.10:46718.service: Deactivated successfully.
Nov 8 00:04:05.201163 systemd[1]: session-5.scope: Deactivated successfully.
Nov 8 00:04:05.202926 systemd-logind[1678]: Removed session 5.
Nov 8 00:04:05.278138 systemd[1]: Started sshd@3-10.200.20.44:22-10.200.16.10:46724.service - OpenSSH per-connection server daemon (10.200.16.10:46724).
Nov 8 00:04:05.737061 sshd[2232]: Accepted publickey for core from 10.200.16.10 port 46724 ssh2: RSA SHA256:zvC4izVp6tNI33lG/DNWDshKRYTAPzAA5sOYQjrKueA
Nov 8 00:04:05.738355 sshd[2232]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 8 00:04:05.743499 systemd-logind[1678]: New session 6 of user core.
Nov 8 00:04:05.749942 systemd[1]: Started session-6.scope - Session 6 of User core.
Nov 8 00:04:06.074928 sshd[2232]: pam_unix(sshd:session): session closed for user core
Nov 8 00:04:06.078931 systemd[1]: sshd@3-10.200.20.44:22-10.200.16.10:46724.service: Deactivated successfully.
Nov 8 00:04:06.080455 systemd[1]: session-6.scope: Deactivated successfully.
Nov 8 00:04:06.081168 systemd-logind[1678]: Session 6 logged out. Waiting for processes to exit.
Nov 8 00:04:06.081992 systemd-logind[1678]: Removed session 6.
Nov 8 00:04:06.163448 systemd[1]: Started sshd@4-10.200.20.44:22-10.200.16.10:46732.service - OpenSSH per-connection server daemon (10.200.16.10:46732).
Nov 8 00:04:06.654995 sshd[2239]: Accepted publickey for core from 10.200.16.10 port 46732 ssh2: RSA SHA256:zvC4izVp6tNI33lG/DNWDshKRYTAPzAA5sOYQjrKueA
Nov 8 00:04:06.656521 sshd[2239]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 8 00:04:06.660541 systemd-logind[1678]: New session 7 of user core.
Nov 8 00:04:06.670930 systemd[1]: Started session-7.scope - Session 7 of User core.
Nov 8 00:04:07.150275 sudo[2242]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Nov 8 00:04:07.150573 sudo[2242]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Nov 8 00:04:07.177292 sudo[2242]: pam_unix(sudo:session): session closed for user root
Nov 8 00:04:07.256511 sshd[2239]: pam_unix(sshd:session): session closed for user core
Nov 8 00:04:07.260296 systemd-logind[1678]: Session 7 logged out. Waiting for processes to exit.
Nov 8 00:04:07.260360 systemd[1]: session-7.scope: Deactivated successfully.
Nov 8 00:04:07.261799 systemd[1]: sshd@4-10.200.20.44:22-10.200.16.10:46732.service: Deactivated successfully.
Nov 8 00:04:07.265227 systemd-logind[1678]: Removed session 7.
Nov 8 00:04:07.344681 systemd[1]: Started sshd@5-10.200.20.44:22-10.200.16.10:46734.service - OpenSSH per-connection server daemon (10.200.16.10:46734).
Nov 8 00:04:07.841479 sshd[2247]: Accepted publickey for core from 10.200.16.10 port 46734 ssh2: RSA SHA256:zvC4izVp6tNI33lG/DNWDshKRYTAPzAA5sOYQjrKueA
Nov 8 00:04:07.842987 sshd[2247]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 8 00:04:07.846708 systemd-logind[1678]: New session 8 of user core.
Nov 8 00:04:07.853987 systemd[1]: Started session-8.scope - Session 8 of User core.
Nov 8 00:04:08.117640 sudo[2251]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Nov 8 00:04:08.118424 sudo[2251]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Nov 8 00:04:08.121892 sudo[2251]: pam_unix(sudo:session): session closed for user root
Nov 8 00:04:08.127072 sudo[2250]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Nov 8 00:04:08.127351 sudo[2250]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Nov 8 00:04:08.144993 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Nov 8 00:04:08.146989 auditctl[2254]: No rules
Nov 8 00:04:08.147336 systemd[1]: audit-rules.service: Deactivated successfully.
Nov 8 00:04:08.147517 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Nov 8 00:04:08.150616 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Nov 8 00:04:08.176045 augenrules[2272]: No rules
Nov 8 00:04:08.177686 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Nov 8 00:04:08.179950 sudo[2250]: pam_unix(sudo:session): session closed for user root
Nov 8 00:04:08.270100 sshd[2247]: pam_unix(sshd:session): session closed for user core
Nov 8 00:04:08.274014 systemd[1]: sshd@5-10.200.20.44:22-10.200.16.10:46734.service: Deactivated successfully.
Nov 8 00:04:08.275569 systemd[1]: session-8.scope: Deactivated successfully.
Nov 8 00:04:08.276231 systemd-logind[1678]: Session 8 logged out. Waiting for processes to exit.
Nov 8 00:04:08.277425 systemd-logind[1678]: Removed session 8.
Nov 8 00:04:08.357476 systemd[1]: Started sshd@6-10.200.20.44:22-10.200.16.10:46746.service - OpenSSH per-connection server daemon (10.200.16.10:46746).
Nov 8 00:04:08.844844 sshd[2280]: Accepted publickey for core from 10.200.16.10 port 46746 ssh2: RSA SHA256:zvC4izVp6tNI33lG/DNWDshKRYTAPzAA5sOYQjrKueA
Nov 8 00:04:08.846156 sshd[2280]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 8 00:04:08.850067 systemd-logind[1678]: New session 9 of user core.
Nov 8 00:04:08.860914 systemd[1]: Started session-9.scope - Session 9 of User core.
Nov 8 00:04:09.121312 sudo[2283]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Nov 8 00:04:09.121588 sudo[2283]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Nov 8 00:04:10.227004 systemd[1]: Starting docker.service - Docker Application Container Engine...
Nov 8 00:04:10.227167 (dockerd)[2298]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Nov 8 00:04:10.895636 dockerd[2298]: time="2025-11-08T00:04:10.895570219Z" level=info msg="Starting up"
Nov 8 00:04:11.269951 dockerd[2298]: time="2025-11-08T00:04:11.269901557Z" level=info msg="Loading containers: start."
Nov 8 00:04:11.521778 kernel: Initializing XFRM netlink socket
Nov 8 00:04:11.722418 systemd-networkd[1447]: docker0: Link UP
Nov 8 00:04:11.751654 dockerd[2298]: time="2025-11-08T00:04:11.751034435Z" level=info msg="Loading containers: done."
Nov 8 00:04:11.776539 dockerd[2298]: time="2025-11-08T00:04:11.776486673Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Nov 8 00:04:11.776898 dockerd[2298]: time="2025-11-08T00:04:11.776878112Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Nov 8 00:04:11.777130 dockerd[2298]: time="2025-11-08T00:04:11.777107072Z" level=info msg="Daemon has completed initialization"
Nov 8 00:04:11.844870 dockerd[2298]: time="2025-11-08T00:04:11.844795147Z" level=info msg="API listen on /run/docker.sock"
Nov 8 00:04:11.845132 systemd[1]: Started docker.service - Docker Application Container Engine.
Nov 8 00:04:12.764029 containerd[1723]: time="2025-11-08T00:04:12.763980986Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\""
Nov 8 00:04:13.581263 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3104548087.mount: Deactivated successfully.
Nov 8 00:04:14.147355 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5.
Nov 8 00:04:14.156162 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 8 00:04:14.287536 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 8 00:04:14.299098 (kubelet)[2478]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Nov 8 00:04:14.337414 kubelet[2478]: E1108 00:04:14.337357 2478 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Nov 8 00:04:14.340143 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Nov 8 00:04:14.340290 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Nov 8 00:04:15.391787 containerd[1723]: time="2025-11-08T00:04:15.391355274Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 8 00:04:15.400053 containerd[1723]: time="2025-11-08T00:04:15.400009034Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.5: active requests=0, bytes read=27390228"
Nov 8 00:04:15.403776 containerd[1723]: time="2025-11-08T00:04:15.403707513Z" level=info msg="ImageCreate event name:\"sha256:6a7fd297b49102b08dc3d8d4fd7f1538bcf21d3131eae8bf62ba26ce3283237f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 8 00:04:15.409670 containerd[1723]: time="2025-11-08T00:04:15.409609233Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 8 00:04:15.411248 containerd[1723]: time="2025-11-08T00:04:15.410672193Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.5\" with image id \"sha256:6a7fd297b49102b08dc3d8d4fd7f1538bcf21d3131eae8bf62ba26ce3283237f\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\", size \"27386827\" in 2.646648607s"
Nov 8 00:04:15.411248 containerd[1723]: time="2025-11-08T00:04:15.410716913Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\" returns image reference \"sha256:6a7fd297b49102b08dc3d8d4fd7f1538bcf21d3131eae8bf62ba26ce3283237f\""
Nov 8 00:04:15.412538 containerd[1723]: time="2025-11-08T00:04:15.412500193Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\""
Nov 8 00:04:17.212303 containerd[1723]: time="2025-11-08T00:04:17.212241674Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 8 00:04:17.215272 containerd[1723]: time="2025-11-08T00:04:17.215216794Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.5: active requests=0, bytes read=23547917"
Nov 8 00:04:17.220772 containerd[1723]: time="2025-11-08T00:04:17.219632553Z" level=info msg="ImageCreate event name:\"sha256:2dd4c25a937008b7b8a6cdca70d816403b5078b51550926721b7a7762139cd23\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 8 00:04:17.225836 containerd[1723]: time="2025-11-08T00:04:17.225789073Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 8 00:04:17.227130 containerd[1723]: time="2025-11-08T00:04:17.227088913Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.5\" with image id \"sha256:2dd4c25a937008b7b8a6cdca70d816403b5078b51550926721b7a7762139cd23\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\", size \"25135832\" in 1.814348921s"
Nov 8 00:04:17.227208 containerd[1723]: time="2025-11-08T00:04:17.227129793Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\" returns image reference \"sha256:2dd4c25a937008b7b8a6cdca70d816403b5078b51550926721b7a7762139cd23\""
Nov 8 00:04:17.227628 containerd[1723]: time="2025-11-08T00:04:17.227535673Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\""
Nov 8 00:04:18.732280 containerd[1723]: time="2025-11-08T00:04:18.732175740Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 8 00:04:18.735627 containerd[1723]: time="2025-11-08T00:04:18.735591300Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.5: active requests=0, bytes read=18295977"
Nov 8 00:04:18.740092 containerd[1723]: time="2025-11-08T00:04:18.740020260Z" level=info msg="ImageCreate event name:\"sha256:5e600beaed8620718e0650dd2721266869ce1d737488c004a869333273e6ec15\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 8 00:04:18.747458 containerd[1723]: time="2025-11-08T00:04:18.746756899Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 8 00:04:18.748463 containerd[1723]: time="2025-11-08T00:04:18.748401259Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.5\" with image id \"sha256:5e600beaed8620718e0650dd2721266869ce1d737488c004a869333273e6ec15\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\", size \"19883910\" in 1.520831906s"
Nov 8 00:04:18.748559 containerd[1723]: time="2025-11-08T00:04:18.748463739Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\" returns image reference \"sha256:5e600beaed8620718e0650dd2721266869ce1d737488c004a869333273e6ec15\""
Nov 8 00:04:18.749070 containerd[1723]: time="2025-11-08T00:04:18.749040619Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\""
Nov 8 00:04:19.892898 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3582613537.mount: Deactivated successfully.
Nov 8 00:04:20.744910 containerd[1723]: time="2025-11-08T00:04:20.744841254Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 8 00:04:20.747723 containerd[1723]: time="2025-11-08T00:04:20.747667414Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.5: active requests=0, bytes read=28240106"
Nov 8 00:04:20.751047 containerd[1723]: time="2025-11-08T00:04:20.750974134Z" level=info msg="ImageCreate event name:\"sha256:021a8d45ab0c346664e47d95595ff5180ce90a22a681ea27904c65ae90788e70\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 8 00:04:20.755598 containerd[1723]: time="2025-11-08T00:04:20.755527494Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 8 00:04:20.756478 containerd[1723]: time="2025-11-08T00:04:20.756314654Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.5\" with image id \"sha256:021a8d45ab0c346664e47d95595ff5180ce90a22a681ea27904c65ae90788e70\", repo tag \"registry.k8s.io/kube-proxy:v1.33.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\", size \"28239125\" in 2.007235395s"
Nov 8 00:04:20.756478 containerd[1723]: time="2025-11-08T00:04:20.756349334Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\" returns image reference \"sha256:021a8d45ab0c346664e47d95595ff5180ce90a22a681ea27904c65ae90788e70\""
Nov 8 00:04:20.757290 containerd[1723]: time="2025-11-08T00:04:20.757259014Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\""
Nov 8 00:04:22.269482 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3929594793.mount: Deactivated successfully.
Nov 8 00:04:24.397321 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6.
Nov 8 00:04:24.406951 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 8 00:04:31.929576 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 8 00:04:31.934505 (kubelet)[2541]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Nov 8 00:04:31.969992 kubelet[2541]: E1108 00:04:31.969932 2541 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Nov 8 00:04:31.972890 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Nov 8 00:04:31.973176 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Nov 8 00:04:34.386597 containerd[1723]: time="2025-11-08T00:04:34.386539531Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 8 00:04:34.389282 containerd[1723]: time="2025-11-08T00:04:34.389248731Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=19152117"
Nov 8 00:04:34.392672 containerd[1723]: time="2025-11-08T00:04:34.392642291Z" level=info msg="ImageCreate event name:\"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 8 00:04:34.397514 containerd[1723]: time="2025-11-08T00:04:34.397459291Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 8 00:04:34.398683 containerd[1723]: time="2025-11-08T00:04:34.398528611Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"19148915\" in 13.641235357s"
Nov 8 00:04:34.398683 containerd[1723]: time="2025-11-08T00:04:34.398569691Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\""
Nov 8 00:04:34.399806 containerd[1723]: time="2025-11-08T00:04:34.399710331Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Nov 8 00:04:34.983942 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount980651097.mount: Deactivated successfully.
Nov 8 00:04:35.008882 containerd[1723]: time="2025-11-08T00:04:35.008832894Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 8 00:04:35.013302 containerd[1723]: time="2025-11-08T00:04:35.013239614Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703"
Nov 8 00:04:35.018008 containerd[1723]: time="2025-11-08T00:04:35.017932454Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 8 00:04:35.022455 containerd[1723]: time="2025-11-08T00:04:35.022399893Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 8 00:04:35.023704 containerd[1723]: time="2025-11-08T00:05:35.023150493Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 623.409242ms"
Nov 8 00:04:35.023704 containerd[1723]: time="2025-11-08T00:04:35.023186573Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
Nov 8 00:04:35.023985 containerd[1723]: time="2025-11-08T00:04:35.023959053Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\""
Nov 8 00:04:35.650056 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2029509259.mount: Deactivated successfully.
Nov 8 00:04:42.147304 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7.
Nov 8 00:04:42.153960 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 8 00:04:45.856695 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 8 00:04:45.861409 (kubelet)[2631]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Nov 8 00:04:45.899144 kubelet[2631]: E1108 00:04:45.899071 2631 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Nov 8 00:04:45.902034 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Nov 8 00:04:45.902320 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Nov 8 00:04:52.802913 containerd[1723]: time="2025-11-08T00:04:52.802859002Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 8 00:04:52.806560 containerd[1723]: time="2025-11-08T00:04:52.806166963Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=69465857"
Nov 8 00:04:52.811777 containerd[1723]: time="2025-11-08T00:04:52.810697684Z" level=info msg="ImageCreate event name:\"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 8 00:04:52.819123 containerd[1723]: time="2025-11-08T00:04:52.819063006Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 8 00:04:52.820485 containerd[1723]: time="2025-11-08T00:04:52.819888166Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"70026017\" in 17.795894713s"
Nov 8 00:04:52.820485 containerd[1723]: time="2025-11-08T00:04:52.819920966Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\""
Nov 8 00:04:56.147274 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8.
Nov 8 00:04:56.155084 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 8 00:04:56.510040 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 8 00:04:56.511634 (kubelet)[2693]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Nov 8 00:04:56.552761 kubelet[2693]: E1108 00:04:56.550664 2693 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Nov 8 00:04:56.553107 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Nov 8 00:04:56.553256 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Nov 8 00:04:58.427073 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 8 00:04:58.445252 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 8 00:04:58.475466 systemd[1]: Reloading requested from client PID 2708 ('systemctl') (unit session-9.scope)...
Nov 8 00:04:58.475636 systemd[1]: Reloading...
Nov 8 00:04:58.604821 zram_generator::config[2751]: No configuration found.
Nov 8 00:04:58.713451 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Nov 8 00:04:58.792414 systemd[1]: Reloading finished in 316 ms.
Nov 8 00:04:58.841449 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 8 00:04:58.844265 systemd[1]: kubelet.service: Deactivated successfully.
Nov 8 00:04:58.844638 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 8 00:04:58.850215 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 8 00:04:58.991218 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 8 00:04:59.004146 (kubelet)[2818]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Nov 8 00:04:59.125673 kubelet[2818]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Nov 8 00:04:59.125673 kubelet[2818]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Nov 8 00:04:59.125673 kubelet[2818]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Nov 8 00:04:59.125673 kubelet[2818]: I1108 00:04:59.125198 2818 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Nov 8 00:04:59.616720 kubelet[2818]: I1108 00:04:59.616674 2818 server.go:530] "Kubelet version" kubeletVersion="v1.33.0"
Nov 8 00:04:59.616720 kubelet[2818]: I1108 00:04:59.616709 2818 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Nov 8 00:04:59.616991 kubelet[2818]: I1108 00:04:59.616972 2818 server.go:956] "Client rotation is on, will bootstrap in background"
Nov 8 00:04:59.634937 kubelet[2818]: E1108 00:04:59.634884 2818 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.200.20.44:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.44:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Nov 8 00:04:59.635955 kubelet[2818]: I1108 00:04:59.635790 2818 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Nov 8 00:04:59.648211 kubelet[2818]: E1108 00:04:59.648171 2818 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Nov 8 00:04:59.648363 kubelet[2818]: I1108 00:04:59.648350 2818 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Nov 8 00:04:59.651263 kubelet[2818]: I1108 00:04:59.651237 2818 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Nov 8 00:04:59.652021 kubelet[2818]: I1108 00:04:59.651607 2818 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Nov 8 00:04:59.652021 kubelet[2818]: I1108 00:04:59.651638 2818 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.6-n-32f19bad4d","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Nov 8 00:04:59.652021 kubelet[2818]: I1108 00:04:59.651821 2818 topology_manager.go:138] "Creating topology manager with none policy"
Nov 8 00:04:59.652021 kubelet[2818]: I1108 00:04:59.651830 2818 container_manager_linux.go:303] "Creating device plugin manager"
Nov 8 00:04:59.652021 kubelet[2818]: I1108 00:04:59.651976 2818 state_mem.go:36] "Initialized new in-memory state store"
Nov 8 00:04:59.655070 kubelet[2818]: I1108 00:04:59.655042 2818 kubelet.go:480] "Attempting to sync node with API server"
Nov 8 00:04:59.655177 kubelet[2818]: I1108 00:04:59.655166 2818 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Nov 8 00:04:59.655254 kubelet[2818]: I1108 00:04:59.655244 2818 kubelet.go:386] "Adding apiserver pod source"
Nov 8 00:04:59.656621 kubelet[2818]: I1108 00:04:59.656599 2818 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Nov 8 00:04:59.657809 kubelet[2818]: E1108 00:04:59.657382 2818 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.200.20.44:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.6-n-32f19bad4d&limit=500&resourceVersion=0\": dial tcp 10.200.20.44:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Nov 8 00:04:59.658154 kubelet[2818]: E1108 00:04:59.658123 2818 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.200.20.44:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.44:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Nov 8 00:04:59.658267 kubelet[2818]: I1108 00:04:59.658248 2818 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Nov 8 00:04:59.658870 kubelet[2818]: I1108 00:04:59.658850 2818 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Nov 8 00:04:59.658929 kubelet[2818]: W1108 00:04:59.658909 2818 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Nov 8 00:04:59.662977 kubelet[2818]: I1108 00:04:59.662942 2818 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Nov 8 00:04:59.663093 kubelet[2818]: I1108 00:04:59.662996 2818 server.go:1289] "Started kubelet"
Nov 8 00:04:59.664290 kubelet[2818]: I1108 00:04:59.664252 2818 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Nov 8 00:04:59.665270 kubelet[2818]: I1108 00:04:59.665244 2818 server.go:317] "Adding debug handlers to kubelet server"
Nov 8 00:04:59.666329 kubelet[2818]: I1108 00:04:59.666261 2818 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Nov 8 00:04:59.667099 kubelet[2818]: I1108 00:04:59.666597 2818 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Nov 8 00:04:59.668037 kubelet[2818]: E1108 00:04:59.666735 2818 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.44:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.44:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.3.6-n-32f19bad4d.1875df410c9df5ab default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.6-n-32f19bad4d,UID:ci-4081.3.6-n-32f19bad4d,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.6-n-32f19bad4d,},FirstTimestamp:2025-11-08 00:04:59.662964139 +0000 UTC m=+0.655342113,LastTimestamp:2025-11-08 00:04:59.662964139 +0000 UTC m=+0.655342113,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.6-n-32f19bad4d,}"
Nov 8 00:04:59.669948 kubelet[2818]: I1108 00:04:59.669791 2818 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Nov 8 00:04:59.670583 kubelet[2818]: E1108 00:04:59.670546 2818 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Nov 8 00:04:59.670707 kubelet[2818]: I1108 00:04:59.670689 2818 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Nov 8 00:04:59.673989 kubelet[2818]: E1108 00:04:59.673959 2818 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081.3.6-n-32f19bad4d\" not found"
Nov 8 00:04:59.674057 kubelet[2818]: I1108 00:04:59.673995 2818 volume_manager.go:297] "Starting Kubelet Volume Manager"
Nov 8 00:04:59.674773 kubelet[2818]: I1108 00:04:59.674198 2818 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Nov 8 00:04:59.674773 kubelet[2818]: I1108 00:04:59.674253 2818 reconciler.go:26] "Reconciler: start to sync state"
Nov 8 00:04:59.675014 kubelet[2818]: I1108 00:04:59.674987 2818 factory.go:223] Registration of the systemd container factory successfully
Nov 8 00:04:59.675090 kubelet[2818]: I1108 00:04:59.675070 2818 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Nov 8 00:04:59.675542 kubelet[2818]: E1108 00:04:59.675507 2818 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.200.20.44:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.44:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Nov 8 00:04:59.676428 kubelet[2818]: I1108 00:04:59.676397 2818 factory.go:223] Registration of the containerd container factory successfully
Nov 8 00:04:59.679858 kubelet[2818]: E1108 00:04:59.679728 2818 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.44:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-n-32f19bad4d?timeout=10s\": dial tcp 10.200.20.44:6443: connect: connection refused" interval="200ms"
Nov 8 00:04:59.742241 kubelet[2818]: I1108 00:04:59.742196 2818 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Nov 8 00:04:59.744033 kubelet[2818]: I1108 00:04:59.744003 2818 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Nov 8 00:04:59.744588 kubelet[2818]: I1108 00:04:59.744176 2818 status_manager.go:230] "Starting to sync pod status with apiserver"
Nov 8 00:04:59.744588 kubelet[2818]: I1108 00:04:59.744207 2818 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Nov 8 00:04:59.744588 kubelet[2818]: I1108 00:04:59.744216 2818 kubelet.go:2436] "Starting kubelet main sync loop"
Nov 8 00:04:59.744588 kubelet[2818]: E1108 00:04:59.744263 2818 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Nov 8 00:04:59.746048 kubelet[2818]: E1108 00:04:59.746002 2818 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.200.20.44:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.44:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Nov 8 00:04:59.774093 kubelet[2818]: E1108 00:04:59.774034 2818 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081.3.6-n-32f19bad4d\" not found"
Nov 8 00:04:59.806887 kubelet[2818]: I1108 00:04:59.806852 2818 cpu_manager.go:221] "Starting CPU manager" policy="none"
Nov 8 00:04:59.806887 kubelet[2818]: I1108 00:04:59.806876 2818 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Nov 8 00:04:59.807037 kubelet[2818]: I1108 00:04:59.806901 2818 state_mem.go:36] "Initialized new in-memory state store"
Nov 8 00:04:59.813233 kubelet[2818]: I1108 00:04:59.813204 2818 policy_none.go:49] "None policy: Start"
Nov 8 00:04:59.813233 kubelet[2818]: I1108 00:04:59.813237 2818 memory_manager.go:186] "Starting memorymanager" policy="None"
Nov 8 00:04:59.813335 kubelet[2818]: I1108 00:04:59.813250 2818 state_mem.go:35] "Initializing new in-memory state store"
Nov 8 00:04:59.822366 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Nov 8 00:04:59.835549 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Nov 8 00:04:59.844554 kubelet[2818]: E1108 00:04:59.844527 2818 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Nov 8 00:04:59.847896 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Nov 8 00:04:59.849781 kubelet[2818]: E1108 00:04:59.849231 2818 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Nov 8 00:04:59.849781 kubelet[2818]: I1108 00:04:59.849432 2818 eviction_manager.go:189] "Eviction manager: starting control loop"
Nov 8 00:04:59.849781 kubelet[2818]: I1108 00:04:59.849443 2818 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Nov 8 00:04:59.849781 kubelet[2818]: I1108 00:04:59.849707 2818 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Nov 8 00:04:59.851028 kubelet[2818]: E1108 00:04:59.851000 2818 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Nov 8 00:04:59.851102 kubelet[2818]: E1108 00:04:59.851052 2818 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081.3.6-n-32f19bad4d\" not found"
Nov 8 00:04:59.882914 kubelet[2818]: E1108 00:04:59.881669 2818 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.44:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-n-32f19bad4d?timeout=10s\": dial tcp 10.200.20.44:6443: connect: connection refused" interval="400ms"
Nov 8 00:04:59.951462 kubelet[2818]: I1108 00:04:59.951418 2818 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-32f19bad4d"
Nov 8 00:04:59.951856 kubelet[2818]: E1108 00:04:59.951812 2818 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.44:6443/api/v1/nodes\": dial tcp 10.200.20.44:6443: connect: connection refused" node="ci-4081.3.6-n-32f19bad4d"
Nov 8 00:05:00.058470 systemd[1]: Created slice kubepods-burstable-pod98a643384191c466991d9018d42aa59a.slice - libcontainer container kubepods-burstable-pod98a643384191c466991d9018d42aa59a.slice.
Nov 8 00:05:00.073593 kubelet[2818]: E1108 00:05:00.073550 2818 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-32f19bad4d\" not found" node="ci-4081.3.6-n-32f19bad4d"
Nov 8 00:05:00.075805 kubelet[2818]: I1108 00:05:00.075539 2818 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/98a643384191c466991d9018d42aa59a-kubeconfig\") pod \"kube-scheduler-ci-4081.3.6-n-32f19bad4d\" (UID: \"98a643384191c466991d9018d42aa59a\") " pod="kube-system/kube-scheduler-ci-4081.3.6-n-32f19bad4d"
Nov 8 00:05:00.075805 kubelet[2818]: I1108 00:05:00.075575 2818 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a157b6ff8105ef74e9895953cb065b4d-ca-certs\") pod \"kube-apiserver-ci-4081.3.6-n-32f19bad4d\" (UID: \"a157b6ff8105ef74e9895953cb065b4d\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-32f19bad4d"
Nov 8 00:05:00.075805 kubelet[2818]: I1108 00:05:00.075592 2818 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a157b6ff8105ef74e9895953cb065b4d-k8s-certs\") pod \"kube-apiserver-ci-4081.3.6-n-32f19bad4d\" (UID: \"a157b6ff8105ef74e9895953cb065b4d\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-32f19bad4d"
Nov 8 00:05:00.075805 kubelet[2818]: I1108 00:05:00.075608 2818 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a157b6ff8105ef74e9895953cb065b4d-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.6-n-32f19bad4d\" (UID: \"a157b6ff8105ef74e9895953cb065b4d\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-32f19bad4d"
Nov 8 00:05:00.075805 kubelet[2818]: I1108 00:05:00.075630 2818 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/296aa74478b74de81323d4c9f971fe8b-ca-certs\") pod \"kube-controller-manager-ci-4081.3.6-n-32f19bad4d\" (UID: \"296aa74478b74de81323d4c9f971fe8b\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-32f19bad4d"
Nov 8 00:05:00.076013 kubelet[2818]: I1108 00:05:00.075644 2818 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/296aa74478b74de81323d4c9f971fe8b-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.6-n-32f19bad4d\" (UID: \"296aa74478b74de81323d4c9f971fe8b\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-32f19bad4d"
Nov 8 00:05:00.076013 kubelet[2818]: I1108 00:05:00.075660 2818 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/296aa74478b74de81323d4c9f971fe8b-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.6-n-32f19bad4d\" (UID: \"296aa74478b74de81323d4c9f971fe8b\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-32f19bad4d"
Nov 8 00:05:00.076013 kubelet[2818]: I1108 00:05:00.075674 2818 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/296aa74478b74de81323d4c9f971fe8b-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.6-n-32f19bad4d\" (UID: \"296aa74478b74de81323d4c9f971fe8b\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-32f19bad4d"
Nov 8 00:05:00.076013 kubelet[2818]: I1108 00:05:00.075690 2818 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/296aa74478b74de81323d4c9f971fe8b-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.6-n-32f19bad4d\" (UID: \"296aa74478b74de81323d4c9f971fe8b\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-32f19bad4d"
Nov 8 00:05:00.078394 systemd[1]: Created slice kubepods-burstable-poda157b6ff8105ef74e9895953cb065b4d.slice - libcontainer container kubepods-burstable-poda157b6ff8105ef74e9895953cb065b4d.slice.
Nov 8 00:05:00.081065 kubelet[2818]: E1108 00:05:00.080961 2818 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-32f19bad4d\" not found" node="ci-4081.3.6-n-32f19bad4d"
Nov 8 00:05:00.083369 systemd[1]: Created slice kubepods-burstable-pod296aa74478b74de81323d4c9f971fe8b.slice - libcontainer container kubepods-burstable-pod296aa74478b74de81323d4c9f971fe8b.slice.
Nov 8 00:05:00.085453 kubelet[2818]: E1108 00:05:00.085416 2818 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-32f19bad4d\" not found" node="ci-4081.3.6-n-32f19bad4d"
Nov 8 00:05:00.154298 kubelet[2818]: I1108 00:05:00.154264 2818 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-32f19bad4d"
Nov 8 00:05:00.154691 kubelet[2818]: E1108 00:05:00.154648 2818 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.44:6443/api/v1/nodes\": dial tcp 10.200.20.44:6443: connect: connection refused" node="ci-4081.3.6-n-32f19bad4d"
Nov 8 00:05:00.282767 kubelet[2818]: E1108 00:05:00.282720 2818 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.44:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-n-32f19bad4d?timeout=10s\": dial tcp 10.200.20.44:6443: connect: connection refused" interval="800ms"
Nov 8 00:05:00.375173 containerd[1723]: time="2025-11-08T00:05:00.374863222Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.6-n-32f19bad4d,Uid:98a643384191c466991d9018d42aa59a,Namespace:kube-system,Attempt:0,}"
Nov 8 00:05:00.382614 containerd[1723]: time="2025-11-08T00:05:00.382366785Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.6-n-32f19bad4d,Uid:a157b6ff8105ef74e9895953cb065b4d,Namespace:kube-system,Attempt:0,}"
Nov 8 00:05:00.386387 containerd[1723]: time="2025-11-08T00:05:00.386341386Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.6-n-32f19bad4d,Uid:296aa74478b74de81323d4c9f971fe8b,Namespace:kube-system,Attempt:0,}"
Nov 8 00:05:00.557006 kubelet[2818]: I1108 00:05:00.556548 2818 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-32f19bad4d"
Nov 8 00:05:00.557006 kubelet[2818]: E1108 00:05:00.556898 2818 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.44:6443/api/v1/nodes\": dial tcp 10.200.20.44:6443: connect: connection refused" node="ci-4081.3.6-n-32f19bad4d"
Nov 8 00:05:00.571460 kubelet[2818]: E1108 00:05:00.571419 2818 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.200.20.44:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.6-n-32f19bad4d&limit=500&resourceVersion=0\": dial tcp 10.200.20.44:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Nov 8 00:05:00.662996 kubelet[2818]: E1108 00:05:00.662943 2818 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.200.20.44:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.44:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Nov 8 00:05:00.907484 kubelet[2818]: E1108 00:05:00.907424 2818 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.200.20.44:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.44:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Nov 8 00:05:01.084221 kubelet[2818]: E1108 00:05:01.084170 2818 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.44:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-n-32f19bad4d?timeout=10s\": dial tcp 10.200.20.44:6443: connect: connection refused" interval="1.6s"
Nov 8 00:05:01.098658 kubelet[2818]: E1108 00:05:01.098604 2818 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.200.20.44:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.44:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Nov 8 00:05:01.359268 kubelet[2818]: I1108 00:05:01.359142 2818 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-32f19bad4d"
Nov 8 00:05:01.359817 kubelet[2818]: E1108 00:05:01.359612 2818 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.44:6443/api/v1/nodes\": dial tcp 10.200.20.44:6443: connect: connection refused" node="ci-4081.3.6-n-32f19bad4d"
Nov 8 00:05:01.690351 kubelet[2818]: E1108 00:05:01.690297 2818 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.200.20.44:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.44:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Nov 8 00:05:02.684786 kubelet[2818]: E1108 00:05:02.684726 2818 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.44:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-n-32f19bad4d?timeout=10s\": dial tcp 10.200.20.44:6443: connect: connection refused" interval="3.2s"
Nov 8 00:05:02.714337 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4236844023.mount: Deactivated successfully.
Nov 8 00:05:02.771431 kubelet[2818]: E1108 00:05:02.771383 2818 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.200.20.44:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.44:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Nov 8 00:05:02.944423 containerd[1723]: time="2025-11-08T00:05:02.944298259Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Nov 8 00:05:02.962495 kubelet[2818]: I1108 00:05:02.962130 2818 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-32f19bad4d"
Nov 8 00:05:02.962720 kubelet[2818]: E1108 00:05:02.962698 2818 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.44:6443/api/v1/nodes\": dial tcp 10.200.20.44:6443: connect: connection refused" node="ci-4081.3.6-n-32f19bad4d"
Nov 8 00:05:03.053144 containerd[1723]: time="2025-11-08T00:05:03.053096976Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Nov 8 00:05:03.057900 containerd[1723]: time="2025-11-08T00:05:03.057233377Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Nov 8 00:05:03.101996 containerd[1723]: time="2025-11-08T00:05:03.101947633Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Nov 8 00:05:03.151588 containerd[1723]: time="2025-11-08T00:05:03.150545889Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Nov 8 00:05:03.154133 containerd[1723]: time="2025-11-08T00:05:03.154096250Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173"
Nov 8 00:05:03.196919 containerd[1723]: time="2025-11-08T00:05:03.196775985Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Nov 8 00:05:03.243540 containerd[1723]: time="2025-11-08T00:05:03.243477521Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Nov 8 00:05:03.244630 containerd[1723]: time="2025-11-08T00:05:03.244422601Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 2.861976096s"
Nov 8 00:05:03.245969 containerd[1723]: time="2025-11-08T00:05:03.245940202Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 2.87100154s"
Nov 8 00:05:03.247991 containerd[1723]: time="2025-11-08T00:05:03.247957962Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 2.861549536s"
Nov 8 00:05:03.393925 kubelet[2818]: E1108 00:05:03.393887 2818 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.200.20.44:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.44:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Nov 8 00:05:03.430509 kubelet[2818]: E1108 00:05:03.430450 2818 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.200.20.44:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.6-n-32f19bad4d&limit=500&resourceVersion=0\": dial tcp 10.200.20.44:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Nov 8 00:05:03.703977 kubelet[2818]: E1108 00:05:03.703933 2818 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.200.20.44:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.44:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Nov 8 00:05:05.030958 kubelet[2818]: E1108 00:05:05.030837 2818 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.44:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.44:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.3.6-n-32f19bad4d.1875df410c9df5ab default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.6-n-32f19bad4d,UID:ci-4081.3.6-n-32f19bad4d,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.6-n-32f19bad4d,},FirstTimestamp:2025-11-08 00:04:59.662964139 +0000 UTC m=+0.655342113,LastTimestamp:2025-11-08 00:04:59.662964139 +0000 UTC m=+0.655342113,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.6-n-32f19bad4d,}"
Nov 8 00:05:05.702842 kubelet[2818]: E1108 00:05:05.702800 2818 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.200.20.44:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.44:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Nov 8 00:05:05.885489 kubelet[2818]: E1108 00:05:05.885444 2818 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.44:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-n-32f19bad4d?timeout=10s\": dial tcp 10.200.20.44:6443: connect: connection refused" interval="6.4s"
Nov 8 00:05:06.165260 kubelet[2818]: I1108 00:05:06.164886 2818 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-32f19bad4d"
Nov 8 00:05:06.165260 kubelet[2818]: E1108 00:05:06.165252 2818 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.44:6443/api/v1/nodes\": dial tcp 10.200.20.44:6443: connect: connection refused" node="ci-4081.3.6-n-32f19bad4d"
Nov 8 00:05:06.968705 kubelet[2818]: E1108 00:05:06.968651 2818 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.200.20.44:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.44:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Nov 8 00:05:07.375024 containerd[1723]: time="2025-11-08T00:05:07.374634596Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Nov 8 00:05:07.375024 containerd[1723]: time="2025-11-08T00:05:07.374702196Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Nov 8 00:05:07.375024 containerd[1723]: time="2025-11-08T00:05:07.374718116Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 8 00:05:07.376136 containerd[1723]: time="2025-11-08T00:05:07.375854717Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 8 00:05:07.378518 containerd[1723]: time="2025-11-08T00:05:07.378419598Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Nov 8 00:05:07.379501 containerd[1723]: time="2025-11-08T00:05:07.379302759Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Nov 8 00:05:07.379501 containerd[1723]: time="2025-11-08T00:05:07.379324599Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 8 00:05:07.379782 containerd[1723]: time="2025-11-08T00:05:07.379486119Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Nov 8 00:05:07.379782 containerd[1723]: time="2025-11-08T00:05:07.379551559Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Nov 8 00:05:07.379782 containerd[1723]: time="2025-11-08T00:05:07.379570159Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 8 00:05:07.380852 containerd[1723]: time="2025-11-08T00:05:07.380770760Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 8 00:05:07.381027 containerd[1723]: time="2025-11-08T00:05:07.380914120Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 8 00:05:07.421966 systemd[1]: Started cri-containerd-a04e31c1cb8d8d2304deb1b5d8b7b851cb9c7aca815a006087a2e39d63e068c2.scope - libcontainer container a04e31c1cb8d8d2304deb1b5d8b7b851cb9c7aca815a006087a2e39d63e068c2.
Nov 8 00:05:07.426919 systemd[1]: Started cri-containerd-1ebb48b2a83383227e7340933451d558a4b2f02f53e7cbfff48eeb09a59a1127.scope - libcontainer container 1ebb48b2a83383227e7340933451d558a4b2f02f53e7cbfff48eeb09a59a1127. Nov 8 00:05:07.428288 systemd[1]: Started cri-containerd-4e5ec9ea5936378fcda70f70e92ad286fd1b658d14b50978ed077cf0c40f8a57.scope - libcontainer container 4e5ec9ea5936378fcda70f70e92ad286fd1b658d14b50978ed077cf0c40f8a57. Nov 8 00:05:07.467019 containerd[1723]: time="2025-11-08T00:05:07.466977261Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.6-n-32f19bad4d,Uid:a157b6ff8105ef74e9895953cb065b4d,Namespace:kube-system,Attempt:0,} returns sandbox id \"1ebb48b2a83383227e7340933451d558a4b2f02f53e7cbfff48eeb09a59a1127\"" Nov 8 00:05:07.481376 containerd[1723]: time="2025-11-08T00:05:07.481195551Z" level=info msg="CreateContainer within sandbox \"1ebb48b2a83383227e7340933451d558a4b2f02f53e7cbfff48eeb09a59a1127\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Nov 8 00:05:07.496106 containerd[1723]: time="2025-11-08T00:05:07.496062602Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.6-n-32f19bad4d,Uid:296aa74478b74de81323d4c9f971fe8b,Namespace:kube-system,Attempt:0,} returns sandbox id \"4e5ec9ea5936378fcda70f70e92ad286fd1b658d14b50978ed077cf0c40f8a57\"" Nov 8 00:05:07.498204 containerd[1723]: time="2025-11-08T00:05:07.498169163Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.6-n-32f19bad4d,Uid:98a643384191c466991d9018d42aa59a,Namespace:kube-system,Attempt:0,} returns sandbox id \"a04e31c1cb8d8d2304deb1b5d8b7b851cb9c7aca815a006087a2e39d63e068c2\"" Nov 8 00:05:07.516154 containerd[1723]: time="2025-11-08T00:05:07.516008136Z" level=info msg="CreateContainer within sandbox \"4e5ec9ea5936378fcda70f70e92ad286fd1b658d14b50978ed077cf0c40f8a57\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Nov 8 00:05:07.571418 kubelet[2818]: E1108 00:05:07.571366 2818 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.200.20.44:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.6-n-32f19bad4d&limit=500&resourceVersion=0\": dial tcp 10.200.20.44:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Nov 8 00:05:07.577186 containerd[1723]: time="2025-11-08T00:05:07.577066779Z" level=info msg="CreateContainer within sandbox \"a04e31c1cb8d8d2304deb1b5d8b7b851cb9c7aca815a006087a2e39d63e068c2\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Nov 8 00:05:07.839350 kubelet[2818]: E1108 00:05:07.839301 2818 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.200.20.44:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.44:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Nov 8 00:05:07.933556 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3028975448.mount: Deactivated successfully. 
Nov 8 00:05:08.020976 containerd[1723]: time="2025-11-08T00:05:08.020804334Z" level=info msg="CreateContainer within sandbox \"1ebb48b2a83383227e7340933451d558a4b2f02f53e7cbfff48eeb09a59a1127\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"b41cb7074e2bb9ca4b6ced291992f1a38abebf547831deaa011bbcec2c3bd8fa\"" Nov 8 00:05:08.022813 containerd[1723]: time="2025-11-08T00:05:08.021628694Z" level=info msg="StartContainer for \"b41cb7074e2bb9ca4b6ced291992f1a38abebf547831deaa011bbcec2c3bd8fa\"" Nov 8 00:05:08.042925 systemd[1]: Started cri-containerd-b41cb7074e2bb9ca4b6ced291992f1a38abebf547831deaa011bbcec2c3bd8fa.scope - libcontainer container b41cb7074e2bb9ca4b6ced291992f1a38abebf547831deaa011bbcec2c3bd8fa. Nov 8 00:05:08.116378 containerd[1723]: time="2025-11-08T00:05:08.116257361Z" level=info msg="StartContainer for \"b41cb7074e2bb9ca4b6ced291992f1a38abebf547831deaa011bbcec2c3bd8fa\" returns successfully" Nov 8 00:05:08.225051 containerd[1723]: time="2025-11-08T00:05:08.225004438Z" level=info msg="CreateContainer within sandbox \"4e5ec9ea5936378fcda70f70e92ad286fd1b658d14b50978ed077cf0c40f8a57\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"4a7d7bad48d6b1a56017e0074a37f9882cdf880826240e68c1f5aca9ab557f21\"" Nov 8 00:05:08.225499 containerd[1723]: time="2025-11-08T00:05:08.225477519Z" level=info msg="StartContainer for \"4a7d7bad48d6b1a56017e0074a37f9882cdf880826240e68c1f5aca9ab557f21\"" Nov 8 00:05:08.248982 systemd[1]: Started cri-containerd-4a7d7bad48d6b1a56017e0074a37f9882cdf880826240e68c1f5aca9ab557f21.scope - libcontainer container 4a7d7bad48d6b1a56017e0074a37f9882cdf880826240e68c1f5aca9ab557f21. Nov 8 00:05:08.718925 containerd[1723]: time="2025-11-08T00:05:08.718813108Z" level=info msg="StartContainer for \"4a7d7bad48d6b1a56017e0074a37f9882cdf880826240e68c1f5aca9ab557f21\" returns successfully" Nov 8 00:05:08.718925 containerd[1723]: time="2025-11-08T00:05:08.718819348Z" level=info msg="CreateContainer within sandbox \"a04e31c1cb8d8d2304deb1b5d8b7b851cb9c7aca815a006087a2e39d63e068c2\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"0bc169416f9183e6ccc1d2faeb2cd05f0daee39ee4c66774f689c835b76e0fba\"" Nov 8 00:05:08.721460 containerd[1723]: time="2025-11-08T00:05:08.721375070Z" level=info msg="StartContainer for \"0bc169416f9183e6ccc1d2faeb2cd05f0daee39ee4c66774f689c835b76e0fba\"" Nov 8 00:05:08.756978 systemd[1]: Started cri-containerd-0bc169416f9183e6ccc1d2faeb2cd05f0daee39ee4c66774f689c835b76e0fba.scope - libcontainer container 0bc169416f9183e6ccc1d2faeb2cd05f0daee39ee4c66774f689c835b76e0fba. 
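The `No need to create a mirror pod` errors that follow are the kubelet noticing that its own Node object does not exist yet, even though the control-plane static pods are already running. A sketch of the corresponding lookup with client-go (the kubeconfig path is hypothetical; the node name and the not-found state come from the log):

```go
// Check whether the Node object the kubelet is waiting on exists yet,
// distinguishing "not registered" from "apiserver unreachable".
package main

import (
	"context"
	"fmt"
	"log"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Hypothetical kubeconfig path; substitute the node's real credentials.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/kubelet.conf")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	_, err = cs.CoreV1().Nodes().Get(context.Background(),
		"ci-4081.3.6-n-32f19bad4d", metav1.GetOptions{})
	switch {
	case err == nil:
		fmt.Println("node object exists")
	case apierrors.IsNotFound(err): // the "not found" state logged above
		fmt.Println("node not registered yet")
	default:
		fmt.Println("apiserver unreachable:", err) // e.g. connection refused
	}
}
```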
Nov 8 00:05:08.772264 kubelet[2818]: E1108 00:05:08.770471 2818 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-32f19bad4d\" not found" node="ci-4081.3.6-n-32f19bad4d" Nov 8 00:05:08.772897 kubelet[2818]: E1108 00:05:08.772683 2818 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-32f19bad4d\" not found" node="ci-4081.3.6-n-32f19bad4d" Nov 8 00:05:08.893883 containerd[1723]: time="2025-11-08T00:05:08.893835192Z" level=info msg="StartContainer for \"0bc169416f9183e6ccc1d2faeb2cd05f0daee39ee4c66774f689c835b76e0fba\" returns successfully" Nov 8 00:05:09.774197 kubelet[2818]: E1108 00:05:09.774161 2818 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-32f19bad4d\" not found" node="ci-4081.3.6-n-32f19bad4d" Nov 8 00:05:09.774546 kubelet[2818]: E1108 00:05:09.774483 2818 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-32f19bad4d\" not found" node="ci-4081.3.6-n-32f19bad4d" Nov 8 00:05:09.775783 kubelet[2818]: E1108 00:05:09.774653 2818 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-32f19bad4d\" not found" node="ci-4081.3.6-n-32f19bad4d" Nov 8 00:05:09.851554 kubelet[2818]: E1108 00:05:09.851487 2818 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081.3.6-n-32f19bad4d\" not found" Nov 8 00:05:10.674183 kubelet[2818]: E1108 00:05:10.674142 2818 csi_plugin.go:397] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ci-4081.3.6-n-32f19bad4d" not found Nov 8 00:05:10.776590 kubelet[2818]: E1108 00:05:10.776255 2818 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-32f19bad4d\" not found" node="ci-4081.3.6-n-32f19bad4d" Nov 8 00:05:10.776590 kubelet[2818]: E1108 00:05:10.776484 2818 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-32f19bad4d\" not found" node="ci-4081.3.6-n-32f19bad4d" Nov 8 00:05:11.045270 kubelet[2818]: E1108 00:05:11.045126 2818 csi_plugin.go:397] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ci-4081.3.6-n-32f19bad4d" not found Nov 8 00:05:11.492081 kubelet[2818]: E1108 00:05:11.492041 2818 csi_plugin.go:397] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ci-4081.3.6-n-32f19bad4d" not found Nov 8 00:05:11.777353 kubelet[2818]: E1108 00:05:11.776844 2818 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-32f19bad4d\" not found" node="ci-4081.3.6-n-32f19bad4d" Nov 8 00:05:12.294072 kubelet[2818]: E1108 00:05:12.294013 2818 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081.3.6-n-32f19bad4d\" not found" node="ci-4081.3.6-n-32f19bad4d" Nov 8 00:05:12.417028 kubelet[2818]: E1108 00:05:12.416996 2818 csi_plugin.go:397] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ci-4081.3.6-n-32f19bad4d" not found
Nov 8 00:05:12.568204 kubelet[2818]: I1108 00:05:12.567768 2818 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-32f19bad4d" Nov 8 00:05:12.583255 kubelet[2818]: I1108 00:05:12.582268 2818 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081.3.6-n-32f19bad4d" Nov 8 00:05:12.583255 kubelet[2818]: E1108 00:05:12.582307 2818 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4081.3.6-n-32f19bad4d\": node \"ci-4081.3.6-n-32f19bad4d\" not found" Nov 8 00:05:12.592203 kubelet[2818]: E1108 00:05:12.592170 2818 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081.3.6-n-32f19bad4d\" not found" Nov 8 00:05:12.693301 kubelet[2818]: E1108 00:05:12.693256 2818 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081.3.6-n-32f19bad4d\" not found" Nov 8 00:05:12.794231 kubelet[2818]: E1108 00:05:12.794189 2818 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081.3.6-n-32f19bad4d\" not found" Nov 8 00:05:12.895089 kubelet[2818]: E1108 00:05:12.894988 2818 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081.3.6-n-32f19bad4d\" not found" Nov 8 00:05:12.995246 kubelet[2818]: E1108 00:05:12.995206 2818 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081.3.6-n-32f19bad4d\" not found" Nov 8 00:05:13.095849 kubelet[2818]: E1108 00:05:13.095809 2818 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081.3.6-n-32f19bad4d\" not found" Nov 8 00:05:13.124160 systemd[1]: Reloading requested from client PID 3102 ('systemctl') (unit session-9.scope)... Nov 8 00:05:13.124450 systemd[1]: Reloading... Nov 8 00:05:13.196731 kubelet[2818]: E1108 00:05:13.196692 2818 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081.3.6-n-32f19bad4d\" not found" Nov 8 00:05:13.220796 zram_generator::config[3142]: No configuration found. Nov 8 00:05:13.296896 kubelet[2818]: E1108 00:05:13.296833 2818 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081.3.6-n-32f19bad4d\" not found" Nov 8 00:05:13.339107 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 8 00:05:13.397597 kubelet[2818]: E1108 00:05:13.397558 2818 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081.3.6-n-32f19bad4d\" not found" Nov 8 00:05:13.433599 systemd[1]: Reloading finished in 308 ms. Nov 8 00:05:13.467104 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Nov 8 00:05:13.480814 systemd[1]: kubelet.service: Deactivated successfully. Nov 8 00:05:13.481370 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 00:05:13.488017 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 8 00:05:13.668919 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
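For correlating entries like the reload above, note that the journal prefix (`Nov 8 00:05:13.124450`) omits the year but parses cleanly with Go's reference layout. A small standard-library sketch using the two `Reloading` timestamps from the log:

```go
// Parse the journal-style timestamps used throughout this log and take a diff.
// The prefix carries no year, so parsed values land in year 0; differences
// within one day are still exact.
package main

import (
	"fmt"
	"log"
	"time"
)

func main() {
	const layout = "Jan 2 15:04:05.000000"
	start, err := time.Parse(layout, "Nov 8 00:05:13.124450") // "Reloading..."
	if err != nil {
		log.Fatal(err)
	}
	end, err := time.Parse(layout, "Nov 8 00:05:13.433599") // "Reloading finished"
	if err != nil {
		log.Fatal(err)
	}
	// Prints 309.149ms, consistent with "Reloading finished in 308 ms".
	fmt.Println(end.Sub(start))
}
```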
Nov 8 00:05:13.680101 (kubelet)[3207]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 8 00:05:13.727167 kubelet[3207]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 8 00:05:13.728447 kubelet[3207]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 8 00:05:13.728447 kubelet[3207]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 8 00:05:13.728554 kubelet[3207]: I1108 00:05:13.728507 3207 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 8 00:05:13.736708 kubelet[3207]: I1108 00:05:13.736402 3207 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Nov 8 00:05:13.736708 kubelet[3207]: I1108 00:05:13.736432 3207 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 8 00:05:13.737055 kubelet[3207]: I1108 00:05:13.737037 3207 server.go:956] "Client rotation is on, will bootstrap in background" Nov 8 00:05:13.738399 kubelet[3207]: I1108 00:05:13.738379 3207 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Nov 8 00:05:13.740766 kubelet[3207]: I1108 00:05:13.740732 3207 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 8 00:05:13.744790 kubelet[3207]: E1108 00:05:13.744621 3207 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Nov 8 00:05:13.744790 kubelet[3207]: I1108 00:05:13.744665 3207 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Nov 8 00:05:13.751505 kubelet[3207]: I1108 00:05:13.751474 3207 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Nov 8 00:05:13.751703 kubelet[3207]: I1108 00:05:13.751678 3207 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 8 00:05:13.754672 kubelet[3207]: I1108 00:05:13.751705 3207 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.6-n-32f19bad4d","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 8 00:05:13.754672 kubelet[3207]: I1108 00:05:13.752911 3207 topology_manager.go:138] "Creating topology manager with none policy" Nov 8 00:05:13.754672 kubelet[3207]: I1108 00:05:13.752925 3207 container_manager_linux.go:303] "Creating device plugin manager" Nov 8 00:05:13.754672 kubelet[3207]: I1108 00:05:13.752974 3207 state_mem.go:36] "Initialized new in-memory state store" Nov 8 00:05:13.754672 kubelet[3207]: I1108 00:05:13.753118 3207 kubelet.go:480] "Attempting to sync node with API server" Nov 8 00:05:13.754978 kubelet[3207]: I1108 00:05:13.753132 3207 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 8 00:05:13.754978 kubelet[3207]: I1108 00:05:13.753156 3207 kubelet.go:386] "Adding apiserver pod source" Nov 8 00:05:13.754978 kubelet[3207]: I1108 00:05:13.753168 3207 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 8 00:05:13.758711 kubelet[3207]: I1108 00:05:13.757204 3207 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Nov 8 00:05:13.760418 kubelet[3207]: I1108 00:05:13.759563 3207 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Nov 8 00:05:13.765992 kubelet[3207]: I1108 00:05:13.765967 3207 watchdog_linux.go:99] "Systemd watchdog is not enabled" Nov 8 00:05:13.766136 kubelet[3207]: I1108 00:05:13.766127 3207 server.go:1289] "Started kubelet" Nov 8 00:05:13.769181 kubelet[3207]: I1108 00:05:13.769151 3207 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 8 00:05:13.786306 kubelet[3207]: I1108 00:05:13.786256 
3207 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Nov 8 00:05:13.788539 kubelet[3207]: I1108 00:05:13.788504 3207 server.go:317] "Adding debug handlers to kubelet server" Nov 8 00:05:13.793503 kubelet[3207]: I1108 00:05:13.793447 3207 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 8 00:05:13.793660 kubelet[3207]: I1108 00:05:13.793643 3207 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 8 00:05:13.793898 kubelet[3207]: I1108 00:05:13.793870 3207 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 8 00:05:13.796002 kubelet[3207]: I1108 00:05:13.795894 3207 volume_manager.go:297] "Starting Kubelet Volume Manager" Nov 8 00:05:13.798917 kubelet[3207]: I1108 00:05:13.798897 3207 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Nov 8 00:05:13.799032 kubelet[3207]: I1108 00:05:13.799014 3207 reconciler.go:26] "Reconciler: start to sync state" Nov 8 00:05:13.803045 kubelet[3207]: I1108 00:05:13.802996 3207 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Nov 8 00:05:13.805851 kubelet[3207]: I1108 00:05:13.805824 3207 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Nov 8 00:05:13.805851 kubelet[3207]: I1108 00:05:13.805849 3207 status_manager.go:230] "Starting to sync pod status with apiserver" Nov 8 00:05:13.805966 kubelet[3207]: I1108 00:05:13.805869 3207 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Nov 8 00:05:13.805966 kubelet[3207]: I1108 00:05:13.805875 3207 kubelet.go:2436] "Starting kubelet main sync loop" Nov 8 00:05:13.805966 kubelet[3207]: E1108 00:05:13.805914 3207 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 8 00:05:13.814926 kubelet[3207]: I1108 00:05:13.814884 3207 factory.go:223] Registration of the containerd container factory successfully Nov 8 00:05:13.814926 kubelet[3207]: I1108 00:05:13.814911 3207 factory.go:223] Registration of the systemd container factory successfully Nov 8 00:05:13.815074 kubelet[3207]: I1108 00:05:13.815007 3207 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 8 00:05:13.819007 kubelet[3207]: E1108 00:05:13.818972 3207 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 8 00:05:13.864532 kubelet[3207]: I1108 00:05:13.864505 3207 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 8 00:05:13.864532 kubelet[3207]: I1108 00:05:13.864523 3207 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 8 00:05:13.864532 kubelet[3207]: I1108 00:05:13.864543 3207 state_mem.go:36] "Initialized new in-memory state store" Nov 8 00:05:13.864715 kubelet[3207]: I1108 00:05:13.864671 3207 state_mem.go:88] "Updated default CPUSet" cpuSet="" Nov 8 00:05:13.864715 kubelet[3207]: I1108 00:05:13.864682 3207 state_mem.go:96] "Updated CPUSet assignments" assignments={} Nov 8 00:05:13.864715 kubelet[3207]: I1108 00:05:13.864698 3207 policy_none.go:49] "None policy: Start" Nov 8 00:05:13.864715 kubelet[3207]: I1108 00:05:13.864706 3207 memory_manager.go:186] "Starting memorymanager" policy="None" Nov 8 00:05:13.864715 kubelet[3207]: I1108 00:05:13.864713 3207 state_mem.go:35] "Initializing new in-memory state store" Nov 8 00:05:13.864835 kubelet[3207]: I1108 00:05:13.864808 3207 state_mem.go:75] "Updated machine memory state" Nov 8 00:05:13.869258 kubelet[3207]: E1108 00:05:13.869232 3207 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Nov 8 00:05:13.869419 kubelet[3207]: I1108 00:05:13.869402 3207 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 8 00:05:13.869451 kubelet[3207]: I1108 00:05:13.869418 3207 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 8 00:05:13.871022 kubelet[3207]: I1108 00:05:13.871002 3207 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 8 00:05:13.873317 kubelet[3207]: E1108 00:05:13.873131 3207 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Nov 8 00:05:13.906838 kubelet[3207]: I1108 00:05:13.906806 3207 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-32f19bad4d" Nov 8 00:05:13.908864 kubelet[3207]: I1108 00:05:13.907219 3207 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.6-n-32f19bad4d" Nov 8 00:05:13.908864 kubelet[3207]: I1108 00:05:13.906873 3207 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.6-n-32f19bad4d" Nov 8 00:05:17.164163 kubelet[3207]: I1108 00:05:13.931308 3207 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Nov 8 00:05:17.164163 kubelet[3207]: I1108 00:05:13.937891 3207 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Nov 8 00:05:17.164163 kubelet[3207]: I1108 00:05:13.937891 3207 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Nov 8 00:05:17.164163 kubelet[3207]: I1108 00:05:13.976736 3207 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-32f19bad4d" Nov 8 00:05:17.164163 kubelet[3207]: I1108 00:05:13.994180 3207 kubelet_node_status.go:124] "Node was previously registered" node="ci-4081.3.6-n-32f19bad4d" Nov 8 00:05:17.164163 kubelet[3207]: I1108 00:05:14.100410 3207 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/296aa74478b74de81323d4c9f971fe8b-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.6-n-32f19bad4d\" (UID: \"296aa74478b74de81323d4c9f971fe8b\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-32f19bad4d" Nov 8 00:05:17.164163 kubelet[3207]: I1108 00:05:14.100446 3207 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/296aa74478b74de81323d4c9f971fe8b-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.6-n-32f19bad4d\" (UID: \"296aa74478b74de81323d4c9f971fe8b\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-32f19bad4d" Nov 8 00:05:17.164163 kubelet[3207]: I1108 00:05:14.100462 3207 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/296aa74478b74de81323d4c9f971fe8b-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.6-n-32f19bad4d\" (UID: \"296aa74478b74de81323d4c9f971fe8b\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-32f19bad4d" Nov 8 00:05:17.164359 kubelet[3207]: I1108 00:05:14.100481 3207 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/296aa74478b74de81323d4c9f971fe8b-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.6-n-32f19bad4d\" (UID: \"296aa74478b74de81323d4c9f971fe8b\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-32f19bad4d" Nov 8 00:05:17.164359 kubelet[3207]: I1108 00:05:14.100508 3207 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/98a643384191c466991d9018d42aa59a-kubeconfig\") pod \"kube-scheduler-ci-4081.3.6-n-32f19bad4d\" (UID: \"98a643384191c466991d9018d42aa59a\") " pod="kube-system/kube-scheduler-ci-4081.3.6-n-32f19bad4d" Nov 8 00:05:17.164359 kubelet[3207]: I1108 00:05:14.100533 3207 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a157b6ff8105ef74e9895953cb065b4d-ca-certs\") pod \"kube-apiserver-ci-4081.3.6-n-32f19bad4d\" (UID: \"a157b6ff8105ef74e9895953cb065b4d\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-32f19bad4d" Nov 8 00:05:17.164359 kubelet[3207]: I1108 00:05:14.100563 3207 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/296aa74478b74de81323d4c9f971fe8b-ca-certs\") pod \"kube-controller-manager-ci-4081.3.6-n-32f19bad4d\" (UID: \"296aa74478b74de81323d4c9f971fe8b\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-32f19bad4d" Nov 8 00:05:17.164359 kubelet[3207]: I1108 00:05:14.100594 3207 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a157b6ff8105ef74e9895953cb065b4d-k8s-certs\") pod \"kube-apiserver-ci-4081.3.6-n-32f19bad4d\" (UID: \"a157b6ff8105ef74e9895953cb065b4d\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-32f19bad4d" Nov 8 00:05:17.164474 kubelet[3207]: I1108 00:05:14.100613 3207 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a157b6ff8105ef74e9895953cb065b4d-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.6-n-32f19bad4d\" (UID: \"a157b6ff8105ef74e9895953cb065b4d\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-32f19bad4d" Nov 8 00:05:17.164474 kubelet[3207]: I1108 00:05:14.754038 3207 apiserver.go:52] "Watching apiserver" Nov 8 00:05:17.164474 kubelet[3207]: I1108 00:05:14.799530 3207 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Nov 8 00:05:17.164474 kubelet[3207]: I1108 00:05:14.824499 3207 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-32f19bad4d" podStartSLOduration=1.824484117 podStartE2EDuration="1.824484117s" podCreationTimestamp="2025-11-08 00:05:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 00:05:14.824366197 +0000 UTC m=+1.140259267" watchObservedRunningTime="2025-11-08 00:05:14.824484117 +0000 UTC m=+1.140377187" Nov 8 00:05:17.164474 kubelet[3207]: I1108 00:05:14.842443 3207 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081.3.6-n-32f19bad4d" podStartSLOduration=1.842425078 podStartE2EDuration="1.842425078s" podCreationTimestamp="2025-11-08 00:05:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 00:05:14.840452318 +0000 UTC m=+1.156345388" watchObservedRunningTime="2025-11-08 00:05:14.842425078 +0000 UTC m=+1.158318148" Nov 8 00:05:17.164621 kubelet[3207]: I1108 00:05:14.860284 3207 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081.3.6-n-32f19bad4d" podStartSLOduration=1.85938852 
podStartE2EDuration="1.85938852s" podCreationTimestamp="2025-11-08 00:05:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 00:05:14.8585086 +0000 UTC m=+1.174401670" watchObservedRunningTime="2025-11-08 00:05:14.85938852 +0000 UTC m=+1.175281590" Nov 8 00:05:17.164621 kubelet[3207]: I1108 00:05:17.164215 3207 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081.3.6-n-32f19bad4d" Nov 8 00:05:18.319467 kubelet[3207]: I1108 00:05:18.319301 3207 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Nov 8 00:05:18.320027 kubelet[3207]: I1108 00:05:18.319826 3207 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Nov 8 00:05:18.320060 containerd[1723]: time="2025-11-08T00:05:18.319597550Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Nov 8 00:05:19.092766 systemd[1]: Created slice kubepods-besteffort-podb2c53ebb_9ba7_4edb_ae1d_2d615442ea22.slice - libcontainer container kubepods-besteffort-podb2c53ebb_9ba7_4edb_ae1d_2d615442ea22.slice. Nov 8 00:05:19.127786 kubelet[3207]: I1108 00:05:19.126803 3207 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/b2c53ebb-9ba7-4edb-ae1d-2d615442ea22-kube-proxy\") pod \"kube-proxy-ddqch\" (UID: \"b2c53ebb-9ba7-4edb-ae1d-2d615442ea22\") " pod="kube-system/kube-proxy-ddqch" Nov 8 00:05:19.127786 kubelet[3207]: I1108 00:05:19.126854 3207 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b2c53ebb-9ba7-4edb-ae1d-2d615442ea22-lib-modules\") pod \"kube-proxy-ddqch\" (UID: \"b2c53ebb-9ba7-4edb-ae1d-2d615442ea22\") " pod="kube-system/kube-proxy-ddqch" Nov 8 00:05:19.127786 kubelet[3207]: I1108 00:05:19.126879 3207 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wbmlh\" (UniqueName: \"kubernetes.io/projected/b2c53ebb-9ba7-4edb-ae1d-2d615442ea22-kube-api-access-wbmlh\") pod \"kube-proxy-ddqch\" (UID: \"b2c53ebb-9ba7-4edb-ae1d-2d615442ea22\") " pod="kube-system/kube-proxy-ddqch" Nov 8 00:05:19.127786 kubelet[3207]: I1108 00:05:19.126912 3207 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b2c53ebb-9ba7-4edb-ae1d-2d615442ea22-xtables-lock\") pod \"kube-proxy-ddqch\" (UID: \"b2c53ebb-9ba7-4edb-ae1d-2d615442ea22\") " pod="kube-system/kube-proxy-ddqch" Nov 8 00:05:19.290465 systemd[1]: Created slice kubepods-besteffort-pod04e8151c_6359_45cb_8c96_d3422529aaee.slice - libcontainer container kubepods-besteffort-pod04e8151c_6359_45cb_8c96_d3422529aaee.slice. 
Nov 8 00:05:19.327893 kubelet[3207]: I1108 00:05:19.327850 3207 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2cjqn\" (UniqueName: \"kubernetes.io/projected/04e8151c-6359-45cb-8c96-d3422529aaee-kube-api-access-2cjqn\") pod \"tigera-operator-7dcd859c48-dvxtb\" (UID: \"04e8151c-6359-45cb-8c96-d3422529aaee\") " pod="tigera-operator/tigera-operator-7dcd859c48-dvxtb" Nov 8 00:05:19.327893 kubelet[3207]: I1108 00:05:19.327894 3207 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/04e8151c-6359-45cb-8c96-d3422529aaee-var-lib-calico\") pod \"tigera-operator-7dcd859c48-dvxtb\" (UID: \"04e8151c-6359-45cb-8c96-d3422529aaee\") " pod="tigera-operator/tigera-operator-7dcd859c48-dvxtb" Nov 8 00:05:19.399980 containerd[1723]: time="2025-11-08T00:05:19.399893610Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-ddqch,Uid:b2c53ebb-9ba7-4edb-ae1d-2d615442ea22,Namespace:kube-system,Attempt:0,}" Nov 8 00:05:19.594346 containerd[1723]: time="2025-11-08T00:05:19.594301675Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-dvxtb,Uid:04e8151c-6359-45cb-8c96-d3422529aaee,Namespace:tigera-operator,Attempt:0,}" Nov 8 00:05:19.632017 containerd[1723]: time="2025-11-08T00:05:19.631926720Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:05:19.632348 containerd[1723]: time="2025-11-08T00:05:19.632219040Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:05:19.632348 containerd[1723]: time="2025-11-08T00:05:19.632268520Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:05:19.632404 containerd[1723]: time="2025-11-08T00:05:19.632371960Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:05:19.651920 systemd[1]: Started cri-containerd-46ed03f74434dc1aa5c280204cd781a03721933f8b3ecbc5980288d84a05a582.scope - libcontainer container 46ed03f74434dc1aa5c280204cd781a03721933f8b3ecbc5980288d84a05a582. Nov 8 00:05:19.674884 containerd[1723]: time="2025-11-08T00:05:19.674843045Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-ddqch,Uid:b2c53ebb-9ba7-4edb-ae1d-2d615442ea22,Namespace:kube-system,Attempt:0,} returns sandbox id \"46ed03f74434dc1aa5c280204cd781a03721933f8b3ecbc5980288d84a05a582\"" Nov 8 00:05:19.719073 containerd[1723]: time="2025-11-08T00:05:19.719028371Z" level=info msg="CreateContainer within sandbox \"46ed03f74434dc1aa5c280204cd781a03721933f8b3ecbc5980288d84a05a582\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Nov 8 00:05:20.024433 containerd[1723]: time="2025-11-08T00:05:20.023964450Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:05:20.024433 containerd[1723]: time="2025-11-08T00:05:20.024019370Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:05:20.024433 containerd[1723]: time="2025-11-08T00:05:20.024032730Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 8 00:05:20.024433 containerd[1723]: time="2025-11-08T00:05:20.024109610Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:05:20.040081 systemd[1]: Started cri-containerd-8fdd094c3178d0fddf97c7ee4f7f8044f5c0c93c029c3d4076697528ab1a487a.scope - libcontainer container 8fdd094c3178d0fddf97c7ee4f7f8044f5c0c93c029c3d4076697528ab1a487a. Nov 8 00:05:20.073005 containerd[1723]: time="2025-11-08T00:05:20.072965376Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-dvxtb,Uid:04e8151c-6359-45cb-8c96-d3422529aaee,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"8fdd094c3178d0fddf97c7ee4f7f8044f5c0c93c029c3d4076697528ab1a487a\"" Nov 8 00:05:20.568795 containerd[1723]: time="2025-11-08T00:05:20.075670777Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Nov 8 00:05:20.610743 containerd[1723]: time="2025-11-08T00:05:20.610695206Z" level=info msg="CreateContainer within sandbox \"46ed03f74434dc1aa5c280204cd781a03721933f8b3ecbc5980288d84a05a582\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"3c0b5e5f010d30be1f70e4c51292ae93716dc9133c5b8ccff247c32ea64f277d\"" Nov 8 00:05:20.612802 containerd[1723]: time="2025-11-08T00:05:20.612062206Z" level=info msg="StartContainer for \"3c0b5e5f010d30be1f70e4c51292ae93716dc9133c5b8ccff247c32ea64f277d\"" Nov 8 00:05:20.640956 systemd[1]: Started cri-containerd-3c0b5e5f010d30be1f70e4c51292ae93716dc9133c5b8ccff247c32ea64f277d.scope - libcontainer container 3c0b5e5f010d30be1f70e4c51292ae93716dc9133c5b8ccff247c32ea64f277d. Nov 8 00:05:20.672716 containerd[1723]: time="2025-11-08T00:05:20.672583094Z" level=info msg="StartContainer for \"3c0b5e5f010d30be1f70e4c51292ae93716dc9133c5b8ccff247c32ea64f277d\" returns successfully" Nov 8 00:05:20.887143 kubelet[3207]: I1108 00:05:20.886578 3207 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-ddqch" podStartSLOduration=1.8864678019999999 podStartE2EDuration="1.886467802s" podCreationTimestamp="2025-11-08 00:05:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 00:05:20.886149322 +0000 UTC m=+7.202042392" watchObservedRunningTime="2025-11-08 00:05:20.886467802 +0000 UTC m=+7.202360832" Nov 8 00:05:23.431387 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3855661724.mount: Deactivated successfully.
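The `podStartSLOduration=1.886467802` figure in the kube-proxy entry above is essentially `observedRunningTime - podCreationTimestamp`. A standard-library sketch recomputing it from the two timestamps in that entry (the monotonic `m=+...` suffix must be dropped before parsing):

```go
// Recompute the pod startup duration from the timestamps logged above.
// The fields use Go's time.Time String() format, so its layout parses them.
package main

import (
	"fmt"
	"log"
	"time"
)

func main() {
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	created, err := time.Parse(layout, "2025-11-08 00:05:19 +0000 UTC")
	if err != nil {
		log.Fatal(err)
	}
	running, err := time.Parse(layout, "2025-11-08 00:05:20.886149322 +0000 UTC")
	if err != nil {
		log.Fatal(err)
	}
	// Prints 1.886149322s, within a fraction of a millisecond of the
	// logged podStartSLOduration=1.886467802.
	fmt.Println(running.Sub(created))
}
```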
Nov 8 00:05:25.648398 containerd[1723]: time="2025-11-08T00:05:25.648344922Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:05:25.652659 containerd[1723]: time="2025-11-08T00:05:25.652597723Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=22152004" Nov 8 00:05:25.657087 containerd[1723]: time="2025-11-08T00:05:25.657037163Z" level=info msg="ImageCreate event name:\"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:05:25.664842 containerd[1723]: time="2025-11-08T00:05:25.664768764Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:05:25.665617 containerd[1723]: time="2025-11-08T00:05:25.665400164Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"22147999\" in 5.589690507s" Nov 8 00:05:25.665617 containerd[1723]: time="2025-11-08T00:05:25.665435804Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\"" Nov 8 00:05:25.675525 containerd[1723]: time="2025-11-08T00:05:25.675406006Z" level=info msg="CreateContainer within sandbox \"8fdd094c3178d0fddf97c7ee4f7f8044f5c0c93c029c3d4076697528ab1a487a\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Nov 8 00:05:25.721098 containerd[1723]: time="2025-11-08T00:05:25.721007451Z" level=info msg="CreateContainer within sandbox \"8fdd094c3178d0fddf97c7ee4f7f8044f5c0c93c029c3d4076697528ab1a487a\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"d33ee50195fafc143ce5f09251f18e766b31597fd08ecccfce8cc271a9dd2db7\"" Nov 8 00:05:25.722350 containerd[1723]: time="2025-11-08T00:05:25.722301731Z" level=info msg="StartContainer for \"d33ee50195fafc143ce5f09251f18e766b31597fd08ecccfce8cc271a9dd2db7\"" Nov 8 00:05:25.756938 systemd[1]: Started cri-containerd-d33ee50195fafc143ce5f09251f18e766b31597fd08ecccfce8cc271a9dd2db7.scope - libcontainer container d33ee50195fafc143ce5f09251f18e766b31597fd08ecccfce8cc271a9dd2db7. Nov 8 00:05:25.786427 containerd[1723]: time="2025-11-08T00:05:25.786378579Z" level=info msg="StartContainer for \"d33ee50195fafc143ce5f09251f18e766b31597fd08ecccfce8cc271a9dd2db7\" returns successfully" Nov 8 00:05:32.001565 sudo[2283]: pam_unix(sudo:session): session closed for user root Nov 8 00:05:32.093068 sshd[2280]: pam_unix(sshd:session): session closed for user core Nov 8 00:05:32.100997 systemd[1]: sshd@6-10.200.20.44:22-10.200.16.10:46746.service: Deactivated successfully. Nov 8 00:05:32.105516 systemd[1]: session-9.scope: Deactivated successfully. Nov 8 00:05:32.105965 systemd[1]: session-9.scope: Consumed 7.023s CPU time, 154.0M memory peak, 0B memory swap peak. Nov 8 00:05:32.106870 systemd-logind[1678]: Session 9 logged out. Waiting for processes to exit. Nov 8 00:05:32.110568 systemd-logind[1678]: Removed session 9. 
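For scale, the operator image pull above moved roughly 22.1 MB in about 5.59 s. A back-of-the-envelope sketch using only figures reported by containerd in the log (`time.ParseDuration` accepts the logged `5.589690507s` form directly):

```go
// Back-of-the-envelope throughput for the tigera/operator pull logged above.
package main

import (
	"fmt"
	"log"
	"time"
)

func main() {
	d, err := time.ParseDuration("5.589690507s") // pull time from the log
	if err != nil {
		log.Fatal(err)
	}
	const bytes = 22147999 // image size reported by containerd
	mibps := float64(bytes) / (1 << 20) / d.Seconds()
	fmt.Printf("~%.2f MiB/s effective pull rate\n", mibps) // ≈ 3.78 MiB/s
}
```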
Nov 8 00:05:41.899357 kubelet[3207]: I1108 00:05:41.899290 3207 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-dvxtb" podStartSLOduration=17.307071933 podStartE2EDuration="22.899273521s" podCreationTimestamp="2025-11-08 00:05:19 +0000 UTC" firstStartedPulling="2025-11-08 00:05:20.074346577 +0000 UTC m=+6.390239647" lastFinishedPulling="2025-11-08 00:05:25.666548165 +0000 UTC m=+11.982441235" observedRunningTime="2025-11-08 00:05:25.893567952 +0000 UTC m=+12.209461022" watchObservedRunningTime="2025-11-08 00:05:41.899273521 +0000 UTC m=+28.215166591" Nov 8 00:05:41.913090 systemd[1]: Created slice kubepods-besteffort-pode21e7fc1_f6c7_4f7b_941b_b8778d44886b.slice - libcontainer container kubepods-besteffort-pode21e7fc1_f6c7_4f7b_941b_b8778d44886b.slice. Nov 8 00:05:41.971375 kubelet[3207]: I1108 00:05:41.971333 3207 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x49qv\" (UniqueName: \"kubernetes.io/projected/e21e7fc1-f6c7-4f7b-941b-b8778d44886b-kube-api-access-x49qv\") pod \"calico-typha-7d9d85f4f8-8nzjm\" (UID: \"e21e7fc1-f6c7-4f7b-941b-b8778d44886b\") " pod="calico-system/calico-typha-7d9d85f4f8-8nzjm" Nov 8 00:05:41.971375 kubelet[3207]: I1108 00:05:41.971382 3207 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e21e7fc1-f6c7-4f7b-941b-b8778d44886b-tigera-ca-bundle\") pod \"calico-typha-7d9d85f4f8-8nzjm\" (UID: \"e21e7fc1-f6c7-4f7b-941b-b8778d44886b\") " pod="calico-system/calico-typha-7d9d85f4f8-8nzjm" Nov 8 00:05:41.971551 kubelet[3207]: I1108 00:05:41.971401 3207 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/e21e7fc1-f6c7-4f7b-941b-b8778d44886b-typha-certs\") pod \"calico-typha-7d9d85f4f8-8nzjm\" (UID: \"e21e7fc1-f6c7-4f7b-941b-b8778d44886b\") " pod="calico-system/calico-typha-7d9d85f4f8-8nzjm" Nov 8 00:05:42.219406 containerd[1723]: time="2025-11-08T00:05:42.219270195Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7d9d85f4f8-8nzjm,Uid:e21e7fc1-f6c7-4f7b-941b-b8778d44886b,Namespace:calico-system,Attempt:0,}" Nov 8 00:05:42.249095 systemd[1]: Created slice kubepods-besteffort-pode83de55b_687c_411e_b20b_5fd08287dcaa.slice - libcontainer container kubepods-besteffort-pode83de55b_687c_411e_b20b_5fd08287dcaa.slice. Nov 8 00:05:42.267368 containerd[1723]: time="2025-11-08T00:05:42.267062281Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:05:42.267368 containerd[1723]: time="2025-11-08T00:05:42.267130081Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:05:42.267368 containerd[1723]: time="2025-11-08T00:05:42.267146841Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:05:42.267368 containerd[1723]: time="2025-11-08T00:05:42.267228321Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:05:42.273908 kubelet[3207]: I1108 00:05:42.273859 3207 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/e83de55b-687c-411e-b20b-5fd08287dcaa-policysync\") pod \"calico-node-9zvvp\" (UID: \"e83de55b-687c-411e-b20b-5fd08287dcaa\") " pod="calico-system/calico-node-9zvvp" Nov 8 00:05:42.273908 kubelet[3207]: I1108 00:05:42.273908 3207 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/e83de55b-687c-411e-b20b-5fd08287dcaa-flexvol-driver-host\") pod \"calico-node-9zvvp\" (UID: \"e83de55b-687c-411e-b20b-5fd08287dcaa\") " pod="calico-system/calico-node-9zvvp" Nov 8 00:05:42.274063 kubelet[3207]: I1108 00:05:42.273927 3207 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/e83de55b-687c-411e-b20b-5fd08287dcaa-node-certs\") pod \"calico-node-9zvvp\" (UID: \"e83de55b-687c-411e-b20b-5fd08287dcaa\") " pod="calico-system/calico-node-9zvvp" Nov 8 00:05:42.274063 kubelet[3207]: I1108 00:05:42.273944 3207 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e83de55b-687c-411e-b20b-5fd08287dcaa-xtables-lock\") pod \"calico-node-9zvvp\" (UID: \"e83de55b-687c-411e-b20b-5fd08287dcaa\") " pod="calico-system/calico-node-9zvvp" Nov 8 00:05:42.274063 kubelet[3207]: I1108 00:05:42.273962 3207 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/e83de55b-687c-411e-b20b-5fd08287dcaa-cni-bin-dir\") pod \"calico-node-9zvvp\" (UID: \"e83de55b-687c-411e-b20b-5fd08287dcaa\") " pod="calico-system/calico-node-9zvvp" Nov 8 00:05:42.274063 kubelet[3207]: I1108 00:05:42.273977 3207 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e83de55b-687c-411e-b20b-5fd08287dcaa-lib-modules\") pod \"calico-node-9zvvp\" (UID: \"e83de55b-687c-411e-b20b-5fd08287dcaa\") " pod="calico-system/calico-node-9zvvp" Nov 8 00:05:42.274063 kubelet[3207]: I1108 00:05:42.273998 3207 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e83de55b-687c-411e-b20b-5fd08287dcaa-tigera-ca-bundle\") pod \"calico-node-9zvvp\" (UID: \"e83de55b-687c-411e-b20b-5fd08287dcaa\") " pod="calico-system/calico-node-9zvvp" Nov 8 00:05:42.274173 kubelet[3207]: I1108 00:05:42.274019 3207 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/e83de55b-687c-411e-b20b-5fd08287dcaa-cni-net-dir\") pod \"calico-node-9zvvp\" (UID: \"e83de55b-687c-411e-b20b-5fd08287dcaa\") " pod="calico-system/calico-node-9zvvp" Nov 8 00:05:42.274173 kubelet[3207]: I1108 00:05:42.274033 3207 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/e83de55b-687c-411e-b20b-5fd08287dcaa-var-lib-calico\") pod \"calico-node-9zvvp\" (UID: \"e83de55b-687c-411e-b20b-5fd08287dcaa\") " pod="calico-system/calico-node-9zvvp" Nov 8 00:05:42.274173 kubelet[3207]: I1108 
00:05:42.274051 3207 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qnfz4\" (UniqueName: \"kubernetes.io/projected/e83de55b-687c-411e-b20b-5fd08287dcaa-kube-api-access-qnfz4\") pod \"calico-node-9zvvp\" (UID: \"e83de55b-687c-411e-b20b-5fd08287dcaa\") " pod="calico-system/calico-node-9zvvp" Nov 8 00:05:42.274173 kubelet[3207]: I1108 00:05:42.274066 3207 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/e83de55b-687c-411e-b20b-5fd08287dcaa-cni-log-dir\") pod \"calico-node-9zvvp\" (UID: \"e83de55b-687c-411e-b20b-5fd08287dcaa\") " pod="calico-system/calico-node-9zvvp" Nov 8 00:05:42.274173 kubelet[3207]: I1108 00:05:42.274084 3207 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/e83de55b-687c-411e-b20b-5fd08287dcaa-var-run-calico\") pod \"calico-node-9zvvp\" (UID: \"e83de55b-687c-411e-b20b-5fd08287dcaa\") " pod="calico-system/calico-node-9zvvp" Nov 8 00:05:42.298978 systemd[1]: Started cri-containerd-9633acaf92a1b76db73ec7f97ec38eedda21a431dc0deb2e9c26e2d341b5c2ac.scope - libcontainer container 9633acaf92a1b76db73ec7f97ec38eedda21a431dc0deb2e9c26e2d341b5c2ac. Nov 8 00:05:42.333950 containerd[1723]: time="2025-11-08T00:05:42.333512688Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7d9d85f4f8-8nzjm,Uid:e21e7fc1-f6c7-4f7b-941b-b8778d44886b,Namespace:calico-system,Attempt:0,} returns sandbox id \"9633acaf92a1b76db73ec7f97ec38eedda21a431dc0deb2e9c26e2d341b5c2ac\"" Nov 8 00:05:42.338536 containerd[1723]: time="2025-11-08T00:05:42.338446448Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Nov 8 00:05:42.376841 kubelet[3207]: E1108 00:05:42.376807 3207 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:05:42.377202 kubelet[3207]: W1108 00:05:42.376989 3207 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:05:42.377202 kubelet[3207]: E1108 00:05:42.377028 3207 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:05:42.377459 kubelet[3207]: E1108 00:05:42.377327 3207 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:05:42.377459 kubelet[3207]: W1108 00:05:42.377339 3207 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:05:42.377459 kubelet[3207]: E1108 00:05:42.377356 3207 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:05:42.377802 kubelet[3207]: E1108 00:05:42.377787 3207 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:05:42.377991 kubelet[3207]: W1108 00:05:42.377918 3207 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:05:42.377991 kubelet[3207]: E1108 00:05:42.377940 3207 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:05:42.378447 kubelet[3207]: E1108 00:05:42.378246 3207 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:05:42.378447 kubelet[3207]: W1108 00:05:42.378262 3207 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:05:42.378447 kubelet[3207]: E1108 00:05:42.378273 3207 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:05:42.379140 kubelet[3207]: E1108 00:05:42.378821 3207 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:05:42.379140 kubelet[3207]: W1108 00:05:42.378837 3207 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:05:42.379140 kubelet[3207]: E1108 00:05:42.378848 3207 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:05:42.381689 kubelet[3207]: E1108 00:05:42.381662 3207 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:05:42.381689 kubelet[3207]: W1108 00:05:42.381682 3207 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:05:42.381838 kubelet[3207]: E1108 00:05:42.381696 3207 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:05:42.432884 kubelet[3207]: E1108 00:05:42.432802 3207 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:05:42.432884 kubelet[3207]: W1108 00:05:42.432823 3207 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:05:42.432884 kubelet[3207]: E1108 00:05:42.432843 3207 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:05:42.555108 containerd[1723]: time="2025-11-08T00:05:42.554988551Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-9zvvp,Uid:e83de55b-687c-411e-b20b-5fd08287dcaa,Namespace:calico-system,Attempt:0,}" Nov 8 00:05:42.582423 kubelet[3207]: E1108 00:05:42.582170 3207 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tffst" podUID="b8e7681d-343c-40ff-9257-cd6bf2941900" Nov 8 00:05:42.601264 containerd[1723]: time="2025-11-08T00:05:42.601155476Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:05:42.601835 containerd[1723]: time="2025-11-08T00:05:42.601483116Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:05:42.602106 containerd[1723]: time="2025-11-08T00:05:42.601924236Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:05:42.602106 containerd[1723]: time="2025-11-08T00:05:42.602031396Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:05:42.618014 systemd[1]: Started cri-containerd-8db3f5a319835595fd440971c62d583abca70c133fdf7bf9036070b9c8fa9702.scope - libcontainer container 8db3f5a319835595fd440971c62d583abca70c133fdf7bf9036070b9c8fa9702. Nov 8 00:05:42.651987 containerd[1723]: time="2025-11-08T00:05:42.651928162Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-9zvvp,Uid:e83de55b-687c-411e-b20b-5fd08287dcaa,Namespace:calico-system,Attempt:0,} returns sandbox id \"8db3f5a319835595fd440971c62d583abca70c133fdf7bf9036070b9c8fa9702\"" Nov 8 00:05:42.660668 kubelet[3207]: E1108 00:05:42.660636 3207 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:05:42.660668 kubelet[3207]: W1108 00:05:42.660660 3207 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:05:42.660839 kubelet[3207]: E1108 00:05:42.660680 3207 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:05:42.660938 kubelet[3207]: E1108 00:05:42.660923 3207 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:05:42.660997 kubelet[3207]: W1108 00:05:42.660935 3207 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:05:42.661035 kubelet[3207]: E1108 00:05:42.660997 3207 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
[the FlexVolume probe-error triplet above repeats essentially unchanged from 00:05:42.660 through 00:05:42.881, interleaved with the volume entries below; duplicate entries elided]
Nov 8 00:05:42.678446 kubelet[3207]: I1108 00:05:42.678440 3207 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/b8e7681d-343c-40ff-9257-cd6bf2941900-varrun\") pod \"csi-node-driver-tffst\" (UID: \"b8e7681d-343c-40ff-9257-cd6bf2941900\") " pod="calico-system/csi-node-driver-tffst"
Nov 8 00:05:42.678743 kubelet[3207]: I1108 00:05:42.678708 3207 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b8e7681d-343c-40ff-9257-cd6bf2941900-kubelet-dir\") pod \"csi-node-driver-tffst\" (UID: \"b8e7681d-343c-40ff-9257-cd6bf2941900\") " pod="calico-system/csi-node-driver-tffst"
Nov 8 00:05:42.679131 kubelet[3207]: I1108 00:05:42.678972 3207 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m2vpd\" (UniqueName: \"kubernetes.io/projected/b8e7681d-343c-40ff-9257-cd6bf2941900-kube-api-access-m2vpd\") pod \"csi-node-driver-tffst\" (UID: \"b8e7681d-343c-40ff-9257-cd6bf2941900\") " pod="calico-system/csi-node-driver-tffst"
Nov 8 00:05:42.680614 kubelet[3207]: I1108 00:05:42.680603 3207 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/b8e7681d-343c-40ff-9257-cd6bf2941900-registration-dir\") pod \"csi-node-driver-tffst\" (UID: \"b8e7681d-343c-40ff-9257-cd6bf2941900\") " pod="calico-system/csi-node-driver-tffst"
Nov 8 00:05:42.683237 kubelet[3207]: I1108 00:05:42.681264 3207 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/b8e7681d-343c-40ff-9257-cd6bf2941900-socket-dir\") pod \"csi-node-driver-tffst\" (UID: \"b8e7681d-343c-40ff-9257-cd6bf2941900\") " pod="calico-system/csi-node-driver-tffst"
Nov 8 00:05:43.594585 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3330570276.mount: Deactivated successfully.
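Between the sandbox creation above (sandbox id 8db3f5a3...) and the container start below (container id a2df94a9...), the kubelet is walking the standard CRI lifecycle over containerd's gRPC socket. A compressed sketch of those calls against the public CRI API (k8s.io/cri-api); the socket path is containerd's conventional default, and chaining the typha container onto this sandbox is a simplification -- in the log, calico-typha is actually created in a different sandbox (9633acaf...) set up earlier:

```go
package main

import (
	"context"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.NewClient("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	rt := runtimeapi.NewRuntimeServiceClient(conn)
	ctx := context.Background()

	// 1. Create the pod sandbox (the "RunPodSandbox ... returns sandbox id" entry).
	sandboxCfg := &runtimeapi.PodSandboxConfig{
		Metadata: &runtimeapi.PodSandboxMetadata{
			Name:      "calico-node-9zvvp",
			Uid:       "e83de55b-687c-411e-b20b-5fd08287dcaa",
			Namespace: "calico-system",
		},
	}
	sb, err := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{Config: sandboxCfg})
	if err != nil {
		log.Fatal(err)
	}

	// 2. Create and start a container inside it (the CreateContainer /
	//    StartContainer entries that follow in the log).
	c, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
		PodSandboxId: sb.PodSandboxId,
		Config: &runtimeapi.ContainerConfig{
			Metadata: &runtimeapi.ContainerMetadata{Name: "calico-typha"},
			Image:    &runtimeapi.ImageSpec{Image: "ghcr.io/flatcar/calico/typha:v3.30.4"},
		},
		SandboxConfig: sandboxCfg,
	})
	if err != nil {
		log.Fatal(err)
	}
	if _, err := rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{ContainerId: c.ContainerId}); err != nil {
		log.Fatal(err)
	}
}
```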
Nov 8 00:05:43.809396 kubelet[3207]: E1108 00:05:43.808981 3207 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tffst" podUID="b8e7681d-343c-40ff-9257-cd6bf2941900"
Nov 8 00:05:44.303030 containerd[1723]: time="2025-11-08T00:05:44.302973579Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 8 00:05:44.312326 containerd[1723]: time="2025-11-08T00:05:44.312108300Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=33090687"
Nov 8 00:05:44.315985 containerd[1723]: time="2025-11-08T00:05:44.315928060Z" level=info msg="ImageCreate event name:\"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 8 00:05:44.322531 containerd[1723]: time="2025-11-08T00:05:44.322133181Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 8 00:05:44.323451 containerd[1723]: time="2025-11-08T00:05:44.323420941Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"33090541\" in 1.984903053s"
Nov 8 00:05:44.324672 containerd[1723]: time="2025-11-08T00:05:44.323570901Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\""
Nov 8 00:05:44.326014 containerd[1723]: time="2025-11-08T00:05:44.325960021Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\""
Nov 8 00:05:44.347844 containerd[1723]: time="2025-11-08T00:05:44.347805703Z" level=info msg="CreateContainer within sandbox \"9633acaf92a1b76db73ec7f97ec38eedda21a431dc0deb2e9c26e2d341b5c2ac\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Nov 8 00:05:44.398464 containerd[1723]: time="2025-11-08T00:05:44.398321149Z" level=info msg="CreateContainer within sandbox \"9633acaf92a1b76db73ec7f97ec38eedda21a431dc0deb2e9c26e2d341b5c2ac\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"a2df94a90830375bcac1856e7a21ed3a3a89e0fb791b740d5aaa4bf792452960\""
Nov 8 00:05:44.400141 containerd[1723]: time="2025-11-08T00:05:44.398811509Z" level=info msg="StartContainer for \"a2df94a90830375bcac1856e7a21ed3a3a89e0fb791b740d5aaa4bf792452960\""
Nov 8 00:05:44.431969 systemd[1]: Started cri-containerd-a2df94a90830375bcac1856e7a21ed3a3a89e0fb791b740d5aaa4bf792452960.scope - libcontainer container a2df94a90830375bcac1856e7a21ed3a3a89e0fb791b740d5aaa4bf792452960.
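The pull entries above record 33,090,687 bytes read for typha v3.30.4 in 1.984903053 s, roughly 16.7 MB/s. The kubelet drives pulls through the CRI ImageService; the sketch below shows the equivalent pull via containerd's own Go client, purely as an illustration (the socket path and the "k8s.io" namespace, where CRI-managed images live, are conventional defaults assumed here, not taken from the log):

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// CRI-managed images live in the "k8s.io" containerd namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	// Pull and unpack, producing the ImageCreate / Pulled events seen above.
	img, err := client.Pull(ctx, "ghcr.io/flatcar/calico/typha:v3.30.4", containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("pulled", img.Name(), "digest", img.Target().Digest)
}
```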
Nov 8 00:05:44.466068 containerd[1723]: time="2025-11-08T00:05:44.466018396Z" level=info msg="StartContainer for \"a2df94a90830375bcac1856e7a21ed3a3a89e0fb791b740d5aaa4bf792452960\" returns successfully" Nov 8 00:05:44.982980 kubelet[3207]: E1108 00:05:44.982950 3207 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:05:44.982980 kubelet[3207]: W1108 00:05:44.982972 3207 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:05:44.983432 kubelet[3207]: E1108 00:05:44.982994 3207 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:05:44.983508 kubelet[3207]: E1108 00:05:44.983494 3207 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:05:44.983546 kubelet[3207]: W1108 00:05:44.983507 3207 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:05:44.983572 kubelet[3207]: E1108 00:05:44.983548 3207 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:05:44.983729 kubelet[3207]: E1108 00:05:44.983715 3207 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:05:44.983729 kubelet[3207]: W1108 00:05:44.983726 3207 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:05:44.983816 kubelet[3207]: E1108 00:05:44.983734 3207 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:05:44.983948 kubelet[3207]: E1108 00:05:44.983933 3207 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:05:44.983948 kubelet[3207]: W1108 00:05:44.983945 3207 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:05:44.984004 kubelet[3207]: E1108 00:05:44.983954 3207 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:05:44.984138 kubelet[3207]: E1108 00:05:44.984124 3207 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:05:44.984176 kubelet[3207]: W1108 00:05:44.984139 3207 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:05:44.984176 kubelet[3207]: E1108 00:05:44.984147 3207 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:05:44.984314 kubelet[3207]: E1108 00:05:44.984301 3207 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:05:44.984314 kubelet[3207]: W1108 00:05:44.984311 3207 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:05:44.984373 kubelet[3207]: E1108 00:05:44.984318 3207 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:05:44.984488 kubelet[3207]: E1108 00:05:44.984474 3207 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:05:44.984488 kubelet[3207]: W1108 00:05:44.984485 3207 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:05:44.984547 kubelet[3207]: E1108 00:05:44.984492 3207 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:05:44.984662 kubelet[3207]: E1108 00:05:44.984648 3207 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:05:44.984662 kubelet[3207]: W1108 00:05:44.984660 3207 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:05:44.984718 kubelet[3207]: E1108 00:05:44.984668 3207 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:05:44.984854 kubelet[3207]: E1108 00:05:44.984840 3207 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:05:44.984854 kubelet[3207]: W1108 00:05:44.984851 3207 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:05:44.984917 kubelet[3207]: E1108 00:05:44.984860 3207 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:05:44.985038 kubelet[3207]: E1108 00:05:44.985025 3207 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:05:44.985038 kubelet[3207]: W1108 00:05:44.985035 3207 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:05:44.985092 kubelet[3207]: E1108 00:05:44.985043 3207 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:05:44.985209 kubelet[3207]: E1108 00:05:44.985197 3207 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:05:44.985209 kubelet[3207]: W1108 00:05:44.985206 3207 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:05:44.985264 kubelet[3207]: E1108 00:05:44.985214 3207 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:05:44.985388 kubelet[3207]: E1108 00:05:44.985375 3207 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:05:44.985429 kubelet[3207]: W1108 00:05:44.985390 3207 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:05:44.985429 kubelet[3207]: E1108 00:05:44.985398 3207 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:05:44.985571 kubelet[3207]: E1108 00:05:44.985556 3207 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:05:44.985571 kubelet[3207]: W1108 00:05:44.985567 3207 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:05:44.985631 kubelet[3207]: E1108 00:05:44.985576 3207 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:05:44.985745 kubelet[3207]: E1108 00:05:44.985732 3207 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:05:44.985745 kubelet[3207]: W1108 00:05:44.985742 3207 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:05:44.985745 kubelet[3207]: E1108 00:05:44.985759 3207 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:05:44.985942 kubelet[3207]: E1108 00:05:44.985928 3207 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:05:44.985942 kubelet[3207]: W1108 00:05:44.985938 3207 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:05:44.986002 kubelet[3207]: E1108 00:05:44.985946 3207 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:05:45.001519 kubelet[3207]: E1108 00:05:45.001494 3207 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:05:45.001519 kubelet[3207]: W1108 00:05:45.001514 3207 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:05:45.001631 kubelet[3207]: E1108 00:05:45.001532 3207 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
[the same driver-call.go/plugins.go triplet repeats 16 more times between 00:05:45.001717 and 00:05:45.006359 as kubelet re-probes the missing nodeagent~uds driver]
Nov 8 00:05:45.006549 kubelet[3207]: E1108 00:05:45.006537 3207 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:05:45.006578 kubelet[3207]: W1108 00:05:45.006551 3207 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:05:45.006578 kubelet[3207]: E1108 00:05:45.006568 3207 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
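A note on the repeated errors above: kubelet probes each directory under /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ by executing the driver binary with the init verb and unmarshalling its stdout as JSON; because the uds binary is absent here, stdout is empty and the JSON decode fails. A minimal sketch of a conforming driver entry point (hypothetical Python stand-in for what is normally a shell or Go binary, handling only init):

    #!/usr/bin/env python3
    # Sketch of a FlexVolume driver entry point (hypothetical "uds" driver),
    # showing the JSON handshake kubelet's driver-call.go expects on stdout.
    # Assumption: only "init" is handled; real drivers also implement
    # mount/unmount (or attach/detach when capabilities.attach is true).
    import json
    import sys

    def main() -> int:
        verb = sys.argv[1] if len(sys.argv) > 1 else ""
        if verb == "init":
            # kubelet unmarshals this; an empty stdout is exactly what
            # produced the "unexpected end of JSON input" errors above.
            print(json.dumps({"status": "Success",
                              "capabilities": {"attach": False}}))
            return 0
        print(json.dumps({"status": "Not supported",
                          "message": f"verb {verb!r} not implemented"}))
        return 1

    if __name__ == "__main__":
        sys.exit(main())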
Nov 8 00:05:45.119603 kubelet[3207]: I1108 00:05:45.119384 3207 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-7d9d85f4f8-8nzjm" podStartSLOduration=2.131193973 podStartE2EDuration="4.119365986s" podCreationTimestamp="2025-11-08 00:05:41 +0000 UTC" firstStartedPulling="2025-11-08 00:05:42.337136328 +0000 UTC m=+28.653029358" lastFinishedPulling="2025-11-08 00:05:44.325308301 +0000 UTC m=+30.641201371" observedRunningTime="2025-11-08 00:05:45.051241619 +0000 UTC m=+31.367134689" watchObservedRunningTime="2025-11-08 00:05:45.119365986 +0000 UTC m=+31.435259056" Nov 8 00:05:45.526949 containerd[1723]: time="2025-11-08T00:05:45.526899030Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:05:45.530794 containerd[1723]: time="2025-11-08T00:05:45.530718790Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4266741" Nov 8 00:05:45.535433 containerd[1723]: time="2025-11-08T00:05:45.535146511Z" level=info msg="ImageCreate event name:\"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:05:45.540279 containerd[1723]: time="2025-11-08T00:05:45.540245351Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:05:45.540892 containerd[1723]: time="2025-11-08T00:05:45.540855271Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5636392\" in 1.21485357s" Nov 8 00:05:45.540952 containerd[1723]: time="2025-11-08T00:05:45.540896031Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\"" Nov 8 00:05:45.550771 containerd[1723]: time="2025-11-08T00:05:45.550710072Z" level=info msg="CreateContainer within sandbox \"8db3f5a319835595fd440971c62d583abca70c133fdf7bf9036070b9c8fa9702\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Nov 8 00:05:45.580201 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4070139461.mount: Deactivated successfully. Nov 8 00:05:45.593964 containerd[1723]: time="2025-11-08T00:05:45.593916077Z" level=info msg="CreateContainer within sandbox \"8db3f5a319835595fd440971c62d583abca70c133fdf7bf9036070b9c8fa9702\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"963588a5e9477eafa09bab7b84d39c46df9de065e172cd3c1bd37d2429305a45\"" Nov 8 00:05:45.595285 containerd[1723]: time="2025-11-08T00:05:45.595154677Z" level=info msg="StartContainer for \"963588a5e9477eafa09bab7b84d39c46df9de065e172cd3c1bd37d2429305a45\"" Nov 8 00:05:45.627941 systemd[1]: Started cri-containerd-963588a5e9477eafa09bab7b84d39c46df9de065e172cd3c1bd37d2429305a45.scope - libcontainer container 963588a5e9477eafa09bab7b84d39c46df9de065e172cd3c1bd37d2429305a45. 
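The pod_startup_latency_tracker entry above is internally consistent: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration is that figure minus the image-pull window. A quick check using the monotonic m=+ offsets copied from the entry:

    # Sanity-checking the calico-typha startup figures logged above.
    first_started_pulling = 28.653029358   # m=+ offset, firstStartedPulling
    last_finished_pulling = 30.641201371   # m=+ offset, lastFinishedPulling
    e2e = 4.119365986                      # podStartE2EDuration from the log

    pull = last_finished_pulling - first_started_pulling   # 1.988172013s
    slo = e2e - pull                                       # 2.131193973s
    print(f"image pull {pull:.9f}s, SLO duration {slo:.9f}s")
    # matches podStartSLOduration=2.131193973 in the entry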
Nov 8 00:05:45.658209 containerd[1723]: time="2025-11-08T00:05:45.658165604Z" level=info msg="StartContainer for \"963588a5e9477eafa09bab7b84d39c46df9de065e172cd3c1bd37d2429305a45\" returns successfully" Nov 8 00:05:45.666597 systemd[1]: cri-containerd-963588a5e9477eafa09bab7b84d39c46df9de065e172cd3c1bd37d2429305a45.scope: Deactivated successfully. Nov 8 00:05:45.693329 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-963588a5e9477eafa09bab7b84d39c46df9de065e172cd3c1bd37d2429305a45-rootfs.mount: Deactivated successfully. Nov 8 00:05:45.808850 kubelet[3207]: E1108 00:05:45.808206 3207 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tffst" podUID="b8e7681d-343c-40ff-9257-cd6bf2941900" Nov 8 00:05:46.745701 containerd[1723]: time="2025-11-08T00:05:46.745636160Z" level=info msg="shim disconnected" id=963588a5e9477eafa09bab7b84d39c46df9de065e172cd3c1bd37d2429305a45 namespace=k8s.io Nov 8 00:05:46.746299 containerd[1723]: time="2025-11-08T00:05:46.746124560Z" level=warning msg="cleaning up after shim disconnected" id=963588a5e9477eafa09bab7b84d39c46df9de065e172cd3c1bd37d2429305a45 namespace=k8s.io Nov 8 00:05:46.746299 containerd[1723]: time="2025-11-08T00:05:46.746144720Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 8 00:05:46.932940 containerd[1723]: time="2025-11-08T00:05:46.932893060Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Nov 8 00:05:47.808114 kubelet[3207]: E1108 00:05:47.806889 3207 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tffst" podUID="b8e7681d-343c-40ff-9257-cd6bf2941900" Nov 8 00:05:49.807745 kubelet[3207]: E1108 00:05:49.807211 3207 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tffst" podUID="b8e7681d-343c-40ff-9257-cd6bf2941900" Nov 8 00:05:51.807636 kubelet[3207]: E1108 00:05:51.807091 3207 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tffst" podUID="b8e7681d-343c-40ff-9257-cd6bf2941900" Nov 8 00:05:52.725574 containerd[1723]: time="2025-11-08T00:05:52.724805154Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:05:52.727722 containerd[1723]: time="2025-11-08T00:05:52.727690114Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=65925816" Nov 8 00:05:52.774342 containerd[1723]: time="2025-11-08T00:05:52.773637964Z" level=info msg="ImageCreate event name:\"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:05:52.778943 containerd[1723]: time="2025-11-08T00:05:52.778903645Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:05:52.779693 containerd[1723]: time="2025-11-08T00:05:52.779666885Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"67295507\" in 5.846732225s" Nov 8 00:05:52.779817 containerd[1723]: time="2025-11-08T00:05:52.779799725Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\"" Nov 8 00:05:52.821621 containerd[1723]: time="2025-11-08T00:05:52.821581134Z" level=info msg="CreateContainer within sandbox \"8db3f5a319835595fd440971c62d583abca70c133fdf7bf9036070b9c8fa9702\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Nov 8 00:05:53.121118 containerd[1723]: time="2025-11-08T00:05:53.121002317Z" level=info msg="CreateContainer within sandbox \"8db3f5a319835595fd440971c62d583abca70c133fdf7bf9036070b9c8fa9702\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"ec8ba2fa216e5035b927a9d80b613a5daecb09cef4a3cb0722491f34e658111c\"" Nov 8 00:05:53.122834 containerd[1723]: time="2025-11-08T00:05:53.122185677Z" level=info msg="StartContainer for \"ec8ba2fa216e5035b927a9d80b613a5daecb09cef4a3cb0722491f34e658111c\"" Nov 8 00:05:53.150951 systemd[1]: Started cri-containerd-ec8ba2fa216e5035b927a9d80b613a5daecb09cef4a3cb0722491f34e658111c.scope - libcontainer container ec8ba2fa216e5035b927a9d80b613a5daecb09cef4a3cb0722491f34e658111c. 
Nov 8 00:05:53.184928 containerd[1723]: time="2025-11-08T00:05:53.184874331Z" level=info msg="StartContainer for \"ec8ba2fa216e5035b927a9d80b613a5daecb09cef4a3cb0722491f34e658111c\" returns successfully" Nov 8 00:05:53.808353 kubelet[3207]: E1108 00:05:53.808081 3207 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tffst" podUID="b8e7681d-343c-40ff-9257-cd6bf2941900"
[the same "network is not ready" pod_workers.go message for csi-node-driver-tffst repeats every ~2s through 00:06:05.807315 while the CNI config is still missing]
Nov 8 00:06:06.577996 containerd[1723]: time="2025-11-08T00:06:06.577944013Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 8 00:06:06.581704 systemd[1]: cri-containerd-ec8ba2fa216e5035b927a9d80b613a5daecb09cef4a3cb0722491f34e658111c.scope: Deactivated successfully. Nov 8 00:06:06.600992 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ec8ba2fa216e5035b927a9d80b613a5daecb09cef4a3cb0722491f34e658111c-rootfs.mount: Deactivated successfully. 
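For context on the reload error just above: containerd only considers the CNI initialized once a network config (*.conf or *.conflist) appears in /etc/cni/net.d; at this point install-cni has written only calico-kubeconfig. A sketch of the general shape of the 10-calico.conflist that Calico's install-cni container eventually writes there (illustrative; field values and the plugin list are typical defaults, not read from this host):

    # Illustrative shape of a Calico CNI conflist; values are assumptions.
    import json

    conflist = {
        "name": "k8s-pod-network",
        "cniVersion": "0.3.1",
        "plugins": [
            {
                "type": "calico",
                "log_level": "info",
                "datastore_type": "kubernetes",
                "ipam": {"type": "calico-ipam"},
                "policy": {"type": "k8s"},
                # matches the file whose WRITE event triggered the reload above
                "kubernetes": {"kubeconfig": "/etc/cni/net.d/calico-kubeconfig"},
            },
            {"type": "portmap", "snat": True,
             "capabilities": {"portMappings": True}},
        ],
    }
    print(json.dumps(conflist, indent=2))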
Nov 8 00:06:06.611615 containerd[1723]: time="2025-11-08T00:06:06.611399453Z" level=info msg="shim disconnected" id=ec8ba2fa216e5035b927a9d80b613a5daecb09cef4a3cb0722491f34e658111c namespace=k8s.io Nov 8 00:06:06.611615 containerd[1723]: time="2025-11-08T00:06:06.611451733Z" level=warning msg="cleaning up after shim disconnected" id=ec8ba2fa216e5035b927a9d80b613a5daecb09cef4a3cb0722491f34e658111c namespace=k8s.io Nov 8 00:06:06.611615 containerd[1723]: time="2025-11-08T00:06:06.611460253Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 8 00:06:06.658372 kubelet[3207]: I1108 00:06:06.658146 3207 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Nov 8 00:06:06.852109 systemd[1]: Created slice kubepods-burstable-podb9108a90_054b_4178_a637_f4f5bb2138bc.slice - libcontainer container kubepods-burstable-podb9108a90_054b_4178_a637_f4f5bb2138bc.slice. Nov 8 00:06:06.868583 systemd[1]: Created slice kubepods-burstable-podef9b5a81_e6ae_4009_b58b_0441376d2cb7.slice - libcontainer container kubepods-burstable-podef9b5a81_e6ae_4009_b58b_0441376d2cb7.slice. Nov 8 00:06:06.878824 systemd[1]: Created slice kubepods-besteffort-pod1d10263d_c821_45a1_a621_6034a3c06ce6.slice - libcontainer container kubepods-besteffort-pod1d10263d_c821_45a1_a621_6034a3c06ce6.slice. Nov 8 00:06:06.885017 systemd[1]: Created slice kubepods-besteffort-podb965bf6a_7220_4fbc_b608_85c677cf8e39.slice - libcontainer container kubepods-besteffort-podb965bf6a_7220_4fbc_b608_85c677cf8e39.slice. Nov 8 00:06:06.895948 systemd[1]: Created slice kubepods-besteffort-pod86741758_ba30_4cec_a95c_6af79e2546fe.slice - libcontainer container kubepods-besteffort-pod86741758_ba30_4cec_a95c_6af79e2546fe.slice. Nov 8 00:06:06.902324 systemd[1]: Created slice kubepods-besteffort-poda62546c2_2f66_4002_b0c3_d54109c52a13.slice - libcontainer container kubepods-besteffort-poda62546c2_2f66_4002_b0c3_d54109c52a13.slice. Nov 8 00:06:06.907032 systemd[1]: Created slice kubepods-besteffort-podffd8ec67_9184_4d7a_a1fa_e51a9fa4d9cf.slice - libcontainer container kubepods-besteffort-podffd8ec67_9184_4d7a_a1fa_e51a9fa4d9cf.slice. 
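The slice names in the entries above are derived mechanically from each pod's QoS class and UID: kubelet's systemd cgroup driver swaps the UID's dashes for underscores, since a dash would otherwise start a new level in the systemd slice hierarchy. A check against UIDs taken from this log:

    # How the kubepods-*.slice names above relate to pod UIDs.
    def pod_slice(qos: str, uid: str) -> str:
        return f"kubepods-{qos}-pod{uid.replace('-', '_')}.slice"

    assert pod_slice("burstable", "b9108a90-054b-4178-a637-f4f5bb2138bc") == \
        "kubepods-burstable-podb9108a90_054b_4178_a637_f4f5bb2138bc.slice"
    assert pod_slice("besteffort", "1d10263d-c821-45a1-a621-6034a3c06ce6") == \
        "kubepods-besteffort-pod1d10263d_c821_45a1_a621_6034a3c06ce6.slice"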
Nov 8 00:06:06.950233 kubelet[3207]: I1108 00:06:06.950132 3207 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wnwsv\" (UniqueName: \"kubernetes.io/projected/ffd8ec67-9184-4d7a-a1fa-e51a9fa4d9cf-kube-api-access-wnwsv\") pod \"calico-kube-controllers-55f5cd74bd-vrdmh\" (UID: \"ffd8ec67-9184-4d7a-a1fa-e51a9fa4d9cf\") " pod="calico-system/calico-kube-controllers-55f5cd74bd-vrdmh" Nov 8 00:06:06.950233 kubelet[3207]: I1108 00:06:06.950182 3207 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2rlbv\" (UniqueName: \"kubernetes.io/projected/1d10263d-c821-45a1-a621-6034a3c06ce6-kube-api-access-2rlbv\") pod \"whisker-5576ffc796-sj2hl\" (UID: \"1d10263d-c821-45a1-a621-6034a3c06ce6\") " pod="calico-system/whisker-5576ffc796-sj2hl" Nov 8 00:06:06.950233 kubelet[3207]: I1108 00:06:06.950225 3207 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-phxrr\" (UniqueName: \"kubernetes.io/projected/86741758-ba30-4cec-a95c-6af79e2546fe-kube-api-access-phxrr\") pod \"calico-apiserver-59cf6d9fcc-nmv8r\" (UID: \"86741758-ba30-4cec-a95c-6af79e2546fe\") " pod="calico-apiserver/calico-apiserver-59cf6d9fcc-nmv8r" Nov 8 00:06:06.950811 kubelet[3207]: I1108 00:06:06.950255 3207 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/1d10263d-c821-45a1-a621-6034a3c06ce6-whisker-backend-key-pair\") pod \"whisker-5576ffc796-sj2hl\" (UID: \"1d10263d-c821-45a1-a621-6034a3c06ce6\") " pod="calico-system/whisker-5576ffc796-sj2hl" Nov 8 00:06:06.950811 kubelet[3207]: I1108 00:06:06.950276 3207 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ef9b5a81-e6ae-4009-b58b-0441376d2cb7-config-volume\") pod \"coredns-674b8bbfcf-5ctrp\" (UID: \"ef9b5a81-e6ae-4009-b58b-0441376d2cb7\") " pod="kube-system/coredns-674b8bbfcf-5ctrp" Nov 8 00:06:06.950811 kubelet[3207]: I1108 00:06:06.950292 3207 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/b965bf6a-7220-4fbc-b608-85c677cf8e39-calico-apiserver-certs\") pod \"calico-apiserver-59cf6d9fcc-xrwqb\" (UID: \"b965bf6a-7220-4fbc-b608-85c677cf8e39\") " pod="calico-apiserver/calico-apiserver-59cf6d9fcc-xrwqb" Nov 8 00:06:06.950811 kubelet[3207]: I1108 00:06:06.950308 3207 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-czxjv\" (UniqueName: \"kubernetes.io/projected/b965bf6a-7220-4fbc-b608-85c677cf8e39-kube-api-access-czxjv\") pod \"calico-apiserver-59cf6d9fcc-xrwqb\" (UID: \"b965bf6a-7220-4fbc-b608-85c677cf8e39\") " pod="calico-apiserver/calico-apiserver-59cf6d9fcc-xrwqb" Nov 8 00:06:06.950811 kubelet[3207]: I1108 00:06:06.950330 3207 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b9108a90-054b-4178-a637-f4f5bb2138bc-config-volume\") pod \"coredns-674b8bbfcf-664tp\" (UID: \"b9108a90-054b-4178-a637-f4f5bb2138bc\") " pod="kube-system/coredns-674b8bbfcf-664tp" Nov 8 00:06:06.950937 kubelet[3207]: I1108 00:06:06.950345 3207 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" 
(UniqueName: \"kubernetes.io/secret/a62546c2-2f66-4002-b0c3-d54109c52a13-goldmane-key-pair\") pod \"goldmane-666569f655-69dnl\" (UID: \"a62546c2-2f66-4002-b0c3-d54109c52a13\") " pod="calico-system/goldmane-666569f655-69dnl" Nov 8 00:06:06.950937 kubelet[3207]: I1108 00:06:06.950362 3207 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-49qsq\" (UniqueName: \"kubernetes.io/projected/b9108a90-054b-4178-a637-f4f5bb2138bc-kube-api-access-49qsq\") pod \"coredns-674b8bbfcf-664tp\" (UID: \"b9108a90-054b-4178-a637-f4f5bb2138bc\") " pod="kube-system/coredns-674b8bbfcf-664tp" Nov 8 00:06:06.950937 kubelet[3207]: I1108 00:06:06.950379 3207 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ffd8ec67-9184-4d7a-a1fa-e51a9fa4d9cf-tigera-ca-bundle\") pod \"calico-kube-controllers-55f5cd74bd-vrdmh\" (UID: \"ffd8ec67-9184-4d7a-a1fa-e51a9fa4d9cf\") " pod="calico-system/calico-kube-controllers-55f5cd74bd-vrdmh" Nov 8 00:06:06.950937 kubelet[3207]: I1108 00:06:06.950394 3207 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a62546c2-2f66-4002-b0c3-d54109c52a13-config\") pod \"goldmane-666569f655-69dnl\" (UID: \"a62546c2-2f66-4002-b0c3-d54109c52a13\") " pod="calico-system/goldmane-666569f655-69dnl" Nov 8 00:06:06.950937 kubelet[3207]: I1108 00:06:06.950412 3207 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/86741758-ba30-4cec-a95c-6af79e2546fe-calico-apiserver-certs\") pod \"calico-apiserver-59cf6d9fcc-nmv8r\" (UID: \"86741758-ba30-4cec-a95c-6af79e2546fe\") " pod="calico-apiserver/calico-apiserver-59cf6d9fcc-nmv8r" Nov 8 00:06:06.951044 kubelet[3207]: I1108 00:06:06.950428 3207 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a62546c2-2f66-4002-b0c3-d54109c52a13-goldmane-ca-bundle\") pod \"goldmane-666569f655-69dnl\" (UID: \"a62546c2-2f66-4002-b0c3-d54109c52a13\") " pod="calico-system/goldmane-666569f655-69dnl" Nov 8 00:06:06.951044 kubelet[3207]: I1108 00:06:06.950480 3207 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1d10263d-c821-45a1-a621-6034a3c06ce6-whisker-ca-bundle\") pod \"whisker-5576ffc796-sj2hl\" (UID: \"1d10263d-c821-45a1-a621-6034a3c06ce6\") " pod="calico-system/whisker-5576ffc796-sj2hl" Nov 8 00:06:06.951044 kubelet[3207]: I1108 00:06:06.950501 3207 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w5t89\" (UniqueName: \"kubernetes.io/projected/a62546c2-2f66-4002-b0c3-d54109c52a13-kube-api-access-w5t89\") pod \"goldmane-666569f655-69dnl\" (UID: \"a62546c2-2f66-4002-b0c3-d54109c52a13\") " pod="calico-system/goldmane-666569f655-69dnl" Nov 8 00:06:06.951044 kubelet[3207]: I1108 00:06:06.950521 3207 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gjrnx\" (UniqueName: \"kubernetes.io/projected/ef9b5a81-e6ae-4009-b58b-0441376d2cb7-kube-api-access-gjrnx\") pod \"coredns-674b8bbfcf-5ctrp\" (UID: \"ef9b5a81-e6ae-4009-b58b-0441376d2cb7\") " pod="kube-system/coredns-674b8bbfcf-5ctrp" Nov 8 
00:06:06.970137 containerd[1723]: time="2025-11-08T00:06:06.969931458Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Nov 8 00:06:07.083177 update_engine[1679]: I20251108 00:06:07.081836 1679 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Nov 8 00:06:07.083177 update_engine[1679]: I20251108 00:06:07.081878 1679 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Nov 8 00:06:07.083177 update_engine[1679]: I20251108 00:06:07.082005 1679 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Nov 8 00:06:07.083177 update_engine[1679]: I20251108 00:06:07.082334 1679 omaha_request_params.cc:62] Current group set to lts Nov 8 00:06:07.083177 update_engine[1679]: I20251108 00:06:07.082422 1679 update_attempter.cc:499] Already updated boot flags. Skipping. Nov 8 00:06:07.083177 update_engine[1679]: I20251108 00:06:07.082432 1679 update_attempter.cc:643] Scheduling an action processor start. Nov 8 00:06:07.083177 update_engine[1679]: I20251108 00:06:07.082446 1679 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Nov 8 00:06:07.083177 update_engine[1679]: I20251108 00:06:07.082474 1679 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Nov 8 00:06:07.083177 update_engine[1679]: I20251108 00:06:07.082518 1679 omaha_request_action.cc:271] Posting an Omaha request to disabled Nov 8 00:06:07.083177 update_engine[1679]: I20251108 00:06:07.082526 1679 omaha_request_action.cc:272] Request: Nov 8 00:06:07.083177 update_engine[1679]: [multi-line XML request body not preserved in this capture] Nov 8 00:06:07.083177 update_engine[1679]: I20251108 00:06:07.082532 1679 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Nov 8 00:06:07.083919 locksmithd[1791]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Nov 8 00:06:07.086722 update_engine[1679]: I20251108 00:06:07.085217 1679 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Nov 8 00:06:07.086722 update_engine[1679]: I20251108 00:06:07.086673 1679 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
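A hedged reading of "Posting an Omaha request to disabled" above: Flatcar's update_engine speaks the Omaha update protocol, and setting SERVER=disabled in /etc/flatcar/update.conf (a documented opt-out) leaves the literal string "disabled" as the server URL, which is why curl fails to resolve that host just below. A tiny parser for that key=value format (illustrative, not update_engine code):

    # Assumes the usual Flatcar update.conf key=value layout.
    def read_update_conf(text: str) -> dict:
        out = {}
        for raw in text.splitlines():
            line = raw.strip()
            if line and not line.startswith("#") and "=" in line:
                key, _, value = line.partition("=")
                out[key] = value
        return out

    conf = read_update_conf("GROUP=lts\nSERVER=disabled\n")
    print(conf["GROUP"], conf["SERVER"])  # lts disabled -- matching the log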
Nov 8 00:06:07.132290 update_engine[1679]: E20251108 00:06:07.132180 1679 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Nov 8 00:06:07.133471 update_engine[1679]: I20251108 00:06:07.133445 1679 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Nov 8 00:06:07.158345 containerd[1723]: time="2025-11-08T00:06:07.158300661Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-664tp,Uid:b9108a90-054b-4178-a637-f4f5bb2138bc,Namespace:kube-system,Attempt:0,}" Nov 8 00:06:07.176359 containerd[1723]: time="2025-11-08T00:06:07.176037501Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-5ctrp,Uid:ef9b5a81-e6ae-4009-b58b-0441376d2cb7,Namespace:kube-system,Attempt:0,}" Nov 8 00:06:07.183095 containerd[1723]: time="2025-11-08T00:06:07.182970781Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5576ffc796-sj2hl,Uid:1d10263d-c821-45a1-a621-6034a3c06ce6,Namespace:calico-system,Attempt:0,}" Nov 8 00:06:07.188776 containerd[1723]: time="2025-11-08T00:06:07.188685941Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-59cf6d9fcc-xrwqb,Uid:b965bf6a-7220-4fbc-b608-85c677cf8e39,Namespace:calico-apiserver,Attempt:0,}" Nov 8 00:06:07.203892 containerd[1723]: time="2025-11-08T00:06:07.203603582Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-59cf6d9fcc-nmv8r,Uid:86741758-ba30-4cec-a95c-6af79e2546fe,Namespace:calico-apiserver,Attempt:0,}" Nov 8 00:06:07.205600 containerd[1723]: time="2025-11-08T00:06:07.205550022Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-69dnl,Uid:a62546c2-2f66-4002-b0c3-d54109c52a13,Namespace:calico-system,Attempt:0,}" Nov 8 00:06:07.211032 containerd[1723]: time="2025-11-08T00:06:07.210777382Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-55f5cd74bd-vrdmh,Uid:ffd8ec67-9184-4d7a-a1fa-e51a9fa4d9cf,Namespace:calico-system,Attempt:0,}" Nov 8 00:06:07.250634 containerd[1723]: time="2025-11-08T00:06:07.250582302Z" level=error msg="Failed to destroy network for sandbox \"c2e074d6e0a247772e5cffc0590e619e8d00680537fd8d9206160da872f20217\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:06:07.251309 containerd[1723]: time="2025-11-08T00:06:07.251276502Z" level=error msg="encountered an error cleaning up failed sandbox \"c2e074d6e0a247772e5cffc0590e619e8d00680537fd8d9206160da872f20217\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:06:07.251360 containerd[1723]: time="2025-11-08T00:06:07.251340182Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-664tp,Uid:b9108a90-054b-4178-a637-f4f5bb2138bc,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c2e074d6e0a247772e5cffc0590e619e8d00680537fd8d9206160da872f20217\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:06:07.252025 kubelet[3207]: E1108 00:06:07.251557 3207 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed 
to setup network for sandbox \"c2e074d6e0a247772e5cffc0590e619e8d00680537fd8d9206160da872f20217\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:06:07.252025 kubelet[3207]: E1108 00:06:07.251628 3207 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c2e074d6e0a247772e5cffc0590e619e8d00680537fd8d9206160da872f20217\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-664tp" Nov 8 00:06:07.252025 kubelet[3207]: E1108 00:06:07.251661 3207 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c2e074d6e0a247772e5cffc0590e619e8d00680537fd8d9206160da872f20217\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-664tp" Nov 8 00:06:07.252177 kubelet[3207]: E1108 00:06:07.251707 3207 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-664tp_kube-system(b9108a90-054b-4178-a637-f4f5bb2138bc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-664tp_kube-system(b9108a90-054b-4178-a637-f4f5bb2138bc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c2e074d6e0a247772e5cffc0590e619e8d00680537fd8d9206160da872f20217\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-664tp" podUID="b9108a90-054b-4178-a637-f4f5bb2138bc" Nov 8 00:06:07.458642 containerd[1723]: time="2025-11-08T00:06:07.458589505Z" level=error msg="Failed to destroy network for sandbox \"8195cec44e711e7f45c53423a0492eb2dfd6b18f063093abb1442537cb26e5ba\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:06:07.460378 containerd[1723]: time="2025-11-08T00:06:07.460338985Z" level=error msg="Failed to destroy network for sandbox \"bc2ad4903e7ca8cdc6e2c364c5ee9fad5c1bd303cbcc15c9d91290f6a1e27268\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:06:07.460786 containerd[1723]: time="2025-11-08T00:06:07.460735705Z" level=error msg="encountered an error cleaning up failed sandbox \"bc2ad4903e7ca8cdc6e2c364c5ee9fad5c1bd303cbcc15c9d91290f6a1e27268\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:06:07.460856 containerd[1723]: time="2025-11-08T00:06:07.460812985Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-5ctrp,Uid:ef9b5a81-e6ae-4009-b58b-0441376d2cb7,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup 
network for sandbox \"bc2ad4903e7ca8cdc6e2c364c5ee9fad5c1bd303cbcc15c9d91290f6a1e27268\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:06:07.461049 kubelet[3207]: E1108 00:06:07.461007 3207 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bc2ad4903e7ca8cdc6e2c364c5ee9fad5c1bd303cbcc15c9d91290f6a1e27268\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:06:07.461123 kubelet[3207]: E1108 00:06:07.461073 3207 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bc2ad4903e7ca8cdc6e2c364c5ee9fad5c1bd303cbcc15c9d91290f6a1e27268\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-5ctrp" Nov 8 00:06:07.461123 kubelet[3207]: E1108 00:06:07.461096 3207 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bc2ad4903e7ca8cdc6e2c364c5ee9fad5c1bd303cbcc15c9d91290f6a1e27268\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-5ctrp" Nov 8 00:06:07.461171 kubelet[3207]: E1108 00:06:07.461141 3207 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-5ctrp_kube-system(ef9b5a81-e6ae-4009-b58b-0441376d2cb7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-5ctrp_kube-system(ef9b5a81-e6ae-4009-b58b-0441376d2cb7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bc2ad4903e7ca8cdc6e2c364c5ee9fad5c1bd303cbcc15c9d91290f6a1e27268\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-5ctrp" podUID="ef9b5a81-e6ae-4009-b58b-0441376d2cb7" Nov 8 00:06:07.463548 containerd[1723]: time="2025-11-08T00:06:07.462902985Z" level=error msg="encountered an error cleaning up failed sandbox \"8195cec44e711e7f45c53423a0492eb2dfd6b18f063093abb1442537cb26e5ba\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:06:07.463548 containerd[1723]: time="2025-11-08T00:06:07.462963905Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5576ffc796-sj2hl,Uid:1d10263d-c821-45a1-a621-6034a3c06ce6,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"8195cec44e711e7f45c53423a0492eb2dfd6b18f063093abb1442537cb26e5ba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:06:07.463692 kubelet[3207]: E1108 00:06:07.463159 3207 log.go:32] 
"RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8195cec44e711e7f45c53423a0492eb2dfd6b18f063093abb1442537cb26e5ba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:06:07.463692 kubelet[3207]: E1108 00:06:07.463233 3207 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8195cec44e711e7f45c53423a0492eb2dfd6b18f063093abb1442537cb26e5ba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5576ffc796-sj2hl" Nov 8 00:06:07.463692 kubelet[3207]: E1108 00:06:07.463252 3207 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8195cec44e711e7f45c53423a0492eb2dfd6b18f063093abb1442537cb26e5ba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5576ffc796-sj2hl" Nov 8 00:06:07.463918 kubelet[3207]: E1108 00:06:07.463310 3207 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-5576ffc796-sj2hl_calico-system(1d10263d-c821-45a1-a621-6034a3c06ce6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-5576ffc796-sj2hl_calico-system(1d10263d-c821-45a1-a621-6034a3c06ce6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8195cec44e711e7f45c53423a0492eb2dfd6b18f063093abb1442537cb26e5ba\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-5576ffc796-sj2hl" podUID="1d10263d-c821-45a1-a621-6034a3c06ce6" Nov 8 00:06:07.505063 containerd[1723]: time="2025-11-08T00:06:07.505009986Z" level=error msg="Failed to destroy network for sandbox \"177bbe92d862c15f896d1c83e995b093e17332963e6d1245c2107ad9d0671930\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:06:07.505340 containerd[1723]: time="2025-11-08T00:06:07.505311546Z" level=error msg="Failed to destroy network for sandbox \"b39d192468291b9ca34277c3e7347ca7e5027ff755a23f10b01fa0772fc81545\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:06:07.506459 containerd[1723]: time="2025-11-08T00:06:07.506301506Z" level=error msg="encountered an error cleaning up failed sandbox \"177bbe92d862c15f896d1c83e995b093e17332963e6d1245c2107ad9d0671930\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:06:07.506773 containerd[1723]: time="2025-11-08T00:06:07.506548226Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-59cf6d9fcc-xrwqb,Uid:b965bf6a-7220-4fbc-b608-85c677cf8e39,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"177bbe92d862c15f896d1c83e995b093e17332963e6d1245c2107ad9d0671930\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:06:07.506773 containerd[1723]: time="2025-11-08T00:06:07.506429026Z" level=error msg="encountered an error cleaning up failed sandbox \"b39d192468291b9ca34277c3e7347ca7e5027ff755a23f10b01fa0772fc81545\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:06:07.506773 containerd[1723]: time="2025-11-08T00:06:07.506655146Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-59cf6d9fcc-nmv8r,Uid:86741758-ba30-4cec-a95c-6af79e2546fe,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b39d192468291b9ca34277c3e7347ca7e5027ff755a23f10b01fa0772fc81545\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:06:07.507056 kubelet[3207]: E1108 00:06:07.506977 3207 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b39d192468291b9ca34277c3e7347ca7e5027ff755a23f10b01fa0772fc81545\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:06:07.507056 kubelet[3207]: E1108 00:06:07.507033 3207 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b39d192468291b9ca34277c3e7347ca7e5027ff755a23f10b01fa0772fc81545\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-59cf6d9fcc-nmv8r" Nov 8 00:06:07.507056 kubelet[3207]: E1108 00:06:07.507058 3207 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b39d192468291b9ca34277c3e7347ca7e5027ff755a23f10b01fa0772fc81545\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-59cf6d9fcc-nmv8r" Nov 8 00:06:07.507056 kubelet[3207]: E1108 00:06:07.506977 3207 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"177bbe92d862c15f896d1c83e995b093e17332963e6d1245c2107ad9d0671930\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:06:07.508336 kubelet[3207]: E1108 00:06:07.507093 3207 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"177bbe92d862c15f896d1c83e995b093e17332963e6d1245c2107ad9d0671930\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-59cf6d9fcc-xrwqb" Nov 8 00:06:07.508336 kubelet[3207]: E1108 00:06:07.507109 3207 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"177bbe92d862c15f896d1c83e995b093e17332963e6d1245c2107ad9d0671930\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-59cf6d9fcc-xrwqb" Nov 8 00:06:07.508336 kubelet[3207]: E1108 00:06:07.507115 3207 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-59cf6d9fcc-nmv8r_calico-apiserver(86741758-ba30-4cec-a95c-6af79e2546fe)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-59cf6d9fcc-nmv8r_calico-apiserver(86741758-ba30-4cec-a95c-6af79e2546fe)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b39d192468291b9ca34277c3e7347ca7e5027ff755a23f10b01fa0772fc81545\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-59cf6d9fcc-nmv8r" podUID="86741758-ba30-4cec-a95c-6af79e2546fe" Nov 8 00:06:07.508433 containerd[1723]: time="2025-11-08T00:06:07.507484706Z" level=error msg="Failed to destroy network for sandbox \"a980bc8a0120f542692653b9cc481a7a8be0599d4fb849823f2add51a192f4e7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:06:07.508433 containerd[1723]: time="2025-11-08T00:06:07.508012066Z" level=error msg="encountered an error cleaning up failed sandbox \"a980bc8a0120f542692653b9cc481a7a8be0599d4fb849823f2add51a192f4e7\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:06:07.508433 containerd[1723]: time="2025-11-08T00:06:07.508081146Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-55f5cd74bd-vrdmh,Uid:ffd8ec67-9184-4d7a-a1fa-e51a9fa4d9cf,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a980bc8a0120f542692653b9cc481a7a8be0599d4fb849823f2add51a192f4e7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:06:07.508514 kubelet[3207]: E1108 00:06:07.507138 3207 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-59cf6d9fcc-xrwqb_calico-apiserver(b965bf6a-7220-4fbc-b608-85c677cf8e39)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-59cf6d9fcc-xrwqb_calico-apiserver(b965bf6a-7220-4fbc-b608-85c677cf8e39)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"177bbe92d862c15f896d1c83e995b093e17332963e6d1245c2107ad9d0671930\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-59cf6d9fcc-xrwqb" podUID="b965bf6a-7220-4fbc-b608-85c677cf8e39" Nov 8 00:06:07.508514 kubelet[3207]: E1108 00:06:07.508272 3207 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a980bc8a0120f542692653b9cc481a7a8be0599d4fb849823f2add51a192f4e7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:06:07.508514 kubelet[3207]: E1108 00:06:07.508326 3207 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a980bc8a0120f542692653b9cc481a7a8be0599d4fb849823f2add51a192f4e7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-55f5cd74bd-vrdmh" Nov 8 00:06:07.508596 kubelet[3207]: E1108 00:06:07.508348 3207 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a980bc8a0120f542692653b9cc481a7a8be0599d4fb849823f2add51a192f4e7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-55f5cd74bd-vrdmh" Nov 8 00:06:07.508596 kubelet[3207]: E1108 00:06:07.508390 3207 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-55f5cd74bd-vrdmh_calico-system(ffd8ec67-9184-4d7a-a1fa-e51a9fa4d9cf)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-55f5cd74bd-vrdmh_calico-system(ffd8ec67-9184-4d7a-a1fa-e51a9fa4d9cf)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a980bc8a0120f542692653b9cc481a7a8be0599d4fb849823f2add51a192f4e7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-55f5cd74bd-vrdmh" podUID="ffd8ec67-9184-4d7a-a1fa-e51a9fa4d9cf" Nov 8 00:06:07.510074 containerd[1723]: time="2025-11-08T00:06:07.510035586Z" level=error msg="Failed to destroy network for sandbox \"bf6933743f337ca93cf9848e2cddcc6de1a857341e944134024b098763269916\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:06:07.510360 containerd[1723]: time="2025-11-08T00:06:07.510332066Z" level=error msg="encountered an error cleaning up failed sandbox \"bf6933743f337ca93cf9848e2cddcc6de1a857341e944134024b098763269916\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:06:07.510413 containerd[1723]: 
time="2025-11-08T00:06:07.510390666Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-69dnl,Uid:a62546c2-2f66-4002-b0c3-d54109c52a13,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"bf6933743f337ca93cf9848e2cddcc6de1a857341e944134024b098763269916\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:06:07.510777 kubelet[3207]: E1108 00:06:07.510609 3207 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bf6933743f337ca93cf9848e2cddcc6de1a857341e944134024b098763269916\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:06:07.510777 kubelet[3207]: E1108 00:06:07.510660 3207 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bf6933743f337ca93cf9848e2cddcc6de1a857341e944134024b098763269916\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-69dnl" Nov 8 00:06:07.510777 kubelet[3207]: E1108 00:06:07.510683 3207 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bf6933743f337ca93cf9848e2cddcc6de1a857341e944134024b098763269916\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-69dnl" Nov 8 00:06:07.510938 kubelet[3207]: E1108 00:06:07.510733 3207 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-69dnl_calico-system(a62546c2-2f66-4002-b0c3-d54109c52a13)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-69dnl_calico-system(a62546c2-2f66-4002-b0c3-d54109c52a13)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bf6933743f337ca93cf9848e2cddcc6de1a857341e944134024b098763269916\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-69dnl" podUID="a62546c2-2f66-4002-b0c3-d54109c52a13" Nov 8 00:06:07.816252 systemd[1]: Created slice kubepods-besteffort-podb8e7681d_343c_40ff_9257_cd6bf2941900.slice - libcontainer container kubepods-besteffort-podb8e7681d_343c_40ff_9257_cd6bf2941900.slice. 
Nov 8 00:06:07.818841 containerd[1723]: time="2025-11-08T00:06:07.818796270Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-tffst,Uid:b8e7681d-343c-40ff-9257-cd6bf2941900,Namespace:calico-system,Attempt:0,}" Nov 8 00:06:07.900841 containerd[1723]: time="2025-11-08T00:06:07.900630632Z" level=error msg="Failed to destroy network for sandbox \"2567bef3bedff9c43797aa7be2722721f1102d2b31162a3c19fe6e59a776837e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:06:07.902172 containerd[1723]: time="2025-11-08T00:06:07.902124032Z" level=error msg="encountered an error cleaning up failed sandbox \"2567bef3bedff9c43797aa7be2722721f1102d2b31162a3c19fe6e59a776837e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:06:07.902431 containerd[1723]: time="2025-11-08T00:06:07.902298232Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-tffst,Uid:b8e7681d-343c-40ff-9257-cd6bf2941900,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"2567bef3bedff9c43797aa7be2722721f1102d2b31162a3c19fe6e59a776837e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:06:07.902690 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2567bef3bedff9c43797aa7be2722721f1102d2b31162a3c19fe6e59a776837e-shm.mount: Deactivated successfully. 
Nov 8 00:06:07.905946 kubelet[3207]: E1108 00:06:07.905044 3207 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2567bef3bedff9c43797aa7be2722721f1102d2b31162a3c19fe6e59a776837e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:06:07.905946 kubelet[3207]: E1108 00:06:07.905120 3207 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2567bef3bedff9c43797aa7be2722721f1102d2b31162a3c19fe6e59a776837e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-tffst" Nov 8 00:06:07.905946 kubelet[3207]: E1108 00:06:07.905143 3207 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2567bef3bedff9c43797aa7be2722721f1102d2b31162a3c19fe6e59a776837e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-tffst" Nov 8 00:06:07.906109 kubelet[3207]: E1108 00:06:07.905193 3207 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-tffst_calico-system(b8e7681d-343c-40ff-9257-cd6bf2941900)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-tffst_calico-system(b8e7681d-343c-40ff-9257-cd6bf2941900)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2567bef3bedff9c43797aa7be2722721f1102d2b31162a3c19fe6e59a776837e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-tffst" podUID="b8e7681d-343c-40ff-9257-cd6bf2941900" Nov 8 00:06:07.971327 kubelet[3207]: I1108 00:06:07.971289 3207 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b39d192468291b9ca34277c3e7347ca7e5027ff755a23f10b01fa0772fc81545" Nov 8 00:06:07.974143 containerd[1723]: time="2025-11-08T00:06:07.973590433Z" level=info msg="StopPodSandbox for \"b39d192468291b9ca34277c3e7347ca7e5027ff755a23f10b01fa0772fc81545\"" Nov 8 00:06:07.974354 containerd[1723]: time="2025-11-08T00:06:07.973991353Z" level=info msg="Ensure that sandbox b39d192468291b9ca34277c3e7347ca7e5027ff755a23f10b01fa0772fc81545 in task-service has been cleanup successfully" Nov 8 00:06:07.974398 kubelet[3207]: I1108 00:06:07.974218 3207 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c2e074d6e0a247772e5cffc0590e619e8d00680537fd8d9206160da872f20217" Nov 8 00:06:07.975257 containerd[1723]: time="2025-11-08T00:06:07.974869593Z" level=info msg="StopPodSandbox for \"c2e074d6e0a247772e5cffc0590e619e8d00680537fd8d9206160da872f20217\"" Nov 8 00:06:07.975257 containerd[1723]: time="2025-11-08T00:06:07.975048873Z" level=info msg="Ensure that sandbox c2e074d6e0a247772e5cffc0590e619e8d00680537fd8d9206160da872f20217 in task-service has been cleanup successfully" Nov 8 00:06:07.979116 kubelet[3207]: I1108 00:06:07.978808 3207 pod_container_deletor.go:80] 
"Container not found in pod's containers" containerID="a980bc8a0120f542692653b9cc481a7a8be0599d4fb849823f2add51a192f4e7" Nov 8 00:06:07.979205 containerd[1723]: time="2025-11-08T00:06:07.978986633Z" level=info msg="StopPodSandbox for \"a980bc8a0120f542692653b9cc481a7a8be0599d4fb849823f2add51a192f4e7\"" Nov 8 00:06:07.979654 containerd[1723]: time="2025-11-08T00:06:07.979499953Z" level=info msg="Ensure that sandbox a980bc8a0120f542692653b9cc481a7a8be0599d4fb849823f2add51a192f4e7 in task-service has been cleanup successfully" Nov 8 00:06:07.981998 kubelet[3207]: I1108 00:06:07.981975 3207 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8195cec44e711e7f45c53423a0492eb2dfd6b18f063093abb1442537cb26e5ba" Nov 8 00:06:07.983575 containerd[1723]: time="2025-11-08T00:06:07.982915393Z" level=info msg="StopPodSandbox for \"8195cec44e711e7f45c53423a0492eb2dfd6b18f063093abb1442537cb26e5ba\"" Nov 8 00:06:07.986982 containerd[1723]: time="2025-11-08T00:06:07.986877473Z" level=info msg="Ensure that sandbox 8195cec44e711e7f45c53423a0492eb2dfd6b18f063093abb1442537cb26e5ba in task-service has been cleanup successfully" Nov 8 00:06:07.988949 kubelet[3207]: I1108 00:06:07.988925 3207 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2567bef3bedff9c43797aa7be2722721f1102d2b31162a3c19fe6e59a776837e" Nov 8 00:06:07.990341 containerd[1723]: time="2025-11-08T00:06:07.990107193Z" level=info msg="StopPodSandbox for \"2567bef3bedff9c43797aa7be2722721f1102d2b31162a3c19fe6e59a776837e\"" Nov 8 00:06:07.991765 containerd[1723]: time="2025-11-08T00:06:07.990464513Z" level=info msg="Ensure that sandbox 2567bef3bedff9c43797aa7be2722721f1102d2b31162a3c19fe6e59a776837e in task-service has been cleanup successfully" Nov 8 00:06:07.995579 kubelet[3207]: I1108 00:06:07.995554 3207 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bf6933743f337ca93cf9848e2cddcc6de1a857341e944134024b098763269916" Nov 8 00:06:07.997132 containerd[1723]: time="2025-11-08T00:06:07.997000993Z" level=info msg="StopPodSandbox for \"bf6933743f337ca93cf9848e2cddcc6de1a857341e944134024b098763269916\"" Nov 8 00:06:07.999293 containerd[1723]: time="2025-11-08T00:06:07.999185993Z" level=info msg="Ensure that sandbox bf6933743f337ca93cf9848e2cddcc6de1a857341e944134024b098763269916 in task-service has been cleanup successfully" Nov 8 00:06:08.000167 kubelet[3207]: I1108 00:06:08.000098 3207 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="177bbe92d862c15f896d1c83e995b093e17332963e6d1245c2107ad9d0671930" Nov 8 00:06:08.004569 containerd[1723]: time="2025-11-08T00:06:08.004510833Z" level=info msg="StopPodSandbox for \"177bbe92d862c15f896d1c83e995b093e17332963e6d1245c2107ad9d0671930\"" Nov 8 00:06:08.004714 containerd[1723]: time="2025-11-08T00:06:08.004693233Z" level=info msg="Ensure that sandbox 177bbe92d862c15f896d1c83e995b093e17332963e6d1245c2107ad9d0671930 in task-service has been cleanup successfully" Nov 8 00:06:08.013005 kubelet[3207]: I1108 00:06:08.012969 3207 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bc2ad4903e7ca8cdc6e2c364c5ee9fad5c1bd303cbcc15c9d91290f6a1e27268" Nov 8 00:06:08.014508 containerd[1723]: time="2025-11-08T00:06:08.014463353Z" level=info msg="StopPodSandbox for \"bc2ad4903e7ca8cdc6e2c364c5ee9fad5c1bd303cbcc15c9d91290f6a1e27268\"" Nov 8 00:06:08.014717 containerd[1723]: time="2025-11-08T00:06:08.014639153Z" level=info msg="Ensure that sandbox 
bc2ad4903e7ca8cdc6e2c364c5ee9fad5c1bd303cbcc15c9d91290f6a1e27268 in task-service has been cleanup successfully" Nov 8 00:06:08.062810 containerd[1723]: time="2025-11-08T00:06:08.062738994Z" level=error msg="StopPodSandbox for \"c2e074d6e0a247772e5cffc0590e619e8d00680537fd8d9206160da872f20217\" failed" error="failed to destroy network for sandbox \"c2e074d6e0a247772e5cffc0590e619e8d00680537fd8d9206160da872f20217\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:06:08.063348 kubelet[3207]: E1108 00:06:08.062991 3207 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c2e074d6e0a247772e5cffc0590e619e8d00680537fd8d9206160da872f20217\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c2e074d6e0a247772e5cffc0590e619e8d00680537fd8d9206160da872f20217" Nov 8 00:06:08.063348 kubelet[3207]: E1108 00:06:08.063051 3207 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"c2e074d6e0a247772e5cffc0590e619e8d00680537fd8d9206160da872f20217"} Nov 8 00:06:08.063348 kubelet[3207]: E1108 00:06:08.063115 3207 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b9108a90-054b-4178-a637-f4f5bb2138bc\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c2e074d6e0a247772e5cffc0590e619e8d00680537fd8d9206160da872f20217\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 8 00:06:08.063348 kubelet[3207]: E1108 00:06:08.063144 3207 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b9108a90-054b-4178-a637-f4f5bb2138bc\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c2e074d6e0a247772e5cffc0590e619e8d00680537fd8d9206160da872f20217\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-664tp" podUID="b9108a90-054b-4178-a637-f4f5bb2138bc" Nov 8 00:06:08.065306 containerd[1723]: time="2025-11-08T00:06:08.065255154Z" level=error msg="StopPodSandbox for \"b39d192468291b9ca34277c3e7347ca7e5027ff755a23f10b01fa0772fc81545\" failed" error="failed to destroy network for sandbox \"b39d192468291b9ca34277c3e7347ca7e5027ff755a23f10b01fa0772fc81545\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:06:08.065860 kubelet[3207]: E1108 00:06:08.065807 3207 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b39d192468291b9ca34277c3e7347ca7e5027ff755a23f10b01fa0772fc81545\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
podSandboxID="b39d192468291b9ca34277c3e7347ca7e5027ff755a23f10b01fa0772fc81545" Nov 8 00:06:08.065946 kubelet[3207]: E1108 00:06:08.065871 3207 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b39d192468291b9ca34277c3e7347ca7e5027ff755a23f10b01fa0772fc81545"} Nov 8 00:06:08.065946 kubelet[3207]: E1108 00:06:08.065916 3207 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"86741758-ba30-4cec-a95c-6af79e2546fe\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b39d192468291b9ca34277c3e7347ca7e5027ff755a23f10b01fa0772fc81545\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 8 00:06:08.066026 kubelet[3207]: E1108 00:06:08.065939 3207 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"86741758-ba30-4cec-a95c-6af79e2546fe\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b39d192468291b9ca34277c3e7347ca7e5027ff755a23f10b01fa0772fc81545\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-59cf6d9fcc-nmv8r" podUID="86741758-ba30-4cec-a95c-6af79e2546fe" Nov 8 00:06:08.085403 containerd[1723]: time="2025-11-08T00:06:08.084673154Z" level=error msg="StopPodSandbox for \"bf6933743f337ca93cf9848e2cddcc6de1a857341e944134024b098763269916\" failed" error="failed to destroy network for sandbox \"bf6933743f337ca93cf9848e2cddcc6de1a857341e944134024b098763269916\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:06:08.086171 kubelet[3207]: E1108 00:06:08.085259 3207 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"bf6933743f337ca93cf9848e2cddcc6de1a857341e944134024b098763269916\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="bf6933743f337ca93cf9848e2cddcc6de1a857341e944134024b098763269916" Nov 8 00:06:08.086171 kubelet[3207]: E1108 00:06:08.085336 3207 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"bf6933743f337ca93cf9848e2cddcc6de1a857341e944134024b098763269916"} Nov 8 00:06:08.086171 kubelet[3207]: E1108 00:06:08.085371 3207 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a62546c2-2f66-4002-b0c3-d54109c52a13\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"bf6933743f337ca93cf9848e2cddcc6de1a857341e944134024b098763269916\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 8 00:06:08.086171 kubelet[3207]: E1108 00:06:08.085395 3207 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a62546c2-2f66-4002-b0c3-d54109c52a13\" with KillPodSandboxError: \"rpc error: 
code = Unknown desc = failed to destroy network for sandbox \\\"bf6933743f337ca93cf9848e2cddcc6de1a857341e944134024b098763269916\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-69dnl" podUID="a62546c2-2f66-4002-b0c3-d54109c52a13" Nov 8 00:06:08.137593 containerd[1723]: time="2025-11-08T00:06:08.136799235Z" level=error msg="StopPodSandbox for \"2567bef3bedff9c43797aa7be2722721f1102d2b31162a3c19fe6e59a776837e\" failed" error="failed to destroy network for sandbox \"2567bef3bedff9c43797aa7be2722721f1102d2b31162a3c19fe6e59a776837e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:06:08.138461 kubelet[3207]: E1108 00:06:08.137632 3207 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"2567bef3bedff9c43797aa7be2722721f1102d2b31162a3c19fe6e59a776837e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="2567bef3bedff9c43797aa7be2722721f1102d2b31162a3c19fe6e59a776837e" Nov 8 00:06:08.138461 kubelet[3207]: E1108 00:06:08.138153 3207 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"2567bef3bedff9c43797aa7be2722721f1102d2b31162a3c19fe6e59a776837e"} Nov 8 00:06:08.138461 kubelet[3207]: E1108 00:06:08.138194 3207 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b8e7681d-343c-40ff-9257-cd6bf2941900\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2567bef3bedff9c43797aa7be2722721f1102d2b31162a3c19fe6e59a776837e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 8 00:06:08.138461 kubelet[3207]: E1108 00:06:08.138236 3207 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b8e7681d-343c-40ff-9257-cd6bf2941900\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2567bef3bedff9c43797aa7be2722721f1102d2b31162a3c19fe6e59a776837e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-tffst" podUID="b8e7681d-343c-40ff-9257-cd6bf2941900" Nov 8 00:06:08.142101 containerd[1723]: time="2025-11-08T00:06:08.142053995Z" level=error msg="StopPodSandbox for \"a980bc8a0120f542692653b9cc481a7a8be0599d4fb849823f2add51a192f4e7\" failed" error="failed to destroy network for sandbox \"a980bc8a0120f542692653b9cc481a7a8be0599d4fb849823f2add51a192f4e7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:06:08.142645 kubelet[3207]: E1108 00:06:08.142598 3207 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox 
\"a980bc8a0120f542692653b9cc481a7a8be0599d4fb849823f2add51a192f4e7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a980bc8a0120f542692653b9cc481a7a8be0599d4fb849823f2add51a192f4e7" Nov 8 00:06:08.142762 kubelet[3207]: E1108 00:06:08.142655 3207 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a980bc8a0120f542692653b9cc481a7a8be0599d4fb849823f2add51a192f4e7"} Nov 8 00:06:08.142762 kubelet[3207]: E1108 00:06:08.142704 3207 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ffd8ec67-9184-4d7a-a1fa-e51a9fa4d9cf\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a980bc8a0120f542692653b9cc481a7a8be0599d4fb849823f2add51a192f4e7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 8 00:06:08.142762 kubelet[3207]: E1108 00:06:08.142735 3207 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ffd8ec67-9184-4d7a-a1fa-e51a9fa4d9cf\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a980bc8a0120f542692653b9cc481a7a8be0599d4fb849823f2add51a192f4e7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-55f5cd74bd-vrdmh" podUID="ffd8ec67-9184-4d7a-a1fa-e51a9fa4d9cf" Nov 8 00:06:08.149235 containerd[1723]: time="2025-11-08T00:06:08.148954155Z" level=error msg="StopPodSandbox for \"8195cec44e711e7f45c53423a0492eb2dfd6b18f063093abb1442537cb26e5ba\" failed" error="failed to destroy network for sandbox \"8195cec44e711e7f45c53423a0492eb2dfd6b18f063093abb1442537cb26e5ba\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:06:08.150407 kubelet[3207]: E1108 00:06:08.150368 3207 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"8195cec44e711e7f45c53423a0492eb2dfd6b18f063093abb1442537cb26e5ba\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="8195cec44e711e7f45c53423a0492eb2dfd6b18f063093abb1442537cb26e5ba" Nov 8 00:06:08.151185 kubelet[3207]: E1108 00:06:08.151148 3207 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"8195cec44e711e7f45c53423a0492eb2dfd6b18f063093abb1442537cb26e5ba"} Nov 8 00:06:08.151316 kubelet[3207]: E1108 00:06:08.151295 3207 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"1d10263d-c821-45a1-a621-6034a3c06ce6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8195cec44e711e7f45c53423a0492eb2dfd6b18f063093abb1442537cb26e5ba\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/\"" Nov 8 00:06:08.151448 kubelet[3207]: E1108 00:06:08.151427 3207 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"1d10263d-c821-45a1-a621-6034a3c06ce6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8195cec44e711e7f45c53423a0492eb2dfd6b18f063093abb1442537cb26e5ba\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-5576ffc796-sj2hl" podUID="1d10263d-c821-45a1-a621-6034a3c06ce6" Nov 8 00:06:08.161785 containerd[1723]: time="2025-11-08T00:06:08.161198075Z" level=error msg="StopPodSandbox for \"177bbe92d862c15f896d1c83e995b093e17332963e6d1245c2107ad9d0671930\" failed" error="failed to destroy network for sandbox \"177bbe92d862c15f896d1c83e995b093e17332963e6d1245c2107ad9d0671930\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:06:08.162929 kubelet[3207]: E1108 00:06:08.162727 3207 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"177bbe92d862c15f896d1c83e995b093e17332963e6d1245c2107ad9d0671930\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="177bbe92d862c15f896d1c83e995b093e17332963e6d1245c2107ad9d0671930" Nov 8 00:06:08.162929 kubelet[3207]: E1108 00:06:08.162852 3207 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"177bbe92d862c15f896d1c83e995b093e17332963e6d1245c2107ad9d0671930"} Nov 8 00:06:08.162929 kubelet[3207]: E1108 00:06:08.162914 3207 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b965bf6a-7220-4fbc-b608-85c677cf8e39\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"177bbe92d862c15f896d1c83e995b093e17332963e6d1245c2107ad9d0671930\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 8 00:06:08.164097 kubelet[3207]: E1108 00:06:08.162938 3207 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b965bf6a-7220-4fbc-b608-85c677cf8e39\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"177bbe92d862c15f896d1c83e995b093e17332963e6d1245c2107ad9d0671930\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-59cf6d9fcc-xrwqb" podUID="b965bf6a-7220-4fbc-b608-85c677cf8e39" Nov 8 00:06:08.166466 containerd[1723]: time="2025-11-08T00:06:08.166416995Z" level=error msg="StopPodSandbox for \"bc2ad4903e7ca8cdc6e2c364c5ee9fad5c1bd303cbcc15c9d91290f6a1e27268\" failed" error="failed to destroy network for sandbox \"bc2ad4903e7ca8cdc6e2c364c5ee9fad5c1bd303cbcc15c9d91290f6a1e27268\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:06:08.167576 kubelet[3207]: E1108 00:06:08.166969 3207 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"bc2ad4903e7ca8cdc6e2c364c5ee9fad5c1bd303cbcc15c9d91290f6a1e27268\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="bc2ad4903e7ca8cdc6e2c364c5ee9fad5c1bd303cbcc15c9d91290f6a1e27268" Nov 8 00:06:08.167576 kubelet[3207]: E1108 00:06:08.167385 3207 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"bc2ad4903e7ca8cdc6e2c364c5ee9fad5c1bd303cbcc15c9d91290f6a1e27268"} Nov 8 00:06:08.167576 kubelet[3207]: E1108 00:06:08.167425 3207 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ef9b5a81-e6ae-4009-b58b-0441376d2cb7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"bc2ad4903e7ca8cdc6e2c364c5ee9fad5c1bd303cbcc15c9d91290f6a1e27268\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 8 00:06:08.167576 kubelet[3207]: E1108 00:06:08.167460 3207 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ef9b5a81-e6ae-4009-b58b-0441376d2cb7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"bc2ad4903e7ca8cdc6e2c364c5ee9fad5c1bd303cbcc15c9d91290f6a1e27268\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-5ctrp" podUID="ef9b5a81-e6ae-4009-b58b-0441376d2cb7" Nov 8 00:06:11.583782 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2831622528.mount: Deactivated successfully. 
Nov 8 00:06:11.939700 containerd[1723]: time="2025-11-08T00:06:11.939575220Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:06:11.943668 containerd[1723]: time="2025-11-08T00:06:11.943618261Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=150934562" Nov 8 00:06:11.947302 containerd[1723]: time="2025-11-08T00:06:11.947250421Z" level=info msg="ImageCreate event name:\"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:06:11.952200 containerd[1723]: time="2025-11-08T00:06:11.952147781Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:06:11.953131 containerd[1723]: time="2025-11-08T00:06:11.952701021Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"150934424\" in 4.982728123s" Nov 8 00:06:11.953131 containerd[1723]: time="2025-11-08T00:06:11.952736701Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\"" Nov 8 00:06:11.974129 containerd[1723]: time="2025-11-08T00:06:11.973644504Z" level=info msg="CreateContainer within sandbox \"8db3f5a319835595fd440971c62d583abca70c133fdf7bf9036070b9c8fa9702\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Nov 8 00:06:12.014314 containerd[1723]: time="2025-11-08T00:06:12.014268308Z" level=info msg="CreateContainer within sandbox \"8db3f5a319835595fd440971c62d583abca70c133fdf7bf9036070b9c8fa9702\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"710fcb4454d500ef2d0674038608fbb1a5122e25a0a163c7182f8b8ad415f238\"" Nov 8 00:06:12.017518 containerd[1723]: time="2025-11-08T00:06:12.015928868Z" level=info msg="StartContainer for \"710fcb4454d500ef2d0674038608fbb1a5122e25a0a163c7182f8b8ad415f238\"" Nov 8 00:06:12.041931 systemd[1]: Started cri-containerd-710fcb4454d500ef2d0674038608fbb1a5122e25a0a163c7182f8b8ad415f238.scope - libcontainer container 710fcb4454d500ef2d0674038608fbb1a5122e25a0a163c7182f8b8ad415f238. Nov 8 00:06:12.072959 containerd[1723]: time="2025-11-08T00:06:12.072909194Z" level=info msg="StartContainer for \"710fcb4454d500ef2d0674038608fbb1a5122e25a0a163c7182f8b8ad415f238\" returns successfully" Nov 8 00:06:12.537484 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Nov 8 00:06:12.537607 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
Nov 8 00:06:12.809641 containerd[1723]: time="2025-11-08T00:06:12.809489033Z" level=info msg="StopPodSandbox for \"8195cec44e711e7f45c53423a0492eb2dfd6b18f063093abb1442537cb26e5ba\"" Nov 8 00:06:13.160287 containerd[1723]: 2025-11-08 00:06:13.017 [INFO][4431] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8195cec44e711e7f45c53423a0492eb2dfd6b18f063093abb1442537cb26e5ba" Nov 8 00:06:13.160287 containerd[1723]: 2025-11-08 00:06:13.018 [INFO][4431] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="8195cec44e711e7f45c53423a0492eb2dfd6b18f063093abb1442537cb26e5ba" iface="eth0" netns="/var/run/netns/cni-4e038c91-41a5-dc5e-9633-422c096c0947" Nov 8 00:06:13.160287 containerd[1723]: 2025-11-08 00:06:13.020 [INFO][4431] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="8195cec44e711e7f45c53423a0492eb2dfd6b18f063093abb1442537cb26e5ba" iface="eth0" netns="/var/run/netns/cni-4e038c91-41a5-dc5e-9633-422c096c0947" Nov 8 00:06:13.160287 containerd[1723]: 2025-11-08 00:06:13.021 [INFO][4431] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="8195cec44e711e7f45c53423a0492eb2dfd6b18f063093abb1442537cb26e5ba" iface="eth0" netns="/var/run/netns/cni-4e038c91-41a5-dc5e-9633-422c096c0947" Nov 8 00:06:13.160287 containerd[1723]: 2025-11-08 00:06:13.021 [INFO][4431] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8195cec44e711e7f45c53423a0492eb2dfd6b18f063093abb1442537cb26e5ba" Nov 8 00:06:13.160287 containerd[1723]: 2025-11-08 00:06:13.021 [INFO][4431] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8195cec44e711e7f45c53423a0492eb2dfd6b18f063093abb1442537cb26e5ba" Nov 8 00:06:13.160287 containerd[1723]: 2025-11-08 00:06:13.099 [INFO][4439] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="8195cec44e711e7f45c53423a0492eb2dfd6b18f063093abb1442537cb26e5ba" HandleID="k8s-pod-network.8195cec44e711e7f45c53423a0492eb2dfd6b18f063093abb1442537cb26e5ba" Workload="ci--4081.3.6--n--32f19bad4d-k8s-whisker--5576ffc796--sj2hl-eth0" Nov 8 00:06:13.160287 containerd[1723]: 2025-11-08 00:06:13.102 [INFO][4439] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:06:13.160287 containerd[1723]: 2025-11-08 00:06:13.103 [INFO][4439] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:06:13.160287 containerd[1723]: 2025-11-08 00:06:13.146 [WARNING][4439] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="8195cec44e711e7f45c53423a0492eb2dfd6b18f063093abb1442537cb26e5ba" HandleID="k8s-pod-network.8195cec44e711e7f45c53423a0492eb2dfd6b18f063093abb1442537cb26e5ba" Workload="ci--4081.3.6--n--32f19bad4d-k8s-whisker--5576ffc796--sj2hl-eth0" Nov 8 00:06:13.160287 containerd[1723]: 2025-11-08 00:06:13.147 [INFO][4439] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="8195cec44e711e7f45c53423a0492eb2dfd6b18f063093abb1442537cb26e5ba" HandleID="k8s-pod-network.8195cec44e711e7f45c53423a0492eb2dfd6b18f063093abb1442537cb26e5ba" Workload="ci--4081.3.6--n--32f19bad4d-k8s-whisker--5576ffc796--sj2hl-eth0" Nov 8 00:06:13.160287 containerd[1723]: 2025-11-08 00:06:13.153 [INFO][4439] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:06:13.160287 containerd[1723]: 2025-11-08 00:06:13.156 [INFO][4431] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="8195cec44e711e7f45c53423a0492eb2dfd6b18f063093abb1442537cb26e5ba" Nov 8 00:06:13.164221 systemd[1]: run-netns-cni\x2d4e038c91\x2d41a5\x2ddc5e\x2d9633\x2d422c096c0947.mount: Deactivated successfully. Nov 8 00:06:13.169811 containerd[1723]: time="2025-11-08T00:06:13.169609711Z" level=info msg="TearDown network for sandbox \"8195cec44e711e7f45c53423a0492eb2dfd6b18f063093abb1442537cb26e5ba\" successfully" Nov 8 00:06:13.169811 containerd[1723]: time="2025-11-08T00:06:13.169651391Z" level=info msg="StopPodSandbox for \"8195cec44e711e7f45c53423a0492eb2dfd6b18f063093abb1442537cb26e5ba\" returns successfully" Nov 8 00:06:13.288924 kubelet[3207]: I1108 00:06:13.288881 3207 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/1d10263d-c821-45a1-a621-6034a3c06ce6-whisker-backend-key-pair\") pod \"1d10263d-c821-45a1-a621-6034a3c06ce6\" (UID: \"1d10263d-c821-45a1-a621-6034a3c06ce6\") " Nov 8 00:06:13.289337 kubelet[3207]: I1108 00:06:13.288994 3207 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2rlbv\" (UniqueName: \"kubernetes.io/projected/1d10263d-c821-45a1-a621-6034a3c06ce6-kube-api-access-2rlbv\") pod \"1d10263d-c821-45a1-a621-6034a3c06ce6\" (UID: \"1d10263d-c821-45a1-a621-6034a3c06ce6\") " Nov 8 00:06:13.289337 kubelet[3207]: I1108 00:06:13.289021 3207 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1d10263d-c821-45a1-a621-6034a3c06ce6-whisker-ca-bundle\") pod \"1d10263d-c821-45a1-a621-6034a3c06ce6\" (UID: \"1d10263d-c821-45a1-a621-6034a3c06ce6\") " Nov 8 00:06:13.291314 kubelet[3207]: I1108 00:06:13.290638 3207 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1d10263d-c821-45a1-a621-6034a3c06ce6-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "1d10263d-c821-45a1-a621-6034a3c06ce6" (UID: "1d10263d-c821-45a1-a621-6034a3c06ce6"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Nov 8 00:06:13.299444 systemd[1]: var-lib-kubelet-pods-1d10263d\x2dc821\x2d45a1\x2da621\x2d6034a3c06ce6-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d2rlbv.mount: Deactivated successfully. Nov 8 00:06:13.299657 kubelet[3207]: I1108 00:06:13.299448 3207 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d10263d-c821-45a1-a621-6034a3c06ce6-kube-api-access-2rlbv" (OuterVolumeSpecName: "kube-api-access-2rlbv") pod "1d10263d-c821-45a1-a621-6034a3c06ce6" (UID: "1d10263d-c821-45a1-a621-6034a3c06ce6"). InnerVolumeSpecName "kube-api-access-2rlbv". PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 8 00:06:13.303023 kubelet[3207]: I1108 00:06:13.302958 3207 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d10263d-c821-45a1-a621-6034a3c06ce6-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "1d10263d-c821-45a1-a621-6034a3c06ce6" (UID: "1d10263d-c821-45a1-a621-6034a3c06ce6"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Nov 8 00:06:13.303634 systemd[1]: var-lib-kubelet-pods-1d10263d\x2dc821\x2d45a1\x2da621\x2d6034a3c06ce6-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Nov 8 00:06:13.389718 kubelet[3207]: I1108 00:06:13.389634 3207 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-2rlbv\" (UniqueName: \"kubernetes.io/projected/1d10263d-c821-45a1-a621-6034a3c06ce6-kube-api-access-2rlbv\") on node \"ci-4081.3.6-n-32f19bad4d\" DevicePath \"\"" Nov 8 00:06:13.389718 kubelet[3207]: I1108 00:06:13.389674 3207 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1d10263d-c821-45a1-a621-6034a3c06ce6-whisker-ca-bundle\") on node \"ci-4081.3.6-n-32f19bad4d\" DevicePath \"\"" Nov 8 00:06:13.389718 kubelet[3207]: I1108 00:06:13.389687 3207 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/1d10263d-c821-45a1-a621-6034a3c06ce6-whisker-backend-key-pair\") on node \"ci-4081.3.6-n-32f19bad4d\" DevicePath \"\"" Nov 8 00:06:13.818504 containerd[1723]: time="2025-11-08T00:06:13.818458820Z" level=info msg="StopPodSandbox for \"8195cec44e711e7f45c53423a0492eb2dfd6b18f063093abb1442537cb26e5ba\"" Nov 8 00:06:13.831276 systemd[1]: Removed slice kubepods-besteffort-pod1d10263d_c821_45a1_a621_6034a3c06ce6.slice - libcontainer container kubepods-besteffort-pod1d10263d_c821_45a1_a621_6034a3c06ce6.slice. Nov 8 00:06:13.926455 containerd[1723]: 2025-11-08 00:06:13.874 [INFO][4489] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8195cec44e711e7f45c53423a0492eb2dfd6b18f063093abb1442537cb26e5ba" Nov 8 00:06:13.926455 containerd[1723]: 2025-11-08 00:06:13.874 [INFO][4489] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8195cec44e711e7f45c53423a0492eb2dfd6b18f063093abb1442537cb26e5ba" iface="eth0" netns="" Nov 8 00:06:13.926455 containerd[1723]: 2025-11-08 00:06:13.874 [INFO][4489] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8195cec44e711e7f45c53423a0492eb2dfd6b18f063093abb1442537cb26e5ba" Nov 8 00:06:13.926455 containerd[1723]: 2025-11-08 00:06:13.874 [INFO][4489] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8195cec44e711e7f45c53423a0492eb2dfd6b18f063093abb1442537cb26e5ba" Nov 8 00:06:13.926455 containerd[1723]: 2025-11-08 00:06:13.911 [INFO][4497] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="8195cec44e711e7f45c53423a0492eb2dfd6b18f063093abb1442537cb26e5ba" HandleID="k8s-pod-network.8195cec44e711e7f45c53423a0492eb2dfd6b18f063093abb1442537cb26e5ba" Workload="ci--4081.3.6--n--32f19bad4d-k8s-whisker--5576ffc796--sj2hl-eth0" Nov 8 00:06:13.926455 containerd[1723]: 2025-11-08 00:06:13.911 [INFO][4497] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:06:13.926455 containerd[1723]: 2025-11-08 00:06:13.912 [INFO][4497] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:06:13.926455 containerd[1723]: 2025-11-08 00:06:13.920 [WARNING][4497] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8195cec44e711e7f45c53423a0492eb2dfd6b18f063093abb1442537cb26e5ba" HandleID="k8s-pod-network.8195cec44e711e7f45c53423a0492eb2dfd6b18f063093abb1442537cb26e5ba" Workload="ci--4081.3.6--n--32f19bad4d-k8s-whisker--5576ffc796--sj2hl-eth0" Nov 8 00:06:13.926455 containerd[1723]: 2025-11-08 00:06:13.920 [INFO][4497] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="8195cec44e711e7f45c53423a0492eb2dfd6b18f063093abb1442537cb26e5ba" HandleID="k8s-pod-network.8195cec44e711e7f45c53423a0492eb2dfd6b18f063093abb1442537cb26e5ba" Workload="ci--4081.3.6--n--32f19bad4d-k8s-whisker--5576ffc796--sj2hl-eth0" Nov 8 00:06:13.926455 containerd[1723]: 2025-11-08 00:06:13.923 [INFO][4497] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:06:13.926455 containerd[1723]: 2025-11-08 00:06:13.924 [INFO][4489] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8195cec44e711e7f45c53423a0492eb2dfd6b18f063093abb1442537cb26e5ba" Nov 8 00:06:13.926906 containerd[1723]: time="2025-11-08T00:06:13.926506392Z" level=info msg="TearDown network for sandbox \"8195cec44e711e7f45c53423a0492eb2dfd6b18f063093abb1442537cb26e5ba\" successfully" Nov 8 00:06:13.926906 containerd[1723]: time="2025-11-08T00:06:13.926532512Z" level=info msg="StopPodSandbox for \"8195cec44e711e7f45c53423a0492eb2dfd6b18f063093abb1442537cb26e5ba\" returns successfully" Nov 8 00:06:13.927272 containerd[1723]: time="2025-11-08T00:06:13.927240952Z" level=info msg="RemovePodSandbox for \"8195cec44e711e7f45c53423a0492eb2dfd6b18f063093abb1442537cb26e5ba\"" Nov 8 00:06:13.927317 containerd[1723]: time="2025-11-08T00:06:13.927282232Z" level=info msg="Forcibly stopping sandbox \"8195cec44e711e7f45c53423a0492eb2dfd6b18f063093abb1442537cb26e5ba\"" Nov 8 00:06:14.019989 containerd[1723]: 2025-11-08 00:06:13.974 [INFO][4511] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8195cec44e711e7f45c53423a0492eb2dfd6b18f063093abb1442537cb26e5ba" Nov 8 00:06:14.019989 containerd[1723]: 2025-11-08 00:06:13.974 [INFO][4511] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8195cec44e711e7f45c53423a0492eb2dfd6b18f063093abb1442537cb26e5ba" iface="eth0" netns="" Nov 8 00:06:14.019989 containerd[1723]: 2025-11-08 00:06:13.974 [INFO][4511] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8195cec44e711e7f45c53423a0492eb2dfd6b18f063093abb1442537cb26e5ba" Nov 8 00:06:14.019989 containerd[1723]: 2025-11-08 00:06:13.974 [INFO][4511] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8195cec44e711e7f45c53423a0492eb2dfd6b18f063093abb1442537cb26e5ba" Nov 8 00:06:14.019989 containerd[1723]: 2025-11-08 00:06:14.003 [INFO][4518] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="8195cec44e711e7f45c53423a0492eb2dfd6b18f063093abb1442537cb26e5ba" HandleID="k8s-pod-network.8195cec44e711e7f45c53423a0492eb2dfd6b18f063093abb1442537cb26e5ba" Workload="ci--4081.3.6--n--32f19bad4d-k8s-whisker--5576ffc796--sj2hl-eth0" Nov 8 00:06:14.019989 containerd[1723]: 2025-11-08 00:06:14.003 [INFO][4518] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:06:14.019989 containerd[1723]: 2025-11-08 00:06:14.004 [INFO][4518] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:06:14.019989 containerd[1723]: 2025-11-08 00:06:14.014 [WARNING][4518] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8195cec44e711e7f45c53423a0492eb2dfd6b18f063093abb1442537cb26e5ba" HandleID="k8s-pod-network.8195cec44e711e7f45c53423a0492eb2dfd6b18f063093abb1442537cb26e5ba" Workload="ci--4081.3.6--n--32f19bad4d-k8s-whisker--5576ffc796--sj2hl-eth0" Nov 8 00:06:14.019989 containerd[1723]: 2025-11-08 00:06:14.015 [INFO][4518] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="8195cec44e711e7f45c53423a0492eb2dfd6b18f063093abb1442537cb26e5ba" HandleID="k8s-pod-network.8195cec44e711e7f45c53423a0492eb2dfd6b18f063093abb1442537cb26e5ba" Workload="ci--4081.3.6--n--32f19bad4d-k8s-whisker--5576ffc796--sj2hl-eth0" Nov 8 00:06:14.019989 containerd[1723]: 2025-11-08 00:06:14.016 [INFO][4518] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:06:14.019989 containerd[1723]: 2025-11-08 00:06:14.018 [INFO][4511] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8195cec44e711e7f45c53423a0492eb2dfd6b18f063093abb1442537cb26e5ba" Nov 8 00:06:14.020365 containerd[1723]: time="2025-11-08T00:06:14.020031402Z" level=info msg="TearDown network for sandbox \"8195cec44e711e7f45c53423a0492eb2dfd6b18f063093abb1442537cb26e5ba\" successfully" Nov 8 00:06:14.028007 containerd[1723]: time="2025-11-08T00:06:14.027958123Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8195cec44e711e7f45c53423a0492eb2dfd6b18f063093abb1442537cb26e5ba\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 8 00:06:14.028110 containerd[1723]: time="2025-11-08T00:06:14.028032683Z" level=info msg="RemovePodSandbox \"8195cec44e711e7f45c53423a0492eb2dfd6b18f063093abb1442537cb26e5ba\" returns successfully" Nov 8 00:06:14.071138 systemd[1]: run-containerd-runc-k8s.io-710fcb4454d500ef2d0674038608fbb1a5122e25a0a163c7182f8b8ad415f238-runc.mf4wEF.mount: Deactivated successfully. Nov 8 00:06:14.072762 kubelet[3207]: I1108 00:06:14.072655 3207 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-9zvvp" podStartSLOduration=2.772004507 podStartE2EDuration="32.072636367s" podCreationTimestamp="2025-11-08 00:05:42 +0000 UTC" firstStartedPulling="2025-11-08 00:05:42.653201402 +0000 UTC m=+28.969094472" lastFinishedPulling="2025-11-08 00:06:11.953833262 +0000 UTC m=+58.269726332" observedRunningTime="2025-11-08 00:06:13.151612869 +0000 UTC m=+59.467505939" watchObservedRunningTime="2025-11-08 00:06:14.072636367 +0000 UTC m=+60.388529437" Nov 8 00:06:14.344916 systemd[1]: Created slice kubepods-besteffort-podb8a248aa_7909_4555_958d_ace4846b2c48.slice - libcontainer container kubepods-besteffort-podb8a248aa_7909_4555_958d_ace4846b2c48.slice. 
Nov 8 00:06:14.397230 kubelet[3207]: I1108 00:06:14.397069 3207 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/b8a248aa-7909-4555-958d-ace4846b2c48-whisker-backend-key-pair\") pod \"whisker-6f59845cf4-td2zr\" (UID: \"b8a248aa-7909-4555-958d-ace4846b2c48\") " pod="calico-system/whisker-6f59845cf4-td2zr" Nov 8 00:06:14.397230 kubelet[3207]: I1108 00:06:14.397114 3207 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b8a248aa-7909-4555-958d-ace4846b2c48-whisker-ca-bundle\") pod \"whisker-6f59845cf4-td2zr\" (UID: \"b8a248aa-7909-4555-958d-ace4846b2c48\") " pod="calico-system/whisker-6f59845cf4-td2zr" Nov 8 00:06:14.397230 kubelet[3207]: I1108 00:06:14.397138 3207 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7xpfh\" (UniqueName: \"kubernetes.io/projected/b8a248aa-7909-4555-958d-ace4846b2c48-kube-api-access-7xpfh\") pod \"whisker-6f59845cf4-td2zr\" (UID: \"b8a248aa-7909-4555-958d-ace4846b2c48\") " pod="calico-system/whisker-6f59845cf4-td2zr" Nov 8 00:06:14.651118 containerd[1723]: time="2025-11-08T00:06:14.650714469Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6f59845cf4-td2zr,Uid:b8a248aa-7909-4555-958d-ace4846b2c48,Namespace:calico-system,Attempt:0,}" Nov 8 00:06:14.907851 kernel: bpftool[4669]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Nov 8 00:06:15.537921 systemd-networkd[1447]: vxlan.calico: Link UP Nov 8 00:06:15.537931 systemd-networkd[1447]: vxlan.calico: Gained carrier Nov 8 00:06:15.565659 systemd-networkd[1447]: cali7bfa3601999: Link UP Nov 8 00:06:15.567138 systemd-networkd[1447]: cali7bfa3601999: Gained carrier Nov 8 00:06:15.601444 containerd[1723]: 2025-11-08 00:06:15.352 [INFO][4684] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--32f19bad4d-k8s-whisker--6f59845cf4--td2zr-eth0 whisker-6f59845cf4- calico-system b8a248aa-7909-4555-958d-ace4846b2c48 940 0 2025-11-08 00:06:14 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:6f59845cf4 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4081.3.6-n-32f19bad4d whisker-6f59845cf4-td2zr eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali7bfa3601999 [] [] }} ContainerID="46230df69e7f67da2f17183d6b3d37ccf9a93a48c4d303a92913e0500e312865" Namespace="calico-system" Pod="whisker-6f59845cf4-td2zr" WorkloadEndpoint="ci--4081.3.6--n--32f19bad4d-k8s-whisker--6f59845cf4--td2zr-" Nov 8 00:06:15.601444 containerd[1723]: 2025-11-08 00:06:15.352 [INFO][4684] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="46230df69e7f67da2f17183d6b3d37ccf9a93a48c4d303a92913e0500e312865" Namespace="calico-system" Pod="whisker-6f59845cf4-td2zr" WorkloadEndpoint="ci--4081.3.6--n--32f19bad4d-k8s-whisker--6f59845cf4--td2zr-eth0" Nov 8 00:06:15.601444 containerd[1723]: 2025-11-08 00:06:15.404 [INFO][4695] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="46230df69e7f67da2f17183d6b3d37ccf9a93a48c4d303a92913e0500e312865" HandleID="k8s-pod-network.46230df69e7f67da2f17183d6b3d37ccf9a93a48c4d303a92913e0500e312865" Workload="ci--4081.3.6--n--32f19bad4d-k8s-whisker--6f59845cf4--td2zr-eth0" Nov 
8 00:06:15.601444 containerd[1723]: 2025-11-08 00:06:15.404 [INFO][4695] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="46230df69e7f67da2f17183d6b3d37ccf9a93a48c4d303a92913e0500e312865" HandleID="k8s-pod-network.46230df69e7f67da2f17183d6b3d37ccf9a93a48c4d303a92913e0500e312865" Workload="ci--4081.3.6--n--32f19bad4d-k8s-whisker--6f59845cf4--td2zr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b590), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-32f19bad4d", "pod":"whisker-6f59845cf4-td2zr", "timestamp":"2025-11-08 00:06:15.404671509 +0000 UTC"}, Hostname:"ci-4081.3.6-n-32f19bad4d", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 00:06:15.601444 containerd[1723]: 2025-11-08 00:06:15.404 [INFO][4695] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:06:15.601444 containerd[1723]: 2025-11-08 00:06:15.404 [INFO][4695] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:06:15.601444 containerd[1723]: 2025-11-08 00:06:15.405 [INFO][4695] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-32f19bad4d' Nov 8 00:06:15.601444 containerd[1723]: 2025-11-08 00:06:15.415 [INFO][4695] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.46230df69e7f67da2f17183d6b3d37ccf9a93a48c4d303a92913e0500e312865" host="ci-4081.3.6-n-32f19bad4d" Nov 8 00:06:15.601444 containerd[1723]: 2025-11-08 00:06:15.419 [INFO][4695] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-32f19bad4d" Nov 8 00:06:15.601444 containerd[1723]: 2025-11-08 00:06:15.425 [INFO][4695] ipam/ipam.go 511: Trying affinity for 192.168.59.128/26 host="ci-4081.3.6-n-32f19bad4d" Nov 8 00:06:15.601444 containerd[1723]: 2025-11-08 00:06:15.427 [INFO][4695] ipam/ipam.go 158: Attempting to load block cidr=192.168.59.128/26 host="ci-4081.3.6-n-32f19bad4d" Nov 8 00:06:15.601444 containerd[1723]: 2025-11-08 00:06:15.429 [INFO][4695] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.59.128/26 host="ci-4081.3.6-n-32f19bad4d" Nov 8 00:06:15.601444 containerd[1723]: 2025-11-08 00:06:15.429 [INFO][4695] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.59.128/26 handle="k8s-pod-network.46230df69e7f67da2f17183d6b3d37ccf9a93a48c4d303a92913e0500e312865" host="ci-4081.3.6-n-32f19bad4d" Nov 8 00:06:15.601444 containerd[1723]: 2025-11-08 00:06:15.433 [INFO][4695] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.46230df69e7f67da2f17183d6b3d37ccf9a93a48c4d303a92913e0500e312865 Nov 8 00:06:15.601444 containerd[1723]: 2025-11-08 00:06:15.441 [INFO][4695] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.59.128/26 handle="k8s-pod-network.46230df69e7f67da2f17183d6b3d37ccf9a93a48c4d303a92913e0500e312865" host="ci-4081.3.6-n-32f19bad4d" Nov 8 00:06:15.601444 containerd[1723]: 2025-11-08 00:06:15.449 [INFO][4695] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.59.129/26] block=192.168.59.128/26 handle="k8s-pod-network.46230df69e7f67da2f17183d6b3d37ccf9a93a48c4d303a92913e0500e312865" host="ci-4081.3.6-n-32f19bad4d" Nov 8 00:06:15.601444 containerd[1723]: 2025-11-08 00:06:15.449 [INFO][4695] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.59.129/26] handle="k8s-pod-network.46230df69e7f67da2f17183d6b3d37ccf9a93a48c4d303a92913e0500e312865" 
host="ci-4081.3.6-n-32f19bad4d" Nov 8 00:06:15.601444 containerd[1723]: 2025-11-08 00:06:15.449 [INFO][4695] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:06:15.601444 containerd[1723]: 2025-11-08 00:06:15.449 [INFO][4695] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.59.129/26] IPv6=[] ContainerID="46230df69e7f67da2f17183d6b3d37ccf9a93a48c4d303a92913e0500e312865" HandleID="k8s-pod-network.46230df69e7f67da2f17183d6b3d37ccf9a93a48c4d303a92913e0500e312865" Workload="ci--4081.3.6--n--32f19bad4d-k8s-whisker--6f59845cf4--td2zr-eth0" Nov 8 00:06:15.616294 containerd[1723]: 2025-11-08 00:06:15.452 [INFO][4684] cni-plugin/k8s.go 418: Populated endpoint ContainerID="46230df69e7f67da2f17183d6b3d37ccf9a93a48c4d303a92913e0500e312865" Namespace="calico-system" Pod="whisker-6f59845cf4-td2zr" WorkloadEndpoint="ci--4081.3.6--n--32f19bad4d-k8s-whisker--6f59845cf4--td2zr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--32f19bad4d-k8s-whisker--6f59845cf4--td2zr-eth0", GenerateName:"whisker-6f59845cf4-", Namespace:"calico-system", SelfLink:"", UID:"b8a248aa-7909-4555-958d-ace4846b2c48", ResourceVersion:"940", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 6, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6f59845cf4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-32f19bad4d", ContainerID:"", Pod:"whisker-6f59845cf4-td2zr", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.59.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali7bfa3601999", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:06:15.616294 containerd[1723]: 2025-11-08 00:06:15.452 [INFO][4684] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.59.129/32] ContainerID="46230df69e7f67da2f17183d6b3d37ccf9a93a48c4d303a92913e0500e312865" Namespace="calico-system" Pod="whisker-6f59845cf4-td2zr" WorkloadEndpoint="ci--4081.3.6--n--32f19bad4d-k8s-whisker--6f59845cf4--td2zr-eth0" Nov 8 00:06:15.616294 containerd[1723]: 2025-11-08 00:06:15.452 [INFO][4684] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7bfa3601999 ContainerID="46230df69e7f67da2f17183d6b3d37ccf9a93a48c4d303a92913e0500e312865" Namespace="calico-system" Pod="whisker-6f59845cf4-td2zr" WorkloadEndpoint="ci--4081.3.6--n--32f19bad4d-k8s-whisker--6f59845cf4--td2zr-eth0" Nov 8 00:06:15.616294 containerd[1723]: 2025-11-08 00:06:15.566 [INFO][4684] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="46230df69e7f67da2f17183d6b3d37ccf9a93a48c4d303a92913e0500e312865" Namespace="calico-system" Pod="whisker-6f59845cf4-td2zr" WorkloadEndpoint="ci--4081.3.6--n--32f19bad4d-k8s-whisker--6f59845cf4--td2zr-eth0" Nov 8 00:06:15.616294 containerd[1723]: 2025-11-08 00:06:15.568 [INFO][4684] cni-plugin/k8s.go 446: Added Mac, 
interface name, and active container ID to endpoint ContainerID="46230df69e7f67da2f17183d6b3d37ccf9a93a48c4d303a92913e0500e312865" Namespace="calico-system" Pod="whisker-6f59845cf4-td2zr" WorkloadEndpoint="ci--4081.3.6--n--32f19bad4d-k8s-whisker--6f59845cf4--td2zr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--32f19bad4d-k8s-whisker--6f59845cf4--td2zr-eth0", GenerateName:"whisker-6f59845cf4-", Namespace:"calico-system", SelfLink:"", UID:"b8a248aa-7909-4555-958d-ace4846b2c48", ResourceVersion:"940", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 6, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6f59845cf4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-32f19bad4d", ContainerID:"46230df69e7f67da2f17183d6b3d37ccf9a93a48c4d303a92913e0500e312865", Pod:"whisker-6f59845cf4-td2zr", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.59.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali7bfa3601999", MAC:"a2:86:0f:29:b2:9b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:06:15.616294 containerd[1723]: 2025-11-08 00:06:15.594 [INFO][4684] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="46230df69e7f67da2f17183d6b3d37ccf9a93a48c4d303a92913e0500e312865" Namespace="calico-system" Pod="whisker-6f59845cf4-td2zr" WorkloadEndpoint="ci--4081.3.6--n--32f19bad4d-k8s-whisker--6f59845cf4--td2zr-eth0" Nov 8 00:06:15.787165 containerd[1723]: time="2025-11-08T00:06:15.786912830Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:06:15.787165 containerd[1723]: time="2025-11-08T00:06:15.786994550Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:06:15.787165 containerd[1723]: time="2025-11-08T00:06:15.787021870Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:06:15.787844 containerd[1723]: time="2025-11-08T00:06:15.787119270Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:06:15.810355 kubelet[3207]: I1108 00:06:15.809735 3207 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d10263d-c821-45a1-a621-6034a3c06ce6" path="/var/lib/kubelet/pods/1d10263d-c821-45a1-a621-6034a3c06ce6/volumes" Nov 8 00:06:15.856957 systemd[1]: Started cri-containerd-46230df69e7f67da2f17183d6b3d37ccf9a93a48c4d303a92913e0500e312865.scope - libcontainer container 46230df69e7f67da2f17183d6b3d37ccf9a93a48c4d303a92913e0500e312865. 
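
The IPAM trace above shows the full allocation path for the whisker pod: the plugin takes the host-wide IPAM lock, confirms this node's affinity to the block 192.168.59.128/26, writes the block to claim 192.168.59.129 under the handle k8s-pod-network.46230df69e7f..., and releases the lock. A minimal Go sketch of the same AutoAssign call through libcalico-go follows, assuming datastore configuration comes from the environment as it does for the node's CNI plugin; the import path and return types vary across Calico releases, so treat this as illustrative rather than the plugin's actual code.

package main

import (
	"context"
	"fmt"

	"github.com/projectcalico/calico/libcalico-go/lib/clientv3"
	"github.com/projectcalico/calico/libcalico-go/lib/ipam"
)

func main() {
	// Assumption: datastore settings (kubeconfig or etcd endpoints) are
	// supplied via the environment, as they are for the CNI plugin.
	c, err := clientv3.NewFromEnv()
	if err != nil {
		panic(err)
	}

	// Handle and attributes mirror the assignArgs logged above.
	handle := "k8s-pod-network.46230df69e7f67da2f17183d6b3d37ccf9a93a48c4d303a92913e0500e312865"
	v4, _, err := c.IPAM().AutoAssign(context.Background(), ipam.AutoAssignArgs{
		Num4:     1,
		Num6:     0,
		HandleID: &handle,
		Attrs: map[string]string{
			"namespace": "calico-system",
			"node":      "ci-4081.3.6-n-32f19bad4d",
			"pod":       "whisker-6f59845cf4-td2zr",
		},
		Hostname:    "ci-4081.3.6-n-32f19bad4d",
		IntendedUse: "Workload",
	})
	if err != nil {
		panic(err)
	}
	fmt.Println("assigned IPv4s:", v4) // expect one address from 192.168.59.128/26
}

The teardown entries later in this log ("Releasing address using handleID") are the inverse operation, which goes through the IPAM interface's ReleaseByHandle with the same handle string.
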
Nov 8 00:06:15.896074 containerd[1723]: time="2025-11-08T00:06:15.896020682Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6f59845cf4-td2zr,Uid:b8a248aa-7909-4555-958d-ace4846b2c48,Namespace:calico-system,Attempt:0,} returns sandbox id \"46230df69e7f67da2f17183d6b3d37ccf9a93a48c4d303a92913e0500e312865\"" Nov 8 00:06:15.900639 containerd[1723]: time="2025-11-08T00:06:15.900601762Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 8 00:06:16.176848 containerd[1723]: time="2025-11-08T00:06:16.176632232Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:06:16.179960 containerd[1723]: time="2025-11-08T00:06:16.179845152Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 8 00:06:16.179960 containerd[1723]: time="2025-11-08T00:06:16.179861272Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 8 00:06:16.180132 kubelet[3207]: E1108 00:06:16.180089 3207 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 8 00:06:16.180190 kubelet[3207]: E1108 00:06:16.180145 3207 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 8 00:06:16.181422 kubelet[3207]: E1108 00:06:16.181349 3207 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:41c4d131d342475c83c899a448c04516,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7xpfh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6f59845cf4-td2zr_calico-system(b8a248aa-7909-4555-958d-ace4846b2c48): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 8 00:06:16.183573 containerd[1723]: time="2025-11-08T00:06:16.183538392Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 8 00:06:16.478948 containerd[1723]: time="2025-11-08T00:06:16.478764264Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:06:16.481904 containerd[1723]: time="2025-11-08T00:06:16.481765064Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 8 00:06:16.481904 containerd[1723]: time="2025-11-08T00:06:16.481830984Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 8 00:06:16.482066 kubelet[3207]: E1108 00:06:16.482014 3207 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 8 00:06:16.482066 kubelet[3207]: E1108 00:06:16.482060 3207 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: 
not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 8 00:06:16.482228 kubelet[3207]: E1108 00:06:16.482175 3207 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7xpfh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6f59845cf4-td2zr_calico-system(b8a248aa-7909-4555-958d-ace4846b2c48): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 8 00:06:16.483662 kubelet[3207]: E1108 00:06:16.483564 3207 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6f59845cf4-td2zr" podUID="b8a248aa-7909-4555-958d-ace4846b2c48" Nov 8 00:06:17.014899 systemd-networkd[1447]: vxlan.calico: Gained IPv6LL Nov 8 00:06:17.058962 kubelet[3207]: E1108 00:06:17.058916 3207 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6f59845cf4-td2zr" podUID="b8a248aa-7909-4555-958d-ace4846b2c48" Nov 8 00:06:17.081008 update_engine[1679]: I20251108 00:06:17.080796 1679 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Nov 8 00:06:17.081358 update_engine[1679]: I20251108 00:06:17.081043 1679 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Nov 8 00:06:17.081358 update_engine[1679]: I20251108 00:06:17.081254 1679 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Nov 8 00:06:17.184190 update_engine[1679]: E20251108 00:06:17.184128 1679 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Nov 8 00:06:17.184325 update_engine[1679]: I20251108 00:06:17.184226 1679 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Nov 8 00:06:17.270929 systemd-networkd[1447]: cali7bfa3601999: Gained IPv6LL Nov 8 00:06:18.807146 containerd[1723]: time="2025-11-08T00:06:18.807095992Z" level=info msg="StopPodSandbox for \"c2e074d6e0a247772e5cffc0590e619e8d00680537fd8d9206160da872f20217\"" Nov 8 00:06:18.808926 containerd[1723]: time="2025-11-08T00:06:18.807361952Z" level=info msg="StopPodSandbox for \"bc2ad4903e7ca8cdc6e2c364c5ee9fad5c1bd303cbcc15c9d91290f6a1e27268\"" Nov 8 00:06:18.809688 containerd[1723]: time="2025-11-08T00:06:18.807416432Z" level=info msg="StopPodSandbox for \"b39d192468291b9ca34277c3e7347ca7e5027ff755a23f10b01fa0772fc81545\"" Nov 8 00:06:18.973884 containerd[1723]: 2025-11-08 00:06:18.920 [INFO][4849] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b39d192468291b9ca34277c3e7347ca7e5027ff755a23f10b01fa0772fc81545" Nov 8 00:06:18.973884 containerd[1723]: 2025-11-08 00:06:18.921 [INFO][4849] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="b39d192468291b9ca34277c3e7347ca7e5027ff755a23f10b01fa0772fc81545" iface="eth0" netns="/var/run/netns/cni-99f2d1e4-36cf-f0bc-78cc-e05cf11f00e2" Nov 8 00:06:18.973884 containerd[1723]: 2025-11-08 00:06:18.921 [INFO][4849] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="b39d192468291b9ca34277c3e7347ca7e5027ff755a23f10b01fa0772fc81545" iface="eth0" netns="/var/run/netns/cni-99f2d1e4-36cf-f0bc-78cc-e05cf11f00e2" Nov 8 00:06:18.973884 containerd[1723]: 2025-11-08 00:06:18.924 [INFO][4849] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="b39d192468291b9ca34277c3e7347ca7e5027ff755a23f10b01fa0772fc81545" iface="eth0" netns="/var/run/netns/cni-99f2d1e4-36cf-f0bc-78cc-e05cf11f00e2" Nov 8 00:06:18.973884 containerd[1723]: 2025-11-08 00:06:18.924 [INFO][4849] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b39d192468291b9ca34277c3e7347ca7e5027ff755a23f10b01fa0772fc81545" Nov 8 00:06:18.973884 containerd[1723]: 2025-11-08 00:06:18.924 [INFO][4849] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b39d192468291b9ca34277c3e7347ca7e5027ff755a23f10b01fa0772fc81545" Nov 8 00:06:18.973884 containerd[1723]: 2025-11-08 00:06:18.952 [INFO][4871] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="b39d192468291b9ca34277c3e7347ca7e5027ff755a23f10b01fa0772fc81545" HandleID="k8s-pod-network.b39d192468291b9ca34277c3e7347ca7e5027ff755a23f10b01fa0772fc81545" Workload="ci--4081.3.6--n--32f19bad4d-k8s-calico--apiserver--59cf6d9fcc--nmv8r-eth0" Nov 8 00:06:18.973884 containerd[1723]: 2025-11-08 00:06:18.952 [INFO][4871] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:06:18.973884 containerd[1723]: 2025-11-08 00:06:18.952 [INFO][4871] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:06:18.973884 containerd[1723]: 2025-11-08 00:06:18.964 [WARNING][4871] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="b39d192468291b9ca34277c3e7347ca7e5027ff755a23f10b01fa0772fc81545" HandleID="k8s-pod-network.b39d192468291b9ca34277c3e7347ca7e5027ff755a23f10b01fa0772fc81545" Workload="ci--4081.3.6--n--32f19bad4d-k8s-calico--apiserver--59cf6d9fcc--nmv8r-eth0" Nov 8 00:06:18.973884 containerd[1723]: 2025-11-08 00:06:18.965 [INFO][4871] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="b39d192468291b9ca34277c3e7347ca7e5027ff755a23f10b01fa0772fc81545" HandleID="k8s-pod-network.b39d192468291b9ca34277c3e7347ca7e5027ff755a23f10b01fa0772fc81545" Workload="ci--4081.3.6--n--32f19bad4d-k8s-calico--apiserver--59cf6d9fcc--nmv8r-eth0" Nov 8 00:06:18.973884 containerd[1723]: 2025-11-08 00:06:18.968 [INFO][4871] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:06:18.973884 containerd[1723]: 2025-11-08 00:06:18.971 [INFO][4849] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b39d192468291b9ca34277c3e7347ca7e5027ff755a23f10b01fa0772fc81545" Nov 8 00:06:18.975606 containerd[1723]: time="2025-11-08T00:06:18.974832730Z" level=info msg="TearDown network for sandbox \"b39d192468291b9ca34277c3e7347ca7e5027ff755a23f10b01fa0772fc81545\" successfully" Nov 8 00:06:18.980344 containerd[1723]: time="2025-11-08T00:06:18.977969850Z" level=info msg="StopPodSandbox for \"b39d192468291b9ca34277c3e7347ca7e5027ff755a23f10b01fa0772fc81545\" returns successfully" Nov 8 00:06:18.979366 systemd[1]: run-netns-cni\x2d99f2d1e4\x2d36cf\x2df0bc\x2d78cc\x2de05cf11f00e2.mount: Deactivated successfully. Nov 8 00:06:18.984325 containerd[1723]: time="2025-11-08T00:06:18.984209331Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-59cf6d9fcc-nmv8r,Uid:86741758-ba30-4cec-a95c-6af79e2546fe,Namespace:calico-apiserver,Attempt:1,}" Nov 8 00:06:19.003206 containerd[1723]: 2025-11-08 00:06:18.911 [INFO][4842] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c2e074d6e0a247772e5cffc0590e619e8d00680537fd8d9206160da872f20217" Nov 8 00:06:19.003206 containerd[1723]: 2025-11-08 00:06:18.912 [INFO][4842] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="c2e074d6e0a247772e5cffc0590e619e8d00680537fd8d9206160da872f20217" iface="eth0" netns="/var/run/netns/cni-b7be0b5e-1b2b-a653-0f2c-0b970b2d2383" Nov 8 00:06:19.003206 containerd[1723]: 2025-11-08 00:06:18.912 [INFO][4842] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="c2e074d6e0a247772e5cffc0590e619e8d00680537fd8d9206160da872f20217" iface="eth0" netns="/var/run/netns/cni-b7be0b5e-1b2b-a653-0f2c-0b970b2d2383" Nov 8 00:06:19.003206 containerd[1723]: 2025-11-08 00:06:18.914 [INFO][4842] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="c2e074d6e0a247772e5cffc0590e619e8d00680537fd8d9206160da872f20217" iface="eth0" netns="/var/run/netns/cni-b7be0b5e-1b2b-a653-0f2c-0b970b2d2383" Nov 8 00:06:19.003206 containerd[1723]: 2025-11-08 00:06:18.914 [INFO][4842] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c2e074d6e0a247772e5cffc0590e619e8d00680537fd8d9206160da872f20217" Nov 8 00:06:19.003206 containerd[1723]: 2025-11-08 00:06:18.914 [INFO][4842] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c2e074d6e0a247772e5cffc0590e619e8d00680537fd8d9206160da872f20217" Nov 8 00:06:19.003206 containerd[1723]: 2025-11-08 00:06:18.952 [INFO][4866] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="c2e074d6e0a247772e5cffc0590e619e8d00680537fd8d9206160da872f20217" HandleID="k8s-pod-network.c2e074d6e0a247772e5cffc0590e619e8d00680537fd8d9206160da872f20217" Workload="ci--4081.3.6--n--32f19bad4d-k8s-coredns--674b8bbfcf--664tp-eth0" Nov 8 00:06:19.003206 containerd[1723]: 2025-11-08 00:06:18.956 [INFO][4866] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:06:19.003206 containerd[1723]: 2025-11-08 00:06:18.968 [INFO][4866] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:06:19.003206 containerd[1723]: 2025-11-08 00:06:18.993 [WARNING][4866] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="c2e074d6e0a247772e5cffc0590e619e8d00680537fd8d9206160da872f20217" HandleID="k8s-pod-network.c2e074d6e0a247772e5cffc0590e619e8d00680537fd8d9206160da872f20217" Workload="ci--4081.3.6--n--32f19bad4d-k8s-coredns--674b8bbfcf--664tp-eth0" Nov 8 00:06:19.003206 containerd[1723]: 2025-11-08 00:06:18.993 [INFO][4866] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="c2e074d6e0a247772e5cffc0590e619e8d00680537fd8d9206160da872f20217" HandleID="k8s-pod-network.c2e074d6e0a247772e5cffc0590e619e8d00680537fd8d9206160da872f20217" Workload="ci--4081.3.6--n--32f19bad4d-k8s-coredns--674b8bbfcf--664tp-eth0" Nov 8 00:06:19.003206 containerd[1723]: 2025-11-08 00:06:18.997 [INFO][4866] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:06:19.003206 containerd[1723]: 2025-11-08 00:06:19.001 [INFO][4842] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c2e074d6e0a247772e5cffc0590e619e8d00680537fd8d9206160da872f20217" Nov 8 00:06:19.005132 containerd[1723]: time="2025-11-08T00:06:19.004891493Z" level=info msg="TearDown network for sandbox \"c2e074d6e0a247772e5cffc0590e619e8d00680537fd8d9206160da872f20217\" successfully" Nov 8 00:06:19.005132 containerd[1723]: time="2025-11-08T00:06:19.004936413Z" level=info msg="StopPodSandbox for \"c2e074d6e0a247772e5cffc0590e619e8d00680537fd8d9206160da872f20217\" returns successfully" Nov 8 00:06:19.007925 systemd[1]: run-netns-cni\x2db7be0b5e\x2d1b2b\x2da653\x2d0f2c\x2d0b970b2d2383.mount: Deactivated successfully. 
Nov 8 00:06:19.009197 containerd[1723]: time="2025-11-08T00:06:19.008161493Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-664tp,Uid:b9108a90-054b-4178-a637-f4f5bb2138bc,Namespace:kube-system,Attempt:1,}" Nov 8 00:06:19.025145 containerd[1723]: 2025-11-08 00:06:18.957 [INFO][4850] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="bc2ad4903e7ca8cdc6e2c364c5ee9fad5c1bd303cbcc15c9d91290f6a1e27268" Nov 8 00:06:19.025145 containerd[1723]: 2025-11-08 00:06:18.957 [INFO][4850] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="bc2ad4903e7ca8cdc6e2c364c5ee9fad5c1bd303cbcc15c9d91290f6a1e27268" iface="eth0" netns="/var/run/netns/cni-395cb1bb-7a3f-e4cb-1c71-97627b67520a" Nov 8 00:06:19.025145 containerd[1723]: 2025-11-08 00:06:18.957 [INFO][4850] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="bc2ad4903e7ca8cdc6e2c364c5ee9fad5c1bd303cbcc15c9d91290f6a1e27268" iface="eth0" netns="/var/run/netns/cni-395cb1bb-7a3f-e4cb-1c71-97627b67520a" Nov 8 00:06:19.025145 containerd[1723]: 2025-11-08 00:06:18.958 [INFO][4850] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="bc2ad4903e7ca8cdc6e2c364c5ee9fad5c1bd303cbcc15c9d91290f6a1e27268" iface="eth0" netns="/var/run/netns/cni-395cb1bb-7a3f-e4cb-1c71-97627b67520a" Nov 8 00:06:19.025145 containerd[1723]: 2025-11-08 00:06:18.958 [INFO][4850] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="bc2ad4903e7ca8cdc6e2c364c5ee9fad5c1bd303cbcc15c9d91290f6a1e27268" Nov 8 00:06:19.025145 containerd[1723]: 2025-11-08 00:06:18.958 [INFO][4850] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bc2ad4903e7ca8cdc6e2c364c5ee9fad5c1bd303cbcc15c9d91290f6a1e27268" Nov 8 00:06:19.025145 containerd[1723]: 2025-11-08 00:06:18.998 [INFO][4879] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="bc2ad4903e7ca8cdc6e2c364c5ee9fad5c1bd303cbcc15c9d91290f6a1e27268" HandleID="k8s-pod-network.bc2ad4903e7ca8cdc6e2c364c5ee9fad5c1bd303cbcc15c9d91290f6a1e27268" Workload="ci--4081.3.6--n--32f19bad4d-k8s-coredns--674b8bbfcf--5ctrp-eth0" Nov 8 00:06:19.025145 containerd[1723]: 2025-11-08 00:06:18.998 [INFO][4879] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:06:19.025145 containerd[1723]: 2025-11-08 00:06:18.998 [INFO][4879] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:06:19.025145 containerd[1723]: 2025-11-08 00:06:19.018 [WARNING][4879] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="bc2ad4903e7ca8cdc6e2c364c5ee9fad5c1bd303cbcc15c9d91290f6a1e27268" HandleID="k8s-pod-network.bc2ad4903e7ca8cdc6e2c364c5ee9fad5c1bd303cbcc15c9d91290f6a1e27268" Workload="ci--4081.3.6--n--32f19bad4d-k8s-coredns--674b8bbfcf--5ctrp-eth0" Nov 8 00:06:19.025145 containerd[1723]: 2025-11-08 00:06:19.018 [INFO][4879] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="bc2ad4903e7ca8cdc6e2c364c5ee9fad5c1bd303cbcc15c9d91290f6a1e27268" HandleID="k8s-pod-network.bc2ad4903e7ca8cdc6e2c364c5ee9fad5c1bd303cbcc15c9d91290f6a1e27268" Workload="ci--4081.3.6--n--32f19bad4d-k8s-coredns--674b8bbfcf--5ctrp-eth0" Nov 8 00:06:19.025145 containerd[1723]: 2025-11-08 00:06:19.020 [INFO][4879] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:06:19.025145 containerd[1723]: 2025-11-08 00:06:19.022 [INFO][4850] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="bc2ad4903e7ca8cdc6e2c364c5ee9fad5c1bd303cbcc15c9d91290f6a1e27268" Nov 8 00:06:19.028238 containerd[1723]: time="2025-11-08T00:06:19.025343655Z" level=info msg="TearDown network for sandbox \"bc2ad4903e7ca8cdc6e2c364c5ee9fad5c1bd303cbcc15c9d91290f6a1e27268\" successfully" Nov 8 00:06:19.028238 containerd[1723]: time="2025-11-08T00:06:19.025371335Z" level=info msg="StopPodSandbox for \"bc2ad4903e7ca8cdc6e2c364c5ee9fad5c1bd303cbcc15c9d91290f6a1e27268\" returns successfully" Nov 8 00:06:19.027670 systemd[1]: run-netns-cni\x2d395cb1bb\x2d7a3f\x2de4cb\x2d1c71\x2d97627b67520a.mount: Deactivated successfully. Nov 8 00:06:19.028970 containerd[1723]: time="2025-11-08T00:06:19.028923176Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-5ctrp,Uid:ef9b5a81-e6ae-4009-b58b-0441376d2cb7,Namespace:kube-system,Attempt:1,}" Nov 8 00:06:19.229268 systemd-networkd[1447]: calidaee1f8ee43: Link UP Nov 8 00:06:19.234034 systemd-networkd[1447]: calidaee1f8ee43: Gained carrier Nov 8 00:06:19.261234 containerd[1723]: 2025-11-08 00:06:19.105 [INFO][4886] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--32f19bad4d-k8s-calico--apiserver--59cf6d9fcc--nmv8r-eth0 calico-apiserver-59cf6d9fcc- calico-apiserver 86741758-ba30-4cec-a95c-6af79e2546fe 969 0 2025-11-08 00:05:34 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:59cf6d9fcc projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.6-n-32f19bad4d calico-apiserver-59cf6d9fcc-nmv8r eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calidaee1f8ee43 [] [] }} ContainerID="549e8fbc4f2c868c7c56d28fa71c593b1e8e754dde14a42f3da21371e9a40afc" Namespace="calico-apiserver" Pod="calico-apiserver-59cf6d9fcc-nmv8r" WorkloadEndpoint="ci--4081.3.6--n--32f19bad4d-k8s-calico--apiserver--59cf6d9fcc--nmv8r-" Nov 8 00:06:19.261234 containerd[1723]: 2025-11-08 00:06:19.106 [INFO][4886] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="549e8fbc4f2c868c7c56d28fa71c593b1e8e754dde14a42f3da21371e9a40afc" Namespace="calico-apiserver" Pod="calico-apiserver-59cf6d9fcc-nmv8r" WorkloadEndpoint="ci--4081.3.6--n--32f19bad4d-k8s-calico--apiserver--59cf6d9fcc--nmv8r-eth0" Nov 8 00:06:19.261234 containerd[1723]: 2025-11-08 00:06:19.150 [INFO][4908] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="549e8fbc4f2c868c7c56d28fa71c593b1e8e754dde14a42f3da21371e9a40afc" HandleID="k8s-pod-network.549e8fbc4f2c868c7c56d28fa71c593b1e8e754dde14a42f3da21371e9a40afc" Workload="ci--4081.3.6--n--32f19bad4d-k8s-calico--apiserver--59cf6d9fcc--nmv8r-eth0" Nov 8 00:06:19.261234 containerd[1723]: 2025-11-08 00:06:19.150 [INFO][4908] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="549e8fbc4f2c868c7c56d28fa71c593b1e8e754dde14a42f3da21371e9a40afc" HandleID="k8s-pod-network.549e8fbc4f2c868c7c56d28fa71c593b1e8e754dde14a42f3da21371e9a40afc" Workload="ci--4081.3.6--n--32f19bad4d-k8s-calico--apiserver--59cf6d9fcc--nmv8r-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d3620), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.6-n-32f19bad4d", "pod":"calico-apiserver-59cf6d9fcc-nmv8r", "timestamp":"2025-11-08 00:06:19.150239629 +0000 UTC"}, Hostname:"ci-4081.3.6-n-32f19bad4d", IPv4Pools:[]net.IPNet{}, 
IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 00:06:19.261234 containerd[1723]: 2025-11-08 00:06:19.150 [INFO][4908] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:06:19.261234 containerd[1723]: 2025-11-08 00:06:19.150 [INFO][4908] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:06:19.261234 containerd[1723]: 2025-11-08 00:06:19.150 [INFO][4908] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-32f19bad4d' Nov 8 00:06:19.261234 containerd[1723]: 2025-11-08 00:06:19.166 [INFO][4908] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.549e8fbc4f2c868c7c56d28fa71c593b1e8e754dde14a42f3da21371e9a40afc" host="ci-4081.3.6-n-32f19bad4d" Nov 8 00:06:19.261234 containerd[1723]: 2025-11-08 00:06:19.176 [INFO][4908] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-32f19bad4d" Nov 8 00:06:19.261234 containerd[1723]: 2025-11-08 00:06:19.183 [INFO][4908] ipam/ipam.go 511: Trying affinity for 192.168.59.128/26 host="ci-4081.3.6-n-32f19bad4d" Nov 8 00:06:19.261234 containerd[1723]: 2025-11-08 00:06:19.186 [INFO][4908] ipam/ipam.go 158: Attempting to load block cidr=192.168.59.128/26 host="ci-4081.3.6-n-32f19bad4d" Nov 8 00:06:19.261234 containerd[1723]: 2025-11-08 00:06:19.191 [INFO][4908] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.59.128/26 host="ci-4081.3.6-n-32f19bad4d" Nov 8 00:06:19.261234 containerd[1723]: 2025-11-08 00:06:19.191 [INFO][4908] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.59.128/26 handle="k8s-pod-network.549e8fbc4f2c868c7c56d28fa71c593b1e8e754dde14a42f3da21371e9a40afc" host="ci-4081.3.6-n-32f19bad4d" Nov 8 00:06:19.261234 containerd[1723]: 2025-11-08 00:06:19.193 [INFO][4908] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.549e8fbc4f2c868c7c56d28fa71c593b1e8e754dde14a42f3da21371e9a40afc Nov 8 00:06:19.261234 containerd[1723]: 2025-11-08 00:06:19.203 [INFO][4908] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.59.128/26 handle="k8s-pod-network.549e8fbc4f2c868c7c56d28fa71c593b1e8e754dde14a42f3da21371e9a40afc" host="ci-4081.3.6-n-32f19bad4d" Nov 8 00:06:19.261234 containerd[1723]: 2025-11-08 00:06:19.219 [INFO][4908] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.59.130/26] block=192.168.59.128/26 handle="k8s-pod-network.549e8fbc4f2c868c7c56d28fa71c593b1e8e754dde14a42f3da21371e9a40afc" host="ci-4081.3.6-n-32f19bad4d" Nov 8 00:06:19.261234 containerd[1723]: 2025-11-08 00:06:19.219 [INFO][4908] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.59.130/26] handle="k8s-pod-network.549e8fbc4f2c868c7c56d28fa71c593b1e8e754dde14a42f3da21371e9a40afc" host="ci-4081.3.6-n-32f19bad4d" Nov 8 00:06:19.261234 containerd[1723]: 2025-11-08 00:06:19.219 [INFO][4908] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
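
Note the addresses these traces hand out: .129 went to the whisker pod earlier, .130 goes to calico-apiserver here, and .131/.132 go to the two coredns pods below — all from the single affine block 192.168.59.128/26, with the host-wide IPAM lock serializing each claim. As a quick reference for the block size, a sketch with Go's standard net/netip (no assumptions beyond Go 1.18+):

package main

import (
	"fmt"
	"net/netip"
)

func main() {
	// The node's affine block from the traces: a /26 spans 2^(32-26) = 64
	// addresses, 192.168.59.128 through 192.168.59.191.
	block := netip.MustParsePrefix("192.168.59.128/26")
	fmt.Println(block, "->", 1<<(32-block.Bits()), "addresses, first:", block.Addr())
}
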
Nov 8 00:06:19.261234 containerd[1723]: 2025-11-08 00:06:19.219 [INFO][4908] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.59.130/26] IPv6=[] ContainerID="549e8fbc4f2c868c7c56d28fa71c593b1e8e754dde14a42f3da21371e9a40afc" HandleID="k8s-pod-network.549e8fbc4f2c868c7c56d28fa71c593b1e8e754dde14a42f3da21371e9a40afc" Workload="ci--4081.3.6--n--32f19bad4d-k8s-calico--apiserver--59cf6d9fcc--nmv8r-eth0" Nov 8 00:06:19.261829 containerd[1723]: 2025-11-08 00:06:19.221 [INFO][4886] cni-plugin/k8s.go 418: Populated endpoint ContainerID="549e8fbc4f2c868c7c56d28fa71c593b1e8e754dde14a42f3da21371e9a40afc" Namespace="calico-apiserver" Pod="calico-apiserver-59cf6d9fcc-nmv8r" WorkloadEndpoint="ci--4081.3.6--n--32f19bad4d-k8s-calico--apiserver--59cf6d9fcc--nmv8r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--32f19bad4d-k8s-calico--apiserver--59cf6d9fcc--nmv8r-eth0", GenerateName:"calico-apiserver-59cf6d9fcc-", Namespace:"calico-apiserver", SelfLink:"", UID:"86741758-ba30-4cec-a95c-6af79e2546fe", ResourceVersion:"969", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 5, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"59cf6d9fcc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-32f19bad4d", ContainerID:"", Pod:"calico-apiserver-59cf6d9fcc-nmv8r", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.59.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calidaee1f8ee43", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:06:19.261829 containerd[1723]: 2025-11-08 00:06:19.222 [INFO][4886] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.59.130/32] ContainerID="549e8fbc4f2c868c7c56d28fa71c593b1e8e754dde14a42f3da21371e9a40afc" Namespace="calico-apiserver" Pod="calico-apiserver-59cf6d9fcc-nmv8r" WorkloadEndpoint="ci--4081.3.6--n--32f19bad4d-k8s-calico--apiserver--59cf6d9fcc--nmv8r-eth0" Nov 8 00:06:19.261829 containerd[1723]: 2025-11-08 00:06:19.223 [INFO][4886] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calidaee1f8ee43 ContainerID="549e8fbc4f2c868c7c56d28fa71c593b1e8e754dde14a42f3da21371e9a40afc" Namespace="calico-apiserver" Pod="calico-apiserver-59cf6d9fcc-nmv8r" WorkloadEndpoint="ci--4081.3.6--n--32f19bad4d-k8s-calico--apiserver--59cf6d9fcc--nmv8r-eth0" Nov 8 00:06:19.261829 containerd[1723]: 2025-11-08 00:06:19.232 [INFO][4886] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="549e8fbc4f2c868c7c56d28fa71c593b1e8e754dde14a42f3da21371e9a40afc" Namespace="calico-apiserver" Pod="calico-apiserver-59cf6d9fcc-nmv8r" WorkloadEndpoint="ci--4081.3.6--n--32f19bad4d-k8s-calico--apiserver--59cf6d9fcc--nmv8r-eth0" Nov 8 00:06:19.261829 containerd[1723]: 2025-11-08 00:06:19.233 [INFO][4886] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="549e8fbc4f2c868c7c56d28fa71c593b1e8e754dde14a42f3da21371e9a40afc" Namespace="calico-apiserver" Pod="calico-apiserver-59cf6d9fcc-nmv8r" WorkloadEndpoint="ci--4081.3.6--n--32f19bad4d-k8s-calico--apiserver--59cf6d9fcc--nmv8r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--32f19bad4d-k8s-calico--apiserver--59cf6d9fcc--nmv8r-eth0", GenerateName:"calico-apiserver-59cf6d9fcc-", Namespace:"calico-apiserver", SelfLink:"", UID:"86741758-ba30-4cec-a95c-6af79e2546fe", ResourceVersion:"969", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 5, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"59cf6d9fcc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-32f19bad4d", ContainerID:"549e8fbc4f2c868c7c56d28fa71c593b1e8e754dde14a42f3da21371e9a40afc", Pod:"calico-apiserver-59cf6d9fcc-nmv8r", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.59.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calidaee1f8ee43", MAC:"f2:6c:46:4a:cb:0a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:06:19.261829 containerd[1723]: 2025-11-08 00:06:19.258 [INFO][4886] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="549e8fbc4f2c868c7c56d28fa71c593b1e8e754dde14a42f3da21371e9a40afc" Namespace="calico-apiserver" Pod="calico-apiserver-59cf6d9fcc-nmv8r" WorkloadEndpoint="ci--4081.3.6--n--32f19bad4d-k8s-calico--apiserver--59cf6d9fcc--nmv8r-eth0" Nov 8 00:06:19.289245 containerd[1723]: time="2025-11-08T00:06:19.288862803Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:06:19.289245 containerd[1723]: time="2025-11-08T00:06:19.288996763Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:06:19.289245 containerd[1723]: time="2025-11-08T00:06:19.289029803Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:06:19.289245 containerd[1723]: time="2025-11-08T00:06:19.289155683Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:06:19.312988 systemd[1]: Started cri-containerd-549e8fbc4f2c868c7c56d28fa71c593b1e8e754dde14a42f3da21371e9a40afc.scope - libcontainer container 549e8fbc4f2c868c7c56d28fa71c593b1e8e754dde14a42f3da21371e9a40afc. 
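
Earlier in this log, both whisker image pulls failed: containerd's resolver got HTTP 404 from ghcr.io ("trying next host - response was http.StatusNotFound"), surfaced gRPC NotFound to the kubelet, and the pod entered ImagePullBackOff. A minimal sketch of reproducing that resolution step directly against containerd's socket — paths and packages match the containerd 1.x Go client, and access to /run/containerd/containerd.sock plus the CRI "k8s.io" namespace is assumed:

package main

import (
	"context"
	"fmt"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/errdefs"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		panic(err)
	}
	defer client.Close()

	// CRI-managed images live in the "k8s.io" namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	_, err = client.Pull(ctx, "ghcr.io/flatcar/calico/whisker:v3.30.4", containerd.WithPullUnpack)
	if errdefs.IsNotFound(err) {
		// Matches the kubelet's ErrImagePull above: the tag does not
		// exist upstream, so backoff and retry cannot succeed either.
		fmt.Println("image not found:", err)
		return
	}
	if err != nil {
		panic(err)
	}
	fmt.Println("pulled OK")
}

NotFound here means the tag is absent at the registry, so the ImagePullBackOff seen at 00:06:16-17 can only re-fail; the remedy is publishing the tag or correcting the image reference, not waiting out the backoff.
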
Nov 8 00:06:19.337403 systemd-networkd[1447]: cali28dacc6b791: Link UP Nov 8 00:06:19.338233 systemd-networkd[1447]: cali28dacc6b791: Gained carrier Nov 8 00:06:19.372385 containerd[1723]: 2025-11-08 00:06:19.156 [INFO][4897] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--32f19bad4d-k8s-coredns--674b8bbfcf--664tp-eth0 coredns-674b8bbfcf- kube-system b9108a90-054b-4178-a637-f4f5bb2138bc 968 0 2025-11-08 00:05:19 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.6-n-32f19bad4d coredns-674b8bbfcf-664tp eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali28dacc6b791 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="4b3b335c96cead89f7d8f52ba92ae416ca3e67cd3b0459166bf7cbe7b1714f8f" Namespace="kube-system" Pod="coredns-674b8bbfcf-664tp" WorkloadEndpoint="ci--4081.3.6--n--32f19bad4d-k8s-coredns--674b8bbfcf--664tp-" Nov 8 00:06:19.372385 containerd[1723]: 2025-11-08 00:06:19.160 [INFO][4897] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4b3b335c96cead89f7d8f52ba92ae416ca3e67cd3b0459166bf7cbe7b1714f8f" Namespace="kube-system" Pod="coredns-674b8bbfcf-664tp" WorkloadEndpoint="ci--4081.3.6--n--32f19bad4d-k8s-coredns--674b8bbfcf--664tp-eth0" Nov 8 00:06:19.372385 containerd[1723]: 2025-11-08 00:06:19.213 [INFO][4931] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4b3b335c96cead89f7d8f52ba92ae416ca3e67cd3b0459166bf7cbe7b1714f8f" HandleID="k8s-pod-network.4b3b335c96cead89f7d8f52ba92ae416ca3e67cd3b0459166bf7cbe7b1714f8f" Workload="ci--4081.3.6--n--32f19bad4d-k8s-coredns--674b8bbfcf--664tp-eth0" Nov 8 00:06:19.372385 containerd[1723]: 2025-11-08 00:06:19.214 [INFO][4931] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="4b3b335c96cead89f7d8f52ba92ae416ca3e67cd3b0459166bf7cbe7b1714f8f" HandleID="k8s-pod-network.4b3b335c96cead89f7d8f52ba92ae416ca3e67cd3b0459166bf7cbe7b1714f8f" Workload="ci--4081.3.6--n--32f19bad4d-k8s-coredns--674b8bbfcf--664tp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b590), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.6-n-32f19bad4d", "pod":"coredns-674b8bbfcf-664tp", "timestamp":"2025-11-08 00:06:19.213799635 +0000 UTC"}, Hostname:"ci-4081.3.6-n-32f19bad4d", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 00:06:19.372385 containerd[1723]: 2025-11-08 00:06:19.214 [INFO][4931] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:06:19.372385 containerd[1723]: 2025-11-08 00:06:19.219 [INFO][4931] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 8 00:06:19.372385 containerd[1723]: 2025-11-08 00:06:19.219 [INFO][4931] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-32f19bad4d' Nov 8 00:06:19.372385 containerd[1723]: 2025-11-08 00:06:19.266 [INFO][4931] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4b3b335c96cead89f7d8f52ba92ae416ca3e67cd3b0459166bf7cbe7b1714f8f" host="ci-4081.3.6-n-32f19bad4d" Nov 8 00:06:19.372385 containerd[1723]: 2025-11-08 00:06:19.276 [INFO][4931] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-32f19bad4d" Nov 8 00:06:19.372385 containerd[1723]: 2025-11-08 00:06:19.288 [INFO][4931] ipam/ipam.go 511: Trying affinity for 192.168.59.128/26 host="ci-4081.3.6-n-32f19bad4d" Nov 8 00:06:19.372385 containerd[1723]: 2025-11-08 00:06:19.293 [INFO][4931] ipam/ipam.go 158: Attempting to load block cidr=192.168.59.128/26 host="ci-4081.3.6-n-32f19bad4d" Nov 8 00:06:19.372385 containerd[1723]: 2025-11-08 00:06:19.297 [INFO][4931] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.59.128/26 host="ci-4081.3.6-n-32f19bad4d" Nov 8 00:06:19.372385 containerd[1723]: 2025-11-08 00:06:19.297 [INFO][4931] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.59.128/26 handle="k8s-pod-network.4b3b335c96cead89f7d8f52ba92ae416ca3e67cd3b0459166bf7cbe7b1714f8f" host="ci-4081.3.6-n-32f19bad4d" Nov 8 00:06:19.372385 containerd[1723]: 2025-11-08 00:06:19.301 [INFO][4931] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.4b3b335c96cead89f7d8f52ba92ae416ca3e67cd3b0459166bf7cbe7b1714f8f Nov 8 00:06:19.372385 containerd[1723]: 2025-11-08 00:06:19.319 [INFO][4931] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.59.128/26 handle="k8s-pod-network.4b3b335c96cead89f7d8f52ba92ae416ca3e67cd3b0459166bf7cbe7b1714f8f" host="ci-4081.3.6-n-32f19bad4d" Nov 8 00:06:19.372385 containerd[1723]: 2025-11-08 00:06:19.328 [INFO][4931] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.59.131/26] block=192.168.59.128/26 handle="k8s-pod-network.4b3b335c96cead89f7d8f52ba92ae416ca3e67cd3b0459166bf7cbe7b1714f8f" host="ci-4081.3.6-n-32f19bad4d" Nov 8 00:06:19.372385 containerd[1723]: 2025-11-08 00:06:19.328 [INFO][4931] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.59.131/26] handle="k8s-pod-network.4b3b335c96cead89f7d8f52ba92ae416ca3e67cd3b0459166bf7cbe7b1714f8f" host="ci-4081.3.6-n-32f19bad4d" Nov 8 00:06:19.372385 containerd[1723]: 2025-11-08 00:06:19.328 [INFO][4931] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 8 00:06:19.372385 containerd[1723]: 2025-11-08 00:06:19.328 [INFO][4931] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.59.131/26] IPv6=[] ContainerID="4b3b335c96cead89f7d8f52ba92ae416ca3e67cd3b0459166bf7cbe7b1714f8f" HandleID="k8s-pod-network.4b3b335c96cead89f7d8f52ba92ae416ca3e67cd3b0459166bf7cbe7b1714f8f" Workload="ci--4081.3.6--n--32f19bad4d-k8s-coredns--674b8bbfcf--664tp-eth0" Nov 8 00:06:19.372955 containerd[1723]: 2025-11-08 00:06:19.333 [INFO][4897] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4b3b335c96cead89f7d8f52ba92ae416ca3e67cd3b0459166bf7cbe7b1714f8f" Namespace="kube-system" Pod="coredns-674b8bbfcf-664tp" WorkloadEndpoint="ci--4081.3.6--n--32f19bad4d-k8s-coredns--674b8bbfcf--664tp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--32f19bad4d-k8s-coredns--674b8bbfcf--664tp-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"b9108a90-054b-4178-a637-f4f5bb2138bc", ResourceVersion:"968", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 5, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-32f19bad4d", ContainerID:"", Pod:"coredns-674b8bbfcf-664tp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.59.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali28dacc6b791", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:06:19.372955 containerd[1723]: 2025-11-08 00:06:19.333 [INFO][4897] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.59.131/32] ContainerID="4b3b335c96cead89f7d8f52ba92ae416ca3e67cd3b0459166bf7cbe7b1714f8f" Namespace="kube-system" Pod="coredns-674b8bbfcf-664tp" WorkloadEndpoint="ci--4081.3.6--n--32f19bad4d-k8s-coredns--674b8bbfcf--664tp-eth0" Nov 8 00:06:19.372955 containerd[1723]: 2025-11-08 00:06:19.333 [INFO][4897] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali28dacc6b791 ContainerID="4b3b335c96cead89f7d8f52ba92ae416ca3e67cd3b0459166bf7cbe7b1714f8f" Namespace="kube-system" Pod="coredns-674b8bbfcf-664tp" WorkloadEndpoint="ci--4081.3.6--n--32f19bad4d-k8s-coredns--674b8bbfcf--664tp-eth0" Nov 8 00:06:19.372955 containerd[1723]: 2025-11-08 00:06:19.338 [INFO][4897] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4b3b335c96cead89f7d8f52ba92ae416ca3e67cd3b0459166bf7cbe7b1714f8f" Namespace="kube-system" 
Pod="coredns-674b8bbfcf-664tp" WorkloadEndpoint="ci--4081.3.6--n--32f19bad4d-k8s-coredns--674b8bbfcf--664tp-eth0" Nov 8 00:06:19.372955 containerd[1723]: 2025-11-08 00:06:19.340 [INFO][4897] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4b3b335c96cead89f7d8f52ba92ae416ca3e67cd3b0459166bf7cbe7b1714f8f" Namespace="kube-system" Pod="coredns-674b8bbfcf-664tp" WorkloadEndpoint="ci--4081.3.6--n--32f19bad4d-k8s-coredns--674b8bbfcf--664tp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--32f19bad4d-k8s-coredns--674b8bbfcf--664tp-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"b9108a90-054b-4178-a637-f4f5bb2138bc", ResourceVersion:"968", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 5, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-32f19bad4d", ContainerID:"4b3b335c96cead89f7d8f52ba92ae416ca3e67cd3b0459166bf7cbe7b1714f8f", Pod:"coredns-674b8bbfcf-664tp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.59.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali28dacc6b791", MAC:"62:f2:dd:5a:3a:85", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:06:19.372955 containerd[1723]: 2025-11-08 00:06:19.366 [INFO][4897] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4b3b335c96cead89f7d8f52ba92ae416ca3e67cd3b0459166bf7cbe7b1714f8f" Namespace="kube-system" Pod="coredns-674b8bbfcf-664tp" WorkloadEndpoint="ci--4081.3.6--n--32f19bad4d-k8s-coredns--674b8bbfcf--664tp-eth0" Nov 8 00:06:19.380816 containerd[1723]: time="2025-11-08T00:06:19.380704813Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-59cf6d9fcc-nmv8r,Uid:86741758-ba30-4cec-a95c-6af79e2546fe,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"549e8fbc4f2c868c7c56d28fa71c593b1e8e754dde14a42f3da21371e9a40afc\"" Nov 8 00:06:19.386582 containerd[1723]: time="2025-11-08T00:06:19.386540134Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 8 00:06:19.424601 containerd[1723]: time="2025-11-08T00:06:19.424430859Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:06:19.424601 containerd[1723]: time="2025-11-08T00:06:19.424504779Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:06:19.424601 containerd[1723]: time="2025-11-08T00:06:19.424534659Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:06:19.424949 containerd[1723]: time="2025-11-08T00:06:19.424855139Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:06:19.446948 systemd[1]: Started cri-containerd-4b3b335c96cead89f7d8f52ba92ae416ca3e67cd3b0459166bf7cbe7b1714f8f.scope - libcontainer container 4b3b335c96cead89f7d8f52ba92ae416ca3e67cd3b0459166bf7cbe7b1714f8f. Nov 8 00:06:19.450138 systemd-networkd[1447]: calib53db303ddd: Link UP Nov 8 00:06:19.450247 systemd-networkd[1447]: calib53db303ddd: Gained carrier Nov 8 00:06:19.485080 containerd[1723]: time="2025-11-08T00:06:19.484397588Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-664tp,Uid:b9108a90-054b-4178-a637-f4f5bb2138bc,Namespace:kube-system,Attempt:1,} returns sandbox id \"4b3b335c96cead89f7d8f52ba92ae416ca3e67cd3b0459166bf7cbe7b1714f8f\"" Nov 8 00:06:19.498463 containerd[1723]: time="2025-11-08T00:06:19.498355990Z" level=info msg="CreateContainer within sandbox \"4b3b335c96cead89f7d8f52ba92ae416ca3e67cd3b0459166bf7cbe7b1714f8f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 8 00:06:19.511920 containerd[1723]: 2025-11-08 00:06:19.189 [INFO][4912] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--32f19bad4d-k8s-coredns--674b8bbfcf--5ctrp-eth0 coredns-674b8bbfcf- kube-system ef9b5a81-e6ae-4009-b58b-0441376d2cb7 970 0 2025-11-08 00:05:19 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.6-n-32f19bad4d coredns-674b8bbfcf-5ctrp eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calib53db303ddd [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="39ab74af0a2025a46330116cfb0feb600c92f6fd00919bdaf18cf44c1969c28e" Namespace="kube-system" Pod="coredns-674b8bbfcf-5ctrp" WorkloadEndpoint="ci--4081.3.6--n--32f19bad4d-k8s-coredns--674b8bbfcf--5ctrp-" Nov 8 00:06:19.511920 containerd[1723]: 2025-11-08 00:06:19.191 [INFO][4912] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="39ab74af0a2025a46330116cfb0feb600c92f6fd00919bdaf18cf44c1969c28e" Namespace="kube-system" Pod="coredns-674b8bbfcf-5ctrp" WorkloadEndpoint="ci--4081.3.6--n--32f19bad4d-k8s-coredns--674b8bbfcf--5ctrp-eth0" Nov 8 00:06:19.511920 containerd[1723]: 2025-11-08 00:06:19.249 [INFO][4936] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="39ab74af0a2025a46330116cfb0feb600c92f6fd00919bdaf18cf44c1969c28e" HandleID="k8s-pod-network.39ab74af0a2025a46330116cfb0feb600c92f6fd00919bdaf18cf44c1969c28e" Workload="ci--4081.3.6--n--32f19bad4d-k8s-coredns--674b8bbfcf--5ctrp-eth0" Nov 8 00:06:19.511920 containerd[1723]: 2025-11-08 00:06:19.249 [INFO][4936] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="39ab74af0a2025a46330116cfb0feb600c92f6fd00919bdaf18cf44c1969c28e" 
HandleID="k8s-pod-network.39ab74af0a2025a46330116cfb0feb600c92f6fd00919bdaf18cf44c1969c28e" Workload="ci--4081.3.6--n--32f19bad4d-k8s-coredns--674b8bbfcf--5ctrp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b370), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.6-n-32f19bad4d", "pod":"coredns-674b8bbfcf-5ctrp", "timestamp":"2025-11-08 00:06:19.249491239 +0000 UTC"}, Hostname:"ci-4081.3.6-n-32f19bad4d", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 00:06:19.511920 containerd[1723]: 2025-11-08 00:06:19.249 [INFO][4936] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:06:19.511920 containerd[1723]: 2025-11-08 00:06:19.328 [INFO][4936] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:06:19.511920 containerd[1723]: 2025-11-08 00:06:19.328 [INFO][4936] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-32f19bad4d' Nov 8 00:06:19.511920 containerd[1723]: 2025-11-08 00:06:19.373 [INFO][4936] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.39ab74af0a2025a46330116cfb0feb600c92f6fd00919bdaf18cf44c1969c28e" host="ci-4081.3.6-n-32f19bad4d" Nov 8 00:06:19.511920 containerd[1723]: 2025-11-08 00:06:19.390 [INFO][4936] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-32f19bad4d" Nov 8 00:06:19.511920 containerd[1723]: 2025-11-08 00:06:19.400 [INFO][4936] ipam/ipam.go 511: Trying affinity for 192.168.59.128/26 host="ci-4081.3.6-n-32f19bad4d" Nov 8 00:06:19.511920 containerd[1723]: 2025-11-08 00:06:19.403 [INFO][4936] ipam/ipam.go 158: Attempting to load block cidr=192.168.59.128/26 host="ci-4081.3.6-n-32f19bad4d" Nov 8 00:06:19.511920 containerd[1723]: 2025-11-08 00:06:19.406 [INFO][4936] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.59.128/26 host="ci-4081.3.6-n-32f19bad4d" Nov 8 00:06:19.511920 containerd[1723]: 2025-11-08 00:06:19.406 [INFO][4936] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.59.128/26 handle="k8s-pod-network.39ab74af0a2025a46330116cfb0feb600c92f6fd00919bdaf18cf44c1969c28e" host="ci-4081.3.6-n-32f19bad4d" Nov 8 00:06:19.511920 containerd[1723]: 2025-11-08 00:06:19.408 [INFO][4936] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.39ab74af0a2025a46330116cfb0feb600c92f6fd00919bdaf18cf44c1969c28e Nov 8 00:06:19.511920 containerd[1723]: 2025-11-08 00:06:19.416 [INFO][4936] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.59.128/26 handle="k8s-pod-network.39ab74af0a2025a46330116cfb0feb600c92f6fd00919bdaf18cf44c1969c28e" host="ci-4081.3.6-n-32f19bad4d" Nov 8 00:06:19.511920 containerd[1723]: 2025-11-08 00:06:19.435 [INFO][4936] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.59.132/26] block=192.168.59.128/26 handle="k8s-pod-network.39ab74af0a2025a46330116cfb0feb600c92f6fd00919bdaf18cf44c1969c28e" host="ci-4081.3.6-n-32f19bad4d" Nov 8 00:06:19.511920 containerd[1723]: 2025-11-08 00:06:19.435 [INFO][4936] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.59.132/26] handle="k8s-pod-network.39ab74af0a2025a46330116cfb0feb600c92f6fd00919bdaf18cf44c1969c28e" host="ci-4081.3.6-n-32f19bad4d" Nov 8 00:06:19.511920 containerd[1723]: 2025-11-08 00:06:19.435 [INFO][4936] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 8 00:06:19.511920 containerd[1723]: 2025-11-08 00:06:19.435 [INFO][4936] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.59.132/26] IPv6=[] ContainerID="39ab74af0a2025a46330116cfb0feb600c92f6fd00919bdaf18cf44c1969c28e" HandleID="k8s-pod-network.39ab74af0a2025a46330116cfb0feb600c92f6fd00919bdaf18cf44c1969c28e" Workload="ci--4081.3.6--n--32f19bad4d-k8s-coredns--674b8bbfcf--5ctrp-eth0" Nov 8 00:06:19.512456 containerd[1723]: 2025-11-08 00:06:19.438 [INFO][4912] cni-plugin/k8s.go 418: Populated endpoint ContainerID="39ab74af0a2025a46330116cfb0feb600c92f6fd00919bdaf18cf44c1969c28e" Namespace="kube-system" Pod="coredns-674b8bbfcf-5ctrp" WorkloadEndpoint="ci--4081.3.6--n--32f19bad4d-k8s-coredns--674b8bbfcf--5ctrp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--32f19bad4d-k8s-coredns--674b8bbfcf--5ctrp-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"ef9b5a81-e6ae-4009-b58b-0441376d2cb7", ResourceVersion:"970", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 5, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-32f19bad4d", ContainerID:"", Pod:"coredns-674b8bbfcf-5ctrp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.59.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib53db303ddd", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:06:19.512456 containerd[1723]: 2025-11-08 00:06:19.440 [INFO][4912] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.59.132/32] ContainerID="39ab74af0a2025a46330116cfb0feb600c92f6fd00919bdaf18cf44c1969c28e" Namespace="kube-system" Pod="coredns-674b8bbfcf-5ctrp" WorkloadEndpoint="ci--4081.3.6--n--32f19bad4d-k8s-coredns--674b8bbfcf--5ctrp-eth0" Nov 8 00:06:19.512456 containerd[1723]: 2025-11-08 00:06:19.441 [INFO][4912] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib53db303ddd ContainerID="39ab74af0a2025a46330116cfb0feb600c92f6fd00919bdaf18cf44c1969c28e" Namespace="kube-system" Pod="coredns-674b8bbfcf-5ctrp" WorkloadEndpoint="ci--4081.3.6--n--32f19bad4d-k8s-coredns--674b8bbfcf--5ctrp-eth0" Nov 8 00:06:19.512456 containerd[1723]: 2025-11-08 00:06:19.451 [INFO][4912] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="39ab74af0a2025a46330116cfb0feb600c92f6fd00919bdaf18cf44c1969c28e" Namespace="kube-system" 
Pod="coredns-674b8bbfcf-5ctrp" WorkloadEndpoint="ci--4081.3.6--n--32f19bad4d-k8s-coredns--674b8bbfcf--5ctrp-eth0" Nov 8 00:06:19.512456 containerd[1723]: 2025-11-08 00:06:19.452 [INFO][4912] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="39ab74af0a2025a46330116cfb0feb600c92f6fd00919bdaf18cf44c1969c28e" Namespace="kube-system" Pod="coredns-674b8bbfcf-5ctrp" WorkloadEndpoint="ci--4081.3.6--n--32f19bad4d-k8s-coredns--674b8bbfcf--5ctrp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--32f19bad4d-k8s-coredns--674b8bbfcf--5ctrp-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"ef9b5a81-e6ae-4009-b58b-0441376d2cb7", ResourceVersion:"970", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 5, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-32f19bad4d", ContainerID:"39ab74af0a2025a46330116cfb0feb600c92f6fd00919bdaf18cf44c1969c28e", Pod:"coredns-674b8bbfcf-5ctrp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.59.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib53db303ddd", MAC:"3a:a6:3e:bd:8d:1a", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:06:19.512456 containerd[1723]: 2025-11-08 00:06:19.507 [INFO][4912] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="39ab74af0a2025a46330116cfb0feb600c92f6fd00919bdaf18cf44c1969c28e" Namespace="kube-system" Pod="coredns-674b8bbfcf-5ctrp" WorkloadEndpoint="ci--4081.3.6--n--32f19bad4d-k8s-coredns--674b8bbfcf--5ctrp-eth0" Nov 8 00:06:19.537954 containerd[1723]: time="2025-11-08T00:06:19.537191356Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:06:19.537954 containerd[1723]: time="2025-11-08T00:06:19.537243516Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:06:19.537954 containerd[1723]: time="2025-11-08T00:06:19.537267076Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:06:19.538155 containerd[1723]: time="2025-11-08T00:06:19.537359116Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:06:19.556970 systemd[1]: Started cri-containerd-39ab74af0a2025a46330116cfb0feb600c92f6fd00919bdaf18cf44c1969c28e.scope - libcontainer container 39ab74af0a2025a46330116cfb0feb600c92f6fd00919bdaf18cf44c1969c28e. Nov 8 00:06:19.558440 containerd[1723]: time="2025-11-08T00:06:19.558270919Z" level=info msg="CreateContainer within sandbox \"4b3b335c96cead89f7d8f52ba92ae416ca3e67cd3b0459166bf7cbe7b1714f8f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c8a2ec9d4f52dec2842b63f49df5fa81e50ad6963d63cfb2e6f7026a320b433a\"" Nov 8 00:06:19.559459 containerd[1723]: time="2025-11-08T00:06:19.559411279Z" level=info msg="StartContainer for \"c8a2ec9d4f52dec2842b63f49df5fa81e50ad6963d63cfb2e6f7026a320b433a\"" Nov 8 00:06:19.596016 systemd[1]: Started cri-containerd-c8a2ec9d4f52dec2842b63f49df5fa81e50ad6963d63cfb2e6f7026a320b433a.scope - libcontainer container c8a2ec9d4f52dec2842b63f49df5fa81e50ad6963d63cfb2e6f7026a320b433a. Nov 8 00:06:19.611303 containerd[1723]: time="2025-11-08T00:06:19.610956407Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-5ctrp,Uid:ef9b5a81-e6ae-4009-b58b-0441376d2cb7,Namespace:kube-system,Attempt:1,} returns sandbox id \"39ab74af0a2025a46330116cfb0feb600c92f6fd00919bdaf18cf44c1969c28e\"" Nov 8 00:06:19.626385 containerd[1723]: time="2025-11-08T00:06:19.625962930Z" level=info msg="CreateContainer within sandbox \"39ab74af0a2025a46330116cfb0feb600c92f6fd00919bdaf18cf44c1969c28e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 8 00:06:19.638965 containerd[1723]: time="2025-11-08T00:06:19.638907892Z" level=info msg="StartContainer for \"c8a2ec9d4f52dec2842b63f49df5fa81e50ad6963d63cfb2e6f7026a320b433a\" returns successfully" Nov 8 00:06:19.676016 containerd[1723]: time="2025-11-08T00:06:19.675974937Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:06:19.683770 containerd[1723]: time="2025-11-08T00:06:19.683652898Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 8 00:06:19.684221 containerd[1723]: time="2025-11-08T00:06:19.683720018Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 8 00:06:19.684221 containerd[1723]: time="2025-11-08T00:06:19.684071498Z" level=info msg="CreateContainer within sandbox \"39ab74af0a2025a46330116cfb0feb600c92f6fd00919bdaf18cf44c1969c28e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"2fe350a3e6166a292911bd976a94f98d0cfbefc82730c43e18db3838f4b08853\"" Nov 8 00:06:19.685905 kubelet[3207]: E1108 00:06:19.684473 3207 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:06:19.685905 kubelet[3207]: E1108 00:06:19.684527 3207 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:06:19.685905 kubelet[3207]: E1108 00:06:19.684680 3207 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-phxrr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-59cf6d9fcc-nmv8r_calico-apiserver(86741758-ba30-4cec-a95c-6af79e2546fe): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 8 00:06:19.685905 kubelet[3207]: E1108 00:06:19.685834 3207 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-59cf6d9fcc-nmv8r" podUID="86741758-ba30-4cec-a95c-6af79e2546fe" Nov 8 00:06:19.686419 containerd[1723]: time="2025-11-08T00:06:19.684947019Z" level=info msg="StartContainer for \"2fe350a3e6166a292911bd976a94f98d0cfbefc82730c43e18db3838f4b08853\"" Nov 8 00:06:19.714939 systemd[1]: Started cri-containerd-2fe350a3e6166a292911bd976a94f98d0cfbefc82730c43e18db3838f4b08853.scope - libcontainer container 
2fe350a3e6166a292911bd976a94f98d0cfbefc82730c43e18db3838f4b08853. Nov 8 00:06:19.755012 containerd[1723]: time="2025-11-08T00:06:19.754832989Z" level=info msg="StartContainer for \"2fe350a3e6166a292911bd976a94f98d0cfbefc82730c43e18db3838f4b08853\" returns successfully" Nov 8 00:06:20.066113 kubelet[3207]: E1108 00:06:20.065681 3207 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-59cf6d9fcc-nmv8r" podUID="86741758-ba30-4cec-a95c-6af79e2546fe" Nov 8 00:06:20.087591 kubelet[3207]: I1108 00:06:20.086540 3207 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-5ctrp" podStartSLOduration=61.0865204 podStartE2EDuration="1m1.0865204s" podCreationTimestamp="2025-11-08 00:05:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 00:06:20.0859904 +0000 UTC m=+66.401883470" watchObservedRunningTime="2025-11-08 00:06:20.0865204 +0000 UTC m=+66.402413510" Nov 8 00:06:20.662952 systemd-networkd[1447]: cali28dacc6b791: Gained IPv6LL Nov 8 00:06:21.046923 systemd-networkd[1447]: calidaee1f8ee43: Gained IPv6LL Nov 8 00:06:21.073633 kubelet[3207]: E1108 00:06:21.072862 3207 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-59cf6d9fcc-nmv8r" podUID="86741758-ba30-4cec-a95c-6af79e2546fe" Nov 8 00:06:21.120907 kubelet[3207]: I1108 00:06:21.119573 3207 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-664tp" podStartSLOduration=62.119551157 podStartE2EDuration="1m2.119551157s" podCreationTimestamp="2025-11-08 00:05:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 00:06:20.35359308 +0000 UTC m=+66.669486110" watchObservedRunningTime="2025-11-08 00:06:21.119551157 +0000 UTC m=+67.435444267" Nov 8 00:06:21.174955 systemd-networkd[1447]: calib53db303ddd: Gained IPv6LL Nov 8 00:06:21.808558 containerd[1723]: time="2025-11-08T00:06:21.808093582Z" level=info msg="StopPodSandbox for \"177bbe92d862c15f896d1c83e995b093e17332963e6d1245c2107ad9d0671930\"" Nov 8 00:06:21.811017 containerd[1723]: time="2025-11-08T00:06:21.809050302Z" level=info msg="StopPodSandbox for \"bf6933743f337ca93cf9848e2cddcc6de1a857341e944134024b098763269916\"" Nov 8 00:06:21.918930 containerd[1723]: 2025-11-08 00:06:21.869 [INFO][5202] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="bf6933743f337ca93cf9848e2cddcc6de1a857341e944134024b098763269916" Nov 8 00:06:21.918930 containerd[1723]: 2025-11-08 00:06:21.869 [INFO][5202] 
cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="bf6933743f337ca93cf9848e2cddcc6de1a857341e944134024b098763269916" iface="eth0" netns="/var/run/netns/cni-404d1fca-236c-dcfb-a57d-773341cb7cdf" Nov 8 00:06:21.918930 containerd[1723]: 2025-11-08 00:06:21.870 [INFO][5202] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="bf6933743f337ca93cf9848e2cddcc6de1a857341e944134024b098763269916" iface="eth0" netns="/var/run/netns/cni-404d1fca-236c-dcfb-a57d-773341cb7cdf" Nov 8 00:06:21.918930 containerd[1723]: 2025-11-08 00:06:21.872 [INFO][5202] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="bf6933743f337ca93cf9848e2cddcc6de1a857341e944134024b098763269916" iface="eth0" netns="/var/run/netns/cni-404d1fca-236c-dcfb-a57d-773341cb7cdf" Nov 8 00:06:21.918930 containerd[1723]: 2025-11-08 00:06:21.872 [INFO][5202] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="bf6933743f337ca93cf9848e2cddcc6de1a857341e944134024b098763269916" Nov 8 00:06:21.918930 containerd[1723]: 2025-11-08 00:06:21.872 [INFO][5202] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bf6933743f337ca93cf9848e2cddcc6de1a857341e944134024b098763269916" Nov 8 00:06:21.918930 containerd[1723]: 2025-11-08 00:06:21.891 [INFO][5216] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="bf6933743f337ca93cf9848e2cddcc6de1a857341e944134024b098763269916" HandleID="k8s-pod-network.bf6933743f337ca93cf9848e2cddcc6de1a857341e944134024b098763269916" Workload="ci--4081.3.6--n--32f19bad4d-k8s-goldmane--666569f655--69dnl-eth0" Nov 8 00:06:21.918930 containerd[1723]: 2025-11-08 00:06:21.891 [INFO][5216] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:06:21.918930 containerd[1723]: 2025-11-08 00:06:21.891 [INFO][5216] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:06:21.918930 containerd[1723]: 2025-11-08 00:06:21.910 [WARNING][5216] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="bf6933743f337ca93cf9848e2cddcc6de1a857341e944134024b098763269916" HandleID="k8s-pod-network.bf6933743f337ca93cf9848e2cddcc6de1a857341e944134024b098763269916" Workload="ci--4081.3.6--n--32f19bad4d-k8s-goldmane--666569f655--69dnl-eth0" Nov 8 00:06:21.918930 containerd[1723]: 2025-11-08 00:06:21.910 [INFO][5216] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="bf6933743f337ca93cf9848e2cddcc6de1a857341e944134024b098763269916" HandleID="k8s-pod-network.bf6933743f337ca93cf9848e2cddcc6de1a857341e944134024b098763269916" Workload="ci--4081.3.6--n--32f19bad4d-k8s-goldmane--666569f655--69dnl-eth0" Nov 8 00:06:21.918930 containerd[1723]: 2025-11-08 00:06:21.916 [INFO][5216] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:06:21.918930 containerd[1723]: 2025-11-08 00:06:21.917 [INFO][5202] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="bf6933743f337ca93cf9848e2cddcc6de1a857341e944134024b098763269916" Nov 8 00:06:21.921652 containerd[1723]: time="2025-11-08T00:06:21.919156798Z" level=info msg="TearDown network for sandbox \"bf6933743f337ca93cf9848e2cddcc6de1a857341e944134024b098763269916\" successfully" Nov 8 00:06:21.921652 containerd[1723]: time="2025-11-08T00:06:21.920923639Z" level=info msg="StopPodSandbox for \"bf6933743f337ca93cf9848e2cddcc6de1a857341e944134024b098763269916\" returns successfully" Nov 8 00:06:21.921652 containerd[1723]: time="2025-11-08T00:06:21.921539239Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-69dnl,Uid:a62546c2-2f66-4002-b0c3-d54109c52a13,Namespace:calico-system,Attempt:1,}" Nov 8 00:06:21.922522 systemd[1]: run-netns-cni\x2d404d1fca\x2d236c\x2ddcfb\x2da57d\x2d773341cb7cdf.mount: Deactivated successfully. Nov 8 00:06:21.976253 containerd[1723]: 2025-11-08 00:06:21.911 [INFO][5203] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="177bbe92d862c15f896d1c83e995b093e17332963e6d1245c2107ad9d0671930" Nov 8 00:06:21.976253 containerd[1723]: 2025-11-08 00:06:21.911 [INFO][5203] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="177bbe92d862c15f896d1c83e995b093e17332963e6d1245c2107ad9d0671930" iface="eth0" netns="/var/run/netns/cni-6b94807d-0c9c-c96c-dafd-4e9938fc1b3e" Nov 8 00:06:21.976253 containerd[1723]: 2025-11-08 00:06:21.911 [INFO][5203] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="177bbe92d862c15f896d1c83e995b093e17332963e6d1245c2107ad9d0671930" iface="eth0" netns="/var/run/netns/cni-6b94807d-0c9c-c96c-dafd-4e9938fc1b3e" Nov 8 00:06:21.976253 containerd[1723]: 2025-11-08 00:06:21.912 [INFO][5203] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="177bbe92d862c15f896d1c83e995b093e17332963e6d1245c2107ad9d0671930" iface="eth0" netns="/var/run/netns/cni-6b94807d-0c9c-c96c-dafd-4e9938fc1b3e" Nov 8 00:06:21.976253 containerd[1723]: 2025-11-08 00:06:21.912 [INFO][5203] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="177bbe92d862c15f896d1c83e995b093e17332963e6d1245c2107ad9d0671930" Nov 8 00:06:21.976253 containerd[1723]: 2025-11-08 00:06:21.912 [INFO][5203] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="177bbe92d862c15f896d1c83e995b093e17332963e6d1245c2107ad9d0671930" Nov 8 00:06:21.976253 containerd[1723]: 2025-11-08 00:06:21.939 [INFO][5223] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="177bbe92d862c15f896d1c83e995b093e17332963e6d1245c2107ad9d0671930" HandleID="k8s-pod-network.177bbe92d862c15f896d1c83e995b093e17332963e6d1245c2107ad9d0671930" Workload="ci--4081.3.6--n--32f19bad4d-k8s-calico--apiserver--59cf6d9fcc--xrwqb-eth0" Nov 8 00:06:21.976253 containerd[1723]: 2025-11-08 00:06:21.939 [INFO][5223] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:06:21.976253 containerd[1723]: 2025-11-08 00:06:21.939 [INFO][5223] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:06:21.976253 containerd[1723]: 2025-11-08 00:06:21.964 [WARNING][5223] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="177bbe92d862c15f896d1c83e995b093e17332963e6d1245c2107ad9d0671930" HandleID="k8s-pod-network.177bbe92d862c15f896d1c83e995b093e17332963e6d1245c2107ad9d0671930" Workload="ci--4081.3.6--n--32f19bad4d-k8s-calico--apiserver--59cf6d9fcc--xrwqb-eth0" Nov 8 00:06:21.976253 containerd[1723]: 2025-11-08 00:06:21.965 [INFO][5223] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="177bbe92d862c15f896d1c83e995b093e17332963e6d1245c2107ad9d0671930" HandleID="k8s-pod-network.177bbe92d862c15f896d1c83e995b093e17332963e6d1245c2107ad9d0671930" Workload="ci--4081.3.6--n--32f19bad4d-k8s-calico--apiserver--59cf6d9fcc--xrwqb-eth0" Nov 8 00:06:21.976253 containerd[1723]: 2025-11-08 00:06:21.969 [INFO][5223] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:06:21.976253 containerd[1723]: 2025-11-08 00:06:21.971 [INFO][5203] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="177bbe92d862c15f896d1c83e995b093e17332963e6d1245c2107ad9d0671930" Nov 8 00:06:21.976253 containerd[1723]: time="2025-11-08T00:06:21.973282767Z" level=info msg="TearDown network for sandbox \"177bbe92d862c15f896d1c83e995b093e17332963e6d1245c2107ad9d0671930\" successfully" Nov 8 00:06:21.976253 containerd[1723]: time="2025-11-08T00:06:21.973319247Z" level=info msg="StopPodSandbox for \"177bbe92d862c15f896d1c83e995b093e17332963e6d1245c2107ad9d0671930\" returns successfully" Nov 8 00:06:21.978039 containerd[1723]: time="2025-11-08T00:06:21.977096447Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-59cf6d9fcc-xrwqb,Uid:b965bf6a-7220-4fbc-b608-85c677cf8e39,Namespace:calico-apiserver,Attempt:1,}" Nov 8 00:06:21.979055 systemd[1]: run-netns-cni\x2d6b94807d\x2d0c9c\x2dc96c\x2ddafd\x2d4e9938fc1b3e.mount: Deactivated successfully. 
Nov 8 00:06:22.161276 systemd-networkd[1447]: cali34b2f251a70: Link UP Nov 8 00:06:22.161401 systemd-networkd[1447]: cali34b2f251a70: Gained carrier Nov 8 00:06:22.214454 containerd[1723]: 2025-11-08 00:06:22.024 [INFO][5230] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--32f19bad4d-k8s-goldmane--666569f655--69dnl-eth0 goldmane-666569f655- calico-system a62546c2-2f66-4002-b0c3-d54109c52a13 1020 0 2025-11-08 00:05:38 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4081.3.6-n-32f19bad4d goldmane-666569f655-69dnl eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali34b2f251a70 [] [] }} ContainerID="2c770d1e296c9c97b853b67572d2e95e37ce79658aa00843b2d9ae2a29fd2373" Namespace="calico-system" Pod="goldmane-666569f655-69dnl" WorkloadEndpoint="ci--4081.3.6--n--32f19bad4d-k8s-goldmane--666569f655--69dnl-" Nov 8 00:06:22.214454 containerd[1723]: 2025-11-08 00:06:22.024 [INFO][5230] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2c770d1e296c9c97b853b67572d2e95e37ce79658aa00843b2d9ae2a29fd2373" Namespace="calico-system" Pod="goldmane-666569f655-69dnl" WorkloadEndpoint="ci--4081.3.6--n--32f19bad4d-k8s-goldmane--666569f655--69dnl-eth0" Nov 8 00:06:22.214454 containerd[1723]: 2025-11-08 00:06:22.067 [INFO][5249] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2c770d1e296c9c97b853b67572d2e95e37ce79658aa00843b2d9ae2a29fd2373" HandleID="k8s-pod-network.2c770d1e296c9c97b853b67572d2e95e37ce79658aa00843b2d9ae2a29fd2373" Workload="ci--4081.3.6--n--32f19bad4d-k8s-goldmane--666569f655--69dnl-eth0" Nov 8 00:06:22.214454 containerd[1723]: 2025-11-08 00:06:22.067 [INFO][5249] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="2c770d1e296c9c97b853b67572d2e95e37ce79658aa00843b2d9ae2a29fd2373" HandleID="k8s-pod-network.2c770d1e296c9c97b853b67572d2e95e37ce79658aa00843b2d9ae2a29fd2373" Workload="ci--4081.3.6--n--32f19bad4d-k8s-goldmane--666569f655--69dnl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b590), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-32f19bad4d", "pod":"goldmane-666569f655-69dnl", "timestamp":"2025-11-08 00:06:22.067730661 +0000 UTC"}, Hostname:"ci-4081.3.6-n-32f19bad4d", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 00:06:22.214454 containerd[1723]: 2025-11-08 00:06:22.068 [INFO][5249] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:06:22.214454 containerd[1723]: 2025-11-08 00:06:22.068 [INFO][5249] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 8 00:06:22.214454 containerd[1723]: 2025-11-08 00:06:22.068 [INFO][5249] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-32f19bad4d' Nov 8 00:06:22.214454 containerd[1723]: 2025-11-08 00:06:22.080 [INFO][5249] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2c770d1e296c9c97b853b67572d2e95e37ce79658aa00843b2d9ae2a29fd2373" host="ci-4081.3.6-n-32f19bad4d" Nov 8 00:06:22.214454 containerd[1723]: 2025-11-08 00:06:22.088 [INFO][5249] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-32f19bad4d" Nov 8 00:06:22.214454 containerd[1723]: 2025-11-08 00:06:22.094 [INFO][5249] ipam/ipam.go 511: Trying affinity for 192.168.59.128/26 host="ci-4081.3.6-n-32f19bad4d" Nov 8 00:06:22.214454 containerd[1723]: 2025-11-08 00:06:22.096 [INFO][5249] ipam/ipam.go 158: Attempting to load block cidr=192.168.59.128/26 host="ci-4081.3.6-n-32f19bad4d" Nov 8 00:06:22.214454 containerd[1723]: 2025-11-08 00:06:22.100 [INFO][5249] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.59.128/26 host="ci-4081.3.6-n-32f19bad4d" Nov 8 00:06:22.214454 containerd[1723]: 2025-11-08 00:06:22.100 [INFO][5249] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.59.128/26 handle="k8s-pod-network.2c770d1e296c9c97b853b67572d2e95e37ce79658aa00843b2d9ae2a29fd2373" host="ci-4081.3.6-n-32f19bad4d" Nov 8 00:06:22.214454 containerd[1723]: 2025-11-08 00:06:22.102 [INFO][5249] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.2c770d1e296c9c97b853b67572d2e95e37ce79658aa00843b2d9ae2a29fd2373 Nov 8 00:06:22.214454 containerd[1723]: 2025-11-08 00:06:22.133 [INFO][5249] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.59.128/26 handle="k8s-pod-network.2c770d1e296c9c97b853b67572d2e95e37ce79658aa00843b2d9ae2a29fd2373" host="ci-4081.3.6-n-32f19bad4d" Nov 8 00:06:22.214454 containerd[1723]: 2025-11-08 00:06:22.153 [INFO][5249] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.59.133/26] block=192.168.59.128/26 handle="k8s-pod-network.2c770d1e296c9c97b853b67572d2e95e37ce79658aa00843b2d9ae2a29fd2373" host="ci-4081.3.6-n-32f19bad4d" Nov 8 00:06:22.214454 containerd[1723]: 2025-11-08 00:06:22.153 [INFO][5249] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.59.133/26] handle="k8s-pod-network.2c770d1e296c9c97b853b67572d2e95e37ce79658aa00843b2d9ae2a29fd2373" host="ci-4081.3.6-n-32f19bad4d" Nov 8 00:06:22.214454 containerd[1723]: 2025-11-08 00:06:22.153 [INFO][5249] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 8 00:06:22.214454 containerd[1723]: 2025-11-08 00:06:22.153 [INFO][5249] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.59.133/26] IPv6=[] ContainerID="2c770d1e296c9c97b853b67572d2e95e37ce79658aa00843b2d9ae2a29fd2373" HandleID="k8s-pod-network.2c770d1e296c9c97b853b67572d2e95e37ce79658aa00843b2d9ae2a29fd2373" Workload="ci--4081.3.6--n--32f19bad4d-k8s-goldmane--666569f655--69dnl-eth0" Nov 8 00:06:22.216690 containerd[1723]: 2025-11-08 00:06:22.157 [INFO][5230] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2c770d1e296c9c97b853b67572d2e95e37ce79658aa00843b2d9ae2a29fd2373" Namespace="calico-system" Pod="goldmane-666569f655-69dnl" WorkloadEndpoint="ci--4081.3.6--n--32f19bad4d-k8s-goldmane--666569f655--69dnl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--32f19bad4d-k8s-goldmane--666569f655--69dnl-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"a62546c2-2f66-4002-b0c3-d54109c52a13", ResourceVersion:"1020", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 5, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-32f19bad4d", ContainerID:"", Pod:"goldmane-666569f655-69dnl", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.59.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali34b2f251a70", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:06:22.216690 containerd[1723]: 2025-11-08 00:06:22.157 [INFO][5230] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.59.133/32] ContainerID="2c770d1e296c9c97b853b67572d2e95e37ce79658aa00843b2d9ae2a29fd2373" Namespace="calico-system" Pod="goldmane-666569f655-69dnl" WorkloadEndpoint="ci--4081.3.6--n--32f19bad4d-k8s-goldmane--666569f655--69dnl-eth0" Nov 8 00:06:22.216690 containerd[1723]: 2025-11-08 00:06:22.157 [INFO][5230] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali34b2f251a70 ContainerID="2c770d1e296c9c97b853b67572d2e95e37ce79658aa00843b2d9ae2a29fd2373" Namespace="calico-system" Pod="goldmane-666569f655-69dnl" WorkloadEndpoint="ci--4081.3.6--n--32f19bad4d-k8s-goldmane--666569f655--69dnl-eth0" Nov 8 00:06:22.216690 containerd[1723]: 2025-11-08 00:06:22.162 [INFO][5230] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2c770d1e296c9c97b853b67572d2e95e37ce79658aa00843b2d9ae2a29fd2373" Namespace="calico-system" Pod="goldmane-666569f655-69dnl" WorkloadEndpoint="ci--4081.3.6--n--32f19bad4d-k8s-goldmane--666569f655--69dnl-eth0" Nov 8 00:06:22.216690 containerd[1723]: 2025-11-08 00:06:22.163 [INFO][5230] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2c770d1e296c9c97b853b67572d2e95e37ce79658aa00843b2d9ae2a29fd2373" 
Namespace="calico-system" Pod="goldmane-666569f655-69dnl" WorkloadEndpoint="ci--4081.3.6--n--32f19bad4d-k8s-goldmane--666569f655--69dnl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--32f19bad4d-k8s-goldmane--666569f655--69dnl-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"a62546c2-2f66-4002-b0c3-d54109c52a13", ResourceVersion:"1020", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 5, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-32f19bad4d", ContainerID:"2c770d1e296c9c97b853b67572d2e95e37ce79658aa00843b2d9ae2a29fd2373", Pod:"goldmane-666569f655-69dnl", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.59.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali34b2f251a70", MAC:"f2:9e:f5:e1:14:f1", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:06:22.216690 containerd[1723]: 2025-11-08 00:06:22.211 [INFO][5230] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2c770d1e296c9c97b853b67572d2e95e37ce79658aa00843b2d9ae2a29fd2373" Namespace="calico-system" Pod="goldmane-666569f655-69dnl" WorkloadEndpoint="ci--4081.3.6--n--32f19bad4d-k8s-goldmane--666569f655--69dnl-eth0" Nov 8 00:06:22.245088 containerd[1723]: time="2025-11-08T00:06:22.244633448Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:06:22.245392 containerd[1723]: time="2025-11-08T00:06:22.245245288Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:06:22.245392 containerd[1723]: time="2025-11-08T00:06:22.245368928Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:06:22.245548 containerd[1723]: time="2025-11-08T00:06:22.245488848Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:06:22.265038 systemd[1]: Started cri-containerd-2c770d1e296c9c97b853b67572d2e95e37ce79658aa00843b2d9ae2a29fd2373.scope - libcontainer container 2c770d1e296c9c97b853b67572d2e95e37ce79658aa00843b2d9ae2a29fd2373. 
Nov 8 00:06:22.272646 systemd-networkd[1447]: cali9f37fbcae2e: Link UP Nov 8 00:06:22.276733 systemd-networkd[1447]: cali9f37fbcae2e: Gained carrier Nov 8 00:06:22.362874 containerd[1723]: 2025-11-08 00:06:22.071 [INFO][5240] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--32f19bad4d-k8s-calico--apiserver--59cf6d9fcc--xrwqb-eth0 calico-apiserver-59cf6d9fcc- calico-apiserver b965bf6a-7220-4fbc-b608-85c677cf8e39 1021 0 2025-11-08 00:05:34 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:59cf6d9fcc projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.6-n-32f19bad4d calico-apiserver-59cf6d9fcc-xrwqb eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali9f37fbcae2e [] [] }} ContainerID="37568d6cb8f9cece454dc02a45c8cff46fe630c7d6bc65b931d90f091534c4f0" Namespace="calico-apiserver" Pod="calico-apiserver-59cf6d9fcc-xrwqb" WorkloadEndpoint="ci--4081.3.6--n--32f19bad4d-k8s-calico--apiserver--59cf6d9fcc--xrwqb-" Nov 8 00:06:22.362874 containerd[1723]: 2025-11-08 00:06:22.071 [INFO][5240] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="37568d6cb8f9cece454dc02a45c8cff46fe630c7d6bc65b931d90f091534c4f0" Namespace="calico-apiserver" Pod="calico-apiserver-59cf6d9fcc-xrwqb" WorkloadEndpoint="ci--4081.3.6--n--32f19bad4d-k8s-calico--apiserver--59cf6d9fcc--xrwqb-eth0" Nov 8 00:06:22.362874 containerd[1723]: 2025-11-08 00:06:22.113 [INFO][5259] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="37568d6cb8f9cece454dc02a45c8cff46fe630c7d6bc65b931d90f091534c4f0" HandleID="k8s-pod-network.37568d6cb8f9cece454dc02a45c8cff46fe630c7d6bc65b931d90f091534c4f0" Workload="ci--4081.3.6--n--32f19bad4d-k8s-calico--apiserver--59cf6d9fcc--xrwqb-eth0" Nov 8 00:06:22.362874 containerd[1723]: 2025-11-08 00:06:22.113 [INFO][5259] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="37568d6cb8f9cece454dc02a45c8cff46fe630c7d6bc65b931d90f091534c4f0" HandleID="k8s-pod-network.37568d6cb8f9cece454dc02a45c8cff46fe630c7d6bc65b931d90f091534c4f0" Workload="ci--4081.3.6--n--32f19bad4d-k8s-calico--apiserver--59cf6d9fcc--xrwqb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b590), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.6-n-32f19bad4d", "pod":"calico-apiserver-59cf6d9fcc-xrwqb", "timestamp":"2025-11-08 00:06:22.113363348 +0000 UTC"}, Hostname:"ci-4081.3.6-n-32f19bad4d", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 00:06:22.362874 containerd[1723]: 2025-11-08 00:06:22.113 [INFO][5259] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:06:22.362874 containerd[1723]: 2025-11-08 00:06:22.154 [INFO][5259] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 8 00:06:22.362874 containerd[1723]: 2025-11-08 00:06:22.154 [INFO][5259] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-32f19bad4d' Nov 8 00:06:22.362874 containerd[1723]: 2025-11-08 00:06:22.201 [INFO][5259] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.37568d6cb8f9cece454dc02a45c8cff46fe630c7d6bc65b931d90f091534c4f0" host="ci-4081.3.6-n-32f19bad4d" Nov 8 00:06:22.362874 containerd[1723]: 2025-11-08 00:06:22.217 [INFO][5259] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-32f19bad4d" Nov 8 00:06:22.362874 containerd[1723]: 2025-11-08 00:06:22.223 [INFO][5259] ipam/ipam.go 511: Trying affinity for 192.168.59.128/26 host="ci-4081.3.6-n-32f19bad4d" Nov 8 00:06:22.362874 containerd[1723]: 2025-11-08 00:06:22.227 [INFO][5259] ipam/ipam.go 158: Attempting to load block cidr=192.168.59.128/26 host="ci-4081.3.6-n-32f19bad4d" Nov 8 00:06:22.362874 containerd[1723]: 2025-11-08 00:06:22.231 [INFO][5259] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.59.128/26 host="ci-4081.3.6-n-32f19bad4d" Nov 8 00:06:22.362874 containerd[1723]: 2025-11-08 00:06:22.231 [INFO][5259] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.59.128/26 handle="k8s-pod-network.37568d6cb8f9cece454dc02a45c8cff46fe630c7d6bc65b931d90f091534c4f0" host="ci-4081.3.6-n-32f19bad4d" Nov 8 00:06:22.362874 containerd[1723]: 2025-11-08 00:06:22.233 [INFO][5259] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.37568d6cb8f9cece454dc02a45c8cff46fe630c7d6bc65b931d90f091534c4f0 Nov 8 00:06:22.362874 containerd[1723]: 2025-11-08 00:06:22.241 [INFO][5259] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.59.128/26 handle="k8s-pod-network.37568d6cb8f9cece454dc02a45c8cff46fe630c7d6bc65b931d90f091534c4f0" host="ci-4081.3.6-n-32f19bad4d" Nov 8 00:06:22.362874 containerd[1723]: 2025-11-08 00:06:22.262 [INFO][5259] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.59.134/26] block=192.168.59.128/26 handle="k8s-pod-network.37568d6cb8f9cece454dc02a45c8cff46fe630c7d6bc65b931d90f091534c4f0" host="ci-4081.3.6-n-32f19bad4d" Nov 8 00:06:22.362874 containerd[1723]: 2025-11-08 00:06:22.262 [INFO][5259] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.59.134/26] handle="k8s-pod-network.37568d6cb8f9cece454dc02a45c8cff46fe630c7d6bc65b931d90f091534c4f0" host="ci-4081.3.6-n-32f19bad4d" Nov 8 00:06:22.362874 containerd[1723]: 2025-11-08 00:06:22.262 [INFO][5259] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 8 00:06:22.362874 containerd[1723]: 2025-11-08 00:06:22.263 [INFO][5259] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.59.134/26] IPv6=[] ContainerID="37568d6cb8f9cece454dc02a45c8cff46fe630c7d6bc65b931d90f091534c4f0" HandleID="k8s-pod-network.37568d6cb8f9cece454dc02a45c8cff46fe630c7d6bc65b931d90f091534c4f0" Workload="ci--4081.3.6--n--32f19bad4d-k8s-calico--apiserver--59cf6d9fcc--xrwqb-eth0" Nov 8 00:06:22.363458 containerd[1723]: 2025-11-08 00:06:22.268 [INFO][5240] cni-plugin/k8s.go 418: Populated endpoint ContainerID="37568d6cb8f9cece454dc02a45c8cff46fe630c7d6bc65b931d90f091534c4f0" Namespace="calico-apiserver" Pod="calico-apiserver-59cf6d9fcc-xrwqb" WorkloadEndpoint="ci--4081.3.6--n--32f19bad4d-k8s-calico--apiserver--59cf6d9fcc--xrwqb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--32f19bad4d-k8s-calico--apiserver--59cf6d9fcc--xrwqb-eth0", GenerateName:"calico-apiserver-59cf6d9fcc-", Namespace:"calico-apiserver", SelfLink:"", UID:"b965bf6a-7220-4fbc-b608-85c677cf8e39", ResourceVersion:"1021", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 5, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"59cf6d9fcc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-32f19bad4d", ContainerID:"", Pod:"calico-apiserver-59cf6d9fcc-xrwqb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.59.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9f37fbcae2e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:06:22.363458 containerd[1723]: 2025-11-08 00:06:22.268 [INFO][5240] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.59.134/32] ContainerID="37568d6cb8f9cece454dc02a45c8cff46fe630c7d6bc65b931d90f091534c4f0" Namespace="calico-apiserver" Pod="calico-apiserver-59cf6d9fcc-xrwqb" WorkloadEndpoint="ci--4081.3.6--n--32f19bad4d-k8s-calico--apiserver--59cf6d9fcc--xrwqb-eth0" Nov 8 00:06:22.363458 containerd[1723]: 2025-11-08 00:06:22.268 [INFO][5240] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9f37fbcae2e ContainerID="37568d6cb8f9cece454dc02a45c8cff46fe630c7d6bc65b931d90f091534c4f0" Namespace="calico-apiserver" Pod="calico-apiserver-59cf6d9fcc-xrwqb" WorkloadEndpoint="ci--4081.3.6--n--32f19bad4d-k8s-calico--apiserver--59cf6d9fcc--xrwqb-eth0" Nov 8 00:06:22.363458 containerd[1723]: 2025-11-08 00:06:22.278 [INFO][5240] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="37568d6cb8f9cece454dc02a45c8cff46fe630c7d6bc65b931d90f091534c4f0" Namespace="calico-apiserver" Pod="calico-apiserver-59cf6d9fcc-xrwqb" WorkloadEndpoint="ci--4081.3.6--n--32f19bad4d-k8s-calico--apiserver--59cf6d9fcc--xrwqb-eth0" Nov 8 00:06:22.363458 containerd[1723]: 2025-11-08 00:06:22.278 [INFO][5240] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="37568d6cb8f9cece454dc02a45c8cff46fe630c7d6bc65b931d90f091534c4f0" Namespace="calico-apiserver" Pod="calico-apiserver-59cf6d9fcc-xrwqb" WorkloadEndpoint="ci--4081.3.6--n--32f19bad4d-k8s-calico--apiserver--59cf6d9fcc--xrwqb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--32f19bad4d-k8s-calico--apiserver--59cf6d9fcc--xrwqb-eth0", GenerateName:"calico-apiserver-59cf6d9fcc-", Namespace:"calico-apiserver", SelfLink:"", UID:"b965bf6a-7220-4fbc-b608-85c677cf8e39", ResourceVersion:"1021", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 5, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"59cf6d9fcc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-32f19bad4d", ContainerID:"37568d6cb8f9cece454dc02a45c8cff46fe630c7d6bc65b931d90f091534c4f0", Pod:"calico-apiserver-59cf6d9fcc-xrwqb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.59.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9f37fbcae2e", MAC:"76:90:44:b7:10:77", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:06:22.363458 containerd[1723]: 2025-11-08 00:06:22.360 [INFO][5240] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="37568d6cb8f9cece454dc02a45c8cff46fe630c7d6bc65b931d90f091534c4f0" Namespace="calico-apiserver" Pod="calico-apiserver-59cf6d9fcc-xrwqb" WorkloadEndpoint="ci--4081.3.6--n--32f19bad4d-k8s-calico--apiserver--59cf6d9fcc--xrwqb-eth0" Nov 8 00:06:22.410839 containerd[1723]: time="2025-11-08T00:06:22.410705913Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-69dnl,Uid:a62546c2-2f66-4002-b0c3-d54109c52a13,Namespace:calico-system,Attempt:1,} returns sandbox id \"2c770d1e296c9c97b853b67572d2e95e37ce79658aa00843b2d9ae2a29fd2373\"" Nov 8 00:06:22.416027 containerd[1723]: time="2025-11-08T00:06:22.414141274Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 8 00:06:22.416027 containerd[1723]: time="2025-11-08T00:06:22.414470834Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:06:22.416027 containerd[1723]: time="2025-11-08T00:06:22.414569234Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:06:22.416027 containerd[1723]: time="2025-11-08T00:06:22.414580714Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:06:22.416027 containerd[1723]: time="2025-11-08T00:06:22.414662354Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:06:22.442183 systemd[1]: Started cri-containerd-37568d6cb8f9cece454dc02a45c8cff46fe630c7d6bc65b931d90f091534c4f0.scope - libcontainer container 37568d6cb8f9cece454dc02a45c8cff46fe630c7d6bc65b931d90f091534c4f0. Nov 8 00:06:22.472305 containerd[1723]: time="2025-11-08T00:06:22.472264163Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-59cf6d9fcc-xrwqb,Uid:b965bf6a-7220-4fbc-b608-85c677cf8e39,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"37568d6cb8f9cece454dc02a45c8cff46fe630c7d6bc65b931d90f091534c4f0\"" Nov 8 00:06:22.687558 containerd[1723]: time="2025-11-08T00:06:22.687422235Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:06:22.690975 containerd[1723]: time="2025-11-08T00:06:22.690844956Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 8 00:06:22.690975 containerd[1723]: time="2025-11-08T00:06:22.690903836Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 8 00:06:22.691138 kubelet[3207]: E1108 00:06:22.691085 3207 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 8 00:06:22.691677 kubelet[3207]: E1108 00:06:22.691133 3207 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 8 00:06:22.691677 kubelet[3207]: E1108 00:06:22.691346 3207 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-w5t89,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-69dnl_calico-system(a62546c2-2f66-4002-b0c3-d54109c52a13): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 8 00:06:22.692291 containerd[1723]: time="2025-11-08T00:06:22.691951676Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 8 00:06:22.693362 kubelet[3207]: E1108 00:06:22.693299 3207 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not 
found\"" pod="calico-system/goldmane-666569f655-69dnl" podUID="a62546c2-2f66-4002-b0c3-d54109c52a13" Nov 8 00:06:22.808830 containerd[1723]: time="2025-11-08T00:06:22.808470294Z" level=info msg="StopPodSandbox for \"2567bef3bedff9c43797aa7be2722721f1102d2b31162a3c19fe6e59a776837e\"" Nov 8 00:06:22.903781 containerd[1723]: 2025-11-08 00:06:22.863 [INFO][5373] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2567bef3bedff9c43797aa7be2722721f1102d2b31162a3c19fe6e59a776837e" Nov 8 00:06:22.903781 containerd[1723]: 2025-11-08 00:06:22.864 [INFO][5373] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="2567bef3bedff9c43797aa7be2722721f1102d2b31162a3c19fe6e59a776837e" iface="eth0" netns="/var/run/netns/cni-8cc752e5-9f8e-d484-90da-f96eab7d7ae7" Nov 8 00:06:22.903781 containerd[1723]: 2025-11-08 00:06:22.864 [INFO][5373] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="2567bef3bedff9c43797aa7be2722721f1102d2b31162a3c19fe6e59a776837e" iface="eth0" netns="/var/run/netns/cni-8cc752e5-9f8e-d484-90da-f96eab7d7ae7" Nov 8 00:06:22.903781 containerd[1723]: 2025-11-08 00:06:22.864 [INFO][5373] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="2567bef3bedff9c43797aa7be2722721f1102d2b31162a3c19fe6e59a776837e" iface="eth0" netns="/var/run/netns/cni-8cc752e5-9f8e-d484-90da-f96eab7d7ae7" Nov 8 00:06:22.903781 containerd[1723]: 2025-11-08 00:06:22.865 [INFO][5373] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2567bef3bedff9c43797aa7be2722721f1102d2b31162a3c19fe6e59a776837e" Nov 8 00:06:22.903781 containerd[1723]: 2025-11-08 00:06:22.865 [INFO][5373] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2567bef3bedff9c43797aa7be2722721f1102d2b31162a3c19fe6e59a776837e" Nov 8 00:06:22.903781 containerd[1723]: 2025-11-08 00:06:22.883 [INFO][5380] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="2567bef3bedff9c43797aa7be2722721f1102d2b31162a3c19fe6e59a776837e" HandleID="k8s-pod-network.2567bef3bedff9c43797aa7be2722721f1102d2b31162a3c19fe6e59a776837e" Workload="ci--4081.3.6--n--32f19bad4d-k8s-csi--node--driver--tffst-eth0" Nov 8 00:06:22.903781 containerd[1723]: 2025-11-08 00:06:22.883 [INFO][5380] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:06:22.903781 containerd[1723]: 2025-11-08 00:06:22.883 [INFO][5380] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:06:22.903781 containerd[1723]: 2025-11-08 00:06:22.898 [WARNING][5380] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="2567bef3bedff9c43797aa7be2722721f1102d2b31162a3c19fe6e59a776837e" HandleID="k8s-pod-network.2567bef3bedff9c43797aa7be2722721f1102d2b31162a3c19fe6e59a776837e" Workload="ci--4081.3.6--n--32f19bad4d-k8s-csi--node--driver--tffst-eth0" Nov 8 00:06:22.903781 containerd[1723]: 2025-11-08 00:06:22.898 [INFO][5380] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="2567bef3bedff9c43797aa7be2722721f1102d2b31162a3c19fe6e59a776837e" HandleID="k8s-pod-network.2567bef3bedff9c43797aa7be2722721f1102d2b31162a3c19fe6e59a776837e" Workload="ci--4081.3.6--n--32f19bad4d-k8s-csi--node--driver--tffst-eth0" Nov 8 00:06:22.903781 containerd[1723]: 2025-11-08 00:06:22.900 [INFO][5380] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:06:22.903781 containerd[1723]: 2025-11-08 00:06:22.902 [INFO][5373] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="2567bef3bedff9c43797aa7be2722721f1102d2b31162a3c19fe6e59a776837e" Nov 8 00:06:22.904359 containerd[1723]: time="2025-11-08T00:06:22.903950668Z" level=info msg="TearDown network for sandbox \"2567bef3bedff9c43797aa7be2722721f1102d2b31162a3c19fe6e59a776837e\" successfully" Nov 8 00:06:22.904359 containerd[1723]: time="2025-11-08T00:06:22.903988068Z" level=info msg="StopPodSandbox for \"2567bef3bedff9c43797aa7be2722721f1102d2b31162a3c19fe6e59a776837e\" returns successfully" Nov 8 00:06:22.904975 containerd[1723]: time="2025-11-08T00:06:22.904944748Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-tffst,Uid:b8e7681d-343c-40ff-9257-cd6bf2941900,Namespace:calico-system,Attempt:1,}" Nov 8 00:06:22.923395 systemd[1]: run-netns-cni\x2d8cc752e5\x2d9f8e\x2dd484\x2d90da\x2df96eab7d7ae7.mount: Deactivated successfully. Nov 8 00:06:22.957946 containerd[1723]: time="2025-11-08T00:06:22.957819836Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:06:22.963013 containerd[1723]: time="2025-11-08T00:06:22.962862157Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 8 00:06:22.963157 containerd[1723]: time="2025-11-08T00:06:22.963078597Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 8 00:06:22.964431 kubelet[3207]: E1108 00:06:22.963299 3207 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:06:22.964431 kubelet[3207]: E1108 00:06:22.963354 3207 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:06:22.964431 kubelet[3207]: E1108 00:06:22.963496 3207 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-czxjv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-59cf6d9fcc-xrwqb_calico-apiserver(b965bf6a-7220-4fbc-b608-85c677cf8e39): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 8 00:06:22.964704 kubelet[3207]: E1108 00:06:22.964664 3207 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-59cf6d9fcc-xrwqb" podUID="b965bf6a-7220-4fbc-b608-85c677cf8e39" Nov 8 00:06:23.086554 kubelet[3207]: E1108 00:06:23.085165 3207 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-69dnl" podUID="a62546c2-2f66-4002-b0c3-d54109c52a13" Nov 8 00:06:23.093123 systemd-networkd[1447]: cali7825efce6d4: Link UP Nov 8 00:06:23.094055 systemd-networkd[1447]: cali7825efce6d4: Gained carrier Nov 8 
00:06:23.098498 kubelet[3207]: E1108 00:06:23.096996 3207 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-59cf6d9fcc-xrwqb" podUID="b965bf6a-7220-4fbc-b608-85c677cf8e39" Nov 8 00:06:23.168906 containerd[1723]: 2025-11-08 00:06:22.984 [INFO][5387] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--32f19bad4d-k8s-csi--node--driver--tffst-eth0 csi-node-driver- calico-system b8e7681d-343c-40ff-9257-cd6bf2941900 1035 0 2025-11-08 00:05:42 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081.3.6-n-32f19bad4d csi-node-driver-tffst eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali7825efce6d4 [] [] }} ContainerID="9f8560a91db30cbe631f3b9dc2bf11b8db203e01c05d2a0c18fc1aec13bb3453" Namespace="calico-system" Pod="csi-node-driver-tffst" WorkloadEndpoint="ci--4081.3.6--n--32f19bad4d-k8s-csi--node--driver--tffst-" Nov 8 00:06:23.168906 containerd[1723]: 2025-11-08 00:06:22.984 [INFO][5387] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9f8560a91db30cbe631f3b9dc2bf11b8db203e01c05d2a0c18fc1aec13bb3453" Namespace="calico-system" Pod="csi-node-driver-tffst" WorkloadEndpoint="ci--4081.3.6--n--32f19bad4d-k8s-csi--node--driver--tffst-eth0" Nov 8 00:06:23.168906 containerd[1723]: 2025-11-08 00:06:23.009 [INFO][5400] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9f8560a91db30cbe631f3b9dc2bf11b8db203e01c05d2a0c18fc1aec13bb3453" HandleID="k8s-pod-network.9f8560a91db30cbe631f3b9dc2bf11b8db203e01c05d2a0c18fc1aec13bb3453" Workload="ci--4081.3.6--n--32f19bad4d-k8s-csi--node--driver--tffst-eth0" Nov 8 00:06:23.168906 containerd[1723]: 2025-11-08 00:06:23.010 [INFO][5400] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="9f8560a91db30cbe631f3b9dc2bf11b8db203e01c05d2a0c18fc1aec13bb3453" HandleID="k8s-pod-network.9f8560a91db30cbe631f3b9dc2bf11b8db203e01c05d2a0c18fc1aec13bb3453" Workload="ci--4081.3.6--n--32f19bad4d-k8s-csi--node--driver--tffst-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b010), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-32f19bad4d", "pod":"csi-node-driver-tffst", "timestamp":"2025-11-08 00:06:23.009929124 +0000 UTC"}, Hostname:"ci-4081.3.6-n-32f19bad4d", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 00:06:23.168906 containerd[1723]: 2025-11-08 00:06:23.010 [INFO][5400] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:06:23.168906 containerd[1723]: 2025-11-08 00:06:23.010 [INFO][5400] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 8 00:06:23.168906 containerd[1723]: 2025-11-08 00:06:23.010 [INFO][5400] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-32f19bad4d' Nov 8 00:06:23.168906 containerd[1723]: 2025-11-08 00:06:23.020 [INFO][5400] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9f8560a91db30cbe631f3b9dc2bf11b8db203e01c05d2a0c18fc1aec13bb3453" host="ci-4081.3.6-n-32f19bad4d" Nov 8 00:06:23.168906 containerd[1723]: 2025-11-08 00:06:23.034 [INFO][5400] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-32f19bad4d" Nov 8 00:06:23.168906 containerd[1723]: 2025-11-08 00:06:23.042 [INFO][5400] ipam/ipam.go 511: Trying affinity for 192.168.59.128/26 host="ci-4081.3.6-n-32f19bad4d" Nov 8 00:06:23.168906 containerd[1723]: 2025-11-08 00:06:23.044 [INFO][5400] ipam/ipam.go 158: Attempting to load block cidr=192.168.59.128/26 host="ci-4081.3.6-n-32f19bad4d" Nov 8 00:06:23.168906 containerd[1723]: 2025-11-08 00:06:23.047 [INFO][5400] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.59.128/26 host="ci-4081.3.6-n-32f19bad4d" Nov 8 00:06:23.168906 containerd[1723]: 2025-11-08 00:06:23.047 [INFO][5400] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.59.128/26 handle="k8s-pod-network.9f8560a91db30cbe631f3b9dc2bf11b8db203e01c05d2a0c18fc1aec13bb3453" host="ci-4081.3.6-n-32f19bad4d" Nov 8 00:06:23.168906 containerd[1723]: 2025-11-08 00:06:23.049 [INFO][5400] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.9f8560a91db30cbe631f3b9dc2bf11b8db203e01c05d2a0c18fc1aec13bb3453 Nov 8 00:06:23.168906 containerd[1723]: 2025-11-08 00:06:23.056 [INFO][5400] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.59.128/26 handle="k8s-pod-network.9f8560a91db30cbe631f3b9dc2bf11b8db203e01c05d2a0c18fc1aec13bb3453" host="ci-4081.3.6-n-32f19bad4d" Nov 8 00:06:23.168906 containerd[1723]: 2025-11-08 00:06:23.077 [INFO][5400] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.59.135/26] block=192.168.59.128/26 handle="k8s-pod-network.9f8560a91db30cbe631f3b9dc2bf11b8db203e01c05d2a0c18fc1aec13bb3453" host="ci-4081.3.6-n-32f19bad4d" Nov 8 00:06:23.168906 containerd[1723]: 2025-11-08 00:06:23.077 [INFO][5400] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.59.135/26] handle="k8s-pod-network.9f8560a91db30cbe631f3b9dc2bf11b8db203e01c05d2a0c18fc1aec13bb3453" host="ci-4081.3.6-n-32f19bad4d" Nov 8 00:06:23.168906 containerd[1723]: 2025-11-08 00:06:23.077 [INFO][5400] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 8 00:06:23.168906 containerd[1723]: 2025-11-08 00:06:23.077 [INFO][5400] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.59.135/26] IPv6=[] ContainerID="9f8560a91db30cbe631f3b9dc2bf11b8db203e01c05d2a0c18fc1aec13bb3453" HandleID="k8s-pod-network.9f8560a91db30cbe631f3b9dc2bf11b8db203e01c05d2a0c18fc1aec13bb3453" Workload="ci--4081.3.6--n--32f19bad4d-k8s-csi--node--driver--tffst-eth0" Nov 8 00:06:23.171419 containerd[1723]: 2025-11-08 00:06:23.084 [INFO][5387] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9f8560a91db30cbe631f3b9dc2bf11b8db203e01c05d2a0c18fc1aec13bb3453" Namespace="calico-system" Pod="csi-node-driver-tffst" WorkloadEndpoint="ci--4081.3.6--n--32f19bad4d-k8s-csi--node--driver--tffst-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--32f19bad4d-k8s-csi--node--driver--tffst-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b8e7681d-343c-40ff-9257-cd6bf2941900", ResourceVersion:"1035", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 5, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-32f19bad4d", ContainerID:"", Pod:"csi-node-driver-tffst", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.59.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali7825efce6d4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:06:23.171419 containerd[1723]: 2025-11-08 00:06:23.084 [INFO][5387] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.59.135/32] ContainerID="9f8560a91db30cbe631f3b9dc2bf11b8db203e01c05d2a0c18fc1aec13bb3453" Namespace="calico-system" Pod="csi-node-driver-tffst" WorkloadEndpoint="ci--4081.3.6--n--32f19bad4d-k8s-csi--node--driver--tffst-eth0" Nov 8 00:06:23.171419 containerd[1723]: 2025-11-08 00:06:23.084 [INFO][5387] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7825efce6d4 ContainerID="9f8560a91db30cbe631f3b9dc2bf11b8db203e01c05d2a0c18fc1aec13bb3453" Namespace="calico-system" Pod="csi-node-driver-tffst" WorkloadEndpoint="ci--4081.3.6--n--32f19bad4d-k8s-csi--node--driver--tffst-eth0" Nov 8 00:06:23.171419 containerd[1723]: 2025-11-08 00:06:23.095 [INFO][5387] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9f8560a91db30cbe631f3b9dc2bf11b8db203e01c05d2a0c18fc1aec13bb3453" Namespace="calico-system" Pod="csi-node-driver-tffst" WorkloadEndpoint="ci--4081.3.6--n--32f19bad4d-k8s-csi--node--driver--tffst-eth0" Nov 8 00:06:23.171419 containerd[1723]: 2025-11-08 00:06:23.097 [INFO][5387] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="9f8560a91db30cbe631f3b9dc2bf11b8db203e01c05d2a0c18fc1aec13bb3453" Namespace="calico-system" Pod="csi-node-driver-tffst" WorkloadEndpoint="ci--4081.3.6--n--32f19bad4d-k8s-csi--node--driver--tffst-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--32f19bad4d-k8s-csi--node--driver--tffst-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b8e7681d-343c-40ff-9257-cd6bf2941900", ResourceVersion:"1035", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 5, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-32f19bad4d", ContainerID:"9f8560a91db30cbe631f3b9dc2bf11b8db203e01c05d2a0c18fc1aec13bb3453", Pod:"csi-node-driver-tffst", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.59.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali7825efce6d4", MAC:"92:65:d5:c7:9c:31", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:06:23.171419 containerd[1723]: 2025-11-08 00:06:23.165 [INFO][5387] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9f8560a91db30cbe631f3b9dc2bf11b8db203e01c05d2a0c18fc1aec13bb3453" Namespace="calico-system" Pod="csi-node-driver-tffst" WorkloadEndpoint="ci--4081.3.6--n--32f19bad4d-k8s-csi--node--driver--tffst-eth0" Nov 8 00:06:23.193891 containerd[1723]: time="2025-11-08T00:06:23.193674792Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:06:23.194065 containerd[1723]: time="2025-11-08T00:06:23.193742072Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:06:23.194065 containerd[1723]: time="2025-11-08T00:06:23.193937512Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:06:23.194164 containerd[1723]: time="2025-11-08T00:06:23.194116152Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:06:23.225950 systemd[1]: Started cri-containerd-9f8560a91db30cbe631f3b9dc2bf11b8db203e01c05d2a0c18fc1aec13bb3453.scope - libcontainer container 9f8560a91db30cbe631f3b9dc2bf11b8db203e01c05d2a0c18fc1aec13bb3453. 
Nov 8 00:06:23.254156 containerd[1723]: time="2025-11-08T00:06:23.254110281Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-tffst,Uid:b8e7681d-343c-40ff-9257-cd6bf2941900,Namespace:calico-system,Attempt:1,} returns sandbox id \"9f8560a91db30cbe631f3b9dc2bf11b8db203e01c05d2a0c18fc1aec13bb3453\"" Nov 8 00:06:23.256387 containerd[1723]: time="2025-11-08T00:06:23.256339082Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 8 00:06:23.350946 systemd-networkd[1447]: cali34b2f251a70: Gained IPv6LL Nov 8 00:06:23.548489 containerd[1723]: time="2025-11-08T00:06:23.548350166Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:06:23.552358 containerd[1723]: time="2025-11-08T00:06:23.552309647Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 8 00:06:23.552446 containerd[1723]: time="2025-11-08T00:06:23.552419127Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 8 00:06:23.552603 kubelet[3207]: E1108 00:06:23.552561 3207 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 8 00:06:23.552651 kubelet[3207]: E1108 00:06:23.552614 3207 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 8 00:06:23.552780 kubelet[3207]: E1108 00:06:23.552728 3207 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m2vpd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-tffst_calico-system(b8e7681d-343c-40ff-9257-cd6bf2941900): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 8 00:06:23.554832 containerd[1723]: time="2025-11-08T00:06:23.554797727Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 8 00:06:23.810410 containerd[1723]: time="2025-11-08T00:06:23.810275046Z" level=info msg="StopPodSandbox for \"a980bc8a0120f542692653b9cc481a7a8be0599d4fb849823f2add51a192f4e7\"" Nov 8 00:06:23.827912 containerd[1723]: time="2025-11-08T00:06:23.827863209Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:06:23.832128 containerd[1723]: time="2025-11-08T00:06:23.832066169Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 8 00:06:23.832378 containerd[1723]: time="2025-11-08T00:06:23.832181329Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 8 00:06:23.832414 kubelet[3207]: E1108 00:06:23.832306 3207 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 8 00:06:23.832414 kubelet[3207]: E1108 00:06:23.832354 3207 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 8 00:06:23.833812 kubelet[3207]: E1108 00:06:23.832471 3207 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m2vpd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-tffst_calico-system(b8e7681d-343c-40ff-9257-cd6bf2941900): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 8 00:06:23.834163 kubelet[3207]: E1108 00:06:23.834114 3207 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-tffst" podUID="b8e7681d-343c-40ff-9257-cd6bf2941900" Nov 8 00:06:23.917585 containerd[1723]: 2025-11-08 00:06:23.879 [INFO][5469] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a980bc8a0120f542692653b9cc481a7a8be0599d4fb849823f2add51a192f4e7" Nov 8 00:06:23.917585 containerd[1723]: 2025-11-08 00:06:23.880 [INFO][5469] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="a980bc8a0120f542692653b9cc481a7a8be0599d4fb849823f2add51a192f4e7" iface="eth0" netns="/var/run/netns/cni-c2bcd684-19d6-fabb-f7b3-31c4c8735db2" Nov 8 00:06:23.917585 containerd[1723]: 2025-11-08 00:06:23.880 [INFO][5469] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="a980bc8a0120f542692653b9cc481a7a8be0599d4fb849823f2add51a192f4e7" iface="eth0" netns="/var/run/netns/cni-c2bcd684-19d6-fabb-f7b3-31c4c8735db2" Nov 8 00:06:23.917585 containerd[1723]: 2025-11-08 00:06:23.880 [INFO][5469] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="a980bc8a0120f542692653b9cc481a7a8be0599d4fb849823f2add51a192f4e7" iface="eth0" netns="/var/run/netns/cni-c2bcd684-19d6-fabb-f7b3-31c4c8735db2" Nov 8 00:06:23.917585 containerd[1723]: 2025-11-08 00:06:23.881 [INFO][5469] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a980bc8a0120f542692653b9cc481a7a8be0599d4fb849823f2add51a192f4e7" Nov 8 00:06:23.917585 containerd[1723]: 2025-11-08 00:06:23.881 [INFO][5469] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a980bc8a0120f542692653b9cc481a7a8be0599d4fb849823f2add51a192f4e7" Nov 8 00:06:23.917585 containerd[1723]: 2025-11-08 00:06:23.903 [INFO][5476] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="a980bc8a0120f542692653b9cc481a7a8be0599d4fb849823f2add51a192f4e7" HandleID="k8s-pod-network.a980bc8a0120f542692653b9cc481a7a8be0599d4fb849823f2add51a192f4e7" Workload="ci--4081.3.6--n--32f19bad4d-k8s-calico--kube--controllers--55f5cd74bd--vrdmh-eth0" Nov 8 00:06:23.917585 containerd[1723]: 2025-11-08 00:06:23.903 [INFO][5476] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:06:23.917585 containerd[1723]: 2025-11-08 00:06:23.903 [INFO][5476] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:06:23.917585 containerd[1723]: 2025-11-08 00:06:23.912 [WARNING][5476] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="a980bc8a0120f542692653b9cc481a7a8be0599d4fb849823f2add51a192f4e7" HandleID="k8s-pod-network.a980bc8a0120f542692653b9cc481a7a8be0599d4fb849823f2add51a192f4e7" Workload="ci--4081.3.6--n--32f19bad4d-k8s-calico--kube--controllers--55f5cd74bd--vrdmh-eth0" Nov 8 00:06:23.917585 containerd[1723]: 2025-11-08 00:06:23.912 [INFO][5476] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="a980bc8a0120f542692653b9cc481a7a8be0599d4fb849823f2add51a192f4e7" HandleID="k8s-pod-network.a980bc8a0120f542692653b9cc481a7a8be0599d4fb849823f2add51a192f4e7" Workload="ci--4081.3.6--n--32f19bad4d-k8s-calico--kube--controllers--55f5cd74bd--vrdmh-eth0" Nov 8 00:06:23.917585 containerd[1723]: 2025-11-08 00:06:23.914 [INFO][5476] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 8 00:06:23.917585 containerd[1723]: 2025-11-08 00:06:23.915 [INFO][5469] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a980bc8a0120f542692653b9cc481a7a8be0599d4fb849823f2add51a192f4e7" Nov 8 00:06:23.919358 containerd[1723]: time="2025-11-08T00:06:23.917799142Z" level=info msg="TearDown network for sandbox \"a980bc8a0120f542692653b9cc481a7a8be0599d4fb849823f2add51a192f4e7\" successfully" Nov 8 00:06:23.919358 containerd[1723]: time="2025-11-08T00:06:23.917827582Z" level=info msg="StopPodSandbox for \"a980bc8a0120f542692653b9cc481a7a8be0599d4fb849823f2add51a192f4e7\" returns successfully" Nov 8 00:06:23.919555 containerd[1723]: time="2025-11-08T00:06:23.919512263Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-55f5cd74bd-vrdmh,Uid:ffd8ec67-9184-4d7a-a1fa-e51a9fa4d9cf,Namespace:calico-system,Attempt:1,}" Nov 8 00:06:23.921575 systemd[1]: run-netns-cni\x2dc2bcd684\x2d19d6\x2dfabb\x2df7b3\x2d31c4c8735db2.mount: Deactivated successfully. Nov 8 00:06:24.094296 kubelet[3207]: E1108 00:06:24.094171 3207 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-69dnl" podUID="a62546c2-2f66-4002-b0c3-d54109c52a13" Nov 8 00:06:24.096104 kubelet[3207]: E1108 00:06:24.096023 3207 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-tffst" podUID="b8e7681d-343c-40ff-9257-cd6bf2941900" Nov 8 00:06:24.096298 kubelet[3207]: E1108 00:06:24.096266 3207 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-59cf6d9fcc-xrwqb" podUID="b965bf6a-7220-4fbc-b608-85c677cf8e39" Nov 8 00:06:24.117210 systemd-networkd[1447]: cali2e298bea6bd: Link UP Nov 8 00:06:24.120911 systemd-networkd[1447]: cali2e298bea6bd: Gained carrier Nov 8 00:06:24.171118 containerd[1723]: 2025-11-08 00:06:23.991 [INFO][5483] 
cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--32f19bad4d-k8s-calico--kube--controllers--55f5cd74bd--vrdmh-eth0 calico-kube-controllers-55f5cd74bd- calico-system ffd8ec67-9184-4d7a-a1fa-e51a9fa4d9cf 1061 0 2025-11-08 00:05:42 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:55f5cd74bd projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081.3.6-n-32f19bad4d calico-kube-controllers-55f5cd74bd-vrdmh eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali2e298bea6bd [] [] }} ContainerID="6f3d13ea87b11a520414b405eee20f68d45244004f7939833ca91c90dccb57b9" Namespace="calico-system" Pod="calico-kube-controllers-55f5cd74bd-vrdmh" WorkloadEndpoint="ci--4081.3.6--n--32f19bad4d-k8s-calico--kube--controllers--55f5cd74bd--vrdmh-" Nov 8 00:06:24.171118 containerd[1723]: 2025-11-08 00:06:23.991 [INFO][5483] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6f3d13ea87b11a520414b405eee20f68d45244004f7939833ca91c90dccb57b9" Namespace="calico-system" Pod="calico-kube-controllers-55f5cd74bd-vrdmh" WorkloadEndpoint="ci--4081.3.6--n--32f19bad4d-k8s-calico--kube--controllers--55f5cd74bd--vrdmh-eth0" Nov 8 00:06:24.171118 containerd[1723]: 2025-11-08 00:06:24.024 [INFO][5495] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6f3d13ea87b11a520414b405eee20f68d45244004f7939833ca91c90dccb57b9" HandleID="k8s-pod-network.6f3d13ea87b11a520414b405eee20f68d45244004f7939833ca91c90dccb57b9" Workload="ci--4081.3.6--n--32f19bad4d-k8s-calico--kube--controllers--55f5cd74bd--vrdmh-eth0" Nov 8 00:06:24.171118 containerd[1723]: 2025-11-08 00:06:24.024 [INFO][5495] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="6f3d13ea87b11a520414b405eee20f68d45244004f7939833ca91c90dccb57b9" HandleID="k8s-pod-network.6f3d13ea87b11a520414b405eee20f68d45244004f7939833ca91c90dccb57b9" Workload="ci--4081.3.6--n--32f19bad4d-k8s-calico--kube--controllers--55f5cd74bd--vrdmh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024afe0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-32f19bad4d", "pod":"calico-kube-controllers-55f5cd74bd-vrdmh", "timestamp":"2025-11-08 00:06:24.024478919 +0000 UTC"}, Hostname:"ci-4081.3.6-n-32f19bad4d", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 00:06:24.171118 containerd[1723]: 2025-11-08 00:06:24.024 [INFO][5495] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:06:24.171118 containerd[1723]: 2025-11-08 00:06:24.024 [INFO][5495] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 8 00:06:24.171118 containerd[1723]: 2025-11-08 00:06:24.024 [INFO][5495] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-32f19bad4d' Nov 8 00:06:24.171118 containerd[1723]: 2025-11-08 00:06:24.042 [INFO][5495] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6f3d13ea87b11a520414b405eee20f68d45244004f7939833ca91c90dccb57b9" host="ci-4081.3.6-n-32f19bad4d" Nov 8 00:06:24.171118 containerd[1723]: 2025-11-08 00:06:24.050 [INFO][5495] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-32f19bad4d" Nov 8 00:06:24.171118 containerd[1723]: 2025-11-08 00:06:24.055 [INFO][5495] ipam/ipam.go 511: Trying affinity for 192.168.59.128/26 host="ci-4081.3.6-n-32f19bad4d" Nov 8 00:06:24.171118 containerd[1723]: 2025-11-08 00:06:24.057 [INFO][5495] ipam/ipam.go 158: Attempting to load block cidr=192.168.59.128/26 host="ci-4081.3.6-n-32f19bad4d" Nov 8 00:06:24.171118 containerd[1723]: 2025-11-08 00:06:24.060 [INFO][5495] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.59.128/26 host="ci-4081.3.6-n-32f19bad4d" Nov 8 00:06:24.171118 containerd[1723]: 2025-11-08 00:06:24.060 [INFO][5495] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.59.128/26 handle="k8s-pod-network.6f3d13ea87b11a520414b405eee20f68d45244004f7939833ca91c90dccb57b9" host="ci-4081.3.6-n-32f19bad4d" Nov 8 00:06:24.171118 containerd[1723]: 2025-11-08 00:06:24.061 [INFO][5495] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.6f3d13ea87b11a520414b405eee20f68d45244004f7939833ca91c90dccb57b9 Nov 8 00:06:24.171118 containerd[1723]: 2025-11-08 00:06:24.071 [INFO][5495] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.59.128/26 handle="k8s-pod-network.6f3d13ea87b11a520414b405eee20f68d45244004f7939833ca91c90dccb57b9" host="ci-4081.3.6-n-32f19bad4d" Nov 8 00:06:24.171118 containerd[1723]: 2025-11-08 00:06:24.109 [INFO][5495] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.59.136/26] block=192.168.59.128/26 handle="k8s-pod-network.6f3d13ea87b11a520414b405eee20f68d45244004f7939833ca91c90dccb57b9" host="ci-4081.3.6-n-32f19bad4d" Nov 8 00:06:24.171118 containerd[1723]: 2025-11-08 00:06:24.109 [INFO][5495] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.59.136/26] handle="k8s-pod-network.6f3d13ea87b11a520414b405eee20f68d45244004f7939833ca91c90dccb57b9" host="ci-4081.3.6-n-32f19bad4d" Nov 8 00:06:24.171118 containerd[1723]: 2025-11-08 00:06:24.109 [INFO][5495] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 8 00:06:24.171118 containerd[1723]: 2025-11-08 00:06:24.109 [INFO][5495] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.59.136/26] IPv6=[] ContainerID="6f3d13ea87b11a520414b405eee20f68d45244004f7939833ca91c90dccb57b9" HandleID="k8s-pod-network.6f3d13ea87b11a520414b405eee20f68d45244004f7939833ca91c90dccb57b9" Workload="ci--4081.3.6--n--32f19bad4d-k8s-calico--kube--controllers--55f5cd74bd--vrdmh-eth0" Nov 8 00:06:24.172610 containerd[1723]: 2025-11-08 00:06:24.112 [INFO][5483] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6f3d13ea87b11a520414b405eee20f68d45244004f7939833ca91c90dccb57b9" Namespace="calico-system" Pod="calico-kube-controllers-55f5cd74bd-vrdmh" WorkloadEndpoint="ci--4081.3.6--n--32f19bad4d-k8s-calico--kube--controllers--55f5cd74bd--vrdmh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--32f19bad4d-k8s-calico--kube--controllers--55f5cd74bd--vrdmh-eth0", GenerateName:"calico-kube-controllers-55f5cd74bd-", Namespace:"calico-system", SelfLink:"", UID:"ffd8ec67-9184-4d7a-a1fa-e51a9fa4d9cf", ResourceVersion:"1061", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 5, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"55f5cd74bd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-32f19bad4d", ContainerID:"", Pod:"calico-kube-controllers-55f5cd74bd-vrdmh", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.59.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali2e298bea6bd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:06:24.172610 containerd[1723]: 2025-11-08 00:06:24.112 [INFO][5483] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.59.136/32] ContainerID="6f3d13ea87b11a520414b405eee20f68d45244004f7939833ca91c90dccb57b9" Namespace="calico-system" Pod="calico-kube-controllers-55f5cd74bd-vrdmh" WorkloadEndpoint="ci--4081.3.6--n--32f19bad4d-k8s-calico--kube--controllers--55f5cd74bd--vrdmh-eth0" Nov 8 00:06:24.172610 containerd[1723]: 2025-11-08 00:06:24.112 [INFO][5483] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2e298bea6bd ContainerID="6f3d13ea87b11a520414b405eee20f68d45244004f7939833ca91c90dccb57b9" Namespace="calico-system" Pod="calico-kube-controllers-55f5cd74bd-vrdmh" WorkloadEndpoint="ci--4081.3.6--n--32f19bad4d-k8s-calico--kube--controllers--55f5cd74bd--vrdmh-eth0" Nov 8 00:06:24.172610 containerd[1723]: 2025-11-08 00:06:24.116 [INFO][5483] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6f3d13ea87b11a520414b405eee20f68d45244004f7939833ca91c90dccb57b9" Namespace="calico-system" Pod="calico-kube-controllers-55f5cd74bd-vrdmh" 
WorkloadEndpoint="ci--4081.3.6--n--32f19bad4d-k8s-calico--kube--controllers--55f5cd74bd--vrdmh-eth0" Nov 8 00:06:24.172610 containerd[1723]: 2025-11-08 00:06:24.116 [INFO][5483] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6f3d13ea87b11a520414b405eee20f68d45244004f7939833ca91c90dccb57b9" Namespace="calico-system" Pod="calico-kube-controllers-55f5cd74bd-vrdmh" WorkloadEndpoint="ci--4081.3.6--n--32f19bad4d-k8s-calico--kube--controllers--55f5cd74bd--vrdmh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--32f19bad4d-k8s-calico--kube--controllers--55f5cd74bd--vrdmh-eth0", GenerateName:"calico-kube-controllers-55f5cd74bd-", Namespace:"calico-system", SelfLink:"", UID:"ffd8ec67-9184-4d7a-a1fa-e51a9fa4d9cf", ResourceVersion:"1061", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 5, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"55f5cd74bd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-32f19bad4d", ContainerID:"6f3d13ea87b11a520414b405eee20f68d45244004f7939833ca91c90dccb57b9", Pod:"calico-kube-controllers-55f5cd74bd-vrdmh", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.59.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali2e298bea6bd", MAC:"8e:b3:85:9c:b2:a4", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:06:24.172610 containerd[1723]: 2025-11-08 00:06:24.166 [INFO][5483] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6f3d13ea87b11a520414b405eee20f68d45244004f7939833ca91c90dccb57b9" Namespace="calico-system" Pod="calico-kube-controllers-55f5cd74bd-vrdmh" WorkloadEndpoint="ci--4081.3.6--n--32f19bad4d-k8s-calico--kube--controllers--55f5cd74bd--vrdmh-eth0" Nov 8 00:06:24.202523 containerd[1723]: time="2025-11-08T00:06:24.202085946Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:06:24.202523 containerd[1723]: time="2025-11-08T00:06:24.202151266Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:06:24.202523 containerd[1723]: time="2025-11-08T00:06:24.202166626Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:06:24.202523 containerd[1723]: time="2025-11-08T00:06:24.202264826Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:06:24.233953 systemd[1]: Started cri-containerd-6f3d13ea87b11a520414b405eee20f68d45244004f7939833ca91c90dccb57b9.scope - libcontainer container 6f3d13ea87b11a520414b405eee20f68d45244004f7939833ca91c90dccb57b9. Nov 8 00:06:24.275313 containerd[1723]: time="2025-11-08T00:06:24.275272757Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-55f5cd74bd-vrdmh,Uid:ffd8ec67-9184-4d7a-a1fa-e51a9fa4d9cf,Namespace:calico-system,Attempt:1,} returns sandbox id \"6f3d13ea87b11a520414b405eee20f68d45244004f7939833ca91c90dccb57b9\"" Nov 8 00:06:24.276965 containerd[1723]: time="2025-11-08T00:06:24.276930117Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 8 00:06:24.310936 systemd-networkd[1447]: cali9f37fbcae2e: Gained IPv6LL Nov 8 00:06:24.560633 containerd[1723]: time="2025-11-08T00:06:24.560527560Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:06:24.570041 containerd[1723]: time="2025-11-08T00:06:24.569909882Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 8 00:06:24.570041 containerd[1723]: time="2025-11-08T00:06:24.569980162Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 8 00:06:24.570199 kubelet[3207]: E1108 00:06:24.570156 3207 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 8 00:06:24.570245 kubelet[3207]: E1108 00:06:24.570209 3207 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 8 00:06:24.570393 kubelet[3207]: E1108 00:06:24.570335 3207 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wnwsv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-55f5cd74bd-vrdmh_calico-system(ffd8ec67-9184-4d7a-a1fa-e51a9fa4d9cf): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 8 00:06:24.571956 kubelet[3207]: E1108 00:06:24.571812 3207 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-55f5cd74bd-vrdmh" podUID="ffd8ec67-9184-4d7a-a1fa-e51a9fa4d9cf" Nov 8 00:06:24.630996 systemd-networkd[1447]: cali7825efce6d4: Gained IPv6LL Nov 8 00:06:25.097576 
kubelet[3207]: E1108 00:06:25.097333 3207 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-55f5cd74bd-vrdmh" podUID="ffd8ec67-9184-4d7a-a1fa-e51a9fa4d9cf" Nov 8 00:06:25.099224 kubelet[3207]: E1108 00:06:25.098454 3207 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-tffst" podUID="b8e7681d-343c-40ff-9257-cd6bf2941900" Nov 8 00:06:25.143883 systemd-networkd[1447]: cali2e298bea6bd: Gained IPv6LL Nov 8 00:06:26.098690 kubelet[3207]: E1108 00:06:26.097957 3207 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-55f5cd74bd-vrdmh" podUID="ffd8ec67-9184-4d7a-a1fa-e51a9fa4d9cf" Nov 8 00:06:27.083651 update_engine[1679]: I20251108 00:06:27.083134 1679 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Nov 8 00:06:27.083651 update_engine[1679]: I20251108 00:06:27.083401 1679 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Nov 8 00:06:27.083651 update_engine[1679]: I20251108 00:06:27.083600 1679 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Nov 8 00:06:27.122631 update_engine[1679]: E20251108 00:06:27.122505 1679 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Nov 8 00:06:27.122631 update_engine[1679]: I20251108 00:06:27.122592 1679 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Nov 8 00:06:30.810786 containerd[1723]: time="2025-11-08T00:06:30.810381327Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 8 00:06:31.077347 containerd[1723]: time="2025-11-08T00:06:31.077214337Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:06:31.081608 containerd[1723]: time="2025-11-08T00:06:31.081567222Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 8 00:06:31.081715 containerd[1723]: time="2025-11-08T00:06:31.081666782Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 8 00:06:31.081853 kubelet[3207]: E1108 00:06:31.081813 3207 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 8 00:06:31.082139 kubelet[3207]: E1108 00:06:31.081862 3207 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 8 00:06:31.082139 kubelet[3207]: E1108 00:06:31.081972 3207 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:41c4d131d342475c83c899a448c04516,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7xpfh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6f59845cf4-td2zr_calico-system(b8a248aa-7909-4555-958d-ace4846b2c48): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 8 00:06:31.086561 containerd[1723]: time="2025-11-08T00:06:31.085992066Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 8 00:06:31.361362 containerd[1723]: time="2025-11-08T00:06:31.361245085Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:06:31.364697 containerd[1723]: time="2025-11-08T00:06:31.364601089Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 8 00:06:31.364697 containerd[1723]: time="2025-11-08T00:06:31.364672529Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 8 00:06:31.365540 kubelet[3207]: E1108 00:06:31.364905 3207 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 8 00:06:31.365540 kubelet[3207]: E1108 00:06:31.364955 3207 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: 
not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 8 00:06:31.365540 kubelet[3207]: E1108 00:06:31.365068 3207 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7xpfh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6f59845cf4-td2zr_calico-system(b8a248aa-7909-4555-958d-ace4846b2c48): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 8 00:06:31.366557 kubelet[3207]: E1108 00:06:31.366480 3207 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6f59845cf4-td2zr" podUID="b8a248aa-7909-4555-958d-ace4846b2c48" Nov 8 00:06:34.809330 containerd[1723]: time="2025-11-08T00:06:34.809294791Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 8 00:06:35.083852 containerd[1723]: time="2025-11-08T00:06:35.083573729Z" level=info msg="trying next host - response was http.StatusNotFound" 
host=ghcr.io Nov 8 00:06:35.087760 containerd[1723]: time="2025-11-08T00:06:35.087652213Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 8 00:06:35.087760 containerd[1723]: time="2025-11-08T00:06:35.087715933Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 8 00:06:35.087910 kubelet[3207]: E1108 00:06:35.087860 3207 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 8 00:06:35.088233 kubelet[3207]: E1108 00:06:35.087914 3207 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 8 00:06:35.088233 kubelet[3207]: E1108 00:06:35.088112 3207 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-w5t89,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-69dnl_calico-system(a62546c2-2f66-4002-b0c3-d54109c52a13): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 8 00:06:35.088786 containerd[1723]: time="2025-11-08T00:06:35.088733095Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 8 00:06:35.089693 kubelet[3207]: E1108 00:06:35.089646 3207 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-69dnl" podUID="a62546c2-2f66-4002-b0c3-d54109c52a13" Nov 8 00:06:35.406142 containerd[1723]: time="2025-11-08T00:06:35.405981599Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:06:35.408927 containerd[1723]: time="2025-11-08T00:06:35.408841642Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 8 00:06:35.409009 containerd[1723]: time="2025-11-08T00:06:35.408935642Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 8 00:06:35.409124 kubelet[3207]: E1108 00:06:35.409085 3207 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:06:35.409169 kubelet[3207]: E1108 00:06:35.409136 3207 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:06:35.409367 kubelet[3207]: E1108 00:06:35.409327 3207 kuberuntime_manager.go:1358] "Unhandled Error" 
err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-czxjv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-59cf6d9fcc-xrwqb_calico-apiserver(b965bf6a-7220-4fbc-b608-85c677cf8e39): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 8 00:06:35.409859 containerd[1723]: time="2025-11-08T00:06:35.409651243Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 8 00:06:35.410907 kubelet[3207]: E1108 00:06:35.410869 3207 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-59cf6d9fcc-xrwqb" podUID="b965bf6a-7220-4fbc-b608-85c677cf8e39" Nov 8 00:06:35.677310 containerd[1723]: time="2025-11-08T00:06:35.677187629Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:06:35.680103 containerd[1723]: time="2025-11-08T00:06:35.680059045Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed 
to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 8 00:06:35.680241 containerd[1723]: time="2025-11-08T00:06:35.680078925Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 8 00:06:35.680295 kubelet[3207]: E1108 00:06:35.680266 3207 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:06:35.680387 kubelet[3207]: E1108 00:06:35.680311 3207 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:06:35.680484 kubelet[3207]: E1108 00:06:35.680438 3207 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-phxrr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-59cf6d9fcc-nmv8r_calico-apiserver(86741758-ba30-4cec-a95c-6af79e2546fe): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 8 00:06:35.681855 kubelet[3207]: E1108 00:06:35.681588 3207 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-59cf6d9fcc-nmv8r" podUID="86741758-ba30-4cec-a95c-6af79e2546fe" Nov 8 00:06:36.808094 containerd[1723]: time="2025-11-08T00:06:36.807898327Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 8 00:06:37.072694 containerd[1723]: time="2025-11-08T00:06:37.072569102Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:06:37.077994 containerd[1723]: time="2025-11-08T00:06:37.077933411Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 8 00:06:37.078121 containerd[1723]: time="2025-11-08T00:06:37.078048852Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 8 00:06:37.078319 kubelet[3207]: E1108 00:06:37.078270 3207 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 8 00:06:37.078741 kubelet[3207]: E1108 00:06:37.078323 3207 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 8 00:06:37.078741 kubelet[3207]: E1108 00:06:37.078446 3207 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m2vpd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-tffst_calico-system(b8e7681d-343c-40ff-9257-cd6bf2941900): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 8 00:06:37.081065 containerd[1723]: time="2025-11-08T00:06:37.080976588Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 8 00:06:37.090624 update_engine[1679]: I20251108 00:06:37.090555 1679 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Nov 8 00:06:37.091367 update_engine[1679]: I20251108 00:06:37.091089 1679 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Nov 8 00:06:37.091367 update_engine[1679]: I20251108 00:06:37.091320 1679 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Nov 8 00:06:37.194397 update_engine[1679]: E20251108 00:06:37.194053 1679 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Nov 8 00:06:37.194397 update_engine[1679]: I20251108 00:06:37.194144 1679 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Nov 8 00:06:37.194397 update_engine[1679]: I20251108 00:06:37.194154 1679 omaha_request_action.cc:617] Omaha request response: Nov 8 00:06:37.194397 update_engine[1679]: E20251108 00:06:37.194243 1679 omaha_request_action.cc:636] Omaha request network transfer failed. Nov 8 00:06:37.194397 update_engine[1679]: I20251108 00:06:37.194259 1679 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. 
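The "Could not resolve host: disabled" failures above are not a DNS outage: the Omaha update endpoint on this host is the literal string "disabled" (the "Posting an Omaha request to disabled" record below makes this explicit), so every check dies at name resolution and update_engine turns the dead transfer into an error event. On Flatcar this usually means the update server was pointed at a non-resolvable placeholder in /etc/flatcar/update.conf; an illustrative configuration (assumed, not captured from this host):

    # /etc/flatcar/update.conf (assumed contents, for illustration only)
    GROUP=stable
    SERVER=disabled   # non-resolvable placeholder, so every Omaha check fails fast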
Nov 8 00:06:37.194397 update_engine[1679]: I20251108 00:06:37.194265 1679 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Nov 8 00:06:37.194397 update_engine[1679]: I20251108 00:06:37.194269 1679 update_attempter.cc:306] Processing Done.
Nov 8 00:06:37.194397 update_engine[1679]: E20251108 00:06:37.194284 1679 update_attempter.cc:619] Update failed.
Nov 8 00:06:37.194397 update_engine[1679]: I20251108 00:06:37.194291 1679 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse
Nov 8 00:06:37.194397 update_engine[1679]: I20251108 00:06:37.194296 1679 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse)
Nov 8 00:06:37.194397 update_engine[1679]: I20251108 00:06:37.194301 1679 payload_state.cc:103] Ignoring failures until we get a valid Omaha response.
Nov 8 00:06:37.194713 update_engine[1679]: I20251108 00:06:37.194420 1679 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Nov 8 00:06:37.194713 update_engine[1679]: I20251108 00:06:37.194448 1679 omaha_request_action.cc:271] Posting an Omaha request to disabled
Nov 8 00:06:37.194713 update_engine[1679]: I20251108 00:06:37.194458 1679 omaha_request_action.cc:272] Request:
Nov 8 00:06:37.194713 update_engine[1679]: [multi-line Omaha request XML body not preserved in this capture]
Nov 8 00:06:37.194713 update_engine[1679]: I20251108 00:06:37.194463 1679 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Nov 8 00:06:37.194713 update_engine[1679]: I20251108 00:06:37.194608 1679 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Nov 8 00:06:37.194945 update_engine[1679]: I20251108 00:06:37.194834 1679 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Nov 8 00:06:37.195111 locksmithd[1791]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0
Nov 8 00:06:37.210387 update_engine[1679]: E20251108 00:06:37.210335 1679 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Nov 8 00:06:37.210512 update_engine[1679]: I20251108 00:06:37.210415 1679 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Nov 8 00:06:37.210512 update_engine[1679]: I20251108 00:06:37.210425 1679 omaha_request_action.cc:617] Omaha request response:
Nov 8 00:06:37.210512 update_engine[1679]: I20251108 00:06:37.210433 1679 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Nov 8 00:06:37.210512 update_engine[1679]: I20251108 00:06:37.210438 1679 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Nov 8 00:06:37.210512 update_engine[1679]: I20251108 00:06:37.210441 1679 update_attempter.cc:306] Processing Done.
Nov 8 00:06:37.210512 update_engine[1679]: I20251108 00:06:37.210448 1679 update_attempter.cc:310] Error event sent.
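Once the error event is reported, update_engine falls back to its periodic schedule; the "Next update check in 43m23s" record that follows shows the next attempt being queued instead of retried in a tight loop. The interval looks like a fuzzed constant, plausibly a base of about 45 minutes with randomized jitter (43m23s falls inside such a window; the real constants are internal to update_engine). A small Python sketch of that style of scheduling, with assumed numbers:

    # Sketch of a fuzzed periodic update check (assumed: 45 min base, +/- 5 min jitter).
    import random

    def next_check_seconds(base_s: int = 45 * 60, fuzz_s: int = 10 * 60) -> int:
        # Spread checks uniformly over [base - fuzz/2, base + fuzz/2].
        return base_s + random.randint(-fuzz_s // 2, fuzz_s // 2)

    s = next_check_seconds()
    print(f"next update check in {s // 60}m{s % 60:02d}s")  # e.g. 43m23s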
Nov 8 00:06:37.210512 update_engine[1679]: I20251108 00:06:37.210456 1679 update_check_scheduler.cc:74] Next update check in 43m23s Nov 8 00:06:37.210982 locksmithd[1791]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Nov 8 00:06:37.359185 containerd[1723]: time="2025-11-08T00:06:37.359045517Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:06:37.362836 containerd[1723]: time="2025-11-08T00:06:37.362789458Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 8 00:06:37.362922 containerd[1723]: time="2025-11-08T00:06:37.362898338Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 8 00:06:37.363107 kubelet[3207]: E1108 00:06:37.363066 3207 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 8 00:06:37.363184 kubelet[3207]: E1108 00:06:37.363117 3207 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 8 00:06:37.363301 kubelet[3207]: E1108 00:06:37.363244 3207 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m2vpd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-tffst_calico-system(b8e7681d-343c-40ff-9257-cd6bf2941900): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 8 00:06:37.364626 kubelet[3207]: E1108 00:06:37.364570 3207 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-tffst" podUID="b8e7681d-343c-40ff-9257-cd6bf2941900" Nov 8 00:06:38.807826 containerd[1723]: time="2025-11-08T00:06:38.807590762Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 8 00:06:39.096844 containerd[1723]: time="2025-11-08T00:06:39.096437315Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:06:39.099405 containerd[1723]: time="2025-11-08T00:06:39.099305476Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 8 00:06:39.099510 containerd[1723]: time="2025-11-08T00:06:39.099409836Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 8 00:06:39.099587 kubelet[3207]: E1108 00:06:39.099543 3207 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 8 00:06:39.099867 kubelet[3207]: E1108 00:06:39.099599 3207 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 8 00:06:39.099867 kubelet[3207]: E1108 00:06:39.099729 3207 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wnwsv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-55f5cd74bd-vrdmh_calico-system(ffd8ec67-9184-4d7a-a1fa-e51a9fa4d9cf): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 8 00:06:39.101775 kubelet[3207]: E1108 00:06:39.101709 3207 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-55f5cd74bd-vrdmh" podUID="ffd8ec67-9184-4d7a-a1fa-e51a9fa4d9cf" Nov 8 00:06:45.811519 kubelet[3207]: E1108 00:06:45.810331 3207 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-69dnl" podUID="a62546c2-2f66-4002-b0c3-d54109c52a13" Nov 8 00:06:46.810422 kubelet[3207]: E1108 00:06:46.810301 3207 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6f59845cf4-td2zr" podUID="b8a248aa-7909-4555-958d-ace4846b2c48" Nov 8 00:06:47.809124 kubelet[3207]: E1108 00:06:47.809081 3207 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-59cf6d9fcc-nmv8r" podUID="86741758-ba30-4cec-a95c-6af79e2546fe" Nov 8 00:06:49.809600 kubelet[3207]: E1108 00:06:49.809276 3207 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-59cf6d9fcc-xrwqb" podUID="b965bf6a-7220-4fbc-b608-85c677cf8e39" Nov 8 00:06:51.811467 kubelet[3207]: E1108 00:06:51.811343 3207 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-tffst" podUID="b8e7681d-343c-40ff-9257-cd6bf2941900" Nov 8 00:06:53.811374 kubelet[3207]: E1108 00:06:53.809942 3207 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-55f5cd74bd-vrdmh" podUID="ffd8ec67-9184-4d7a-a1fa-e51a9fa4d9cf" Nov 8 00:06:58.808684 containerd[1723]: time="2025-11-08T00:06:58.808639659Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 8 00:06:59.082112 containerd[1723]: time="2025-11-08T00:06:59.081970434Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:06:59.087217 containerd[1723]: time="2025-11-08T00:06:59.087115876Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 8 00:06:59.087217 containerd[1723]: time="2025-11-08T00:06:59.087182636Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 8 00:06:59.087405 kubelet[3207]: E1108 00:06:59.087349 3207 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 8 00:06:59.087405 kubelet[3207]: E1108 00:06:59.087399 3207 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 8 00:06:59.087693 kubelet[3207]: E1108 00:06:59.087539 3207 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-w5t89,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-69dnl_calico-system(a62546c2-2f66-4002-b0c3-d54109c52a13): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 8 00:06:59.089080 kubelet[3207]: E1108 00:06:59.088963 3207 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-69dnl" podUID="a62546c2-2f66-4002-b0c3-d54109c52a13" Nov 8 00:06:59.813653 containerd[1723]: time="2025-11-08T00:06:59.812448485Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 8 00:07:00.099733 containerd[1723]: time="2025-11-08T00:07:00.099599320Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:07:00.103930 containerd[1723]: time="2025-11-08T00:07:00.103870208Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 8 00:07:00.104031 containerd[1723]: time="2025-11-08T00:07:00.104000727Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 8 00:07:00.104312 kubelet[3207]: E1108 00:07:00.104169 3207 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 8 00:07:00.106917 kubelet[3207]: E1108 00:07:00.104216 3207 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 8 00:07:00.107634 containerd[1723]: time="2025-11-08T00:07:00.107435181Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 8 00:07:00.107708 kubelet[3207]: E1108 00:07:00.107519 3207 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:41c4d131d342475c83c899a448c04516,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7xpfh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6f59845cf4-td2zr_calico-system(b8a248aa-7909-4555-958d-ace4846b2c48): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 8 00:07:00.392585 containerd[1723]: time="2025-11-08T00:07:00.392445993Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:07:00.395709 containerd[1723]: time="2025-11-08T00:07:00.395642368Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 8 00:07:00.395845 containerd[1723]: time="2025-11-08T00:07:00.395752528Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 8 00:07:00.396048 kubelet[3207]: E1108 00:07:00.396009 3207 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:07:00.396123 kubelet[3207]: E1108 00:07:00.396061 3207 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:07:00.396845 containerd[1723]: time="2025-11-08T00:07:00.396533602Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 8 00:07:00.396936 kubelet[3207]: E1108 00:07:00.396415 3207 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-phxrr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-59cf6d9fcc-nmv8r_calico-apiserver(86741758-ba30-4cec-a95c-6af79e2546fe): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 8 00:07:00.398758 kubelet[3207]: E1108 00:07:00.398704 3207 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-59cf6d9fcc-nmv8r" podUID="86741758-ba30-4cec-a95c-6af79e2546fe" Nov 8 00:07:00.665772 containerd[1723]: time="2025-11-08T00:07:00.665585813Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:07:00.670483 containerd[1723]: time="2025-11-08T00:07:00.670373654Z" level=error msg="PullImage 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 8 00:07:00.670483 containerd[1723]: time="2025-11-08T00:07:00.670429414Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 8 00:07:00.671105 kubelet[3207]: E1108 00:07:00.670584 3207 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 8 00:07:00.671105 kubelet[3207]: E1108 00:07:00.670632 3207 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 8 00:07:00.671105 kubelet[3207]: E1108 00:07:00.670772 3207 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7xpfh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6f59845cf4-td2zr_calico-system(b8a248aa-7909-4555-958d-ace4846b2c48): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve 
reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 8 00:07:00.672408 kubelet[3207]: E1108 00:07:00.672361 3207 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6f59845cf4-td2zr" podUID="b8a248aa-7909-4555-958d-ace4846b2c48" Nov 8 00:07:00.808466 containerd[1723]: time="2025-11-08T00:07:00.808242737Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 8 00:07:01.097465 containerd[1723]: time="2025-11-08T00:07:01.097340704Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:07:01.103790 containerd[1723]: time="2025-11-08T00:07:01.103722544Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 8 00:07:01.104093 containerd[1723]: time="2025-11-08T00:07:01.103967344Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 8 00:07:01.104315 kubelet[3207]: E1108 00:07:01.104263 3207 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:07:01.104404 kubelet[3207]: E1108 00:07:01.104321 3207 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:07:01.105147 kubelet[3207]: E1108 00:07:01.104454 3207 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-czxjv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-59cf6d9fcc-xrwqb_calico-apiserver(b965bf6a-7220-4fbc-b608-85c677cf8e39): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 8 00:07:01.107218 kubelet[3207]: E1108 00:07:01.106343 3207 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-59cf6d9fcc-xrwqb" podUID="b965bf6a-7220-4fbc-b608-85c677cf8e39" Nov 8 00:07:04.810724 containerd[1723]: time="2025-11-08T00:07:04.810672311Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 8 00:07:05.082232 containerd[1723]: time="2025-11-08T00:07:05.082093797Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:07:05.085451 containerd[1723]: time="2025-11-08T00:07:05.085341077Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 8 00:07:05.085451 containerd[1723]: time="2025-11-08T00:07:05.085402997Z" level=info 
msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 8 00:07:05.085634 kubelet[3207]: E1108 00:07:05.085569 3207 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 8 00:07:05.085634 kubelet[3207]: E1108 00:07:05.085622 3207 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 8 00:07:05.085955 kubelet[3207]: E1108 00:07:05.085806 3207 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m2vpd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-tffst_calico-system(b8e7681d-343c-40ff-9257-cd6bf2941900): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 8 00:07:05.089422 containerd[1723]: time="2025-11-08T00:07:05.089153997Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 8 00:07:05.366862 containerd[1723]: time="2025-11-08T00:07:05.365307644Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:07:05.368764 containerd[1723]: time="2025-11-08T00:07:05.368694644Z" 
level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 8 00:07:05.368894 containerd[1723]: time="2025-11-08T00:07:05.368817084Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 8 00:07:05.369057 kubelet[3207]: E1108 00:07:05.369011 3207 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 8 00:07:05.369308 kubelet[3207]: E1108 00:07:05.369186 3207 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 8 00:07:05.370180 kubelet[3207]: E1108 00:07:05.369399 3207 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m2vpd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-tffst_calico-system(b8e7681d-343c-40ff-9257-cd6bf2941900): 
ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 8 00:07:05.371497 kubelet[3207]: E1108 00:07:05.371429 3207 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-tffst" podUID="b8e7681d-343c-40ff-9257-cd6bf2941900" Nov 8 00:07:07.809853 containerd[1723]: time="2025-11-08T00:07:07.808567392Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 8 00:07:08.080849 containerd[1723]: time="2025-11-08T00:07:08.079303708Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:07:08.083816 containerd[1723]: time="2025-11-08T00:07:08.083705269Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 8 00:07:08.083816 containerd[1723]: time="2025-11-08T00:07:08.083774389Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 8 00:07:08.084332 kubelet[3207]: E1108 00:07:08.084068 3207 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 8 00:07:08.084332 kubelet[3207]: E1108 00:07:08.084144 3207 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 8 00:07:08.084332 kubelet[3207]: E1108 00:07:08.084286 3207 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wnwsv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-55f5cd74bd-vrdmh_calico-system(ffd8ec67-9184-4d7a-a1fa-e51a9fa4d9cf): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 8 00:07:08.085896 kubelet[3207]: E1108 00:07:08.085862 3207 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-55f5cd74bd-vrdmh" podUID="ffd8ec67-9184-4d7a-a1fa-e51a9fa4d9cf" Nov 8 00:07:10.810691 kubelet[3207]: E1108 00:07:10.809982 3207 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-59cf6d9fcc-nmv8r" podUID="86741758-ba30-4cec-a95c-6af79e2546fe" Nov 8 00:07:11.808799 kubelet[3207]: E1108 00:07:11.807953 3207 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-59cf6d9fcc-xrwqb" podUID="b965bf6a-7220-4fbc-b608-85c677cf8e39" Nov 8 00:07:13.816610 kubelet[3207]: E1108 00:07:13.816220 3207 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-69dnl" podUID="a62546c2-2f66-4002-b0c3-d54109c52a13" Nov 8 00:07:14.032149 containerd[1723]: time="2025-11-08T00:07:14.031929444Z" level=info msg="StopPodSandbox for \"bc2ad4903e7ca8cdc6e2c364c5ee9fad5c1bd303cbcc15c9d91290f6a1e27268\"" Nov 8 00:07:14.080063 systemd[1]: run-containerd-runc-k8s.io-710fcb4454d500ef2d0674038608fbb1a5122e25a0a163c7182f8b8ad415f238-runc.yfiNXy.mount: Deactivated successfully. Nov 8 00:07:14.225789 containerd[1723]: 2025-11-08 00:07:14.126 [WARNING][5619] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="bc2ad4903e7ca8cdc6e2c364c5ee9fad5c1bd303cbcc15c9d91290f6a1e27268" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--32f19bad4d-k8s-coredns--674b8bbfcf--5ctrp-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"ef9b5a81-e6ae-4009-b58b-0441376d2cb7", ResourceVersion:"997", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 5, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-32f19bad4d", ContainerID:"39ab74af0a2025a46330116cfb0feb600c92f6fd00919bdaf18cf44c1969c28e", Pod:"coredns-674b8bbfcf-5ctrp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.59.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib53db303ddd", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:07:14.225789 containerd[1723]: 2025-11-08 00:07:14.126 [INFO][5619] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="bc2ad4903e7ca8cdc6e2c364c5ee9fad5c1bd303cbcc15c9d91290f6a1e27268" Nov 8 00:07:14.225789 containerd[1723]: 2025-11-08 00:07:14.126 [INFO][5619] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="bc2ad4903e7ca8cdc6e2c364c5ee9fad5c1bd303cbcc15c9d91290f6a1e27268" iface="eth0" netns="" Nov 8 00:07:14.225789 containerd[1723]: 2025-11-08 00:07:14.126 [INFO][5619] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="bc2ad4903e7ca8cdc6e2c364c5ee9fad5c1bd303cbcc15c9d91290f6a1e27268" Nov 8 00:07:14.225789 containerd[1723]: 2025-11-08 00:07:14.126 [INFO][5619] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bc2ad4903e7ca8cdc6e2c364c5ee9fad5c1bd303cbcc15c9d91290f6a1e27268" Nov 8 00:07:14.225789 containerd[1723]: 2025-11-08 00:07:14.188 [INFO][5646] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="bc2ad4903e7ca8cdc6e2c364c5ee9fad5c1bd303cbcc15c9d91290f6a1e27268" HandleID="k8s-pod-network.bc2ad4903e7ca8cdc6e2c364c5ee9fad5c1bd303cbcc15c9d91290f6a1e27268" Workload="ci--4081.3.6--n--32f19bad4d-k8s-coredns--674b8bbfcf--5ctrp-eth0" Nov 8 00:07:14.225789 containerd[1723]: 2025-11-08 00:07:14.190 [INFO][5646] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:07:14.225789 containerd[1723]: 2025-11-08 00:07:14.190 [INFO][5646] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 8 00:07:14.225789 containerd[1723]: 2025-11-08 00:07:14.209 [WARNING][5646] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="bc2ad4903e7ca8cdc6e2c364c5ee9fad5c1bd303cbcc15c9d91290f6a1e27268" HandleID="k8s-pod-network.bc2ad4903e7ca8cdc6e2c364c5ee9fad5c1bd303cbcc15c9d91290f6a1e27268" Workload="ci--4081.3.6--n--32f19bad4d-k8s-coredns--674b8bbfcf--5ctrp-eth0" Nov 8 00:07:14.225789 containerd[1723]: 2025-11-08 00:07:14.209 [INFO][5646] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="bc2ad4903e7ca8cdc6e2c364c5ee9fad5c1bd303cbcc15c9d91290f6a1e27268" HandleID="k8s-pod-network.bc2ad4903e7ca8cdc6e2c364c5ee9fad5c1bd303cbcc15c9d91290f6a1e27268" Workload="ci--4081.3.6--n--32f19bad4d-k8s-coredns--674b8bbfcf--5ctrp-eth0" Nov 8 00:07:14.225789 containerd[1723]: 2025-11-08 00:07:14.222 [INFO][5646] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:07:14.225789 containerd[1723]: 2025-11-08 00:07:14.223 [INFO][5619] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="bc2ad4903e7ca8cdc6e2c364c5ee9fad5c1bd303cbcc15c9d91290f6a1e27268" Nov 8 00:07:14.226298 containerd[1723]: time="2025-11-08T00:07:14.226270213Z" level=info msg="TearDown network for sandbox \"bc2ad4903e7ca8cdc6e2c364c5ee9fad5c1bd303cbcc15c9d91290f6a1e27268\" successfully" Nov 8 00:07:14.226363 containerd[1723]: time="2025-11-08T00:07:14.226350013Z" level=info msg="StopPodSandbox for \"bc2ad4903e7ca8cdc6e2c364c5ee9fad5c1bd303cbcc15c9d91290f6a1e27268\" returns successfully" Nov 8 00:07:14.227056 containerd[1723]: time="2025-11-08T00:07:14.227023973Z" level=info msg="RemovePodSandbox for \"bc2ad4903e7ca8cdc6e2c364c5ee9fad5c1bd303cbcc15c9d91290f6a1e27268\"" Nov 8 00:07:14.227146 containerd[1723]: time="2025-11-08T00:07:14.227062213Z" level=info msg="Forcibly stopping sandbox \"bc2ad4903e7ca8cdc6e2c364c5ee9fad5c1bd303cbcc15c9d91290f6a1e27268\"" Nov 8 00:07:14.336723 containerd[1723]: 2025-11-08 00:07:14.281 [WARNING][5661] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="bc2ad4903e7ca8cdc6e2c364c5ee9fad5c1bd303cbcc15c9d91290f6a1e27268" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--32f19bad4d-k8s-coredns--674b8bbfcf--5ctrp-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"ef9b5a81-e6ae-4009-b58b-0441376d2cb7", ResourceVersion:"997", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 5, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-32f19bad4d", ContainerID:"39ab74af0a2025a46330116cfb0feb600c92f6fd00919bdaf18cf44c1969c28e", Pod:"coredns-674b8bbfcf-5ctrp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.59.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib53db303ddd", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:07:14.336723 containerd[1723]: 2025-11-08 00:07:14.282 [INFO][5661] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="bc2ad4903e7ca8cdc6e2c364c5ee9fad5c1bd303cbcc15c9d91290f6a1e27268" Nov 8 00:07:14.336723 containerd[1723]: 2025-11-08 00:07:14.282 [INFO][5661] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="bc2ad4903e7ca8cdc6e2c364c5ee9fad5c1bd303cbcc15c9d91290f6a1e27268" iface="eth0" netns="" Nov 8 00:07:14.336723 containerd[1723]: 2025-11-08 00:07:14.282 [INFO][5661] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="bc2ad4903e7ca8cdc6e2c364c5ee9fad5c1bd303cbcc15c9d91290f6a1e27268" Nov 8 00:07:14.336723 containerd[1723]: 2025-11-08 00:07:14.282 [INFO][5661] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bc2ad4903e7ca8cdc6e2c364c5ee9fad5c1bd303cbcc15c9d91290f6a1e27268" Nov 8 00:07:14.336723 containerd[1723]: 2025-11-08 00:07:14.318 [INFO][5668] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="bc2ad4903e7ca8cdc6e2c364c5ee9fad5c1bd303cbcc15c9d91290f6a1e27268" HandleID="k8s-pod-network.bc2ad4903e7ca8cdc6e2c364c5ee9fad5c1bd303cbcc15c9d91290f6a1e27268" Workload="ci--4081.3.6--n--32f19bad4d-k8s-coredns--674b8bbfcf--5ctrp-eth0" Nov 8 00:07:14.336723 containerd[1723]: 2025-11-08 00:07:14.319 [INFO][5668] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:07:14.336723 containerd[1723]: 2025-11-08 00:07:14.319 [INFO][5668] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 8 00:07:14.336723 containerd[1723]: 2025-11-08 00:07:14.329 [WARNING][5668] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="bc2ad4903e7ca8cdc6e2c364c5ee9fad5c1bd303cbcc15c9d91290f6a1e27268" HandleID="k8s-pod-network.bc2ad4903e7ca8cdc6e2c364c5ee9fad5c1bd303cbcc15c9d91290f6a1e27268" Workload="ci--4081.3.6--n--32f19bad4d-k8s-coredns--674b8bbfcf--5ctrp-eth0" Nov 8 00:07:14.336723 containerd[1723]: 2025-11-08 00:07:14.329 [INFO][5668] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="bc2ad4903e7ca8cdc6e2c364c5ee9fad5c1bd303cbcc15c9d91290f6a1e27268" HandleID="k8s-pod-network.bc2ad4903e7ca8cdc6e2c364c5ee9fad5c1bd303cbcc15c9d91290f6a1e27268" Workload="ci--4081.3.6--n--32f19bad4d-k8s-coredns--674b8bbfcf--5ctrp-eth0" Nov 8 00:07:14.336723 containerd[1723]: 2025-11-08 00:07:14.331 [INFO][5668] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:07:14.336723 containerd[1723]: 2025-11-08 00:07:14.334 [INFO][5661] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="bc2ad4903e7ca8cdc6e2c364c5ee9fad5c1bd303cbcc15c9d91290f6a1e27268" Nov 8 00:07:14.336723 containerd[1723]: time="2025-11-08T00:07:14.336568498Z" level=info msg="TearDown network for sandbox \"bc2ad4903e7ca8cdc6e2c364c5ee9fad5c1bd303cbcc15c9d91290f6a1e27268\" successfully" Nov 8 00:07:14.347321 containerd[1723]: time="2025-11-08T00:07:14.347255018Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"bc2ad4903e7ca8cdc6e2c364c5ee9fad5c1bd303cbcc15c9d91290f6a1e27268\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 8 00:07:14.347321 containerd[1723]: time="2025-11-08T00:07:14.347324818Z" level=info msg="RemovePodSandbox \"bc2ad4903e7ca8cdc6e2c364c5ee9fad5c1bd303cbcc15c9d91290f6a1e27268\" returns successfully" Nov 8 00:07:14.348791 containerd[1723]: time="2025-11-08T00:07:14.347896018Z" level=info msg="StopPodSandbox for \"a980bc8a0120f542692653b9cc481a7a8be0599d4fb849823f2add51a192f4e7\"" Nov 8 00:07:14.447797 containerd[1723]: 2025-11-08 00:07:14.396 [WARNING][5682] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a980bc8a0120f542692653b9cc481a7a8be0599d4fb849823f2add51a192f4e7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--32f19bad4d-k8s-calico--kube--controllers--55f5cd74bd--vrdmh-eth0", GenerateName:"calico-kube-controllers-55f5cd74bd-", Namespace:"calico-system", SelfLink:"", UID:"ffd8ec67-9184-4d7a-a1fa-e51a9fa4d9cf", ResourceVersion:"1253", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 5, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"55f5cd74bd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-32f19bad4d", ContainerID:"6f3d13ea87b11a520414b405eee20f68d45244004f7939833ca91c90dccb57b9", Pod:"calico-kube-controllers-55f5cd74bd-vrdmh", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.59.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali2e298bea6bd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:07:14.447797 containerd[1723]: 2025-11-08 00:07:14.396 [INFO][5682] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a980bc8a0120f542692653b9cc481a7a8be0599d4fb849823f2add51a192f4e7" Nov 8 00:07:14.447797 containerd[1723]: 2025-11-08 00:07:14.396 [INFO][5682] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a980bc8a0120f542692653b9cc481a7a8be0599d4fb849823f2add51a192f4e7" iface="eth0" netns="" Nov 8 00:07:14.447797 containerd[1723]: 2025-11-08 00:07:14.396 [INFO][5682] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a980bc8a0120f542692653b9cc481a7a8be0599d4fb849823f2add51a192f4e7" Nov 8 00:07:14.447797 containerd[1723]: 2025-11-08 00:07:14.396 [INFO][5682] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a980bc8a0120f542692653b9cc481a7a8be0599d4fb849823f2add51a192f4e7" Nov 8 00:07:14.447797 containerd[1723]: 2025-11-08 00:07:14.427 [INFO][5689] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="a980bc8a0120f542692653b9cc481a7a8be0599d4fb849823f2add51a192f4e7" HandleID="k8s-pod-network.a980bc8a0120f542692653b9cc481a7a8be0599d4fb849823f2add51a192f4e7" Workload="ci--4081.3.6--n--32f19bad4d-k8s-calico--kube--controllers--55f5cd74bd--vrdmh-eth0" Nov 8 00:07:14.447797 containerd[1723]: 2025-11-08 00:07:14.427 [INFO][5689] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:07:14.447797 containerd[1723]: 2025-11-08 00:07:14.428 [INFO][5689] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:07:14.447797 containerd[1723]: 2025-11-08 00:07:14.442 [WARNING][5689] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a980bc8a0120f542692653b9cc481a7a8be0599d4fb849823f2add51a192f4e7" HandleID="k8s-pod-network.a980bc8a0120f542692653b9cc481a7a8be0599d4fb849823f2add51a192f4e7" Workload="ci--4081.3.6--n--32f19bad4d-k8s-calico--kube--controllers--55f5cd74bd--vrdmh-eth0" Nov 8 00:07:14.447797 containerd[1723]: 2025-11-08 00:07:14.442 [INFO][5689] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="a980bc8a0120f542692653b9cc481a7a8be0599d4fb849823f2add51a192f4e7" HandleID="k8s-pod-network.a980bc8a0120f542692653b9cc481a7a8be0599d4fb849823f2add51a192f4e7" Workload="ci--4081.3.6--n--32f19bad4d-k8s-calico--kube--controllers--55f5cd74bd--vrdmh-eth0" Nov 8 00:07:14.447797 containerd[1723]: 2025-11-08 00:07:14.444 [INFO][5689] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:07:14.447797 containerd[1723]: 2025-11-08 00:07:14.446 [INFO][5682] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a980bc8a0120f542692653b9cc481a7a8be0599d4fb849823f2add51a192f4e7" Nov 8 00:07:14.448499 containerd[1723]: time="2025-11-08T00:07:14.447841422Z" level=info msg="TearDown network for sandbox \"a980bc8a0120f542692653b9cc481a7a8be0599d4fb849823f2add51a192f4e7\" successfully" Nov 8 00:07:14.448499 containerd[1723]: time="2025-11-08T00:07:14.447874422Z" level=info msg="StopPodSandbox for \"a980bc8a0120f542692653b9cc481a7a8be0599d4fb849823f2add51a192f4e7\" returns successfully" Nov 8 00:07:14.449932 containerd[1723]: time="2025-11-08T00:07:14.449413582Z" level=info msg="RemovePodSandbox for \"a980bc8a0120f542692653b9cc481a7a8be0599d4fb849823f2add51a192f4e7\"" Nov 8 00:07:14.449932 containerd[1723]: time="2025-11-08T00:07:14.449450342Z" level=info msg="Forcibly stopping sandbox \"a980bc8a0120f542692653b9cc481a7a8be0599d4fb849823f2add51a192f4e7\"" Nov 8 00:07:14.541137 containerd[1723]: 2025-11-08 00:07:14.501 [WARNING][5703] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a980bc8a0120f542692653b9cc481a7a8be0599d4fb849823f2add51a192f4e7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--32f19bad4d-k8s-calico--kube--controllers--55f5cd74bd--vrdmh-eth0", GenerateName:"calico-kube-controllers-55f5cd74bd-", Namespace:"calico-system", SelfLink:"", UID:"ffd8ec67-9184-4d7a-a1fa-e51a9fa4d9cf", ResourceVersion:"1253", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 5, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"55f5cd74bd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-32f19bad4d", ContainerID:"6f3d13ea87b11a520414b405eee20f68d45244004f7939833ca91c90dccb57b9", Pod:"calico-kube-controllers-55f5cd74bd-vrdmh", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.59.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali2e298bea6bd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:07:14.541137 containerd[1723]: 2025-11-08 00:07:14.501 [INFO][5703] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a980bc8a0120f542692653b9cc481a7a8be0599d4fb849823f2add51a192f4e7" Nov 8 00:07:14.541137 containerd[1723]: 2025-11-08 00:07:14.501 [INFO][5703] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a980bc8a0120f542692653b9cc481a7a8be0599d4fb849823f2add51a192f4e7" iface="eth0" netns="" Nov 8 00:07:14.541137 containerd[1723]: 2025-11-08 00:07:14.501 [INFO][5703] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a980bc8a0120f542692653b9cc481a7a8be0599d4fb849823f2add51a192f4e7" Nov 8 00:07:14.541137 containerd[1723]: 2025-11-08 00:07:14.501 [INFO][5703] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a980bc8a0120f542692653b9cc481a7a8be0599d4fb849823f2add51a192f4e7" Nov 8 00:07:14.541137 containerd[1723]: 2025-11-08 00:07:14.526 [INFO][5710] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="a980bc8a0120f542692653b9cc481a7a8be0599d4fb849823f2add51a192f4e7" HandleID="k8s-pod-network.a980bc8a0120f542692653b9cc481a7a8be0599d4fb849823f2add51a192f4e7" Workload="ci--4081.3.6--n--32f19bad4d-k8s-calico--kube--controllers--55f5cd74bd--vrdmh-eth0" Nov 8 00:07:14.541137 containerd[1723]: 2025-11-08 00:07:14.526 [INFO][5710] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:07:14.541137 containerd[1723]: 2025-11-08 00:07:14.526 [INFO][5710] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:07:14.541137 containerd[1723]: 2025-11-08 00:07:14.535 [WARNING][5710] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a980bc8a0120f542692653b9cc481a7a8be0599d4fb849823f2add51a192f4e7" HandleID="k8s-pod-network.a980bc8a0120f542692653b9cc481a7a8be0599d4fb849823f2add51a192f4e7" Workload="ci--4081.3.6--n--32f19bad4d-k8s-calico--kube--controllers--55f5cd74bd--vrdmh-eth0" Nov 8 00:07:14.541137 containerd[1723]: 2025-11-08 00:07:14.536 [INFO][5710] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="a980bc8a0120f542692653b9cc481a7a8be0599d4fb849823f2add51a192f4e7" HandleID="k8s-pod-network.a980bc8a0120f542692653b9cc481a7a8be0599d4fb849823f2add51a192f4e7" Workload="ci--4081.3.6--n--32f19bad4d-k8s-calico--kube--controllers--55f5cd74bd--vrdmh-eth0" Nov 8 00:07:14.541137 containerd[1723]: 2025-11-08 00:07:14.537 [INFO][5710] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:07:14.541137 containerd[1723]: 2025-11-08 00:07:14.539 [INFO][5703] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a980bc8a0120f542692653b9cc481a7a8be0599d4fb849823f2add51a192f4e7" Nov 8 00:07:14.542838 containerd[1723]: time="2025-11-08T00:07:14.541658906Z" level=info msg="TearDown network for sandbox \"a980bc8a0120f542692653b9cc481a7a8be0599d4fb849823f2add51a192f4e7\" successfully" Nov 8 00:07:14.550410 containerd[1723]: time="2025-11-08T00:07:14.550243907Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a980bc8a0120f542692653b9cc481a7a8be0599d4fb849823f2add51a192f4e7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 8 00:07:14.550553 containerd[1723]: time="2025-11-08T00:07:14.550531067Z" level=info msg="RemovePodSandbox \"a980bc8a0120f542692653b9cc481a7a8be0599d4fb849823f2add51a192f4e7\" returns successfully" Nov 8 00:07:14.551102 containerd[1723]: time="2025-11-08T00:07:14.551074947Z" level=info msg="StopPodSandbox for \"bf6933743f337ca93cf9848e2cddcc6de1a857341e944134024b098763269916\"" Nov 8 00:07:14.618487 containerd[1723]: 2025-11-08 00:07:14.584 [WARNING][5724] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="bf6933743f337ca93cf9848e2cddcc6de1a857341e944134024b098763269916" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--32f19bad4d-k8s-goldmane--666569f655--69dnl-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"a62546c2-2f66-4002-b0c3-d54109c52a13", ResourceVersion:"1274", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 5, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-32f19bad4d", ContainerID:"2c770d1e296c9c97b853b67572d2e95e37ce79658aa00843b2d9ae2a29fd2373", Pod:"goldmane-666569f655-69dnl", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.59.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali34b2f251a70", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:07:14.618487 containerd[1723]: 2025-11-08 00:07:14.585 [INFO][5724] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="bf6933743f337ca93cf9848e2cddcc6de1a857341e944134024b098763269916" Nov 8 00:07:14.618487 containerd[1723]: 2025-11-08 00:07:14.585 [INFO][5724] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="bf6933743f337ca93cf9848e2cddcc6de1a857341e944134024b098763269916" iface="eth0" netns="" Nov 8 00:07:14.618487 containerd[1723]: 2025-11-08 00:07:14.585 [INFO][5724] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="bf6933743f337ca93cf9848e2cddcc6de1a857341e944134024b098763269916" Nov 8 00:07:14.618487 containerd[1723]: 2025-11-08 00:07:14.585 [INFO][5724] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bf6933743f337ca93cf9848e2cddcc6de1a857341e944134024b098763269916" Nov 8 00:07:14.618487 containerd[1723]: 2025-11-08 00:07:14.603 [INFO][5731] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="bf6933743f337ca93cf9848e2cddcc6de1a857341e944134024b098763269916" HandleID="k8s-pod-network.bf6933743f337ca93cf9848e2cddcc6de1a857341e944134024b098763269916" Workload="ci--4081.3.6--n--32f19bad4d-k8s-goldmane--666569f655--69dnl-eth0" Nov 8 00:07:14.618487 containerd[1723]: 2025-11-08 00:07:14.603 [INFO][5731] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:07:14.618487 containerd[1723]: 2025-11-08 00:07:14.603 [INFO][5731] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:07:14.618487 containerd[1723]: 2025-11-08 00:07:14.613 [WARNING][5731] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="bf6933743f337ca93cf9848e2cddcc6de1a857341e944134024b098763269916" HandleID="k8s-pod-network.bf6933743f337ca93cf9848e2cddcc6de1a857341e944134024b098763269916" Workload="ci--4081.3.6--n--32f19bad4d-k8s-goldmane--666569f655--69dnl-eth0" Nov 8 00:07:14.618487 containerd[1723]: 2025-11-08 00:07:14.613 [INFO][5731] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="bf6933743f337ca93cf9848e2cddcc6de1a857341e944134024b098763269916" HandleID="k8s-pod-network.bf6933743f337ca93cf9848e2cddcc6de1a857341e944134024b098763269916" Workload="ci--4081.3.6--n--32f19bad4d-k8s-goldmane--666569f655--69dnl-eth0" Nov 8 00:07:14.618487 containerd[1723]: 2025-11-08 00:07:14.615 [INFO][5731] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:07:14.618487 containerd[1723]: 2025-11-08 00:07:14.616 [INFO][5724] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="bf6933743f337ca93cf9848e2cddcc6de1a857341e944134024b098763269916" Nov 8 00:07:14.619930 containerd[1723]: time="2025-11-08T00:07:14.619832030Z" level=info msg="TearDown network for sandbox \"bf6933743f337ca93cf9848e2cddcc6de1a857341e944134024b098763269916\" successfully" Nov 8 00:07:14.619930 containerd[1723]: time="2025-11-08T00:07:14.619886190Z" level=info msg="StopPodSandbox for \"bf6933743f337ca93cf9848e2cddcc6de1a857341e944134024b098763269916\" returns successfully" Nov 8 00:07:14.621203 containerd[1723]: time="2025-11-08T00:07:14.621092230Z" level=info msg="RemovePodSandbox for \"bf6933743f337ca93cf9848e2cddcc6de1a857341e944134024b098763269916\"" Nov 8 00:07:14.621203 containerd[1723]: time="2025-11-08T00:07:14.621146670Z" level=info msg="Forcibly stopping sandbox \"bf6933743f337ca93cf9848e2cddcc6de1a857341e944134024b098763269916\"" Nov 8 00:07:14.699693 containerd[1723]: 2025-11-08 00:07:14.658 [WARNING][5745] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="bf6933743f337ca93cf9848e2cddcc6de1a857341e944134024b098763269916" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--32f19bad4d-k8s-goldmane--666569f655--69dnl-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"a62546c2-2f66-4002-b0c3-d54109c52a13", ResourceVersion:"1274", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 5, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-32f19bad4d", ContainerID:"2c770d1e296c9c97b853b67572d2e95e37ce79658aa00843b2d9ae2a29fd2373", Pod:"goldmane-666569f655-69dnl", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.59.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali34b2f251a70", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:07:14.699693 containerd[1723]: 2025-11-08 00:07:14.659 [INFO][5745] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="bf6933743f337ca93cf9848e2cddcc6de1a857341e944134024b098763269916" Nov 8 00:07:14.699693 containerd[1723]: 2025-11-08 00:07:14.659 [INFO][5745] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="bf6933743f337ca93cf9848e2cddcc6de1a857341e944134024b098763269916" iface="eth0" netns="" Nov 8 00:07:14.699693 containerd[1723]: 2025-11-08 00:07:14.659 [INFO][5745] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="bf6933743f337ca93cf9848e2cddcc6de1a857341e944134024b098763269916" Nov 8 00:07:14.699693 containerd[1723]: 2025-11-08 00:07:14.659 [INFO][5745] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bf6933743f337ca93cf9848e2cddcc6de1a857341e944134024b098763269916" Nov 8 00:07:14.699693 containerd[1723]: 2025-11-08 00:07:14.681 [INFO][5752] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="bf6933743f337ca93cf9848e2cddcc6de1a857341e944134024b098763269916" HandleID="k8s-pod-network.bf6933743f337ca93cf9848e2cddcc6de1a857341e944134024b098763269916" Workload="ci--4081.3.6--n--32f19bad4d-k8s-goldmane--666569f655--69dnl-eth0" Nov 8 00:07:14.699693 containerd[1723]: 2025-11-08 00:07:14.681 [INFO][5752] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:07:14.699693 containerd[1723]: 2025-11-08 00:07:14.681 [INFO][5752] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:07:14.699693 containerd[1723]: 2025-11-08 00:07:14.691 [WARNING][5752] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="bf6933743f337ca93cf9848e2cddcc6de1a857341e944134024b098763269916" HandleID="k8s-pod-network.bf6933743f337ca93cf9848e2cddcc6de1a857341e944134024b098763269916" Workload="ci--4081.3.6--n--32f19bad4d-k8s-goldmane--666569f655--69dnl-eth0" Nov 8 00:07:14.699693 containerd[1723]: 2025-11-08 00:07:14.692 [INFO][5752] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="bf6933743f337ca93cf9848e2cddcc6de1a857341e944134024b098763269916" HandleID="k8s-pod-network.bf6933743f337ca93cf9848e2cddcc6de1a857341e944134024b098763269916" Workload="ci--4081.3.6--n--32f19bad4d-k8s-goldmane--666569f655--69dnl-eth0" Nov 8 00:07:14.699693 containerd[1723]: 2025-11-08 00:07:14.695 [INFO][5752] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:07:14.699693 containerd[1723]: 2025-11-08 00:07:14.697 [INFO][5745] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="bf6933743f337ca93cf9848e2cddcc6de1a857341e944134024b098763269916" Nov 8 00:07:14.701180 containerd[1723]: time="2025-11-08T00:07:14.699679193Z" level=info msg="TearDown network for sandbox \"bf6933743f337ca93cf9848e2cddcc6de1a857341e944134024b098763269916\" successfully" Nov 8 00:07:14.708896 containerd[1723]: time="2025-11-08T00:07:14.708836154Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"bf6933743f337ca93cf9848e2cddcc6de1a857341e944134024b098763269916\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 8 00:07:14.709116 containerd[1723]: time="2025-11-08T00:07:14.709097354Z" level=info msg="RemovePodSandbox \"bf6933743f337ca93cf9848e2cddcc6de1a857341e944134024b098763269916\" returns successfully" Nov 8 00:07:14.709735 containerd[1723]: time="2025-11-08T00:07:14.709706234Z" level=info msg="StopPodSandbox for \"c2e074d6e0a247772e5cffc0590e619e8d00680537fd8d9206160da872f20217\"" Nov 8 00:07:14.798612 containerd[1723]: 2025-11-08 00:07:14.758 [WARNING][5766] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c2e074d6e0a247772e5cffc0590e619e8d00680537fd8d9206160da872f20217" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--32f19bad4d-k8s-coredns--674b8bbfcf--664tp-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"b9108a90-054b-4178-a637-f4f5bb2138bc", ResourceVersion:"1007", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 5, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-32f19bad4d", ContainerID:"4b3b335c96cead89f7d8f52ba92ae416ca3e67cd3b0459166bf7cbe7b1714f8f", Pod:"coredns-674b8bbfcf-664tp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.59.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali28dacc6b791", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:07:14.798612 containerd[1723]: 2025-11-08 00:07:14.758 [INFO][5766] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c2e074d6e0a247772e5cffc0590e619e8d00680537fd8d9206160da872f20217" Nov 8 00:07:14.798612 containerd[1723]: 2025-11-08 00:07:14.758 [INFO][5766] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c2e074d6e0a247772e5cffc0590e619e8d00680537fd8d9206160da872f20217" iface="eth0" netns="" Nov 8 00:07:14.798612 containerd[1723]: 2025-11-08 00:07:14.758 [INFO][5766] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c2e074d6e0a247772e5cffc0590e619e8d00680537fd8d9206160da872f20217" Nov 8 00:07:14.798612 containerd[1723]: 2025-11-08 00:07:14.758 [INFO][5766] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c2e074d6e0a247772e5cffc0590e619e8d00680537fd8d9206160da872f20217" Nov 8 00:07:14.798612 containerd[1723]: 2025-11-08 00:07:14.783 [INFO][5773] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="c2e074d6e0a247772e5cffc0590e619e8d00680537fd8d9206160da872f20217" HandleID="k8s-pod-network.c2e074d6e0a247772e5cffc0590e619e8d00680537fd8d9206160da872f20217" Workload="ci--4081.3.6--n--32f19bad4d-k8s-coredns--674b8bbfcf--664tp-eth0" Nov 8 00:07:14.798612 containerd[1723]: 2025-11-08 00:07:14.783 [INFO][5773] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:07:14.798612 containerd[1723]: 2025-11-08 00:07:14.783 [INFO][5773] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 8 00:07:14.798612 containerd[1723]: 2025-11-08 00:07:14.793 [WARNING][5773] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="c2e074d6e0a247772e5cffc0590e619e8d00680537fd8d9206160da872f20217" HandleID="k8s-pod-network.c2e074d6e0a247772e5cffc0590e619e8d00680537fd8d9206160da872f20217" Workload="ci--4081.3.6--n--32f19bad4d-k8s-coredns--674b8bbfcf--664tp-eth0" Nov 8 00:07:14.798612 containerd[1723]: 2025-11-08 00:07:14.793 [INFO][5773] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="c2e074d6e0a247772e5cffc0590e619e8d00680537fd8d9206160da872f20217" HandleID="k8s-pod-network.c2e074d6e0a247772e5cffc0590e619e8d00680537fd8d9206160da872f20217" Workload="ci--4081.3.6--n--32f19bad4d-k8s-coredns--674b8bbfcf--664tp-eth0" Nov 8 00:07:14.798612 containerd[1723]: 2025-11-08 00:07:14.795 [INFO][5773] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:07:14.798612 containerd[1723]: 2025-11-08 00:07:14.796 [INFO][5766] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c2e074d6e0a247772e5cffc0590e619e8d00680537fd8d9206160da872f20217" Nov 8 00:07:14.799354 containerd[1723]: time="2025-11-08T00:07:14.798653517Z" level=info msg="TearDown network for sandbox \"c2e074d6e0a247772e5cffc0590e619e8d00680537fd8d9206160da872f20217\" successfully" Nov 8 00:07:14.799354 containerd[1723]: time="2025-11-08T00:07:14.798680197Z" level=info msg="StopPodSandbox for \"c2e074d6e0a247772e5cffc0590e619e8d00680537fd8d9206160da872f20217\" returns successfully" Nov 8 00:07:14.799354 containerd[1723]: time="2025-11-08T00:07:14.799184917Z" level=info msg="RemovePodSandbox for \"c2e074d6e0a247772e5cffc0590e619e8d00680537fd8d9206160da872f20217\"" Nov 8 00:07:14.799354 containerd[1723]: time="2025-11-08T00:07:14.799209717Z" level=info msg="Forcibly stopping sandbox \"c2e074d6e0a247772e5cffc0590e619e8d00680537fd8d9206160da872f20217\"" Nov 8 00:07:14.812857 kubelet[3207]: E1108 00:07:14.812481 3207 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6f59845cf4-td2zr" podUID="b8a248aa-7909-4555-958d-ace4846b2c48" Nov 8 00:07:14.968198 containerd[1723]: 2025-11-08 00:07:14.924 [WARNING][5787] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c2e074d6e0a247772e5cffc0590e619e8d00680537fd8d9206160da872f20217" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--32f19bad4d-k8s-coredns--674b8bbfcf--664tp-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"b9108a90-054b-4178-a637-f4f5bb2138bc", ResourceVersion:"1007", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 5, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-32f19bad4d", ContainerID:"4b3b335c96cead89f7d8f52ba92ae416ca3e67cd3b0459166bf7cbe7b1714f8f", Pod:"coredns-674b8bbfcf-664tp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.59.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali28dacc6b791", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:07:14.968198 containerd[1723]: 2025-11-08 00:07:14.925 [INFO][5787] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c2e074d6e0a247772e5cffc0590e619e8d00680537fd8d9206160da872f20217" Nov 8 00:07:14.968198 containerd[1723]: 2025-11-08 00:07:14.925 [INFO][5787] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c2e074d6e0a247772e5cffc0590e619e8d00680537fd8d9206160da872f20217" iface="eth0" netns="" Nov 8 00:07:14.968198 containerd[1723]: 2025-11-08 00:07:14.925 [INFO][5787] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c2e074d6e0a247772e5cffc0590e619e8d00680537fd8d9206160da872f20217" Nov 8 00:07:14.968198 containerd[1723]: 2025-11-08 00:07:14.925 [INFO][5787] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c2e074d6e0a247772e5cffc0590e619e8d00680537fd8d9206160da872f20217" Nov 8 00:07:14.968198 containerd[1723]: 2025-11-08 00:07:14.949 [INFO][5794] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="c2e074d6e0a247772e5cffc0590e619e8d00680537fd8d9206160da872f20217" HandleID="k8s-pod-network.c2e074d6e0a247772e5cffc0590e619e8d00680537fd8d9206160da872f20217" Workload="ci--4081.3.6--n--32f19bad4d-k8s-coredns--674b8bbfcf--664tp-eth0" Nov 8 00:07:14.968198 containerd[1723]: 2025-11-08 00:07:14.949 [INFO][5794] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:07:14.968198 containerd[1723]: 2025-11-08 00:07:14.950 [INFO][5794] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 8 00:07:14.968198 containerd[1723]: 2025-11-08 00:07:14.962 [WARNING][5794] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="c2e074d6e0a247772e5cffc0590e619e8d00680537fd8d9206160da872f20217" HandleID="k8s-pod-network.c2e074d6e0a247772e5cffc0590e619e8d00680537fd8d9206160da872f20217" Workload="ci--4081.3.6--n--32f19bad4d-k8s-coredns--674b8bbfcf--664tp-eth0" Nov 8 00:07:14.968198 containerd[1723]: 2025-11-08 00:07:14.962 [INFO][5794] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="c2e074d6e0a247772e5cffc0590e619e8d00680537fd8d9206160da872f20217" HandleID="k8s-pod-network.c2e074d6e0a247772e5cffc0590e619e8d00680537fd8d9206160da872f20217" Workload="ci--4081.3.6--n--32f19bad4d-k8s-coredns--674b8bbfcf--664tp-eth0" Nov 8 00:07:14.968198 containerd[1723]: 2025-11-08 00:07:14.964 [INFO][5794] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:07:14.968198 containerd[1723]: 2025-11-08 00:07:14.965 [INFO][5787] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c2e074d6e0a247772e5cffc0590e619e8d00680537fd8d9206160da872f20217" Nov 8 00:07:14.969630 containerd[1723]: time="2025-11-08T00:07:14.968245725Z" level=info msg="TearDown network for sandbox \"c2e074d6e0a247772e5cffc0590e619e8d00680537fd8d9206160da872f20217\" successfully" Nov 8 00:07:14.977696 containerd[1723]: time="2025-11-08T00:07:14.977635725Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c2e074d6e0a247772e5cffc0590e619e8d00680537fd8d9206160da872f20217\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 8 00:07:14.977852 containerd[1723]: time="2025-11-08T00:07:14.977715485Z" level=info msg="RemovePodSandbox \"c2e074d6e0a247772e5cffc0590e619e8d00680537fd8d9206160da872f20217\" returns successfully" Nov 8 00:07:14.978641 containerd[1723]: time="2025-11-08T00:07:14.978311885Z" level=info msg="StopPodSandbox for \"2567bef3bedff9c43797aa7be2722721f1102d2b31162a3c19fe6e59a776837e\"" Nov 8 00:07:15.061381 containerd[1723]: 2025-11-08 00:07:15.015 [WARNING][5808] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2567bef3bedff9c43797aa7be2722721f1102d2b31162a3c19fe6e59a776837e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--32f19bad4d-k8s-csi--node--driver--tffst-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b8e7681d-343c-40ff-9257-cd6bf2941900", ResourceVersion:"1243", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 5, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-32f19bad4d", ContainerID:"9f8560a91db30cbe631f3b9dc2bf11b8db203e01c05d2a0c18fc1aec13bb3453", Pod:"csi-node-driver-tffst", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.59.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali7825efce6d4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:07:15.061381 containerd[1723]: 2025-11-08 00:07:15.015 [INFO][5808] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2567bef3bedff9c43797aa7be2722721f1102d2b31162a3c19fe6e59a776837e" Nov 8 00:07:15.061381 containerd[1723]: 2025-11-08 00:07:15.015 [INFO][5808] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2567bef3bedff9c43797aa7be2722721f1102d2b31162a3c19fe6e59a776837e" iface="eth0" netns="" Nov 8 00:07:15.061381 containerd[1723]: 2025-11-08 00:07:15.015 [INFO][5808] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2567bef3bedff9c43797aa7be2722721f1102d2b31162a3c19fe6e59a776837e" Nov 8 00:07:15.061381 containerd[1723]: 2025-11-08 00:07:15.016 [INFO][5808] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2567bef3bedff9c43797aa7be2722721f1102d2b31162a3c19fe6e59a776837e" Nov 8 00:07:15.061381 containerd[1723]: 2025-11-08 00:07:15.037 [INFO][5815] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="2567bef3bedff9c43797aa7be2722721f1102d2b31162a3c19fe6e59a776837e" HandleID="k8s-pod-network.2567bef3bedff9c43797aa7be2722721f1102d2b31162a3c19fe6e59a776837e" Workload="ci--4081.3.6--n--32f19bad4d-k8s-csi--node--driver--tffst-eth0" Nov 8 00:07:15.061381 containerd[1723]: 2025-11-08 00:07:15.037 [INFO][5815] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:07:15.061381 containerd[1723]: 2025-11-08 00:07:15.037 [INFO][5815] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:07:15.061381 containerd[1723]: 2025-11-08 00:07:15.052 [WARNING][5815] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2567bef3bedff9c43797aa7be2722721f1102d2b31162a3c19fe6e59a776837e" HandleID="k8s-pod-network.2567bef3bedff9c43797aa7be2722721f1102d2b31162a3c19fe6e59a776837e" Workload="ci--4081.3.6--n--32f19bad4d-k8s-csi--node--driver--tffst-eth0" Nov 8 00:07:15.061381 containerd[1723]: 2025-11-08 00:07:15.052 [INFO][5815] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="2567bef3bedff9c43797aa7be2722721f1102d2b31162a3c19fe6e59a776837e" HandleID="k8s-pod-network.2567bef3bedff9c43797aa7be2722721f1102d2b31162a3c19fe6e59a776837e" Workload="ci--4081.3.6--n--32f19bad4d-k8s-csi--node--driver--tffst-eth0" Nov 8 00:07:15.061381 containerd[1723]: 2025-11-08 00:07:15.054 [INFO][5815] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:07:15.061381 containerd[1723]: 2025-11-08 00:07:15.057 [INFO][5808] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2567bef3bedff9c43797aa7be2722721f1102d2b31162a3c19fe6e59a776837e" Nov 8 00:07:15.064322 containerd[1723]: time="2025-11-08T00:07:15.062431009Z" level=info msg="TearDown network for sandbox \"2567bef3bedff9c43797aa7be2722721f1102d2b31162a3c19fe6e59a776837e\" successfully" Nov 8 00:07:15.064322 containerd[1723]: time="2025-11-08T00:07:15.063806129Z" level=info msg="StopPodSandbox for \"2567bef3bedff9c43797aa7be2722721f1102d2b31162a3c19fe6e59a776837e\" returns successfully" Nov 8 00:07:15.064322 containerd[1723]: time="2025-11-08T00:07:15.064283169Z" level=info msg="RemovePodSandbox for \"2567bef3bedff9c43797aa7be2722721f1102d2b31162a3c19fe6e59a776837e\"" Nov 8 00:07:15.064322 containerd[1723]: time="2025-11-08T00:07:15.064316809Z" level=info msg="Forcibly stopping sandbox \"2567bef3bedff9c43797aa7be2722721f1102d2b31162a3c19fe6e59a776837e\"" Nov 8 00:07:15.164551 containerd[1723]: 2025-11-08 00:07:15.114 [WARNING][5830] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2567bef3bedff9c43797aa7be2722721f1102d2b31162a3c19fe6e59a776837e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--32f19bad4d-k8s-csi--node--driver--tffst-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b8e7681d-343c-40ff-9257-cd6bf2941900", ResourceVersion:"1243", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 5, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-32f19bad4d", ContainerID:"9f8560a91db30cbe631f3b9dc2bf11b8db203e01c05d2a0c18fc1aec13bb3453", Pod:"csi-node-driver-tffst", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.59.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali7825efce6d4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:07:15.164551 containerd[1723]: 2025-11-08 00:07:15.115 [INFO][5830] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2567bef3bedff9c43797aa7be2722721f1102d2b31162a3c19fe6e59a776837e" Nov 8 00:07:15.164551 containerd[1723]: 2025-11-08 00:07:15.115 [INFO][5830] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2567bef3bedff9c43797aa7be2722721f1102d2b31162a3c19fe6e59a776837e" iface="eth0" netns="" Nov 8 00:07:15.164551 containerd[1723]: 2025-11-08 00:07:15.115 [INFO][5830] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2567bef3bedff9c43797aa7be2722721f1102d2b31162a3c19fe6e59a776837e" Nov 8 00:07:15.164551 containerd[1723]: 2025-11-08 00:07:15.115 [INFO][5830] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2567bef3bedff9c43797aa7be2722721f1102d2b31162a3c19fe6e59a776837e" Nov 8 00:07:15.164551 containerd[1723]: 2025-11-08 00:07:15.142 [INFO][5837] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="2567bef3bedff9c43797aa7be2722721f1102d2b31162a3c19fe6e59a776837e" HandleID="k8s-pod-network.2567bef3bedff9c43797aa7be2722721f1102d2b31162a3c19fe6e59a776837e" Workload="ci--4081.3.6--n--32f19bad4d-k8s-csi--node--driver--tffst-eth0" Nov 8 00:07:15.164551 containerd[1723]: 2025-11-08 00:07:15.143 [INFO][5837] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:07:15.164551 containerd[1723]: 2025-11-08 00:07:15.143 [INFO][5837] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:07:15.164551 containerd[1723]: 2025-11-08 00:07:15.156 [WARNING][5837] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2567bef3bedff9c43797aa7be2722721f1102d2b31162a3c19fe6e59a776837e" HandleID="k8s-pod-network.2567bef3bedff9c43797aa7be2722721f1102d2b31162a3c19fe6e59a776837e" Workload="ci--4081.3.6--n--32f19bad4d-k8s-csi--node--driver--tffst-eth0" Nov 8 00:07:15.164551 containerd[1723]: 2025-11-08 00:07:15.156 [INFO][5837] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="2567bef3bedff9c43797aa7be2722721f1102d2b31162a3c19fe6e59a776837e" HandleID="k8s-pod-network.2567bef3bedff9c43797aa7be2722721f1102d2b31162a3c19fe6e59a776837e" Workload="ci--4081.3.6--n--32f19bad4d-k8s-csi--node--driver--tffst-eth0" Nov 8 00:07:15.164551 containerd[1723]: 2025-11-08 00:07:15.161 [INFO][5837] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:07:15.164551 containerd[1723]: 2025-11-08 00:07:15.162 [INFO][5830] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2567bef3bedff9c43797aa7be2722721f1102d2b31162a3c19fe6e59a776837e" Nov 8 00:07:15.166793 containerd[1723]: time="2025-11-08T00:07:15.165862653Z" level=info msg="TearDown network for sandbox \"2567bef3bedff9c43797aa7be2722721f1102d2b31162a3c19fe6e59a776837e\" successfully" Nov 8 00:07:15.179649 containerd[1723]: time="2025-11-08T00:07:15.179410134Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2567bef3bedff9c43797aa7be2722721f1102d2b31162a3c19fe6e59a776837e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 8 00:07:15.179649 containerd[1723]: time="2025-11-08T00:07:15.179482214Z" level=info msg="RemovePodSandbox \"2567bef3bedff9c43797aa7be2722721f1102d2b31162a3c19fe6e59a776837e\" returns successfully" Nov 8 00:07:15.180322 containerd[1723]: time="2025-11-08T00:07:15.179934774Z" level=info msg="StopPodSandbox for \"177bbe92d862c15f896d1c83e995b093e17332963e6d1245c2107ad9d0671930\"" Nov 8 00:07:15.287624 containerd[1723]: 2025-11-08 00:07:15.230 [WARNING][5851] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="177bbe92d862c15f896d1c83e995b093e17332963e6d1245c2107ad9d0671930" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--32f19bad4d-k8s-calico--apiserver--59cf6d9fcc--xrwqb-eth0", GenerateName:"calico-apiserver-59cf6d9fcc-", Namespace:"calico-apiserver", SelfLink:"", UID:"b965bf6a-7220-4fbc-b608-85c677cf8e39", ResourceVersion:"1266", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 5, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"59cf6d9fcc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-32f19bad4d", ContainerID:"37568d6cb8f9cece454dc02a45c8cff46fe630c7d6bc65b931d90f091534c4f0", Pod:"calico-apiserver-59cf6d9fcc-xrwqb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.59.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9f37fbcae2e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:07:15.287624 containerd[1723]: 2025-11-08 00:07:15.230 [INFO][5851] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="177bbe92d862c15f896d1c83e995b093e17332963e6d1245c2107ad9d0671930" Nov 8 00:07:15.287624 containerd[1723]: 2025-11-08 00:07:15.230 [INFO][5851] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="177bbe92d862c15f896d1c83e995b093e17332963e6d1245c2107ad9d0671930" iface="eth0" netns="" Nov 8 00:07:15.287624 containerd[1723]: 2025-11-08 00:07:15.230 [INFO][5851] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="177bbe92d862c15f896d1c83e995b093e17332963e6d1245c2107ad9d0671930" Nov 8 00:07:15.287624 containerd[1723]: 2025-11-08 00:07:15.230 [INFO][5851] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="177bbe92d862c15f896d1c83e995b093e17332963e6d1245c2107ad9d0671930" Nov 8 00:07:15.287624 containerd[1723]: 2025-11-08 00:07:15.265 [INFO][5859] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="177bbe92d862c15f896d1c83e995b093e17332963e6d1245c2107ad9d0671930" HandleID="k8s-pod-network.177bbe92d862c15f896d1c83e995b093e17332963e6d1245c2107ad9d0671930" Workload="ci--4081.3.6--n--32f19bad4d-k8s-calico--apiserver--59cf6d9fcc--xrwqb-eth0" Nov 8 00:07:15.287624 containerd[1723]: 2025-11-08 00:07:15.268 [INFO][5859] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:07:15.287624 containerd[1723]: 2025-11-08 00:07:15.268 [INFO][5859] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:07:15.287624 containerd[1723]: 2025-11-08 00:07:15.280 [WARNING][5859] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="177bbe92d862c15f896d1c83e995b093e17332963e6d1245c2107ad9d0671930" HandleID="k8s-pod-network.177bbe92d862c15f896d1c83e995b093e17332963e6d1245c2107ad9d0671930" Workload="ci--4081.3.6--n--32f19bad4d-k8s-calico--apiserver--59cf6d9fcc--xrwqb-eth0" Nov 8 00:07:15.287624 containerd[1723]: 2025-11-08 00:07:15.280 [INFO][5859] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="177bbe92d862c15f896d1c83e995b093e17332963e6d1245c2107ad9d0671930" HandleID="k8s-pod-network.177bbe92d862c15f896d1c83e995b093e17332963e6d1245c2107ad9d0671930" Workload="ci--4081.3.6--n--32f19bad4d-k8s-calico--apiserver--59cf6d9fcc--xrwqb-eth0" Nov 8 00:07:15.287624 containerd[1723]: 2025-11-08 00:07:15.282 [INFO][5859] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:07:15.287624 containerd[1723]: 2025-11-08 00:07:15.284 [INFO][5851] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="177bbe92d862c15f896d1c83e995b093e17332963e6d1245c2107ad9d0671930" Nov 8 00:07:15.287624 containerd[1723]: time="2025-11-08T00:07:15.287529898Z" level=info msg="TearDown network for sandbox \"177bbe92d862c15f896d1c83e995b093e17332963e6d1245c2107ad9d0671930\" successfully" Nov 8 00:07:15.287624 containerd[1723]: time="2025-11-08T00:07:15.287554858Z" level=info msg="StopPodSandbox for \"177bbe92d862c15f896d1c83e995b093e17332963e6d1245c2107ad9d0671930\" returns successfully" Nov 8 00:07:15.290975 containerd[1723]: time="2025-11-08T00:07:15.290931299Z" level=info msg="RemovePodSandbox for \"177bbe92d862c15f896d1c83e995b093e17332963e6d1245c2107ad9d0671930\"" Nov 8 00:07:15.290975 containerd[1723]: time="2025-11-08T00:07:15.290970259Z" level=info msg="Forcibly stopping sandbox \"177bbe92d862c15f896d1c83e995b093e17332963e6d1245c2107ad9d0671930\"" Nov 8 00:07:15.387682 containerd[1723]: 2025-11-08 00:07:15.345 [WARNING][5873] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="177bbe92d862c15f896d1c83e995b093e17332963e6d1245c2107ad9d0671930" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--32f19bad4d-k8s-calico--apiserver--59cf6d9fcc--xrwqb-eth0", GenerateName:"calico-apiserver-59cf6d9fcc-", Namespace:"calico-apiserver", SelfLink:"", UID:"b965bf6a-7220-4fbc-b608-85c677cf8e39", ResourceVersion:"1266", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 5, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"59cf6d9fcc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-32f19bad4d", ContainerID:"37568d6cb8f9cece454dc02a45c8cff46fe630c7d6bc65b931d90f091534c4f0", Pod:"calico-apiserver-59cf6d9fcc-xrwqb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.59.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9f37fbcae2e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:07:15.387682 containerd[1723]: 2025-11-08 00:07:15.346 [INFO][5873] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="177bbe92d862c15f896d1c83e995b093e17332963e6d1245c2107ad9d0671930" Nov 8 00:07:15.387682 containerd[1723]: 2025-11-08 00:07:15.346 [INFO][5873] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="177bbe92d862c15f896d1c83e995b093e17332963e6d1245c2107ad9d0671930" iface="eth0" netns="" Nov 8 00:07:15.387682 containerd[1723]: 2025-11-08 00:07:15.346 [INFO][5873] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="177bbe92d862c15f896d1c83e995b093e17332963e6d1245c2107ad9d0671930" Nov 8 00:07:15.387682 containerd[1723]: 2025-11-08 00:07:15.346 [INFO][5873] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="177bbe92d862c15f896d1c83e995b093e17332963e6d1245c2107ad9d0671930" Nov 8 00:07:15.387682 containerd[1723]: 2025-11-08 00:07:15.368 [INFO][5880] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="177bbe92d862c15f896d1c83e995b093e17332963e6d1245c2107ad9d0671930" HandleID="k8s-pod-network.177bbe92d862c15f896d1c83e995b093e17332963e6d1245c2107ad9d0671930" Workload="ci--4081.3.6--n--32f19bad4d-k8s-calico--apiserver--59cf6d9fcc--xrwqb-eth0" Nov 8 00:07:15.387682 containerd[1723]: 2025-11-08 00:07:15.368 [INFO][5880] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:07:15.387682 containerd[1723]: 2025-11-08 00:07:15.368 [INFO][5880] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:07:15.387682 containerd[1723]: 2025-11-08 00:07:15.377 [WARNING][5880] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="177bbe92d862c15f896d1c83e995b093e17332963e6d1245c2107ad9d0671930" HandleID="k8s-pod-network.177bbe92d862c15f896d1c83e995b093e17332963e6d1245c2107ad9d0671930" Workload="ci--4081.3.6--n--32f19bad4d-k8s-calico--apiserver--59cf6d9fcc--xrwqb-eth0" Nov 8 00:07:15.387682 containerd[1723]: 2025-11-08 00:07:15.377 [INFO][5880] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="177bbe92d862c15f896d1c83e995b093e17332963e6d1245c2107ad9d0671930" HandleID="k8s-pod-network.177bbe92d862c15f896d1c83e995b093e17332963e6d1245c2107ad9d0671930" Workload="ci--4081.3.6--n--32f19bad4d-k8s-calico--apiserver--59cf6d9fcc--xrwqb-eth0" Nov 8 00:07:15.387682 containerd[1723]: 2025-11-08 00:07:15.382 [INFO][5880] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:07:15.387682 containerd[1723]: 2025-11-08 00:07:15.384 [INFO][5873] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="177bbe92d862c15f896d1c83e995b093e17332963e6d1245c2107ad9d0671930" Nov 8 00:07:15.388243 containerd[1723]: time="2025-11-08T00:07:15.387739383Z" level=info msg="TearDown network for sandbox \"177bbe92d862c15f896d1c83e995b093e17332963e6d1245c2107ad9d0671930\" successfully" Nov 8 00:07:15.405633 containerd[1723]: time="2025-11-08T00:07:15.405577023Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"177bbe92d862c15f896d1c83e995b093e17332963e6d1245c2107ad9d0671930\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 8 00:07:15.405878 containerd[1723]: time="2025-11-08T00:07:15.405687623Z" level=info msg="RemovePodSandbox \"177bbe92d862c15f896d1c83e995b093e17332963e6d1245c2107ad9d0671930\" returns successfully" Nov 8 00:07:15.406380 containerd[1723]: time="2025-11-08T00:07:15.406349984Z" level=info msg="StopPodSandbox for \"b39d192468291b9ca34277c3e7347ca7e5027ff755a23f10b01fa0772fc81545\"" Nov 8 00:07:15.501805 containerd[1723]: 2025-11-08 00:07:15.453 [WARNING][5894] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b39d192468291b9ca34277c3e7347ca7e5027ff755a23f10b01fa0772fc81545" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--32f19bad4d-k8s-calico--apiserver--59cf6d9fcc--nmv8r-eth0", GenerateName:"calico-apiserver-59cf6d9fcc-", Namespace:"calico-apiserver", SelfLink:"", UID:"86741758-ba30-4cec-a95c-6af79e2546fe", ResourceVersion:"1261", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 5, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"59cf6d9fcc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-32f19bad4d", ContainerID:"549e8fbc4f2c868c7c56d28fa71c593b1e8e754dde14a42f3da21371e9a40afc", Pod:"calico-apiserver-59cf6d9fcc-nmv8r", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.59.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calidaee1f8ee43", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:07:15.501805 containerd[1723]: 2025-11-08 00:07:15.454 [INFO][5894] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b39d192468291b9ca34277c3e7347ca7e5027ff755a23f10b01fa0772fc81545" Nov 8 00:07:15.501805 containerd[1723]: 2025-11-08 00:07:15.454 [INFO][5894] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b39d192468291b9ca34277c3e7347ca7e5027ff755a23f10b01fa0772fc81545" iface="eth0" netns="" Nov 8 00:07:15.501805 containerd[1723]: 2025-11-08 00:07:15.454 [INFO][5894] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b39d192468291b9ca34277c3e7347ca7e5027ff755a23f10b01fa0772fc81545" Nov 8 00:07:15.501805 containerd[1723]: 2025-11-08 00:07:15.454 [INFO][5894] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b39d192468291b9ca34277c3e7347ca7e5027ff755a23f10b01fa0772fc81545" Nov 8 00:07:15.501805 containerd[1723]: 2025-11-08 00:07:15.481 [INFO][5901] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="b39d192468291b9ca34277c3e7347ca7e5027ff755a23f10b01fa0772fc81545" HandleID="k8s-pod-network.b39d192468291b9ca34277c3e7347ca7e5027ff755a23f10b01fa0772fc81545" Workload="ci--4081.3.6--n--32f19bad4d-k8s-calico--apiserver--59cf6d9fcc--nmv8r-eth0" Nov 8 00:07:15.501805 containerd[1723]: 2025-11-08 00:07:15.481 [INFO][5901] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:07:15.501805 containerd[1723]: 2025-11-08 00:07:15.481 [INFO][5901] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:07:15.501805 containerd[1723]: 2025-11-08 00:07:15.495 [WARNING][5901] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b39d192468291b9ca34277c3e7347ca7e5027ff755a23f10b01fa0772fc81545" HandleID="k8s-pod-network.b39d192468291b9ca34277c3e7347ca7e5027ff755a23f10b01fa0772fc81545" Workload="ci--4081.3.6--n--32f19bad4d-k8s-calico--apiserver--59cf6d9fcc--nmv8r-eth0" Nov 8 00:07:15.501805 containerd[1723]: 2025-11-08 00:07:15.495 [INFO][5901] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="b39d192468291b9ca34277c3e7347ca7e5027ff755a23f10b01fa0772fc81545" HandleID="k8s-pod-network.b39d192468291b9ca34277c3e7347ca7e5027ff755a23f10b01fa0772fc81545" Workload="ci--4081.3.6--n--32f19bad4d-k8s-calico--apiserver--59cf6d9fcc--nmv8r-eth0" Nov 8 00:07:15.501805 containerd[1723]: 2025-11-08 00:07:15.497 [INFO][5901] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:07:15.501805 containerd[1723]: 2025-11-08 00:07:15.500 [INFO][5894] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b39d192468291b9ca34277c3e7347ca7e5027ff755a23f10b01fa0772fc81545" Nov 8 00:07:15.505412 containerd[1723]: time="2025-11-08T00:07:15.501830118Z" level=info msg="TearDown network for sandbox \"b39d192468291b9ca34277c3e7347ca7e5027ff755a23f10b01fa0772fc81545\" successfully" Nov 8 00:07:15.505412 containerd[1723]: time="2025-11-08T00:07:15.501855638Z" level=info msg="StopPodSandbox for \"b39d192468291b9ca34277c3e7347ca7e5027ff755a23f10b01fa0772fc81545\" returns successfully" Nov 8 00:07:15.505412 containerd[1723]: time="2025-11-08T00:07:15.504140719Z" level=info msg="RemovePodSandbox for \"b39d192468291b9ca34277c3e7347ca7e5027ff755a23f10b01fa0772fc81545\"" Nov 8 00:07:15.505412 containerd[1723]: time="2025-11-08T00:07:15.504174839Z" level=info msg="Forcibly stopping sandbox \"b39d192468291b9ca34277c3e7347ca7e5027ff755a23f10b01fa0772fc81545\"" Nov 8 00:07:15.594655 containerd[1723]: 2025-11-08 00:07:15.554 [WARNING][5915] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b39d192468291b9ca34277c3e7347ca7e5027ff755a23f10b01fa0772fc81545" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--32f19bad4d-k8s-calico--apiserver--59cf6d9fcc--nmv8r-eth0", GenerateName:"calico-apiserver-59cf6d9fcc-", Namespace:"calico-apiserver", SelfLink:"", UID:"86741758-ba30-4cec-a95c-6af79e2546fe", ResourceVersion:"1261", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 5, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"59cf6d9fcc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-32f19bad4d", ContainerID:"549e8fbc4f2c868c7c56d28fa71c593b1e8e754dde14a42f3da21371e9a40afc", Pod:"calico-apiserver-59cf6d9fcc-nmv8r", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.59.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calidaee1f8ee43", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:07:15.594655 containerd[1723]: 2025-11-08 00:07:15.554 [INFO][5915] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b39d192468291b9ca34277c3e7347ca7e5027ff755a23f10b01fa0772fc81545" Nov 8 00:07:15.594655 containerd[1723]: 2025-11-08 00:07:15.554 [INFO][5915] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b39d192468291b9ca34277c3e7347ca7e5027ff755a23f10b01fa0772fc81545" iface="eth0" netns="" Nov 8 00:07:15.594655 containerd[1723]: 2025-11-08 00:07:15.554 [INFO][5915] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b39d192468291b9ca34277c3e7347ca7e5027ff755a23f10b01fa0772fc81545" Nov 8 00:07:15.594655 containerd[1723]: 2025-11-08 00:07:15.554 [INFO][5915] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b39d192468291b9ca34277c3e7347ca7e5027ff755a23f10b01fa0772fc81545" Nov 8 00:07:15.594655 containerd[1723]: 2025-11-08 00:07:15.579 [INFO][5923] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="b39d192468291b9ca34277c3e7347ca7e5027ff755a23f10b01fa0772fc81545" HandleID="k8s-pod-network.b39d192468291b9ca34277c3e7347ca7e5027ff755a23f10b01fa0772fc81545" Workload="ci--4081.3.6--n--32f19bad4d-k8s-calico--apiserver--59cf6d9fcc--nmv8r-eth0" Nov 8 00:07:15.594655 containerd[1723]: 2025-11-08 00:07:15.579 [INFO][5923] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:07:15.594655 containerd[1723]: 2025-11-08 00:07:15.579 [INFO][5923] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:07:15.594655 containerd[1723]: 2025-11-08 00:07:15.588 [WARNING][5923] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b39d192468291b9ca34277c3e7347ca7e5027ff755a23f10b01fa0772fc81545" HandleID="k8s-pod-network.b39d192468291b9ca34277c3e7347ca7e5027ff755a23f10b01fa0772fc81545" Workload="ci--4081.3.6--n--32f19bad4d-k8s-calico--apiserver--59cf6d9fcc--nmv8r-eth0" Nov 8 00:07:15.594655 containerd[1723]: 2025-11-08 00:07:15.588 [INFO][5923] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="b39d192468291b9ca34277c3e7347ca7e5027ff755a23f10b01fa0772fc81545" HandleID="k8s-pod-network.b39d192468291b9ca34277c3e7347ca7e5027ff755a23f10b01fa0772fc81545" Workload="ci--4081.3.6--n--32f19bad4d-k8s-calico--apiserver--59cf6d9fcc--nmv8r-eth0" Nov 8 00:07:15.594655 containerd[1723]: 2025-11-08 00:07:15.590 [INFO][5923] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:07:15.594655 containerd[1723]: 2025-11-08 00:07:15.592 [INFO][5915] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b39d192468291b9ca34277c3e7347ca7e5027ff755a23f10b01fa0772fc81545" Nov 8 00:07:15.594655 containerd[1723]: time="2025-11-08T00:07:15.593859238Z" level=info msg="TearDown network for sandbox \"b39d192468291b9ca34277c3e7347ca7e5027ff755a23f10b01fa0772fc81545\" successfully" Nov 8 00:07:15.601337 containerd[1723]: time="2025-11-08T00:07:15.601263601Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b39d192468291b9ca34277c3e7347ca7e5027ff755a23f10b01fa0772fc81545\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 8 00:07:15.601337 containerd[1723]: time="2025-11-08T00:07:15.601332321Z" level=info msg="RemovePodSandbox \"b39d192468291b9ca34277c3e7347ca7e5027ff755a23f10b01fa0772fc81545\" returns successfully" Nov 8 00:07:19.808677 kubelet[3207]: E1108 00:07:19.808426 3207 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-55f5cd74bd-vrdmh" podUID="ffd8ec67-9184-4d7a-a1fa-e51a9fa4d9cf" Nov 8 00:07:19.812826 kubelet[3207]: E1108 00:07:19.812690 3207 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-tffst" podUID="b8e7681d-343c-40ff-9257-cd6bf2941900" Nov 8 00:07:25.809476 kubelet[3207]: E1108 00:07:25.808725 3207 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-59cf6d9fcc-xrwqb" podUID="b965bf6a-7220-4fbc-b608-85c677cf8e39" Nov 8 00:07:25.812798 kubelet[3207]: E1108 00:07:25.811349 3207 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6f59845cf4-td2zr" podUID="b8a248aa-7909-4555-958d-ace4846b2c48" Nov 8 00:07:25.813429 kubelet[3207]: E1108 00:07:25.813398 3207 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-59cf6d9fcc-nmv8r" podUID="86741758-ba30-4cec-a95c-6af79e2546fe" Nov 8 00:07:26.808368 kubelet[3207]: E1108 00:07:26.808020 3207 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-69dnl" podUID="a62546c2-2f66-4002-b0c3-d54109c52a13" Nov 8 00:07:30.809612 kubelet[3207]: E1108 00:07:30.809561 3207 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-55f5cd74bd-vrdmh" podUID="ffd8ec67-9184-4d7a-a1fa-e51a9fa4d9cf" 
Nov 8 00:07:33.813817 kubelet[3207]: E1108 00:07:33.812173 3207 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-tffst" podUID="b8e7681d-343c-40ff-9257-cd6bf2941900" Nov 8 00:07:36.807856 kubelet[3207]: E1108 00:07:36.807811 3207 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-59cf6d9fcc-nmv8r" podUID="86741758-ba30-4cec-a95c-6af79e2546fe" Nov 8 00:07:39.812077 kubelet[3207]: E1108 00:07:39.810902 3207 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-59cf6d9fcc-xrwqb" podUID="b965bf6a-7220-4fbc-b608-85c677cf8e39" Nov 8 00:07:39.812660 kubelet[3207]: E1108 00:07:39.812435 3207 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6f59845cf4-td2zr" podUID="b8a248aa-7909-4555-958d-ace4846b2c48" Nov 8 00:07:41.809393 kubelet[3207]: E1108 00:07:41.809350 3207 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-55f5cd74bd-vrdmh" podUID="ffd8ec67-9184-4d7a-a1fa-e51a9fa4d9cf" Nov 8 00:07:41.811664 containerd[1723]: time="2025-11-08T00:07:41.811429012Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 8 00:07:42.083515 containerd[1723]: time="2025-11-08T00:07:42.083330055Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:07:42.087447 containerd[1723]: time="2025-11-08T00:07:42.087395775Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 8 00:07:42.087529 containerd[1723]: time="2025-11-08T00:07:42.087510495Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 8 00:07:42.087695 kubelet[3207]: E1108 00:07:42.087651 3207 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 8 00:07:42.088331 kubelet[3207]: E1108 00:07:42.087805 3207 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 8 00:07:42.088331 kubelet[3207]: E1108 00:07:42.087960 3207 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-w5t89,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-69dnl_calico-system(a62546c2-2f66-4002-b0c3-d54109c52a13): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 8 00:07:42.089548 kubelet[3207]: E1108 00:07:42.089109 3207 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-69dnl" podUID="a62546c2-2f66-4002-b0c3-d54109c52a13" Nov 8 00:07:45.811330 containerd[1723]: 
time="2025-11-08T00:07:45.811083891Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 8 00:07:46.127967 containerd[1723]: time="2025-11-08T00:07:46.127839054Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:07:46.133762 containerd[1723]: time="2025-11-08T00:07:46.132437574Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 8 00:07:46.133762 containerd[1723]: time="2025-11-08T00:07:46.132552414Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 8 00:07:46.133920 kubelet[3207]: E1108 00:07:46.132736 3207 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 8 00:07:46.137499 kubelet[3207]: E1108 00:07:46.132814 3207 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 8 00:07:46.137673 kubelet[3207]: E1108 00:07:46.137630 3207 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m2vpd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
csi-node-driver-tffst_calico-system(b8e7681d-343c-40ff-9257-cd6bf2941900): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 8 00:07:46.140170 containerd[1723]: time="2025-11-08T00:07:46.139960334Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 8 00:07:46.431514 containerd[1723]: time="2025-11-08T00:07:46.431456377Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:07:46.434529 containerd[1723]: time="2025-11-08T00:07:46.434465417Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 8 00:07:46.434661 containerd[1723]: time="2025-11-08T00:07:46.434510217Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 8 00:07:46.435436 kubelet[3207]: E1108 00:07:46.435190 3207 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 8 00:07:46.435436 kubelet[3207]: E1108 00:07:46.435245 3207 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 8 00:07:46.435436 kubelet[3207]: E1108 00:07:46.435372 3207 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m2vpd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-tffst_calico-system(b8e7681d-343c-40ff-9257-cd6bf2941900): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 8 00:07:46.436726 kubelet[3207]: E1108 00:07:46.436678 3207 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-tffst" podUID="b8e7681d-343c-40ff-9257-cd6bf2941900" Nov 8 00:07:50.809640 containerd[1723]: time="2025-11-08T00:07:50.809570770Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 8 00:07:51.079677 containerd[1723]: time="2025-11-08T00:07:51.079315405Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:07:51.083300 containerd[1723]: time="2025-11-08T00:07:51.083219725Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 8 00:07:51.083414 containerd[1723]: time="2025-11-08T00:07:51.083330445Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 8 00:07:51.083825 kubelet[3207]: E1108 00:07:51.083781 3207 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:07:51.084198 kubelet[3207]: E1108 00:07:51.083832 3207 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:07:51.084198 kubelet[3207]: E1108 00:07:51.084102 3207 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-phxrr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-59cf6d9fcc-nmv8r_calico-apiserver(86741758-ba30-4cec-a95c-6af79e2546fe): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to 
resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 8 00:07:51.085642 kubelet[3207]: E1108 00:07:51.085579 3207 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-59cf6d9fcc-nmv8r" podUID="86741758-ba30-4cec-a95c-6af79e2546fe" Nov 8 00:07:51.085928 containerd[1723]: time="2025-11-08T00:07:51.085887685Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 8 00:07:51.366449 containerd[1723]: time="2025-11-08T00:07:51.365834000Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:07:51.370666 containerd[1723]: time="2025-11-08T00:07:51.370598720Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 8 00:07:51.370799 containerd[1723]: time="2025-11-08T00:07:51.370709880Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 8 00:07:51.370931 kubelet[3207]: E1108 00:07:51.370881 3207 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 8 00:07:51.372228 kubelet[3207]: E1108 00:07:51.370937 3207 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 8 00:07:51.372228 kubelet[3207]: E1108 00:07:51.371056 3207 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:41c4d131d342475c83c899a448c04516,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7xpfh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6f59845cf4-td2zr_calico-system(b8a248aa-7909-4555-958d-ace4846b2c48): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 8 00:07:51.373349 containerd[1723]: time="2025-11-08T00:07:51.373276200Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 8 00:07:51.644901 containerd[1723]: time="2025-11-08T00:07:51.644460355Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:07:51.647995 containerd[1723]: time="2025-11-08T00:07:51.647952395Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 8 00:07:51.648078 containerd[1723]: time="2025-11-08T00:07:51.648054355Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 8 00:07:51.648225 kubelet[3207]: E1108 00:07:51.648185 3207 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 8 00:07:51.648306 kubelet[3207]: E1108 00:07:51.648239 3207 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: 
not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 8 00:07:51.648706 kubelet[3207]: E1108 00:07:51.648382 3207 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7xpfh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6f59845cf4-td2zr_calico-system(b8a248aa-7909-4555-958d-ace4846b2c48): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 8 00:07:51.650089 kubelet[3207]: E1108 00:07:51.650038 3207 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6f59845cf4-td2zr" podUID="b8a248aa-7909-4555-958d-ace4846b2c48" Nov 8 00:07:52.810565 containerd[1723]: time="2025-11-08T00:07:52.809872574Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 8 00:07:53.085657 containerd[1723]: time="2025-11-08T00:07:53.085410209Z" level=info msg="trying next host - response was http.StatusNotFound" 
host=ghcr.io Nov 8 00:07:53.089582 containerd[1723]: time="2025-11-08T00:07:53.089454369Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 8 00:07:53.089582 containerd[1723]: time="2025-11-08T00:07:53.089526849Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 8 00:07:53.089950 kubelet[3207]: E1108 00:07:53.089798 3207 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 8 00:07:53.090506 kubelet[3207]: E1108 00:07:53.089853 3207 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 8 00:07:53.090506 kubelet[3207]: E1108 00:07:53.090419 3207 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wnwsv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-55f5cd74bd-vrdmh_calico-system(ffd8ec67-9184-4d7a-a1fa-e51a9fa4d9cf): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 8 00:07:53.091809 kubelet[3207]: E1108 00:07:53.091772 3207 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-55f5cd74bd-vrdmh" podUID="ffd8ec67-9184-4d7a-a1fa-e51a9fa4d9cf" Nov 8 00:07:53.810339 containerd[1723]: time="2025-11-08T00:07:53.810249597Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 8 00:07:54.080742 containerd[1723]: time="2025-11-08T00:07:54.080463072Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:07:54.083465 containerd[1723]: time="2025-11-08T00:07:54.083361032Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 8 00:07:54.083465 containerd[1723]: time="2025-11-08T00:07:54.083443672Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 8 00:07:54.084214 kubelet[3207]: E1108 00:07:54.083735 3207 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:07:54.084214 kubelet[3207]: E1108 00:07:54.083792 3207 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:07:54.084214 
kubelet[3207]: E1108 00:07:54.083921 3207 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-czxjv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-59cf6d9fcc-xrwqb_calico-apiserver(b965bf6a-7220-4fbc-b608-85c677cf8e39): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 8 00:07:54.085963 kubelet[3207]: E1108 00:07:54.085925 3207 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-59cf6d9fcc-xrwqb" podUID="b965bf6a-7220-4fbc-b608-85c677cf8e39" Nov 8 00:07:55.814789 kubelet[3207]: E1108 00:07:55.813484 3207 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": 
ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-69dnl" podUID="a62546c2-2f66-4002-b0c3-d54109c52a13" Nov 8 00:07:56.808242 kubelet[3207]: E1108 00:07:56.808186 3207 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-tffst" podUID="b8e7681d-343c-40ff-9257-cd6bf2941900" Nov 8 00:08:03.811624 kubelet[3207]: E1108 00:08:03.811504 3207 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-59cf6d9fcc-nmv8r" podUID="86741758-ba30-4cec-a95c-6af79e2546fe" Nov 8 00:08:04.069505 systemd[1]: Started sshd@7-10.200.20.44:22-10.200.16.10:58836.service - OpenSSH per-connection server daemon (10.200.16.10:58836). Nov 8 00:08:04.514189 sshd[5993]: Accepted publickey for core from 10.200.16.10 port 58836 ssh2: RSA SHA256:zvC4izVp6tNI33lG/DNWDshKRYTAPzAA5sOYQjrKueA Nov 8 00:08:04.517303 sshd[5993]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:08:04.523251 systemd-logind[1678]: New session 10 of user core. Nov 8 00:08:04.531930 systemd[1]: Started session-10.scope - Session 10 of User core. 
Nov 8 00:08:04.811889 kubelet[3207]: E1108 00:08:04.810061 3207 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-59cf6d9fcc-xrwqb" podUID="b965bf6a-7220-4fbc-b608-85c677cf8e39" Nov 8 00:08:04.813427 kubelet[3207]: E1108 00:08:04.813036 3207 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6f59845cf4-td2zr" podUID="b8a248aa-7909-4555-958d-ace4846b2c48" Nov 8 00:08:04.984227 sshd[5993]: pam_unix(sshd:session): session closed for user core Nov 8 00:08:04.990142 systemd[1]: sshd@7-10.200.20.44:22-10.200.16.10:58836.service: Deactivated successfully. Nov 8 00:08:04.994054 systemd[1]: session-10.scope: Deactivated successfully. Nov 8 00:08:04.996513 systemd-logind[1678]: Session 10 logged out. Waiting for processes to exit. Nov 8 00:08:04.998438 systemd-logind[1678]: Removed session 10. 
Nov 8 00:08:06.810534 kubelet[3207]: E1108 00:08:06.810481 3207 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-55f5cd74bd-vrdmh" podUID="ffd8ec67-9184-4d7a-a1fa-e51a9fa4d9cf" Nov 8 00:08:06.810994 kubelet[3207]: E1108 00:08:06.810893 3207 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-69dnl" podUID="a62546c2-2f66-4002-b0c3-d54109c52a13" Nov 8 00:08:10.079088 systemd[1]: Started sshd@8-10.200.20.44:22-10.200.16.10:60766.service - OpenSSH per-connection server daemon (10.200.16.10:60766). Nov 8 00:08:10.572775 sshd[6010]: Accepted publickey for core from 10.200.16.10 port 60766 ssh2: RSA SHA256:zvC4izVp6tNI33lG/DNWDshKRYTAPzAA5sOYQjrKueA Nov 8 00:08:10.573545 sshd[6010]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:08:10.579851 systemd-logind[1678]: New session 11 of user core. Nov 8 00:08:10.583971 systemd[1]: Started session-11.scope - Session 11 of User core. Nov 8 00:08:10.809714 kubelet[3207]: E1108 00:08:10.809641 3207 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-tffst" podUID="b8e7681d-343c-40ff-9257-cd6bf2941900" Nov 8 00:08:11.012214 sshd[6010]: pam_unix(sshd:session): session closed for user core Nov 8 00:08:11.017284 systemd-logind[1678]: Session 11 logged out. Waiting for processes to exit. Nov 8 00:08:11.019149 systemd[1]: sshd@8-10.200.20.44:22-10.200.16.10:60766.service: Deactivated successfully. Nov 8 00:08:11.022426 systemd[1]: session-11.scope: Deactivated successfully. Nov 8 00:08:11.026059 systemd-logind[1678]: Removed session 11. Nov 8 00:08:14.058985 systemd[1]: run-containerd-runc-k8s.io-710fcb4454d500ef2d0674038608fbb1a5122e25a0a163c7182f8b8ad415f238-runc.GPjBbP.mount: Deactivated successfully. 
Nov 8 00:08:15.811715 kubelet[3207]: E1108 00:08:15.811524 3207 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-59cf6d9fcc-nmv8r" podUID="86741758-ba30-4cec-a95c-6af79e2546fe" Nov 8 00:08:15.811715 kubelet[3207]: E1108 00:08:15.811645 3207 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6f59845cf4-td2zr" podUID="b8a248aa-7909-4555-958d-ace4846b2c48" Nov 8 00:08:16.108054 systemd[1]: Started sshd@9-10.200.20.44:22-10.200.16.10:60774.service - OpenSSH per-connection server daemon (10.200.16.10:60774). Nov 8 00:08:16.607579 sshd[6048]: Accepted publickey for core from 10.200.16.10 port 60774 ssh2: RSA SHA256:zvC4izVp6tNI33lG/DNWDshKRYTAPzAA5sOYQjrKueA Nov 8 00:08:16.609062 sshd[6048]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:08:16.617319 systemd-logind[1678]: New session 12 of user core. Nov 8 00:08:16.620982 systemd[1]: Started session-12.scope - Session 12 of User core. Nov 8 00:08:17.037237 sshd[6048]: pam_unix(sshd:session): session closed for user core Nov 8 00:08:17.041487 systemd[1]: session-12.scope: Deactivated successfully. Nov 8 00:08:17.042991 systemd-logind[1678]: Session 12 logged out. Waiting for processes to exit. Nov 8 00:08:17.043299 systemd[1]: sshd@9-10.200.20.44:22-10.200.16.10:60774.service: Deactivated successfully. Nov 8 00:08:17.047096 systemd-logind[1678]: Removed session 12. 
Nov 8 00:08:17.809328 kubelet[3207]: E1108 00:08:17.809282 3207 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-55f5cd74bd-vrdmh" podUID="ffd8ec67-9184-4d7a-a1fa-e51a9fa4d9cf" Nov 8 00:08:19.810128 kubelet[3207]: E1108 00:08:19.810081 3207 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-59cf6d9fcc-xrwqb" podUID="b965bf6a-7220-4fbc-b608-85c677cf8e39" Nov 8 00:08:21.809785 kubelet[3207]: E1108 00:08:21.808990 3207 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-69dnl" podUID="a62546c2-2f66-4002-b0c3-d54109c52a13" Nov 8 00:08:22.131039 systemd[1]: Started sshd@10-10.200.20.44:22-10.200.16.10:44846.service - OpenSSH per-connection server daemon (10.200.16.10:44846). Nov 8 00:08:22.619010 sshd[6064]: Accepted publickey for core from 10.200.16.10 port 44846 ssh2: RSA SHA256:zvC4izVp6tNI33lG/DNWDshKRYTAPzAA5sOYQjrKueA Nov 8 00:08:22.620444 sshd[6064]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:08:22.626701 systemd-logind[1678]: New session 13 of user core. Nov 8 00:08:22.629935 systemd[1]: Started session-13.scope - Session 13 of User core. 
Nov 8 00:08:22.810013 kubelet[3207]: E1108 00:08:22.809966 3207 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-tffst" podUID="b8e7681d-343c-40ff-9257-cd6bf2941900" Nov 8 00:08:23.077109 sshd[6064]: pam_unix(sshd:session): session closed for user core Nov 8 00:08:23.081214 systemd-logind[1678]: Session 13 logged out. Waiting for processes to exit. Nov 8 00:08:23.082647 systemd[1]: sshd@10-10.200.20.44:22-10.200.16.10:44846.service: Deactivated successfully. Nov 8 00:08:23.086241 systemd[1]: session-13.scope: Deactivated successfully. Nov 8 00:08:23.090235 systemd-logind[1678]: Removed session 13. Nov 8 00:08:26.807991 kubelet[3207]: E1108 00:08:26.807952 3207 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-59cf6d9fcc-nmv8r" podUID="86741758-ba30-4cec-a95c-6af79e2546fe" Nov 8 00:08:26.809398 kubelet[3207]: E1108 00:08:26.809279 3207 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6f59845cf4-td2zr" podUID="b8a248aa-7909-4555-958d-ace4846b2c48" Nov 8 00:08:28.161596 systemd[1]: Started sshd@11-10.200.20.44:22-10.200.16.10:44850.service - OpenSSH per-connection server daemon (10.200.16.10:44850). 
Nov 8 00:08:28.622657 sshd[6077]: Accepted publickey for core from 10.200.16.10 port 44850 ssh2: RSA SHA256:zvC4izVp6tNI33lG/DNWDshKRYTAPzAA5sOYQjrKueA Nov 8 00:08:28.624168 sshd[6077]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:08:28.628894 systemd-logind[1678]: New session 14 of user core. Nov 8 00:08:28.632887 systemd[1]: Started session-14.scope - Session 14 of User core. Nov 8 00:08:29.030110 sshd[6077]: pam_unix(sshd:session): session closed for user core Nov 8 00:08:29.033719 systemd[1]: sshd@11-10.200.20.44:22-10.200.16.10:44850.service: Deactivated successfully. Nov 8 00:08:29.037213 systemd[1]: session-14.scope: Deactivated successfully. Nov 8 00:08:29.038858 systemd-logind[1678]: Session 14 logged out. Waiting for processes to exit. Nov 8 00:08:29.040161 systemd-logind[1678]: Removed session 14. Nov 8 00:08:29.123060 systemd[1]: Started sshd@12-10.200.20.44:22-10.200.16.10:44858.service - OpenSSH per-connection server daemon (10.200.16.10:44858). Nov 8 00:08:29.609945 sshd[6091]: Accepted publickey for core from 10.200.16.10 port 44858 ssh2: RSA SHA256:zvC4izVp6tNI33lG/DNWDshKRYTAPzAA5sOYQjrKueA Nov 8 00:08:29.612336 sshd[6091]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:08:29.621638 systemd-logind[1678]: New session 15 of user core. Nov 8 00:08:29.626966 systemd[1]: Started session-15.scope - Session 15 of User core. Nov 8 00:08:29.808205 kubelet[3207]: E1108 00:08:29.807926 3207 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-55f5cd74bd-vrdmh" podUID="ffd8ec67-9184-4d7a-a1fa-e51a9fa4d9cf" Nov 8 00:08:30.079977 sshd[6091]: pam_unix(sshd:session): session closed for user core Nov 8 00:08:30.083773 systemd[1]: sshd@12-10.200.20.44:22-10.200.16.10:44858.service: Deactivated successfully. Nov 8 00:08:30.087948 systemd[1]: session-15.scope: Deactivated successfully. Nov 8 00:08:30.088681 systemd-logind[1678]: Session 15 logged out. Waiting for processes to exit. Nov 8 00:08:30.089555 systemd-logind[1678]: Removed session 15. Nov 8 00:08:30.168374 systemd[1]: Started sshd@13-10.200.20.44:22-10.200.16.10:44484.service - OpenSSH per-connection server daemon (10.200.16.10:44484). Nov 8 00:08:30.623897 sshd[6102]: Accepted publickey for core from 10.200.16.10 port 44484 ssh2: RSA SHA256:zvC4izVp6tNI33lG/DNWDshKRYTAPzAA5sOYQjrKueA Nov 8 00:08:30.625445 sshd[6102]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:08:30.632734 systemd-logind[1678]: New session 16 of user core. Nov 8 00:08:30.639917 systemd[1]: Started session-16.scope - Session 16 of User core. Nov 8 00:08:31.073945 sshd[6102]: pam_unix(sshd:session): session closed for user core Nov 8 00:08:31.077653 systemd-logind[1678]: Session 16 logged out. Waiting for processes to exit. Nov 8 00:08:31.079299 systemd[1]: sshd@13-10.200.20.44:22-10.200.16.10:44484.service: Deactivated successfully. Nov 8 00:08:31.082560 systemd[1]: session-16.scope: Deactivated successfully. 
Nov 8 00:08:31.086212 systemd-logind[1678]: Removed session 16. Nov 8 00:08:33.811902 kubelet[3207]: E1108 00:08:33.811460 3207 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-59cf6d9fcc-xrwqb" podUID="b965bf6a-7220-4fbc-b608-85c677cf8e39" Nov 8 00:08:33.820659 kubelet[3207]: E1108 00:08:33.820595 3207 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-tffst" podUID="b8e7681d-343c-40ff-9257-cd6bf2941900" Nov 8 00:08:36.153836 systemd[1]: Started sshd@14-10.200.20.44:22-10.200.16.10:44492.service - OpenSSH per-connection server daemon (10.200.16.10:44492). Nov 8 00:08:36.607566 sshd[6119]: Accepted publickey for core from 10.200.16.10 port 44492 ssh2: RSA SHA256:zvC4izVp6tNI33lG/DNWDshKRYTAPzAA5sOYQjrKueA Nov 8 00:08:36.609022 sshd[6119]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:08:36.614069 systemd-logind[1678]: New session 17 of user core. Nov 8 00:08:36.619035 systemd[1]: Started session-17.scope - Session 17 of User core. Nov 8 00:08:36.807324 kubelet[3207]: E1108 00:08:36.807196 3207 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-69dnl" podUID="a62546c2-2f66-4002-b0c3-d54109c52a13" Nov 8 00:08:37.058461 sshd[6119]: pam_unix(sshd:session): session closed for user core Nov 8 00:08:37.061461 systemd-logind[1678]: Session 17 logged out. Waiting for processes to exit. Nov 8 00:08:37.063263 systemd[1]: sshd@14-10.200.20.44:22-10.200.16.10:44492.service: Deactivated successfully. Nov 8 00:08:37.066719 systemd[1]: session-17.scope: Deactivated successfully. Nov 8 00:08:37.069498 systemd-logind[1678]: Removed session 17. 
Nov 8 00:08:39.809298 kubelet[3207]: E1108 00:08:39.808922 3207 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-59cf6d9fcc-nmv8r" podUID="86741758-ba30-4cec-a95c-6af79e2546fe" Nov 8 00:08:40.808542 kubelet[3207]: E1108 00:08:40.808154 3207 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-55f5cd74bd-vrdmh" podUID="ffd8ec67-9184-4d7a-a1fa-e51a9fa4d9cf" Nov 8 00:08:40.809167 kubelet[3207]: E1108 00:08:40.809134 3207 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6f59845cf4-td2zr" podUID="b8a248aa-7909-4555-958d-ace4846b2c48" Nov 8 00:08:42.148046 systemd[1]: Started sshd@15-10.200.20.44:22-10.200.16.10:35446.service - OpenSSH per-connection server daemon (10.200.16.10:35446). Nov 8 00:08:42.597970 sshd[6132]: Accepted publickey for core from 10.200.16.10 port 35446 ssh2: RSA SHA256:zvC4izVp6tNI33lG/DNWDshKRYTAPzAA5sOYQjrKueA Nov 8 00:08:42.599261 sshd[6132]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:08:42.603380 systemd-logind[1678]: New session 18 of user core. Nov 8 00:08:42.611917 systemd[1]: Started session-18.scope - Session 18 of User core. Nov 8 00:08:43.008007 sshd[6132]: pam_unix(sshd:session): session closed for user core Nov 8 00:08:43.011323 systemd[1]: sshd@15-10.200.20.44:22-10.200.16.10:35446.service: Deactivated successfully. Nov 8 00:08:43.014341 systemd[1]: session-18.scope: Deactivated successfully. Nov 8 00:08:43.015603 systemd-logind[1678]: Session 18 logged out. Waiting for processes to exit. Nov 8 00:08:43.018167 systemd-logind[1678]: Removed session 18. 
Nov 8 00:08:47.810191 kubelet[3207]: E1108 00:08:47.810137 3207 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-tffst" podUID="b8e7681d-343c-40ff-9257-cd6bf2941900" Nov 8 00:08:48.117104 systemd[1]: Started sshd@16-10.200.20.44:22-10.200.16.10:35450.service - OpenSSH per-connection server daemon (10.200.16.10:35450). Nov 8 00:08:48.569193 sshd[6168]: Accepted publickey for core from 10.200.16.10 port 35450 ssh2: RSA SHA256:zvC4izVp6tNI33lG/DNWDshKRYTAPzAA5sOYQjrKueA Nov 8 00:08:48.572951 sshd[6168]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:08:48.581645 systemd-logind[1678]: New session 19 of user core. Nov 8 00:08:48.584380 systemd[1]: Started session-19.scope - Session 19 of User core. Nov 8 00:08:48.808631 kubelet[3207]: E1108 00:08:48.808587 3207 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-59cf6d9fcc-xrwqb" podUID="b965bf6a-7220-4fbc-b608-85c677cf8e39" Nov 8 00:08:49.002086 sshd[6168]: pam_unix(sshd:session): session closed for user core Nov 8 00:08:49.007094 systemd[1]: sshd@16-10.200.20.44:22-10.200.16.10:35450.service: Deactivated successfully. Nov 8 00:08:49.011590 systemd[1]: session-19.scope: Deactivated successfully. Nov 8 00:08:49.016146 systemd-logind[1678]: Session 19 logged out. Waiting for processes to exit. Nov 8 00:08:49.018120 systemd-logind[1678]: Removed session 19. 
Nov 8 00:08:51.809912 kubelet[3207]: E1108 00:08:51.809559 3207 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-59cf6d9fcc-nmv8r" podUID="86741758-ba30-4cec-a95c-6af79e2546fe" Nov 8 00:08:51.812484 kubelet[3207]: E1108 00:08:51.812410 3207 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-69dnl" podUID="a62546c2-2f66-4002-b0c3-d54109c52a13" Nov 8 00:08:53.810743 kubelet[3207]: E1108 00:08:53.810568 3207 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-55f5cd74bd-vrdmh" podUID="ffd8ec67-9184-4d7a-a1fa-e51a9fa4d9cf" Nov 8 00:08:53.813264 kubelet[3207]: E1108 00:08:53.812995 3207 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6f59845cf4-td2zr" podUID="b8a248aa-7909-4555-958d-ace4846b2c48" Nov 8 00:08:54.110100 systemd[1]: Started sshd@17-10.200.20.44:22-10.200.16.10:42162.service - OpenSSH per-connection server daemon (10.200.16.10:42162). Nov 8 00:08:54.600583 sshd[6183]: Accepted publickey for core from 10.200.16.10 port 42162 ssh2: RSA SHA256:zvC4izVp6tNI33lG/DNWDshKRYTAPzAA5sOYQjrKueA Nov 8 00:08:54.600439 sshd[6183]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:08:54.607440 systemd-logind[1678]: New session 20 of user core. Nov 8 00:08:54.611929 systemd[1]: Started session-20.scope - Session 20 of User core. 
Nov 8 00:08:55.080908 sshd[6183]: pam_unix(sshd:session): session closed for user core Nov 8 00:08:55.086859 systemd-logind[1678]: Session 20 logged out. Waiting for processes to exit. Nov 8 00:08:55.088180 systemd[1]: sshd@17-10.200.20.44:22-10.200.16.10:42162.service: Deactivated successfully. Nov 8 00:08:55.091678 systemd[1]: session-20.scope: Deactivated successfully. Nov 8 00:08:55.094650 systemd-logind[1678]: Removed session 20. Nov 8 00:09:00.176065 systemd[1]: Started sshd@18-10.200.20.44:22-10.200.16.10:38522.service - OpenSSH per-connection server daemon (10.200.16.10:38522). Nov 8 00:09:00.668341 sshd[6207]: Accepted publickey for core from 10.200.16.10 port 38522 ssh2: RSA SHA256:zvC4izVp6tNI33lG/DNWDshKRYTAPzAA5sOYQjrKueA Nov 8 00:09:00.669979 sshd[6207]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:09:00.676722 systemd-logind[1678]: New session 21 of user core. Nov 8 00:09:00.678937 systemd[1]: Started session-21.scope - Session 21 of User core. Nov 8 00:09:01.106235 sshd[6207]: pam_unix(sshd:session): session closed for user core Nov 8 00:09:01.111300 systemd[1]: sshd@18-10.200.20.44:22-10.200.16.10:38522.service: Deactivated successfully. Nov 8 00:09:01.114330 systemd[1]: session-21.scope: Deactivated successfully. Nov 8 00:09:01.115293 systemd-logind[1678]: Session 21 logged out. Waiting for processes to exit. Nov 8 00:09:01.116336 systemd-logind[1678]: Removed session 21. Nov 8 00:09:01.811728 kubelet[3207]: E1108 00:09:01.811324 3207 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-tffst" podUID="b8e7681d-343c-40ff-9257-cd6bf2941900" Nov 8 00:09:02.808995 kubelet[3207]: E1108 00:09:02.808944 3207 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-59cf6d9fcc-xrwqb" podUID="b965bf6a-7220-4fbc-b608-85c677cf8e39" Nov 8 00:09:04.809360 kubelet[3207]: E1108 00:09:04.809040 3207 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-55f5cd74bd-vrdmh" podUID="ffd8ec67-9184-4d7a-a1fa-e51a9fa4d9cf" Nov 8 00:09:04.809974 kubelet[3207]: E1108 00:09:04.809933 3207 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6f59845cf4-td2zr" podUID="b8a248aa-7909-4555-958d-ace4846b2c48" Nov 8 00:09:05.812006 kubelet[3207]: E1108 00:09:05.811338 3207 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-59cf6d9fcc-nmv8r" podUID="86741758-ba30-4cec-a95c-6af79e2546fe" Nov 8 00:09:06.203525 systemd[1]: Started sshd@19-10.200.20.44:22-10.200.16.10:38538.service - OpenSSH per-connection server daemon (10.200.16.10:38538). Nov 8 00:09:06.693431 sshd[6222]: Accepted publickey for core from 10.200.16.10 port 38538 ssh2: RSA SHA256:zvC4izVp6tNI33lG/DNWDshKRYTAPzAA5sOYQjrKueA Nov 8 00:09:06.694808 sshd[6222]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:09:06.698917 systemd-logind[1678]: New session 22 of user core. Nov 8 00:09:06.702958 systemd[1]: Started session-22.scope - Session 22 of User core. Nov 8 00:09:06.808053 containerd[1723]: time="2025-11-08T00:09:06.807836667Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 8 00:09:07.105797 sshd[6222]: pam_unix(sshd:session): session closed for user core Nov 8 00:09:07.109813 systemd[1]: sshd@19-10.200.20.44:22-10.200.16.10:38538.service: Deactivated successfully. Nov 8 00:09:07.112923 systemd[1]: session-22.scope: Deactivated successfully. Nov 8 00:09:07.114244 containerd[1723]: time="2025-11-08T00:09:07.113244629Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:09:07.114524 systemd-logind[1678]: Session 22 logged out. Waiting for processes to exit. Nov 8 00:09:07.116192 systemd-logind[1678]: Removed session 22. 
Nov 8 00:09:07.117061 containerd[1723]: time="2025-11-08T00:09:07.116560470Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 8 00:09:07.117061 containerd[1723]: time="2025-11-08T00:09:07.116612590Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 8 00:09:07.117532 kubelet[3207]: E1108 00:09:07.117252 3207 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 8 00:09:07.117532 kubelet[3207]: E1108 00:09:07.117327 3207 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 8 00:09:07.119059 kubelet[3207]: E1108 00:09:07.118909 3207 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-w5t89,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-69dnl_calico-system(a62546c2-2f66-4002-b0c3-d54109c52a13): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 8 00:09:07.120236 kubelet[3207]: E1108 00:09:07.120184 3207 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-69dnl" podUID="a62546c2-2f66-4002-b0c3-d54109c52a13" Nov 8 00:09:12.200406 systemd[1]: Started sshd@20-10.200.20.44:22-10.200.16.10:44800.service - OpenSSH per-connection server daemon (10.200.16.10:44800). Nov 8 00:09:12.655689 sshd[6235]: Accepted publickey for core from 10.200.16.10 port 44800 ssh2: RSA SHA256:zvC4izVp6tNI33lG/DNWDshKRYTAPzAA5sOYQjrKueA Nov 8 00:09:12.657150 sshd[6235]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:09:12.662992 systemd-logind[1678]: New session 23 of user core. Nov 8 00:09:12.668925 systemd[1]: Started session-23.scope - Session 23 of User core. Nov 8 00:09:12.808706 containerd[1723]: time="2025-11-08T00:09:12.807889810Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 8 00:09:13.073717 sshd[6235]: pam_unix(sshd:session): session closed for user core Nov 8 00:09:13.074497 containerd[1723]: time="2025-11-08T00:09:13.074449771Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:09:13.079233 systemd[1]: sshd@20-10.200.20.44:22-10.200.16.10:44800.service: Deactivated successfully. Nov 8 00:09:13.081436 containerd[1723]: time="2025-11-08T00:09:13.081384531Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 8 00:09:13.081546 containerd[1723]: time="2025-11-08T00:09:13.081501491Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 8 00:09:13.081733 systemd[1]: session-23.scope: Deactivated successfully. 
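containerd's sequence in these entries is always PullImage, then "trying next host - response was http.StatusNotFound", then ErrImagePull: the registry itself answers 404 for the tag. A sketch that reproduces the manifest lookup against ghcr.io, assuming the anonymous pull-token flow works for this public package (REPO and TAG are taken from the failing entries above):

#!/usr/bin/env python3
# Sketch reproducing containerd's 404. Assumes anonymous pull tokens are
# issued for this public GHCR repository; endpoints follow the OCI
# distribution spec.
import json
import urllib.error
import urllib.request

REPO = "flatcar/calico/csi"   # from the failing PullImage entries above
TAG = "v3.30.4"

tok = json.load(urllib.request.urlopen(
    f"https://ghcr.io/token?scope=repository:{REPO}:pull"))["token"]

req = urllib.request.Request(
    f"https://ghcr.io/v2/{REPO}/manifests/{TAG}",
    headers={
        "Authorization": f"Bearer {tok}",
        "Accept": "application/vnd.oci.image.index.v1+json, "
                  "application/vnd.docker.distribution.manifest.v2+json",
    },
    method="HEAD",
)
try:
    with urllib.request.urlopen(req) as resp:
        print("manifest exists:", resp.status)
except urllib.error.HTTPError as e:
    # containerd logged http.StatusNotFound; a 404 here confirms the tag is absent
    print("manifest lookup failed:", e.code, e.reason)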
Nov 8 00:09:13.083211 kubelet[3207]: E1108 00:09:13.081853 3207 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 8 00:09:13.083211 kubelet[3207]: E1108 00:09:13.081902 3207 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 8 00:09:13.083211 kubelet[3207]: E1108 00:09:13.082024 3207 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m2vpd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-tffst_calico-system(b8e7681d-343c-40ff-9257-cd6bf2941900): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 8 00:09:13.084814 systemd-logind[1678]: Session 23 logged out. Waiting for processes to exit. Nov 8 00:09:13.085605 containerd[1723]: time="2025-11-08T00:09:13.085575131Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 8 00:09:13.087391 systemd-logind[1678]: Removed session 23. 
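The kuberuntime_manager.go:1358 "Unhandled Error" entries embed the entire &Container{...} spec of the failing container, which makes them hard to scan. A sketch that reduces those dumps to name/image pairs, under the same journal.txt assumption:

#!/usr/bin/env python3
# Sketch: extract container name and image from the &Container{...} spec
# dumps that kubelet logs on "Unhandled Error".
import re

SPEC_RE = re.compile(r"&Container\{Name:([^,]+),Image:([^,]+),")

seen = set()
with open("journal.txt", encoding="utf-8", errors="replace") as f:
    for line in f:
        for name, image in SPEC_RE.findall(line):
            if (name, image) not in seen:
                seen.add((name, image))
                print(f"{name:30s} {image}")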
Nov 8 00:09:13.377045 containerd[1723]: time="2025-11-08T00:09:13.376908412Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:09:13.380310 containerd[1723]: time="2025-11-08T00:09:13.380255412Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 8 00:09:13.380411 containerd[1723]: time="2025-11-08T00:09:13.380367532Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 8 00:09:13.380576 kubelet[3207]: E1108 00:09:13.380531 3207 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 8 00:09:13.380641 kubelet[3207]: E1108 00:09:13.380591 3207 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 8 00:09:13.380959 kubelet[3207]: E1108 00:09:13.380716 3207 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m2vpd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-tffst_calico-system(b8e7681d-343c-40ff-9257-cd6bf2941900): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 8 00:09:13.382276 kubelet[3207]: E1108 00:09:13.382218 3207 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-tffst" podUID="b8e7681d-343c-40ff-9257-cd6bf2941900" Nov 8 00:09:14.060380 systemd[1]: run-containerd-runc-k8s.io-710fcb4454d500ef2d0674038608fbb1a5122e25a0a163c7182f8b8ad415f238-runc.0AMRur.mount: Deactivated successfully. 
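For csi-node-driver-tffst the two pulls above now fail with ErrImagePull rather than ImagePullBackOff, which marks the moments kubelet actually re-attempted the pull; in between it only re-logs the back-off (kubelet's default image-pull back-off starts at 10s, doubles, and caps at 5 minutes). A sketch that prints the retry cadence for this pod's UID from the exported journal, matching the short-precise timestamp shape seen in this capture:

#!/usr/bin/env python3
# Sketch: print gaps between "Error syncing pod" entries for one pod UID,
# to see kubelet's capped image-pull back-off in the log. journal.txt is an
# assumed export (journalctl -o short-precise); the UID is from this capture.
import re
from datetime import datetime

POD_UID = "b8e7681d-343c-40ff-9257-cd6bf2941900"
TS_RE = re.compile(r"^(\w{3}\s+\d+ \d{2}:\d{2}:\d{2}\.\d+)")

stamps = []
with open("journal.txt", encoding="utf-8", errors="replace") as f:
    for line in f:
        if "Error syncing pod" in line and POD_UID in line:
            m = TS_RE.match(line)
            if m:
                ts = " ".join(m.group(1).split())  # collapse padded day spacing
                # journalctl omits the year; the parsed year is a placeholder
                stamps.append(datetime.strptime(ts, "%b %d %H:%M:%S.%f"))

for a, b in zip(stamps, stamps[1:]):
    print(f"{(b - a).total_seconds():8.1f}s between retries")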
Nov 8 00:09:17.811220 containerd[1723]: time="2025-11-08T00:09:17.811177833Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 8 00:09:18.065016 containerd[1723]: time="2025-11-08T00:09:18.064859679Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:09:18.069302 containerd[1723]: time="2025-11-08T00:09:18.068669359Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 8 00:09:18.069302 containerd[1723]: time="2025-11-08T00:09:18.068793959Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 8 00:09:18.069494 kubelet[3207]: E1108 00:09:18.069016 3207 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:09:18.069494 kubelet[3207]: E1108 00:09:18.069078 3207 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:09:18.069892 kubelet[3207]: E1108 00:09:18.069417 3207 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-czxjv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-59cf6d9fcc-xrwqb_calico-apiserver(b965bf6a-7220-4fbc-b608-85c677cf8e39): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 8 00:09:18.070828 kubelet[3207]: E1108 00:09:18.070636 3207 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-59cf6d9fcc-xrwqb" podUID="b965bf6a-7220-4fbc-b608-85c677cf8e39" Nov 8 00:09:18.158008 systemd[1]: Started sshd@21-10.200.20.44:22-10.200.16.10:44816.service - OpenSSH per-connection server daemon (10.200.16.10:44816). Nov 8 00:09:18.610323 sshd[6272]: Accepted publickey for core from 10.200.16.10 port 44816 ssh2: RSA SHA256:zvC4izVp6tNI33lG/DNWDshKRYTAPzAA5sOYQjrKueA Nov 8 00:09:18.611699 sshd[6272]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:09:18.617674 systemd-logind[1678]: New session 24 of user core. Nov 8 00:09:18.622922 systemd[1]: Started session-24.scope - Session 24 of User core. Nov 8 00:09:19.037402 sshd[6272]: pam_unix(sshd:session): session closed for user core Nov 8 00:09:19.044097 systemd[1]: sshd@21-10.200.20.44:22-10.200.16.10:44816.service: Deactivated successfully. Nov 8 00:09:19.048462 systemd[1]: session-24.scope: Deactivated successfully. Nov 8 00:09:19.053332 systemd-logind[1678]: Session 24 logged out. Waiting for processes to exit. Nov 8 00:09:19.055878 systemd-logind[1678]: Removed session 24. 
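The SSH sessions interleaved with these pull failures are short-lived and regular. A sketch pairing the pam_unix open/close lines per sshd PID to get per-session durations, under the same journal.txt assumption:

#!/usr/bin/env python3
# Sketch: pair sshd "session opened"/"session closed" lines by PID and print
# how long each session in the capture lasted.
import re
from datetime import datetime

LINE_RE = re.compile(
    r"^(\w{3}\s+\d+ \d{2}:\d{2}:\d{2}\.\d+).*sshd\[(\d+)\]: "
    r"pam_unix\(sshd:session\): session (opened|closed)")

opened = {}
with open("journal.txt", encoding="utf-8", errors="replace") as f:
    for line in f:
        m = LINE_RE.match(line)
        if not m:
            continue
        ts = datetime.strptime(" ".join(m.group(1).split()),
                               "%b %d %H:%M:%S.%f")
        pid, event = m.group(2), m.group(3)
        if event == "opened":
            opened[pid] = ts
        elif pid in opened:
            dur = (ts - opened.pop(pid)).total_seconds()
            print(f"sshd[{pid}]: session lasted {dur:.1f}s")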
Nov 8 00:09:19.809502 containerd[1723]: time="2025-11-08T00:09:19.809458282Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 8 00:09:20.087489 containerd[1723]: time="2025-11-08T00:09:20.087133729Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:09:20.091832 containerd[1723]: time="2025-11-08T00:09:20.091743609Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 8 00:09:20.091975 containerd[1723]: time="2025-11-08T00:09:20.091871529Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 8 00:09:20.092097 kubelet[3207]: E1108 00:09:20.092038 3207 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 8 00:09:20.092649 kubelet[3207]: E1108 00:09:20.092104 3207 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 8 00:09:20.092649 kubelet[3207]: E1108 00:09:20.092333 3207 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wnwsv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-55f5cd74bd-vrdmh_calico-system(ffd8ec67-9184-4d7a-a1fa-e51a9fa4d9cf): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 8 00:09:20.092799 containerd[1723]: time="2025-11-08T00:09:20.092612609Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 8 00:09:20.094008 kubelet[3207]: E1108 00:09:20.093953 3207 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-55f5cd74bd-vrdmh" podUID="ffd8ec67-9184-4d7a-a1fa-e51a9fa4d9cf" Nov 8 00:09:20.424474 containerd[1723]: time="2025-11-08T00:09:20.424411177Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:09:20.429919 containerd[1723]: time="2025-11-08T00:09:20.429867377Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 8 00:09:20.429998 containerd[1723]: time="2025-11-08T00:09:20.429972337Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 8 00:09:20.430700 kubelet[3207]: E1108 00:09:20.430147 3207 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 8 00:09:20.430700 kubelet[3207]: E1108 00:09:20.430198 3207 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 8 00:09:20.430700 kubelet[3207]: E1108 00:09:20.430383 3207 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:41c4d131d342475c83c899a448c04516,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7xpfh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6f59845cf4-td2zr_calico-system(b8a248aa-7909-4555-958d-ace4846b2c48): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 8 00:09:20.431007 containerd[1723]: time="2025-11-08T00:09:20.430837617Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 8 00:09:20.697977 containerd[1723]: time="2025-11-08T00:09:20.697831104Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:09:20.705768 containerd[1723]: time="2025-11-08T00:09:20.705687464Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 8 00:09:20.707255 containerd[1723]: time="2025-11-08T00:09:20.705806664Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 8 00:09:20.707255 containerd[1723]: time="2025-11-08T00:09:20.706809064Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 8 00:09:20.707330 kubelet[3207]: E1108 00:09:20.706029 3207 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:09:20.707330 kubelet[3207]: E1108 00:09:20.706076 3207 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:09:20.707330 kubelet[3207]: E1108 00:09:20.706275 3207 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-phxrr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-59cf6d9fcc-nmv8r_calico-apiserver(86741758-ba30-4cec-a95c-6af79e2546fe): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 8 00:09:20.707973 kubelet[3207]: E1108 00:09:20.707820 3207 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-59cf6d9fcc-nmv8r" 
podUID="86741758-ba30-4cec-a95c-6af79e2546fe" Nov 8 00:09:20.808138 kubelet[3207]: E1108 00:09:20.808098 3207 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-69dnl" podUID="a62546c2-2f66-4002-b0c3-d54109c52a13" Nov 8 00:09:20.996987 containerd[1723]: time="2025-11-08T00:09:20.996843511Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:09:21.001578 containerd[1723]: time="2025-11-08T00:09:21.001528071Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 8 00:09:21.001654 containerd[1723]: time="2025-11-08T00:09:21.001640031Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 8 00:09:21.001836 kubelet[3207]: E1108 00:09:21.001794 3207 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 8 00:09:21.001899 kubelet[3207]: E1108 00:09:21.001847 3207 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 8 00:09:21.002004 kubelet[3207]: E1108 00:09:21.001961 3207 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7xpfh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6f59845cf4-td2zr_calico-system(b8a248aa-7909-4555-958d-ace4846b2c48): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 8 00:09:21.003349 kubelet[3207]: E1108 00:09:21.003285 3207 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6f59845cf4-td2zr" podUID="b8a248aa-7909-4555-958d-ace4846b2c48" Nov 8 00:09:24.121832 systemd[1]: Started sshd@22-10.200.20.44:22-10.200.16.10:58462.service - OpenSSH per-connection server daemon (10.200.16.10:58462). Nov 8 00:09:24.610035 sshd[6301]: Accepted publickey for core from 10.200.16.10 port 58462 ssh2: RSA SHA256:zvC4izVp6tNI33lG/DNWDshKRYTAPzAA5sOYQjrKueA Nov 8 00:09:24.611438 sshd[6301]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:09:24.618175 systemd-logind[1678]: New session 25 of user core. 
Nov 8 00:09:24.621906 systemd[1]: Started session-25.scope - Session 25 of User core.
Nov 8 00:09:25.044290 sshd[6301]: pam_unix(sshd:session): session closed for user core
Nov 8 00:09:25.048793 systemd-logind[1678]: Session 25 logged out. Waiting for processes to exit.
Nov 8 00:09:25.048985 systemd[1]: sshd@22-10.200.20.44:22-10.200.16.10:58462.service: Deactivated successfully.
Nov 8 00:09:25.052174 systemd[1]: session-25.scope: Deactivated successfully.
Nov 8 00:09:25.053990 systemd-logind[1678]: Removed session 25.
Nov 8 00:09:25.132981 systemd[1]: Started sshd@23-10.200.20.44:22-10.200.16.10:58474.service - OpenSSH per-connection server daemon (10.200.16.10:58474).
Nov 8 00:09:25.623554 sshd[6314]: Accepted publickey for core from 10.200.16.10 port 58474 ssh2: RSA SHA256:zvC4izVp6tNI33lG/DNWDshKRYTAPzAA5sOYQjrKueA
Nov 8 00:09:25.625088 sshd[6314]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 8 00:09:25.632553 systemd-logind[1678]: New session 26 of user core.
Nov 8 00:09:25.639948 systemd[1]: Started session-26.scope - Session 26 of User core.
Nov 8 00:09:25.812821 kubelet[3207]: E1108 00:09:25.811760 3207 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-tffst" podUID="b8e7681d-343c-40ff-9257-cd6bf2941900"
Nov 8 00:09:26.159145 sshd[6314]: pam_unix(sshd:session): session closed for user core
Nov 8 00:09:26.163055 systemd[1]: sshd@23-10.200.20.44:22-10.200.16.10:58474.service: Deactivated successfully.
Nov 8 00:09:26.166017 systemd[1]: session-26.scope: Deactivated successfully.
Nov 8 00:09:26.167214 systemd-logind[1678]: Session 26 logged out. Waiting for processes to exit.
Nov 8 00:09:26.168244 systemd-logind[1678]: Removed session 26.
Nov 8 00:09:26.253999 systemd[1]: Started sshd@24-10.200.20.44:22-10.200.16.10:58478.service - OpenSSH per-connection server daemon (10.200.16.10:58478).
Nov 8 00:09:26.738569 sshd[6325]: Accepted publickey for core from 10.200.16.10 port 58478 ssh2: RSA SHA256:zvC4izVp6tNI33lG/DNWDshKRYTAPzAA5sOYQjrKueA
Nov 8 00:09:26.740595 sshd[6325]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 8 00:09:26.746121 systemd-logind[1678]: New session 27 of user core.
Nov 8 00:09:26.750941 systemd[1]: Started session-27.scope - Session 27 of User core.
Nov 8 00:09:27.854072 sshd[6325]: pam_unix(sshd:session): session closed for user core
Nov 8 00:09:27.860359 systemd[1]: sshd@24-10.200.20.44:22-10.200.16.10:58478.service: Deactivated successfully.
Nov 8 00:09:27.863700 systemd[1]: session-27.scope: Deactivated successfully.
Nov 8 00:09:27.864534 systemd-logind[1678]: Session 27 logged out. Waiting for processes to exit.
Nov 8 00:09:27.865681 systemd-logind[1678]: Removed session 27.
Nov 8 00:09:27.954014 systemd[1]: Started sshd@25-10.200.20.44:22-10.200.16.10:58480.service - OpenSSH per-connection server daemon (10.200.16.10:58480).
Nov 8 00:09:28.451782 sshd[6344]: Accepted publickey for core from 10.200.16.10 port 58480 ssh2: RSA SHA256:zvC4izVp6tNI33lG/DNWDshKRYTAPzAA5sOYQjrKueA
Nov 8 00:09:28.793230 sshd[6344]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 8 00:09:28.798461 systemd-logind[1678]: New session 28 of user core.
Nov 8 00:09:28.803922 systemd[1]: Started session-28.scope - Session 28 of User core.
Nov 8 00:09:29.305883 sshd[6344]: pam_unix(sshd:session): session closed for user core
Nov 8 00:09:29.310029 systemd[1]: sshd@25-10.200.20.44:22-10.200.16.10:58480.service: Deactivated successfully.
Nov 8 00:09:29.312787 systemd[1]: session-28.scope: Deactivated successfully.
Nov 8 00:09:29.315294 systemd-logind[1678]: Session 28 logged out. Waiting for processes to exit.
Nov 8 00:09:29.317169 systemd-logind[1678]: Removed session 28.
Nov 8 00:09:29.389546 systemd[1]: Started sshd@26-10.200.20.44:22-10.200.16.10:58488.service - OpenSSH per-connection server daemon (10.200.16.10:58488).
Nov 8 00:09:29.851341 sshd[6363]: Accepted publickey for core from 10.200.16.10 port 58488 ssh2: RSA SHA256:zvC4izVp6tNI33lG/DNWDshKRYTAPzAA5sOYQjrKueA
Nov 8 00:09:29.852727 sshd[6363]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 8 00:09:29.861876 systemd-logind[1678]: New session 29 of user core.
Nov 8 00:09:29.866940 systemd[1]: Started session-29.scope - Session 29 of User core.
Nov 8 00:09:30.290250 sshd[6363]: pam_unix(sshd:session): session closed for user core
Nov 8 00:09:30.295358 systemd-logind[1678]: Session 29 logged out. Waiting for processes to exit.
Nov 8 00:09:30.296338 systemd[1]: sshd@26-10.200.20.44:22-10.200.16.10:58488.service: Deactivated successfully.
Nov 8 00:09:30.300489 systemd[1]: session-29.scope: Deactivated successfully.
Nov 8 00:09:30.302627 systemd-logind[1678]: Removed session 29.
Nov 8 00:09:30.809385 kubelet[3207]: E1108 00:09:30.809060 3207 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-55f5cd74bd-vrdmh" podUID="ffd8ec67-9184-4d7a-a1fa-e51a9fa4d9cf"
Nov 8 00:09:32.809706 kubelet[3207]: E1108 00:09:32.809195 3207 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-59cf6d9fcc-xrwqb" podUID="b965bf6a-7220-4fbc-b608-85c677cf8e39"
Nov 8 00:09:32.809706 kubelet[3207]: E1108 00:09:32.809565 3207 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-59cf6d9fcc-nmv8r" podUID="86741758-ba30-4cec-a95c-6af79e2546fe"
Nov 8 00:09:32.809706 kubelet[3207]: E1108 00:09:32.809629 3207 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-69dnl" podUID="a62546c2-2f66-4002-b0c3-d54109c52a13"
Nov 8 00:09:34.810978 kubelet[3207]: E1108 00:09:34.810798 3207 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6f59845cf4-td2zr" podUID="b8a248aa-7909-4555-958d-ace4846b2c48"
Nov 8 00:09:35.379050 systemd[1]: Started sshd@27-10.200.20.44:22-10.200.16.10:49132.service - OpenSSH per-connection server daemon (10.200.16.10:49132).
Nov 8 00:09:35.827105 sshd[6376]: Accepted publickey for core from 10.200.16.10 port 49132 ssh2: RSA SHA256:zvC4izVp6tNI33lG/DNWDshKRYTAPzAA5sOYQjrKueA
Nov 8 00:09:35.827691 sshd[6376]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 8 00:09:35.831800 systemd-logind[1678]: New session 30 of user core.
Nov 8 00:09:35.836924 systemd[1]: Started session-30.scope - Session 30 of User core.
Nov 8 00:09:36.245505 sshd[6376]: pam_unix(sshd:session): session closed for user core
Nov 8 00:09:36.249281 systemd-logind[1678]: Session 30 logged out. Waiting for processes to exit.
Nov 8 00:09:36.251557 systemd[1]: sshd@27-10.200.20.44:22-10.200.16.10:49132.service: Deactivated successfully.
Nov 8 00:09:36.255081 systemd[1]: session-30.scope: Deactivated successfully.
Nov 8 00:09:36.257155 systemd-logind[1678]: Removed session 30.
Nov 8 00:09:37.809843 kubelet[3207]: E1108 00:09:37.809797 3207 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-tffst" podUID="b8e7681d-343c-40ff-9257-cd6bf2941900"
Nov 8 00:09:41.330241 systemd[1]: Started sshd@28-10.200.20.44:22-10.200.16.10:43504.service - OpenSSH per-connection server daemon (10.200.16.10:43504).
Nov 8 00:09:41.781680 sshd[6389]: Accepted publickey for core from 10.200.16.10 port 43504 ssh2: RSA SHA256:zvC4izVp6tNI33lG/DNWDshKRYTAPzAA5sOYQjrKueA
Nov 8 00:09:41.783138 sshd[6389]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 8 00:09:41.786938 systemd-logind[1678]: New session 31 of user core.
Nov 8 00:09:41.790917 systemd[1]: Started session-31.scope - Session 31 of User core.
Nov 8 00:09:41.807863 kubelet[3207]: E1108 00:09:41.807791 3207 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-55f5cd74bd-vrdmh" podUID="ffd8ec67-9184-4d7a-a1fa-e51a9fa4d9cf"
Nov 8 00:09:42.179941 sshd[6389]: pam_unix(sshd:session): session closed for user core
Nov 8 00:09:42.183596 systemd[1]: sshd@28-10.200.20.44:22-10.200.16.10:43504.service: Deactivated successfully.
Nov 8 00:09:42.186356 systemd[1]: session-31.scope: Deactivated successfully.
Nov 8 00:09:42.189281 systemd-logind[1678]: Session 31 logged out. Waiting for processes to exit.
Nov 8 00:09:42.190253 systemd-logind[1678]: Removed session 31.
Nov 8 00:09:43.808446 kubelet[3207]: E1108 00:09:43.808402 3207 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-59cf6d9fcc-xrwqb" podUID="b965bf6a-7220-4fbc-b608-85c677cf8e39"
Nov 8 00:09:44.810311 kubelet[3207]: E1108 00:09:44.810016 3207 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-59cf6d9fcc-nmv8r" podUID="86741758-ba30-4cec-a95c-6af79e2546fe"
Nov 8 00:09:45.809917 kubelet[3207]: E1108 00:09:45.809867 3207 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-69dnl" podUID="a62546c2-2f66-4002-b0c3-d54109c52a13"
Nov 8 00:09:47.261623 systemd[1]: Started sshd@29-10.200.20.44:22-10.200.16.10:43508.service - OpenSSH per-connection server daemon (10.200.16.10:43508).
Nov 8 00:09:47.715868 sshd[6426]: Accepted publickey for core from 10.200.16.10 port 43508 ssh2: RSA SHA256:zvC4izVp6tNI33lG/DNWDshKRYTAPzAA5sOYQjrKueA
Nov 8 00:09:47.716814 sshd[6426]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 8 00:09:47.722257 systemd-logind[1678]: New session 32 of user core.
Nov 8 00:09:47.727666 systemd[1]: Started session-32.scope - Session 32 of User core.
Nov 8 00:09:47.809443 kubelet[3207]: E1108 00:09:47.809257 3207 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6f59845cf4-td2zr" podUID="b8a248aa-7909-4555-958d-ace4846b2c48"
Nov 8 00:09:48.110615 sshd[6426]: pam_unix(sshd:session): session closed for user core
Nov 8 00:09:48.114552 systemd[1]: sshd@29-10.200.20.44:22-10.200.16.10:43508.service: Deactivated successfully.
Nov 8 00:09:48.116257 systemd[1]: session-32.scope: Deactivated successfully.
Nov 8 00:09:48.117144 systemd-logind[1678]: Session 32 logged out. Waiting for processes to exit.
Nov 8 00:09:48.118203 systemd-logind[1678]: Removed session 32.
Nov 8 00:09:52.808131 kubelet[3207]: E1108 00:09:52.808081 3207 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-tffst" podUID="b8e7681d-343c-40ff-9257-cd6bf2941900"
Nov 8 00:09:53.210192 systemd[1]: Started sshd@30-10.200.20.44:22-10.200.16.10:36424.service - OpenSSH per-connection server daemon (10.200.16.10:36424).
Nov 8 00:09:53.699674 sshd[6441]: Accepted publickey for core from 10.200.16.10 port 36424 ssh2: RSA SHA256:zvC4izVp6tNI33lG/DNWDshKRYTAPzAA5sOYQjrKueA
Nov 8 00:09:53.701068 sshd[6441]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 8 00:09:53.706342 systemd-logind[1678]: New session 33 of user core.
Nov 8 00:09:53.710935 systemd[1]: Started session-33.scope - Session 33 of User core.
Nov 8 00:09:53.809609 kubelet[3207]: E1108 00:09:53.809400 3207 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-55f5cd74bd-vrdmh" podUID="ffd8ec67-9184-4d7a-a1fa-e51a9fa4d9cf"
Nov 8 00:09:54.142458 sshd[6441]: pam_unix(sshd:session): session closed for user core
Nov 8 00:09:54.148286 systemd[1]: sshd@30-10.200.20.44:22-10.200.16.10:36424.service: Deactivated successfully.
Nov 8 00:09:54.153990 systemd[1]: session-33.scope: Deactivated successfully.
Nov 8 00:09:54.155492 systemd-logind[1678]: Session 33 logged out. Waiting for processes to exit.
Nov 8 00:09:54.159948 systemd-logind[1678]: Removed session 33.
Nov 8 00:09:57.808344 kubelet[3207]: E1108 00:09:57.808274 3207 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-59cf6d9fcc-xrwqb" podUID="b965bf6a-7220-4fbc-b608-85c677cf8e39"
Nov 8 00:09:57.809930 kubelet[3207]: E1108 00:09:57.809609 3207 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-69dnl" podUID="a62546c2-2f66-4002-b0c3-d54109c52a13"
Nov 8 00:09:59.239017 systemd[1]: Started sshd@31-10.200.20.44:22-10.200.16.10:36428.service - OpenSSH per-connection server daemon (10.200.16.10:36428).
Nov 8 00:09:59.724258 sshd[6454]: Accepted publickey for core from 10.200.16.10 port 36428 ssh2: RSA SHA256:zvC4izVp6tNI33lG/DNWDshKRYTAPzAA5sOYQjrKueA
Nov 8 00:09:59.725969 sshd[6454]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 8 00:09:59.733262 systemd-logind[1678]: New session 34 of user core.
Nov 8 00:09:59.738311 systemd[1]: Started session-34.scope - Session 34 of User core.
Nov 8 00:09:59.814003 kubelet[3207]: E1108 00:09:59.813955 3207 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-59cf6d9fcc-nmv8r" podUID="86741758-ba30-4cec-a95c-6af79e2546fe"
Nov 8 00:09:59.814858 kubelet[3207]: E1108 00:09:59.814654 3207 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6f59845cf4-td2zr" podUID="b8a248aa-7909-4555-958d-ace4846b2c48"
Nov 8 00:10:00.168700 sshd[6454]: pam_unix(sshd:session): session closed for user core
Nov 8 00:10:00.173908 systemd[1]: sshd@31-10.200.20.44:22-10.200.16.10:36428.service: Deactivated successfully.
Nov 8 00:10:00.178653 systemd[1]: session-34.scope: Deactivated successfully.
Nov 8 00:10:00.183044 systemd-logind[1678]: Session 34 logged out. Waiting for processes to exit.
Nov 8 00:10:00.184935 systemd-logind[1678]: Removed session 34.
Nov 8 00:10:04.808524 kubelet[3207]: E1108 00:10:04.808140 3207 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-55f5cd74bd-vrdmh" podUID="ffd8ec67-9184-4d7a-a1fa-e51a9fa4d9cf"
Nov 8 00:10:05.262974 systemd[1]: Started sshd@32-10.200.20.44:22-10.200.16.10:55378.service - OpenSSH per-connection server daemon (10.200.16.10:55378).
Nov 8 00:10:05.750136 sshd[6466]: Accepted publickey for core from 10.200.16.10 port 55378 ssh2: RSA SHA256:zvC4izVp6tNI33lG/DNWDshKRYTAPzAA5sOYQjrKueA
Nov 8 00:10:05.751734 sshd[6466]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 8 00:10:05.758994 systemd-logind[1678]: New session 35 of user core.
Nov 8 00:10:05.763320 systemd[1]: Started session-35.scope - Session 35 of User core.
Nov 8 00:10:06.187889 sshd[6466]: pam_unix(sshd:session): session closed for user core
Nov 8 00:10:06.192332 systemd[1]: sshd@32-10.200.20.44:22-10.200.16.10:55378.service: Deactivated successfully.
Nov 8 00:10:06.194446 systemd[1]: session-35.scope: Deactivated successfully.
Nov 8 00:10:06.195290 systemd-logind[1678]: Session 35 logged out. Waiting for processes to exit.
Nov 8 00:10:06.196615 systemd-logind[1678]: Removed session 35.
Nov 8 00:10:06.808661 kubelet[3207]: E1108 00:10:06.808601 3207 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-tffst" podUID="b8e7681d-343c-40ff-9257-cd6bf2941900"
Nov 8 00:10:09.809251 kubelet[3207]: E1108 00:10:09.809001 3207 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-59cf6d9fcc-xrwqb" podUID="b965bf6a-7220-4fbc-b608-85c677cf8e39"
Nov 8 00:10:11.285041 systemd[1]: Started sshd@33-10.200.20.44:22-10.200.16.10:51468.service - OpenSSH per-connection server daemon (10.200.16.10:51468).
Nov 8 00:10:11.771917 sshd[6479]: Accepted publickey for core from 10.200.16.10 port 51468 ssh2: RSA SHA256:zvC4izVp6tNI33lG/DNWDshKRYTAPzAA5sOYQjrKueA
Nov 8 00:10:11.773905 sshd[6479]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 8 00:10:11.778201 systemd-logind[1678]: New session 36 of user core.
Nov 8 00:10:11.784073 systemd[1]: Started session-36.scope - Session 36 of User core.
Nov 8 00:10:11.811378 kubelet[3207]: E1108 00:10:11.811299 3207 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6f59845cf4-td2zr" podUID="b8a248aa-7909-4555-958d-ace4846b2c48"
Nov 8 00:10:12.197333 sshd[6479]: pam_unix(sshd:session): session closed for user core
Nov 8 00:10:12.203938 systemd[1]: sshd@33-10.200.20.44:22-10.200.16.10:51468.service: Deactivated successfully.
Nov 8 00:10:12.206607 systemd[1]: session-36.scope: Deactivated successfully.
Nov 8 00:10:12.212013 systemd-logind[1678]: Session 36 logged out. Waiting for processes to exit.
Nov 8 00:10:12.212999 systemd-logind[1678]: Removed session 36.
Nov 8 00:10:12.807179 kubelet[3207]: E1108 00:10:12.807141 3207 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-69dnl" podUID="a62546c2-2f66-4002-b0c3-d54109c52a13"