Nov 23 22:48:12.773852 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Nov 23 22:48:12.773874 kernel: Linux version 6.12.58-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT Sun Nov 23 20:49:09 -00 2025
Nov 23 22:48:12.773884 kernel: KASLR enabled
Nov 23 22:48:12.773889 kernel: efi: EFI v2.7 by EDK II
Nov 23 22:48:12.773895 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdb832018 ACPI 2.0=0xdbfd0018 RNG=0xdbfd0a18 MEMRESERVE=0xdb838218
Nov 23 22:48:12.773900 kernel: random: crng init done
Nov 23 22:48:12.773907 kernel: secureboot: Secure boot disabled
Nov 23 22:48:12.773913 kernel: ACPI: Early table checksum verification disabled
Nov 23 22:48:12.773919 kernel: ACPI: RSDP 0x00000000DBFD0018 000024 (v02 BOCHS )
Nov 23 22:48:12.773926 kernel: ACPI: XSDT 0x00000000DBFD0F18 000064 (v01 BOCHS BXPC 00000001 01000013)
Nov 23 22:48:12.773932 kernel: ACPI: FACP 0x00000000DBFD0B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Nov 23 22:48:12.773938 kernel: ACPI: DSDT 0x00000000DBF0E018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Nov 23 22:48:12.773943 kernel: ACPI: APIC 0x00000000DBFD0C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Nov 23 22:48:12.773949 kernel: ACPI: PPTT 0x00000000DBFD0098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Nov 23 22:48:12.773956 kernel: ACPI: GTDT 0x00000000DBFD0818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Nov 23 22:48:12.773964 kernel: ACPI: MCFG 0x00000000DBFD0A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 23 22:48:12.773971 kernel: ACPI: SPCR 0x00000000DBFD0918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Nov 23 22:48:12.773977 kernel: ACPI: DBG2 0x00000000DBFD0998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Nov 23 22:48:12.773983 kernel: ACPI: IORT 0x00000000DBFD0198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Nov 23 22:48:12.773991 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Nov 23 22:48:12.774000 kernel: ACPI: Use ACPI SPCR as default console: No
Nov 23 22:48:12.774010 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Nov 23 22:48:12.774016 kernel: NODE_DATA(0) allocated [mem 0xdc965a00-0xdc96cfff]
Nov 23 22:48:12.774022 kernel: Zone ranges:
Nov 23 22:48:12.774029 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Nov 23 22:48:12.774036 kernel: DMA32 empty
Nov 23 22:48:12.774042 kernel: Normal empty
Nov 23 22:48:12.774048 kernel: Device empty
Nov 23 22:48:12.774054 kernel: Movable zone start for each node
Nov 23 22:48:12.774060 kernel: Early memory node ranges
Nov 23 22:48:12.774066 kernel: node 0: [mem 0x0000000040000000-0x00000000db81ffff]
Nov 23 22:48:12.774072 kernel: node 0: [mem 0x00000000db820000-0x00000000db82ffff]
Nov 23 22:48:12.774078 kernel: node 0: [mem 0x00000000db830000-0x00000000dc09ffff]
Nov 23 22:48:12.774084 kernel: node 0: [mem 0x00000000dc0a0000-0x00000000dc2dffff]
Nov 23 22:48:12.774091 kernel: node 0: [mem 0x00000000dc2e0000-0x00000000dc36ffff]
Nov 23 22:48:12.774097 kernel: node 0: [mem 0x00000000dc370000-0x00000000dc45ffff]
Nov 23 22:48:12.774103 kernel: node 0: [mem 0x00000000dc460000-0x00000000dc52ffff]
Nov 23 22:48:12.774110 kernel: node 0: [mem 0x00000000dc530000-0x00000000dc5cffff]
Nov 23 22:48:12.774116 kernel: node 0: [mem 0x00000000dc5d0000-0x00000000dce1ffff]
Nov 23 22:48:12.774122 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Nov 23 22:48:12.774131 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Nov 23 22:48:12.774138 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Nov 23 22:48:12.774144 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Nov 23 22:48:12.774153 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Nov 23 22:48:12.774170 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Nov 23 22:48:12.774177 kernel: cma: Reserved 16 MiB at 0x00000000d8000000 on node -1
Nov 23 22:48:12.774183 kernel: psci: probing for conduit method from ACPI.
Nov 23 22:48:12.774190 kernel: psci: PSCIv1.1 detected in firmware.
Nov 23 22:48:12.774196 kernel: psci: Using standard PSCI v0.2 function IDs
Nov 23 22:48:12.774203 kernel: psci: Trusted OS migration not required
Nov 23 22:48:12.774209 kernel: psci: SMC Calling Convention v1.1
Nov 23 22:48:12.774216 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Nov 23 22:48:12.774222 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168
Nov 23 22:48:12.774232 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096
Nov 23 22:48:12.774238 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Nov 23 22:48:12.774245 kernel: Detected PIPT I-cache on CPU0
Nov 23 22:48:12.774252 kernel: CPU features: detected: GIC system register CPU interface
Nov 23 22:48:12.774258 kernel: CPU features: detected: Spectre-v4
Nov 23 22:48:12.774265 kernel: CPU features: detected: Spectre-BHB
Nov 23 22:48:12.774271 kernel: CPU features: kernel page table isolation forced ON by KASLR
Nov 23 22:48:12.774277 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Nov 23 22:48:12.774284 kernel: CPU features: detected: ARM erratum 1418040
Nov 23 22:48:12.774290 kernel: CPU features: detected: SSBS not fully self-synchronizing
Nov 23 22:48:12.774297 kernel: alternatives: applying boot alternatives
Nov 23 22:48:12.774305 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=c01798725f53da1d62d166036caa3c72754cb158fe469d9d9e3df0d6cadc7a34
Nov 23 22:48:12.774313 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Nov 23 22:48:12.774320 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Nov 23 22:48:12.774326 kernel: Fallback order for Node 0: 0
Nov 23 22:48:12.774333 kernel: Built 1 zonelists, mobility grouping on. Total pages: 643072
Nov 23 22:48:12.774339 kernel: Policy zone: DMA
Nov 23 22:48:12.774345 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Nov 23 22:48:12.774352 kernel: software IO TLB: SWIOTLB bounce buffer size adjusted to 2MB
Nov 23 22:48:12.774358 kernel: software IO TLB: area num 4.
Nov 23 22:48:12.774365 kernel: software IO TLB: SWIOTLB bounce buffer size roundup to 4MB
Nov 23 22:48:12.774377 kernel: software IO TLB: mapped [mem 0x00000000d7c00000-0x00000000d8000000] (4MB)
Nov 23 22:48:12.774384 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Nov 23 22:48:12.774392 kernel: rcu: Preemptible hierarchical RCU implementation.
Nov 23 22:48:12.774400 kernel: rcu: RCU event tracing is enabled.
Nov 23 22:48:12.774406 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Nov 23 22:48:12.774413 kernel: Trampoline variant of Tasks RCU enabled.
Nov 23 22:48:12.774419 kernel: Tracing variant of Tasks RCU enabled.
Nov 23 22:48:12.774426 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Nov 23 22:48:12.774432 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Nov 23 22:48:12.774439 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Nov 23 22:48:12.774446 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Nov 23 22:48:12.774452 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Nov 23 22:48:12.774458 kernel: GICv3: 256 SPIs implemented
Nov 23 22:48:12.774466 kernel: GICv3: 0 Extended SPIs implemented
Nov 23 22:48:12.774472 kernel: Root IRQ handler: gic_handle_irq
Nov 23 22:48:12.774479 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Nov 23 22:48:12.774486 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0
Nov 23 22:48:12.774492 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Nov 23 22:48:12.774499 kernel: ITS [mem 0x08080000-0x0809ffff]
Nov 23 22:48:12.774506 kernel: ITS@0x0000000008080000: allocated 8192 Devices @40110000 (indirect, esz 8, psz 64K, shr 1)
Nov 23 22:48:12.774512 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @40120000 (flat, esz 8, psz 64K, shr 1)
Nov 23 22:48:12.774519 kernel: GICv3: using LPI property table @0x0000000040130000
Nov 23 22:48:12.774526 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040140000
Nov 23 22:48:12.774533 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Nov 23 22:48:12.774539 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Nov 23 22:48:12.774547 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Nov 23 22:48:12.774554 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Nov 23 22:48:12.774560 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Nov 23 22:48:12.774567 kernel: arm-pv: using stolen time PV
Nov 23 22:48:12.774574 kernel: Console: colour dummy device 80x25
Nov 23 22:48:12.774581 kernel: ACPI: Core revision 20240827
Nov 23 22:48:12.774588 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Nov 23 22:48:12.774594 kernel: pid_max: default: 32768 minimum: 301
Nov 23 22:48:12.774601 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Nov 23 22:48:12.774607 kernel: landlock: Up and running.
Nov 23 22:48:12.774618 kernel: SELinux: Initializing.
Nov 23 22:48:12.774625 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Nov 23 22:48:12.774632 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Nov 23 22:48:12.774639 kernel: rcu: Hierarchical SRCU implementation.
Nov 23 22:48:12.774646 kernel: rcu: Max phase no-delay instances is 400.
Nov 23 22:48:12.774653 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Nov 23 22:48:12.774660 kernel: Remapping and enabling EFI services.
Nov 23 22:48:12.774666 kernel: smp: Bringing up secondary CPUs ...
Nov 23 22:48:12.774683 kernel: Detected PIPT I-cache on CPU1
Nov 23 22:48:12.774697 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Nov 23 22:48:12.774704 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040150000
Nov 23 22:48:12.774711 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Nov 23 22:48:12.774719 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Nov 23 22:48:12.774726 kernel: Detected PIPT I-cache on CPU2
Nov 23 22:48:12.774733 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Nov 23 22:48:12.774740 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040160000
Nov 23 22:48:12.774747 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Nov 23 22:48:12.774756 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Nov 23 22:48:12.774762 kernel: Detected PIPT I-cache on CPU3
Nov 23 22:48:12.774769 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Nov 23 22:48:12.774776 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040170000
Nov 23 22:48:12.774783 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Nov 23 22:48:12.774790 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Nov 23 22:48:12.774796 kernel: smp: Brought up 1 node, 4 CPUs
Nov 23 22:48:12.774803 kernel: SMP: Total of 4 processors activated.
Nov 23 22:48:12.774810 kernel: CPU: All CPU(s) started at EL1
Nov 23 22:48:12.774819 kernel: CPU features: detected: 32-bit EL0 Support
Nov 23 22:48:12.774826 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Nov 23 22:48:12.774833 kernel: CPU features: detected: Common not Private translations
Nov 23 22:48:12.774839 kernel: CPU features: detected: CRC32 instructions
Nov 23 22:48:12.774846 kernel: CPU features: detected: Enhanced Virtualization Traps
Nov 23 22:48:12.774853 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Nov 23 22:48:12.774860 kernel: CPU features: detected: LSE atomic instructions
Nov 23 22:48:12.774867 kernel: CPU features: detected: Privileged Access Never
Nov 23 22:48:12.774874 kernel: CPU features: detected: RAS Extension Support
Nov 23 22:48:12.774882 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Nov 23 22:48:12.774889 kernel: alternatives: applying system-wide alternatives
Nov 23 22:48:12.774895 kernel: CPU features: detected: Hardware dirty bit management on CPU0-3
Nov 23 22:48:12.774903 kernel: Memory: 2423776K/2572288K available (11200K kernel code, 2456K rwdata, 9084K rodata, 39552K init, 1038K bss, 126176K reserved, 16384K cma-reserved)
Nov 23 22:48:12.774910 kernel: devtmpfs: initialized
Nov 23 22:48:12.774917 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Nov 23 22:48:12.774924 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Nov 23 22:48:12.774931 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Nov 23 22:48:12.774938 kernel: 0 pages in range for non-PLT usage
Nov 23 22:48:12.774947 kernel: 508400 pages in range for PLT usage
Nov 23 22:48:12.774954 kernel: pinctrl core: initialized pinctrl subsystem
Nov 23 22:48:12.774961 kernel: SMBIOS 3.0.0 present.
Nov 23 22:48:12.774968 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022
Nov 23 22:48:12.774975 kernel: DMI: Memory slots populated: 1/1
Nov 23 22:48:12.774982 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Nov 23 22:48:12.774989 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Nov 23 22:48:12.774996 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Nov 23 22:48:12.775003 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Nov 23 22:48:12.775011 kernel: audit: initializing netlink subsys (disabled)
Nov 23 22:48:12.775023 kernel: audit: type=2000 audit(0.020:1): state=initialized audit_enabled=0 res=1
Nov 23 22:48:12.775030 kernel: thermal_sys: Registered thermal governor 'step_wise'
Nov 23 22:48:12.775036 kernel: cpuidle: using governor menu
Nov 23 22:48:12.775043 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Nov 23 22:48:12.775050 kernel: ASID allocator initialised with 32768 entries
Nov 23 22:48:12.775057 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Nov 23 22:48:12.775064 kernel: Serial: AMBA PL011 UART driver
Nov 23 22:48:12.775070 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Nov 23 22:48:12.775079 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Nov 23 22:48:12.775086 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Nov 23 22:48:12.775092 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Nov 23 22:48:12.775099 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Nov 23 22:48:12.775106 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Nov 23 22:48:12.775113 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Nov 23 22:48:12.775119 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Nov 23 22:48:12.775126 kernel: ACPI: Added _OSI(Module Device)
Nov 23 22:48:12.775133 kernel: ACPI: Added _OSI(Processor Device)
Nov 23 22:48:12.775141 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Nov 23 22:48:12.775148 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Nov 23 22:48:12.775159 kernel: ACPI: Interpreter enabled
Nov 23 22:48:12.775166 kernel: ACPI: Using GIC for interrupt routing
Nov 23 22:48:12.775173 kernel: ACPI: MCFG table detected, 1 entries
Nov 23 22:48:12.775180 kernel: ACPI: CPU0 has been hot-added
Nov 23 22:48:12.775187 kernel: ACPI: CPU1 has been hot-added
Nov 23 22:48:12.775193 kernel: ACPI: CPU2 has been hot-added
Nov 23 22:48:12.775200 kernel: ACPI: CPU3 has been hot-added
Nov 23 22:48:12.775207 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Nov 23 22:48:12.775216 kernel: printk: legacy console [ttyAMA0] enabled
Nov 23 22:48:12.775223 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Nov 23 22:48:12.775361 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Nov 23 22:48:12.775425 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Nov 23 22:48:12.775483 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Nov 23 22:48:12.775539 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Nov 23 22:48:12.775594 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Nov 23 22:48:12.775605 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Nov 23 22:48:12.775612 kernel: PCI host bridge to bus 0000:00
Nov 23 22:48:12.775688 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Nov 23 22:48:12.775765 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Nov 23 22:48:12.775820 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Nov 23 22:48:12.775872 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Nov 23 22:48:12.775949 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 conventional PCI endpoint
Nov 23 22:48:12.776021 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Nov 23 22:48:12.776080 kernel: pci 0000:00:01.0: BAR 0 [io 0x0000-0x001f]
Nov 23 22:48:12.776138 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]
Nov 23 22:48:12.776207 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]
Nov 23 22:48:12.776264 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]: assigned
Nov 23 22:48:12.776322 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]: assigned
Nov 23 22:48:12.776382 kernel: pci 0000:00:01.0: BAR 0 [io 0x1000-0x101f]: assigned
Nov 23 22:48:12.776433 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Nov 23 22:48:12.776484 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Nov 23 22:48:12.776537 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Nov 23 22:48:12.776546 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Nov 23 22:48:12.776553 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Nov 23 22:48:12.776560 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Nov 23 22:48:12.776567 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Nov 23 22:48:12.776575 kernel: iommu: Default domain type: Translated
Nov 23 22:48:12.776583 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Nov 23 22:48:12.776590 kernel: efivars: Registered efivars operations
Nov 23 22:48:12.776597 kernel: vgaarb: loaded
Nov 23 22:48:12.776603 kernel: clocksource: Switched to clocksource arch_sys_counter
Nov 23 22:48:12.776610 kernel: VFS: Disk quotas dquot_6.6.0
Nov 23 22:48:12.776617 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Nov 23 22:48:12.776624 kernel: pnp: PnP ACPI init
Nov 23 22:48:12.776743 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Nov 23 22:48:12.776757 kernel: pnp: PnP ACPI: found 1 devices
Nov 23 22:48:12.776764 kernel: NET: Registered PF_INET protocol family
Nov 23 22:48:12.776772 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Nov 23 22:48:12.776779 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Nov 23 22:48:12.776786 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Nov 23 22:48:12.776793 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Nov 23 22:48:12.776800 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Nov 23 22:48:12.776807 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Nov 23 22:48:12.776816 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Nov 23 22:48:12.776823 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Nov 23 22:48:12.776830 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Nov 23 22:48:12.776837 kernel: PCI: CLS 0 bytes, default 64
Nov 23 22:48:12.776844 kernel: kvm [1]: HYP mode not available
Nov 23 22:48:12.776851 kernel: Initialise system trusted keyrings
Nov 23 22:48:12.776858 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Nov 23 22:48:12.776865 kernel: Key type asymmetric registered
Nov 23 22:48:12.776872 kernel: Asymmetric key parser 'x509' registered
Nov 23 22:48:12.776881 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Nov 23 22:48:12.776888 kernel: io scheduler mq-deadline registered
Nov 23 22:48:12.776895 kernel: io scheduler kyber registered
Nov 23 22:48:12.776902 kernel: io scheduler bfq registered
Nov 23 22:48:12.776910 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Nov 23 22:48:12.776917 kernel: ACPI: button: Power Button [PWRB]
Nov 23 22:48:12.776925 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Nov 23 22:48:12.776986 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Nov 23 22:48:12.776995 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Nov 23 22:48:12.777004 kernel: thunder_xcv, ver 1.0
Nov 23 22:48:12.777012 kernel: thunder_bgx, ver 1.0
Nov 23 22:48:12.777018 kernel: nicpf, ver 1.0
Nov 23 22:48:12.777025 kernel: nicvf, ver 1.0
Nov 23 22:48:12.777092 kernel: rtc-efi rtc-efi.0: registered as rtc0
Nov 23 22:48:12.777147 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-11-23T22:48:12 UTC (1763938092)
Nov 23 22:48:12.777165 kernel: hid: raw HID events driver (C) Jiri Kosina
Nov 23 22:48:12.777172 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available
Nov 23 22:48:12.777181 kernel: watchdog: NMI not fully supported
Nov 23 22:48:12.777188 kernel: watchdog: Hard watchdog permanently disabled
Nov 23 22:48:12.777196 kernel: NET: Registered PF_INET6 protocol family
Nov 23 22:48:12.777203 kernel: Segment Routing with IPv6
Nov 23 22:48:12.777210 kernel: In-situ OAM (IOAM) with IPv6
Nov 23 22:48:12.777217 kernel: NET: Registered PF_PACKET protocol family
Nov 23 22:48:12.777223 kernel: Key type dns_resolver registered
Nov 23 22:48:12.777230 kernel: registered taskstats version 1
Nov 23 22:48:12.777237 kernel: Loading compiled-in X.509 certificates
Nov 23 22:48:12.777244 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.58-flatcar: 98b0841f2908e51633cd38699ad12796cadb7bd1'
Nov 23 22:48:12.777253 kernel: Demotion targets for Node 0: null
Nov 23 22:48:12.777260 kernel: Key type .fscrypt registered
Nov 23 22:48:12.777267 kernel: Key type fscrypt-provisioning registered
Nov 23 22:48:12.777274 kernel: ima: No TPM chip found, activating TPM-bypass!
Nov 23 22:48:12.777281 kernel: ima: Allocated hash algorithm: sha1
Nov 23 22:48:12.777287 kernel: ima: No architecture policies found
Nov 23 22:48:12.777294 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Nov 23 22:48:12.777301 kernel: clk: Disabling unused clocks
Nov 23 22:48:12.777308 kernel: PM: genpd: Disabling unused power domains
Nov 23 22:48:12.777317 kernel: Warning: unable to open an initial console.
Nov 23 22:48:12.777324 kernel: Freeing unused kernel memory: 39552K
Nov 23 22:48:12.777331 kernel: Run /init as init process
Nov 23 22:48:12.777338 kernel: with arguments:
Nov 23 22:48:12.777344 kernel: /init
Nov 23 22:48:12.777351 kernel: with environment:
Nov 23 22:48:12.777358 kernel: HOME=/
Nov 23 22:48:12.777365 kernel: TERM=linux
Nov 23 22:48:12.777372 systemd[1]: Successfully made /usr/ read-only.
Nov 23 22:48:12.777384 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Nov 23 22:48:12.777392 systemd[1]: Detected virtualization kvm.
Nov 23 22:48:12.777399 systemd[1]: Detected architecture arm64.
Nov 23 22:48:12.777406 systemd[1]: Running in initrd.
Nov 23 22:48:12.777414 systemd[1]: No hostname configured, using default hostname.
Nov 23 22:48:12.777421 systemd[1]: Hostname set to .
Nov 23 22:48:12.777429 systemd[1]: Initializing machine ID from VM UUID.
Nov 23 22:48:12.777437 systemd[1]: Queued start job for default target initrd.target.
Nov 23 22:48:12.777445 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 23 22:48:12.777453 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 23 22:48:12.777461 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Nov 23 22:48:12.777469 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Nov 23 22:48:12.777477 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Nov 23 22:48:12.777485 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Nov 23 22:48:12.777495 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Nov 23 22:48:12.777502 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Nov 23 22:48:12.777510 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 23 22:48:12.777517 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Nov 23 22:48:12.777525 systemd[1]: Reached target paths.target - Path Units.
Nov 23 22:48:12.777533 systemd[1]: Reached target slices.target - Slice Units.
Nov 23 22:48:12.777540 systemd[1]: Reached target swap.target - Swaps.
Nov 23 22:48:12.777548 systemd[1]: Reached target timers.target - Timer Units.
Nov 23 22:48:12.777557 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Nov 23 22:48:12.777564 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Nov 23 22:48:12.777572 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Nov 23 22:48:12.777580 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Nov 23 22:48:12.777588 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Nov 23 22:48:12.777595 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Nov 23 22:48:12.777603 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 23 22:48:12.777610 systemd[1]: Reached target sockets.target - Socket Units.
Nov 23 22:48:12.777618 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Nov 23 22:48:12.777627 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Nov 23 22:48:12.777635 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Nov 23 22:48:12.777643 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Nov 23 22:48:12.777651 systemd[1]: Starting systemd-fsck-usr.service...
Nov 23 22:48:12.777659 systemd[1]: Starting systemd-journald.service - Journal Service...
Nov 23 22:48:12.777667 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Nov 23 22:48:12.777684 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 23 22:48:12.777692 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Nov 23 22:48:12.777703 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 23 22:48:12.777711 systemd[1]: Finished systemd-fsck-usr.service.
Nov 23 22:48:12.777718 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Nov 23 22:48:12.777749 systemd-journald[245]: Collecting audit messages is disabled.
Nov 23 22:48:12.777770 systemd-journald[245]: Journal started
Nov 23 22:48:12.777789 systemd-journald[245]: Runtime Journal (/run/log/journal/e2a0cd5ed4aa4de6bf465107d000d7c4) is 6M, max 48.5M, 42.4M free.
Nov 23 22:48:12.785171 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Nov 23 22:48:12.785208 kernel: Bridge firewalling registered
Nov 23 22:48:12.785218 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 23 22:48:12.767086 systemd-modules-load[246]: Inserted module 'overlay'
Nov 23 22:48:12.787792 systemd[1]: Started systemd-journald.service - Journal Service.
Nov 23 22:48:12.782827 systemd-modules-load[246]: Inserted module 'br_netfilter'
Nov 23 22:48:12.790725 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Nov 23 22:48:12.792950 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Nov 23 22:48:12.796358 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 23 22:48:12.798282 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Nov 23 22:48:12.800351 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Nov 23 22:48:12.812777 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Nov 23 22:48:12.822566 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Nov 23 22:48:12.824138 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 23 22:48:12.826143 systemd-tmpfiles[275]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Nov 23 22:48:12.829516 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 23 22:48:12.833522 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Nov 23 22:48:12.834834 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 23 22:48:12.837634 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Nov 23 22:48:12.870351 dracut-cmdline[292]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=c01798725f53da1d62d166036caa3c72754cb158fe469d9d9e3df0d6cadc7a34
Nov 23 22:48:12.885423 systemd-resolved[291]: Positive Trust Anchors:
Nov 23 22:48:12.885441 systemd-resolved[291]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Nov 23 22:48:12.885474 systemd-resolved[291]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Nov 23 22:48:12.893029 systemd-resolved[291]: Defaulting to hostname 'linux'.
Nov 23 22:48:12.894188 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Nov 23 22:48:12.895422 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Nov 23 22:48:12.956779 kernel: SCSI subsystem initialized
Nov 23 22:48:12.961717 kernel: Loading iSCSI transport class v2.0-870.
Nov 23 22:48:12.970714 kernel: iscsi: registered transport (tcp)
Nov 23 22:48:12.983912 kernel: iscsi: registered transport (qla4xxx)
Nov 23 22:48:12.983967 kernel: QLogic iSCSI HBA Driver
Nov 23 22:48:13.001849 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Nov 23 22:48:13.022595 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Nov 23 22:48:13.025049 systemd[1]: Reached target network-pre.target - Preparation for Network.
Nov 23 22:48:13.074264 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Nov 23 22:48:13.076623 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Nov 23 22:48:13.137722 kernel: raid6: neonx8 gen() 14479 MB/s Nov 23 22:48:13.154710 kernel: raid6: neonx4 gen() 15465 MB/s Nov 23 22:48:13.171714 kernel: raid6: neonx2 gen() 10114 MB/s Nov 23 22:48:13.188704 kernel: raid6: neonx1 gen() 10122 MB/s Nov 23 22:48:13.205700 kernel: raid6: int64x8 gen() 6750 MB/s Nov 23 22:48:13.222715 kernel: raid6: int64x4 gen() 6483 MB/s Nov 23 22:48:13.239703 kernel: raid6: int64x2 gen() 5942 MB/s Nov 23 22:48:13.256770 kernel: raid6: int64x1 gen() 4343 MB/s Nov 23 22:48:13.256827 kernel: raid6: using algorithm neonx4 gen() 15465 MB/s Nov 23 22:48:13.274727 kernel: raid6: .... xor() 12164 MB/s, rmw enabled Nov 23 22:48:13.274789 kernel: raid6: using neon recovery algorithm Nov 23 22:48:13.280982 kernel: xor: measuring software checksum speed Nov 23 22:48:13.281058 kernel: 8regs : 21567 MB/sec Nov 23 22:48:13.281068 kernel: 32regs : 21271 MB/sec Nov 23 22:48:13.282130 kernel: arm64_neon : 25229 MB/sec Nov 23 22:48:13.282187 kernel: xor: using function: arm64_neon (25229 MB/sec) Nov 23 22:48:13.335730 kernel: Btrfs loaded, zoned=no, fsverity=no Nov 23 22:48:13.342762 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Nov 23 22:48:13.347147 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 23 22:48:13.376945 systemd-udevd[502]: Using default interface naming scheme 'v255'. Nov 23 22:48:13.381094 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 23 22:48:13.383645 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Nov 23 22:48:13.412409 dracut-pre-trigger[511]: rd.md=0: removing MD RAID activation Nov 23 22:48:13.442622 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Nov 23 22:48:13.445148 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Nov 23 22:48:13.508777 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. 
Nov 23 22:48:13.511915 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Nov 23 22:48:13.554911 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues Nov 23 22:48:13.555128 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Nov 23 22:48:13.561098 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Nov 23 22:48:13.561175 kernel: GPT:9289727 != 19775487 Nov 23 22:48:13.561186 kernel: GPT:Alternate GPT header not at the end of the disk. Nov 23 22:48:13.563090 kernel: GPT:9289727 != 19775487 Nov 23 22:48:13.563124 kernel: GPT: Use GNU Parted to correct GPT errors. Nov 23 22:48:13.563134 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Nov 23 22:48:13.570672 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 23 22:48:13.570801 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 23 22:48:13.581858 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Nov 23 22:48:13.584740 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 23 22:48:13.606058 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Nov 23 22:48:13.612597 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 23 22:48:13.619411 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Nov 23 22:48:13.629236 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Nov 23 22:48:13.636727 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Nov 23 22:48:13.642923 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Nov 23 22:48:13.644049 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. 
Nov 23 22:48:13.647505 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Nov 23 22:48:13.649565 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 23 22:48:13.651620 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 23 22:48:13.655460 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Nov 23 22:48:13.657309 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Nov 23 22:48:13.677977 disk-uuid[595]: Primary Header is updated. Nov 23 22:48:13.677977 disk-uuid[595]: Secondary Entries is updated. Nov 23 22:48:13.677977 disk-uuid[595]: Secondary Header is updated. Nov 23 22:48:13.680826 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Nov 23 22:48:13.684688 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Nov 23 22:48:13.688699 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Nov 23 22:48:14.689706 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Nov 23 22:48:14.690137 disk-uuid[600]: The operation has completed successfully. Nov 23 22:48:14.732700 systemd[1]: disk-uuid.service: Deactivated successfully. Nov 23 22:48:14.732802 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Nov 23 22:48:14.762183 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Nov 23 22:48:14.791902 sh[615]: Success Nov 23 22:48:14.805596 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Nov 23 22:48:14.805650 kernel: device-mapper: uevent: version 1.0.3 Nov 23 22:48:14.805662 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Nov 23 22:48:14.812759 kernel: device-mapper: verity: sha256 using shash "sha256-ce" Nov 23 22:48:14.841728 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. 
Nov 23 22:48:14.843827 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Nov 23 22:48:14.863795 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Nov 23 22:48:14.871478 kernel: BTRFS: device fsid 9fed50bd-c943-4402-9e9a-f39625143eb9 devid 1 transid 38 /dev/mapper/usr (253:0) scanned by mount (627) Nov 23 22:48:14.871532 kernel: BTRFS info (device dm-0): first mount of filesystem 9fed50bd-c943-4402-9e9a-f39625143eb9 Nov 23 22:48:14.871543 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Nov 23 22:48:14.876694 kernel: BTRFS info (device dm-0): disabling log replay at mount time Nov 23 22:48:14.876758 kernel: BTRFS info (device dm-0): enabling free space tree Nov 23 22:48:14.877624 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Nov 23 22:48:14.878910 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Nov 23 22:48:14.880205 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Nov 23 22:48:14.881080 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Nov 23 22:48:14.884242 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
Nov 23 22:48:14.909070 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (659) Nov 23 22:48:14.909117 kernel: BTRFS info (device vda6): first mount of filesystem b13f7cbd-5564-4927-b75d-d55dbc1bbfa7 Nov 23 22:48:14.911108 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Nov 23 22:48:14.914913 kernel: BTRFS info (device vda6): turning on async discard Nov 23 22:48:14.914974 kernel: BTRFS info (device vda6): enabling free space tree Nov 23 22:48:14.919698 kernel: BTRFS info (device vda6): last unmount of filesystem b13f7cbd-5564-4927-b75d-d55dbc1bbfa7 Nov 23 22:48:14.920644 systemd[1]: Finished ignition-setup.service - Ignition (setup). Nov 23 22:48:14.922647 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Nov 23 22:48:14.990964 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 23 22:48:14.995842 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Nov 23 22:48:15.035157 ignition[709]: Ignition 2.22.0 Nov 23 22:48:15.035173 ignition[709]: Stage: fetch-offline Nov 23 22:48:15.035205 ignition[709]: no configs at "/usr/lib/ignition/base.d" Nov 23 22:48:15.035212 ignition[709]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 23 22:48:15.035301 ignition[709]: parsed url from cmdline: "" Nov 23 22:48:15.035304 ignition[709]: no config URL provided Nov 23 22:48:15.035308 ignition[709]: reading system config file "/usr/lib/ignition/user.ign" Nov 23 22:48:15.035315 ignition[709]: no config at "/usr/lib/ignition/user.ign" Nov 23 22:48:15.039954 systemd-networkd[806]: lo: Link UP Nov 23 22:48:15.035336 ignition[709]: op(1): [started] loading QEMU firmware config module Nov 23 22:48:15.039958 systemd-networkd[806]: lo: Gained carrier Nov 23 22:48:15.035340 ignition[709]: op(1): executing: "modprobe" "qemu_fw_cfg" Nov 23 22:48:15.040852 systemd-networkd[806]: Enumeration completed Nov 23 22:48:15.041171 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 23 22:48:15.041325 systemd-networkd[806]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 23 22:48:15.047439 ignition[709]: op(1): [finished] loading QEMU firmware config module Nov 23 22:48:15.041329 systemd-networkd[806]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 23 22:48:15.042054 systemd-networkd[806]: eth0: Link UP Nov 23 22:48:15.042207 systemd-networkd[806]: eth0: Gained carrier Nov 23 22:48:15.042217 systemd-networkd[806]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 23 22:48:15.042538 systemd[1]: Reached target network.target - Network. 
Nov 23 22:48:15.062735 systemd-networkd[806]: eth0: DHCPv4 address 10.0.0.9/16, gateway 10.0.0.1 acquired from 10.0.0.1 Nov 23 22:48:15.097258 ignition[709]: parsing config with SHA512: f85ce90bfc6f862feb98906520754d57ec55bdd442bf8471e4b2b698693587cb1493d0ca3118cf4e56e4188c6ec80e609ede9626e7e48847204a3750b6f3809f Nov 23 22:48:15.103911 unknown[709]: fetched base config from "system" Nov 23 22:48:15.103925 unknown[709]: fetched user config from "qemu" Nov 23 22:48:15.104409 ignition[709]: fetch-offline: fetch-offline passed Nov 23 22:48:15.104473 ignition[709]: Ignition finished successfully Nov 23 22:48:15.106688 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Nov 23 22:48:15.110013 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Nov 23 22:48:15.110836 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Nov 23 22:48:15.147425 ignition[814]: Ignition 2.22.0 Nov 23 22:48:15.147443 ignition[814]: Stage: kargs Nov 23 22:48:15.147592 ignition[814]: no configs at "/usr/lib/ignition/base.d" Nov 23 22:48:15.147600 ignition[814]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 23 22:48:15.148369 ignition[814]: kargs: kargs passed Nov 23 22:48:15.152275 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Nov 23 22:48:15.148424 ignition[814]: Ignition finished successfully Nov 23 22:48:15.154270 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Nov 23 22:48:15.187958 ignition[822]: Ignition 2.22.0 Nov 23 22:48:15.187975 ignition[822]: Stage: disks Nov 23 22:48:15.188116 ignition[822]: no configs at "/usr/lib/ignition/base.d" Nov 23 22:48:15.188125 ignition[822]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 23 22:48:15.190834 ignition[822]: disks: disks passed Nov 23 22:48:15.190889 ignition[822]: Ignition finished successfully Nov 23 22:48:15.192492 systemd[1]: Finished ignition-disks.service - Ignition (disks). Nov 23 22:48:15.193629 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Nov 23 22:48:15.195169 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Nov 23 22:48:15.197025 systemd[1]: Reached target local-fs.target - Local File Systems. Nov 23 22:48:15.198692 systemd[1]: Reached target sysinit.target - System Initialization. Nov 23 22:48:15.200352 systemd[1]: Reached target basic.target - Basic System. Nov 23 22:48:15.202824 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Nov 23 22:48:15.234368 systemd-fsck[832]: ROOT: clean, 15/553520 files, 52789/553472 blocks Nov 23 22:48:15.239249 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Nov 23 22:48:15.242081 systemd[1]: Mounting sysroot.mount - /sysroot... Nov 23 22:48:15.320705 kernel: EXT4-fs (vda9): mounted filesystem c70a3a7b-80c4-4387-ab29-1bf940859b86 r/w with ordered data mode. Quota mode: none. Nov 23 22:48:15.320873 systemd[1]: Mounted sysroot.mount - /sysroot. Nov 23 22:48:15.322180 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Nov 23 22:48:15.325088 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Nov 23 22:48:15.328623 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Nov 23 22:48:15.329750 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. 
Nov 23 22:48:15.329796 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Nov 23 22:48:15.329822 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Nov 23 22:48:15.343644 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Nov 23 22:48:15.345851 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Nov 23 22:48:15.352704 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (840) Nov 23 22:48:15.356580 kernel: BTRFS info (device vda6): first mount of filesystem b13f7cbd-5564-4927-b75d-d55dbc1bbfa7 Nov 23 22:48:15.356627 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Nov 23 22:48:15.359951 kernel: BTRFS info (device vda6): turning on async discard Nov 23 22:48:15.360006 kernel: BTRFS info (device vda6): enabling free space tree Nov 23 22:48:15.361874 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Nov 23 22:48:15.393029 initrd-setup-root[865]: cut: /sysroot/etc/passwd: No such file or directory Nov 23 22:48:15.397810 initrd-setup-root[872]: cut: /sysroot/etc/group: No such file or directory Nov 23 22:48:15.403017 initrd-setup-root[879]: cut: /sysroot/etc/shadow: No such file or directory Nov 23 22:48:15.406760 initrd-setup-root[886]: cut: /sysroot/etc/gshadow: No such file or directory Nov 23 22:48:15.487916 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Nov 23 22:48:15.489924 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Nov 23 22:48:15.491531 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Nov 23 22:48:15.508733 kernel: BTRFS info (device vda6): last unmount of filesystem b13f7cbd-5564-4927-b75d-d55dbc1bbfa7 Nov 23 22:48:15.530858 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Nov 23 22:48:15.547722 ignition[956]: INFO : Ignition 2.22.0 Nov 23 22:48:15.547722 ignition[956]: INFO : Stage: mount Nov 23 22:48:15.549366 ignition[956]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 23 22:48:15.549366 ignition[956]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 23 22:48:15.549366 ignition[956]: INFO : mount: mount passed Nov 23 22:48:15.549366 ignition[956]: INFO : Ignition finished successfully Nov 23 22:48:15.550397 systemd[1]: Finished ignition-mount.service - Ignition (mount). Nov 23 22:48:15.553410 systemd[1]: Starting ignition-files.service - Ignition (files)... Nov 23 22:48:15.869491 systemd[1]: sysroot-oem.mount: Deactivated successfully. Nov 23 22:48:15.870986 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Nov 23 22:48:15.890607 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (967) Nov 23 22:48:15.890668 kernel: BTRFS info (device vda6): first mount of filesystem b13f7cbd-5564-4927-b75d-d55dbc1bbfa7 Nov 23 22:48:15.890688 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Nov 23 22:48:15.895105 kernel: BTRFS info (device vda6): turning on async discard Nov 23 22:48:15.895167 kernel: BTRFS info (device vda6): enabling free space tree Nov 23 22:48:15.896582 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Nov 23 22:48:15.947703 ignition[984]: INFO : Ignition 2.22.0 Nov 23 22:48:15.947703 ignition[984]: INFO : Stage: files Nov 23 22:48:15.947703 ignition[984]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 23 22:48:15.947703 ignition[984]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 23 22:48:15.951707 ignition[984]: DEBUG : files: compiled without relabeling support, skipping Nov 23 22:48:15.951707 ignition[984]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Nov 23 22:48:15.951707 ignition[984]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Nov 23 22:48:15.955229 ignition[984]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Nov 23 22:48:15.955229 ignition[984]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Nov 23 22:48:15.955229 ignition[984]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Nov 23 22:48:15.954880 unknown[984]: wrote ssh authorized keys file for user: core Nov 23 22:48:15.959753 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Nov 23 22:48:15.959753 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1 Nov 23 22:48:15.994918 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Nov 23 22:48:16.102432 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Nov 23 22:48:16.102432 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Nov 23 22:48:16.107219 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Nov 23 22:48:16.107219 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Nov 23 22:48:16.107219 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Nov 23 22:48:16.107219 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 23 22:48:16.107219 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 23 22:48:16.107219 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 23 22:48:16.107219 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 23 22:48:16.172713 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Nov 23 22:48:16.174994 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Nov 23 22:48:16.174994 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw" Nov 23 22:48:16.182111 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw" Nov 23 22:48:16.182111 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw" Nov 23 22:48:16.182111 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.1-arm64.raw: attempt #1 Nov 23 22:48:16.463355 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Nov 23 22:48:16.803021 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw" Nov 23 22:48:16.803021 ignition[984]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Nov 23 22:48:16.807214 ignition[984]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 23 22:48:16.810951 systemd-networkd[806]: eth0: Gained IPv6LL Nov 23 22:48:16.811876 ignition[984]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 23 22:48:16.811876 ignition[984]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Nov 23 22:48:16.811876 ignition[984]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Nov 23 22:48:16.811876 ignition[984]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Nov 23 22:48:16.811876 ignition[984]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Nov 23 22:48:16.811876 ignition[984]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Nov 23 22:48:16.811876 ignition[984]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Nov 23 22:48:16.829159 ignition[984]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Nov 23 22:48:16.833608 ignition[984]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Nov 23 22:48:16.836671 ignition[984]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Nov 23 22:48:16.836671 ignition[984]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Nov 23 22:48:16.836671 ignition[984]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Nov 23 22:48:16.836671 ignition[984]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Nov 23 22:48:16.836671 ignition[984]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Nov 23 22:48:16.836671 ignition[984]: INFO : files: files passed Nov 23 22:48:16.836671 ignition[984]: INFO : Ignition finished successfully Nov 23 22:48:16.837226 systemd[1]: Finished ignition-files.service - Ignition (files). Nov 23 22:48:16.840197 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Nov 23 22:48:16.842023 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Nov 23 22:48:16.860332 systemd[1]: ignition-quench.service: Deactivated successfully. Nov 23 22:48:16.860568 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Nov 23 22:48:16.864305 initrd-setup-root-after-ignition[1013]: grep: /sysroot/oem/oem-release: No such file or directory Nov 23 22:48:16.865875 initrd-setup-root-after-ignition[1015]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 23 22:48:16.865875 initrd-setup-root-after-ignition[1015]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Nov 23 22:48:16.869539 initrd-setup-root-after-ignition[1019]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 23 22:48:16.867732 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Nov 23 22:48:16.870903 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Nov 23 22:48:16.874061 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Nov 23 22:48:16.933970 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Nov 23 22:48:16.934102 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Nov 23 22:48:16.936270 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Nov 23 22:48:16.938237 systemd[1]: Reached target initrd.target - Initrd Default Target. Nov 23 22:48:16.939915 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Nov 23 22:48:16.941070 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Nov 23 22:48:16.971728 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 23 22:48:16.974605 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Nov 23 22:48:16.996612 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Nov 23 22:48:16.997841 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 23 22:48:16.999748 systemd[1]: Stopped target timers.target - Timer Units. Nov 23 22:48:17.001495 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Nov 23 22:48:17.001629 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 23 22:48:17.003952 systemd[1]: Stopped target initrd.target - Initrd Default Target. Nov 23 22:48:17.006120 systemd[1]: Stopped target basic.target - Basic System. Nov 23 22:48:17.007998 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Nov 23 22:48:17.009733 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Nov 23 22:48:17.011881 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. 
Nov 23 22:48:17.014184 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Nov 23 22:48:17.016094 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Nov 23 22:48:17.017886 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Nov 23 22:48:17.019775 systemd[1]: Stopped target sysinit.target - System Initialization. Nov 23 22:48:17.021611 systemd[1]: Stopped target local-fs.target - Local File Systems. Nov 23 22:48:17.023399 systemd[1]: Stopped target swap.target - Swaps. Nov 23 22:48:17.024811 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Nov 23 22:48:17.024952 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Nov 23 22:48:17.027101 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Nov 23 22:48:17.029076 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 23 22:48:17.031048 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Nov 23 22:48:17.031166 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 23 22:48:17.033025 systemd[1]: dracut-initqueue.service: Deactivated successfully. Nov 23 22:48:17.033169 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Nov 23 22:48:17.036087 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Nov 23 22:48:17.036221 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Nov 23 22:48:17.038038 systemd[1]: Stopped target paths.target - Path Units. Nov 23 22:48:17.039525 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Nov 23 22:48:17.042757 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 23 22:48:17.044277 systemd[1]: Stopped target slices.target - Slice Units. Nov 23 22:48:17.046197 systemd[1]: Stopped target sockets.target - Socket Units. 
Nov 23 22:48:17.047582 systemd[1]: iscsid.socket: Deactivated successfully. Nov 23 22:48:17.047694 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Nov 23 22:48:17.049122 systemd[1]: iscsiuio.socket: Deactivated successfully. Nov 23 22:48:17.049215 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 23 22:48:17.050620 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Nov 23 22:48:17.050757 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 23 22:48:17.052379 systemd[1]: ignition-files.service: Deactivated successfully. Nov 23 22:48:17.052487 systemd[1]: Stopped ignition-files.service - Ignition (files). Nov 23 22:48:17.054708 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Nov 23 22:48:17.056547 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Nov 23 22:48:17.056693 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Nov 23 22:48:17.074312 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Nov 23 22:48:17.075159 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Nov 23 22:48:17.075293 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Nov 23 22:48:17.077096 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Nov 23 22:48:17.077213 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Nov 23 22:48:17.082804 systemd[1]: initrd-cleanup.service: Deactivated successfully. Nov 23 22:48:17.082901 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Nov 23 22:48:17.090831 systemd[1]: sysroot-boot.mount: Deactivated successfully. 
Nov 23 22:48:17.091718 ignition[1040]: INFO : Ignition 2.22.0 Nov 23 22:48:17.091718 ignition[1040]: INFO : Stage: umount Nov 23 22:48:17.091718 ignition[1040]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 23 22:48:17.091718 ignition[1040]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 23 22:48:17.097630 ignition[1040]: INFO : umount: umount passed Nov 23 22:48:17.097630 ignition[1040]: INFO : Ignition finished successfully Nov 23 22:48:17.094988 systemd[1]: ignition-mount.service: Deactivated successfully. Nov 23 22:48:17.095092 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Nov 23 22:48:17.096727 systemd[1]: Stopped target network.target - Network. Nov 23 22:48:17.098450 systemd[1]: ignition-disks.service: Deactivated successfully. Nov 23 22:48:17.098522 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Nov 23 22:48:17.100237 systemd[1]: ignition-kargs.service: Deactivated successfully. Nov 23 22:48:17.100281 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Nov 23 22:48:17.101727 systemd[1]: ignition-setup.service: Deactivated successfully. Nov 23 22:48:17.101773 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Nov 23 22:48:17.103826 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Nov 23 22:48:17.103868 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Nov 23 22:48:17.105862 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Nov 23 22:48:17.107545 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Nov 23 22:48:17.109912 systemd[1]: sysroot-boot.service: Deactivated successfully. Nov 23 22:48:17.110002 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Nov 23 22:48:17.111803 systemd[1]: initrd-setup-root.service: Deactivated successfully. Nov 23 22:48:17.111891 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. 
Nov 23 22:48:17.118447 systemd[1]: systemd-resolved.service: Deactivated successfully.
Nov 23 22:48:17.118567 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Nov 23 22:48:17.122408 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Nov 23 22:48:17.123027 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Nov 23 22:48:17.123111 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 23 22:48:17.126714 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Nov 23 22:48:17.126952 systemd[1]: systemd-networkd.service: Deactivated successfully.
Nov 23 22:48:17.127047 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Nov 23 22:48:17.130042 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Nov 23 22:48:17.130502 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Nov 23 22:48:17.132293 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Nov 23 22:48:17.132336 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Nov 23 22:48:17.135094 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Nov 23 22:48:17.135983 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Nov 23 22:48:17.136049 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Nov 23 22:48:17.137760 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Nov 23 22:48:17.137806 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Nov 23 22:48:17.140482 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Nov 23 22:48:17.140524 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Nov 23 22:48:17.142536 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 23 22:48:17.145616 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Nov 23 22:48:17.162345 systemd[1]: systemd-udevd.service: Deactivated successfully.
Nov 23 22:48:17.165843 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 23 22:48:17.167378 systemd[1]: network-cleanup.service: Deactivated successfully.
Nov 23 22:48:17.167479 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Nov 23 22:48:17.170258 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Nov 23 22:48:17.170325 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Nov 23 22:48:17.172288 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Nov 23 22:48:17.172331 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 23 22:48:17.174568 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Nov 23 22:48:17.174631 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Nov 23 22:48:17.177760 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Nov 23 22:48:17.177824 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Nov 23 22:48:17.180558 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Nov 23 22:48:17.180625 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 23 22:48:17.184311 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Nov 23 22:48:17.185426 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Nov 23 22:48:17.185510 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Nov 23 22:48:17.188906 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Nov 23 22:48:17.188952 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 23 22:48:17.193538 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 23 22:48:17.193594 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 23 22:48:17.201218 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Nov 23 22:48:17.201314 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Nov 23 22:48:17.203612 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Nov 23 22:48:17.206409 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Nov 23 22:48:17.236158 systemd[1]: Switching root.
Nov 23 22:48:17.262167 systemd-journald[245]: Journal stopped
Nov 23 22:48:18.211768 systemd-journald[245]: Received SIGTERM from PID 1 (systemd).
Nov 23 22:48:18.211822 kernel: SELinux: policy capability network_peer_controls=1
Nov 23 22:48:18.211835 kernel: SELinux: policy capability open_perms=1
Nov 23 22:48:18.211848 kernel: SELinux: policy capability extended_socket_class=1
Nov 23 22:48:18.211866 kernel: SELinux: policy capability always_check_network=0
Nov 23 22:48:18.211876 kernel: SELinux: policy capability cgroup_seclabel=1
Nov 23 22:48:18.211887 kernel: SELinux: policy capability nnp_nosuid_transition=1
Nov 23 22:48:18.211896 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Nov 23 22:48:18.211905 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Nov 23 22:48:18.211914 kernel: SELinux: policy capability userspace_initial_context=0
Nov 23 22:48:18.211924 kernel: audit: type=1403 audit(1763938097.536:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Nov 23 22:48:18.211935 systemd[1]: Successfully loaded SELinux policy in 72.233ms.
Nov 23 22:48:18.211953 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 6.347ms.
Nov 23 22:48:18.211968 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Nov 23 22:48:18.211979 systemd[1]: Detected virtualization kvm.
Nov 23 22:48:18.211990 systemd[1]: Detected architecture arm64.
Nov 23 22:48:18.212000 systemd[1]: Detected first boot.
Nov 23 22:48:18.212010 systemd[1]: Initializing machine ID from VM UUID.
Nov 23 22:48:18.212021 zram_generator::config[1087]: No configuration found.
Nov 23 22:48:18.212031 kernel: NET: Registered PF_VSOCK protocol family
Nov 23 22:48:18.212042 systemd[1]: Populated /etc with preset unit settings.
Nov 23 22:48:18.212053 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Nov 23 22:48:18.212063 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Nov 23 22:48:18.212075 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Nov 23 22:48:18.212085 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Nov 23 22:48:18.212095 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Nov 23 22:48:18.212106 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Nov 23 22:48:18.212116 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Nov 23 22:48:18.212141 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Nov 23 22:48:18.212153 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Nov 23 22:48:18.212163 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Nov 23 22:48:18.212173 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Nov 23 22:48:18.212185 systemd[1]: Created slice user.slice - User and Session Slice.
Nov 23 22:48:18.212195 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 23 22:48:18.212205 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 23 22:48:18.212215 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Nov 23 22:48:18.212225 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Nov 23 22:48:18.212236 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Nov 23 22:48:18.212246 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Nov 23 22:48:18.212256 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Nov 23 22:48:18.212267 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 23 22:48:18.212278 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Nov 23 22:48:18.212289 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Nov 23 22:48:18.212299 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Nov 23 22:48:18.212310 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Nov 23 22:48:18.212320 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Nov 23 22:48:18.212330 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 23 22:48:18.212340 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Nov 23 22:48:18.212350 systemd[1]: Reached target slices.target - Slice Units.
Nov 23 22:48:18.212362 systemd[1]: Reached target swap.target - Swaps.
Nov 23 22:48:18.212372 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Nov 23 22:48:18.212382 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Nov 23 22:48:18.212392 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Nov 23 22:48:18.212403 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Nov 23 22:48:18.212413 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Nov 23 22:48:18.212423 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 23 22:48:18.212433 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Nov 23 22:48:18.212444 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Nov 23 22:48:18.212455 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Nov 23 22:48:18.212465 systemd[1]: Mounting media.mount - External Media Directory...
Nov 23 22:48:18.212475 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Nov 23 22:48:18.212485 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Nov 23 22:48:18.212495 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Nov 23 22:48:18.212506 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Nov 23 22:48:18.212516 systemd[1]: Reached target machines.target - Containers.
Nov 23 22:48:18.212527 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Nov 23 22:48:18.212539 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 23 22:48:18.212549 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Nov 23 22:48:18.212560 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Nov 23 22:48:18.212570 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 23 22:48:18.212581 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Nov 23 22:48:18.212591 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 23 22:48:18.212601 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Nov 23 22:48:18.212611 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 23 22:48:18.212622 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Nov 23 22:48:18.212633 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Nov 23 22:48:18.212644 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Nov 23 22:48:18.212654 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Nov 23 22:48:18.212664 systemd[1]: Stopped systemd-fsck-usr.service.
Nov 23 22:48:18.212690 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Nov 23 22:48:18.212702 systemd[1]: Starting systemd-journald.service - Journal Service...
Nov 23 22:48:18.212712 kernel: fuse: init (API version 7.41)
Nov 23 22:48:18.212721 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Nov 23 22:48:18.212733 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Nov 23 22:48:18.212744 kernel: loop: module loaded
Nov 23 22:48:18.212753 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Nov 23 22:48:18.212764 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Nov 23 22:48:18.212774 kernel: ACPI: bus type drm_connector registered
Nov 23 22:48:18.212783 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Nov 23 22:48:18.212795 systemd[1]: verity-setup.service: Deactivated successfully.
Nov 23 22:48:18.212805 systemd[1]: Stopped verity-setup.service.
Nov 23 22:48:18.212816 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Nov 23 22:48:18.212825 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Nov 23 22:48:18.212835 systemd[1]: Mounted media.mount - External Media Directory.
Nov 23 22:48:18.212845 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Nov 23 22:48:18.212855 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Nov 23 22:48:18.212883 systemd-journald[1159]: Collecting audit messages is disabled.
Nov 23 22:48:18.212904 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Nov 23 22:48:18.212916 systemd-journald[1159]: Journal started
Nov 23 22:48:18.212938 systemd-journald[1159]: Runtime Journal (/run/log/journal/e2a0cd5ed4aa4de6bf465107d000d7c4) is 6M, max 48.5M, 42.4M free.
Nov 23 22:48:17.987418 systemd[1]: Queued start job for default target multi-user.target.
Nov 23 22:48:17.999704 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Nov 23 22:48:18.000096 systemd[1]: systemd-journald.service: Deactivated successfully.
Nov 23 22:48:18.215210 systemd[1]: Started systemd-journald.service - Journal Service.
Nov 23 22:48:18.216041 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Nov 23 22:48:18.217478 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 23 22:48:18.219003 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Nov 23 22:48:18.219172 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Nov 23 22:48:18.220567 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 23 22:48:18.220769 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 23 22:48:18.222018 systemd[1]: modprobe@drm.service: Deactivated successfully.
Nov 23 22:48:18.222190 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Nov 23 22:48:18.223599 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 23 22:48:18.223767 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 23 22:48:18.225224 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Nov 23 22:48:18.225393 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Nov 23 22:48:18.226669 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 23 22:48:18.226847 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Nov 23 22:48:18.228137 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Nov 23 22:48:18.229479 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Nov 23 22:48:18.230992 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Nov 23 22:48:18.232400 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Nov 23 22:48:18.243366 systemd[1]: Reached target network-pre.target - Preparation for Network.
Nov 23 22:48:18.245657 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Nov 23 22:48:18.247621 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Nov 23 22:48:18.248819 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Nov 23 22:48:18.248862 systemd[1]: Reached target local-fs.target - Local File Systems.
Nov 23 22:48:18.250631 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Nov 23 22:48:18.259599 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Nov 23 22:48:18.260855 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 23 22:48:18.262228 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Nov 23 22:48:18.264285 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Nov 23 22:48:18.265515 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 23 22:48:18.266558 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Nov 23 22:48:18.267720 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Nov 23 22:48:18.268887 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Nov 23 22:48:18.270863 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Nov 23 22:48:18.273046 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Nov 23 22:48:18.274904 systemd-journald[1159]: Time spent on flushing to /var/log/journal/e2a0cd5ed4aa4de6bf465107d000d7c4 is 21.653ms for 884 entries.
Nov 23 22:48:18.274904 systemd-journald[1159]: System Journal (/var/log/journal/e2a0cd5ed4aa4de6bf465107d000d7c4) is 8M, max 195.6M, 187.6M free.
Nov 23 22:48:18.308107 systemd-journald[1159]: Received client request to flush runtime journal.
Nov 23 22:48:18.308160 kernel: loop0: detected capacity change from 0 to 100632
Nov 23 22:48:18.308174 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Nov 23 22:48:18.276908 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 23 22:48:18.279139 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Nov 23 22:48:18.280871 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Nov 23 22:48:18.285184 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Nov 23 22:48:18.288767 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Nov 23 22:48:18.293965 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Nov 23 22:48:18.305712 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Nov 23 22:48:18.310813 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Nov 23 22:48:18.313431 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Nov 23 22:48:18.316237 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Nov 23 22:48:18.324699 kernel: loop1: detected capacity change from 0 to 119840
Nov 23 22:48:18.328714 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Nov 23 22:48:18.330589 systemd-tmpfiles[1216]: ACLs are not supported, ignoring.
Nov 23 22:48:18.330603 systemd-tmpfiles[1216]: ACLs are not supported, ignoring.
Nov 23 22:48:18.334732 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 23 22:48:18.353706 kernel: loop2: detected capacity change from 0 to 200800
Nov 23 22:48:18.386717 kernel: loop3: detected capacity change from 0 to 100632
Nov 23 22:48:18.392704 kernel: loop4: detected capacity change from 0 to 119840
Nov 23 22:48:18.397704 kernel: loop5: detected capacity change from 0 to 200800
Nov 23 22:48:18.402266 (sd-merge)[1226]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Nov 23 22:48:18.402640 (sd-merge)[1226]: Merged extensions into '/usr'.
Nov 23 22:48:18.407493 systemd[1]: Reload requested from client PID 1203 ('systemd-sysext') (unit systemd-sysext.service)...
Nov 23 22:48:18.407514 systemd[1]: Reloading...
Nov 23 22:48:18.452786 zram_generator::config[1248]: No configuration found.
Nov 23 22:48:18.533179 ldconfig[1198]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Nov 23 22:48:18.619285 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Nov 23 22:48:18.619417 systemd[1]: Reloading finished in 211 ms.
Nov 23 22:48:18.653482 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Nov 23 22:48:18.657014 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Nov 23 22:48:18.673052 systemd[1]: Starting ensure-sysext.service...
Nov 23 22:48:18.676409 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Nov 23 22:48:18.685333 systemd[1]: Reload requested from client PID 1286 ('systemctl') (unit ensure-sysext.service)...
Nov 23 22:48:18.685351 systemd[1]: Reloading...
Nov 23 22:48:18.689748 systemd-tmpfiles[1287]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Nov 23 22:48:18.689786 systemd-tmpfiles[1287]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Nov 23 22:48:18.690024 systemd-tmpfiles[1287]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Nov 23 22:48:18.690233 systemd-tmpfiles[1287]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Nov 23 22:48:18.690855 systemd-tmpfiles[1287]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Nov 23 22:48:18.691056 systemd-tmpfiles[1287]: ACLs are not supported, ignoring.
Nov 23 22:48:18.691103 systemd-tmpfiles[1287]: ACLs are not supported, ignoring.
Nov 23 22:48:18.694023 systemd-tmpfiles[1287]: Detected autofs mount point /boot during canonicalization of boot.
Nov 23 22:48:18.694030 systemd-tmpfiles[1287]: Skipping /boot
Nov 23 22:48:18.702071 systemd-tmpfiles[1287]: Detected autofs mount point /boot during canonicalization of boot.
Nov 23 22:48:18.702085 systemd-tmpfiles[1287]: Skipping /boot
Nov 23 22:48:18.742699 zram_generator::config[1323]: No configuration found.
Nov 23 22:48:18.867386 systemd[1]: Reloading finished in 181 ms.
Nov 23 22:48:18.886420 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Nov 23 22:48:18.892317 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 23 22:48:18.908815 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Nov 23 22:48:18.911434 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Nov 23 22:48:18.913781 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Nov 23 22:48:18.916836 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Nov 23 22:48:18.920501 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 23 22:48:18.925048 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Nov 23 22:48:18.930254 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 23 22:48:18.933925 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 23 22:48:18.941657 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 23 22:48:18.944798 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 23 22:48:18.945984 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 23 22:48:18.946116 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Nov 23 22:48:18.947917 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Nov 23 22:48:18.951341 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 23 22:48:18.952745 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 23 22:48:18.954557 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 23 22:48:18.954740 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 23 22:48:18.956800 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 23 22:48:18.956960 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Nov 23 22:48:18.958941 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Nov 23 22:48:18.967208 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 23 22:48:18.969961 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 23 22:48:18.972216 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 23 22:48:18.974093 systemd-udevd[1355]: Using default interface naming scheme 'v255'.
Nov 23 22:48:18.974858 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 23 22:48:18.976089 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 23 22:48:18.976288 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Nov 23 22:48:18.980058 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Nov 23 22:48:18.980423 augenrules[1387]: No rules
Nov 23 22:48:18.982990 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Nov 23 22:48:18.985288 systemd[1]: audit-rules.service: Deactivated successfully.
Nov 23 22:48:18.997911 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Nov 23 22:48:18.999903 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 23 22:48:19.006060 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Nov 23 22:48:19.007905 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 23 22:48:19.008102 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 23 22:48:19.009777 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 23 22:48:19.009946 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 23 22:48:19.014647 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 23 22:48:19.014848 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Nov 23 22:48:19.020592 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Nov 23 22:48:19.034349 systemd[1]: Finished ensure-sysext.service.
Nov 23 22:48:19.038635 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Nov 23 22:48:19.044518 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Nov 23 22:48:19.047947 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 23 22:48:19.049019 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 23 22:48:19.052689 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Nov 23 22:48:19.054911 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 23 22:48:19.057850 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 23 22:48:19.058993 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 23 22:48:19.059043 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Nov 23 22:48:19.065499 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Nov 23 22:48:19.069468 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Nov 23 22:48:19.071745 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Nov 23 22:48:19.072391 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 23 22:48:19.072577 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 23 22:48:19.073988 systemd[1]: modprobe@drm.service: Deactivated successfully.
Nov 23 22:48:19.074162 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Nov 23 22:48:19.075929 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 23 22:48:19.076193 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 23 22:48:19.078160 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 23 22:48:19.078332 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Nov 23 22:48:19.083373 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 23 22:48:19.083445 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Nov 23 22:48:19.086303 augenrules[1433]: /sbin/augenrules: No change
Nov 23 22:48:19.098820 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Nov 23 22:48:19.099915 augenrules[1461]: No rules
Nov 23 22:48:19.102024 systemd[1]: audit-rules.service: Deactivated successfully.
Nov 23 22:48:19.102320 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Nov 23 22:48:19.166608 systemd-resolved[1353]: Positive Trust Anchors:
Nov 23 22:48:19.166628 systemd-resolved[1353]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Nov 23 22:48:19.166660 systemd-resolved[1353]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Nov 23 22:48:19.174150 systemd-resolved[1353]: Defaulting to hostname 'linux'.
Nov 23 22:48:19.176796 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Nov 23 22:48:19.178928 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Nov 23 22:48:19.189175 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Nov 23 22:48:19.193103 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Nov 23 22:48:19.196764 systemd-networkd[1438]: lo: Link UP
Nov 23 22:48:19.196771 systemd-networkd[1438]: lo: Gained carrier
Nov 23 22:48:19.197622 systemd-networkd[1438]: Enumeration completed
Nov 23 22:48:19.197771 systemd[1]: Started systemd-networkd.service - Network Configuration.
Nov 23 22:48:19.198084 systemd-networkd[1438]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 23 22:48:19.198093 systemd-networkd[1438]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Nov 23 22:48:19.199059 systemd-networkd[1438]: eth0: Link UP
Nov 23 22:48:19.199193 systemd-networkd[1438]: eth0: Gained carrier
Nov 23 22:48:19.199214 systemd-networkd[1438]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 23 22:48:19.199882 systemd[1]: Reached target network.target - Network.
Nov 23 22:48:19.207737 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Nov 23 22:48:19.210500 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Nov 23 22:48:19.211963 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Nov 23 22:48:19.214333 systemd-networkd[1438]: eth0: DHCPv4 address 10.0.0.9/16, gateway 10.0.0.1 acquired from 10.0.0.1
Nov 23 22:48:19.215048 systemd[1]: Reached target sysinit.target - System Initialization.
Nov 23 22:48:19.216195 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Nov 23 22:48:19.216262 systemd-timesyncd[1441]: Network configuration changed, trying to establish connection.
Nov 23 22:48:19.217310 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Nov 23 22:48:19.218731 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Nov 23 22:48:19.679714 systemd-resolved[1353]: Clock change detected. Flushing caches.
Nov 23 22:48:19.679822 systemd-timesyncd[1441]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Nov 23 22:48:19.679876 systemd-timesyncd[1441]: Initial clock synchronization to Sun 2025-11-23 22:48:19.679664 UTC.
Nov 23 22:48:19.680793 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Nov 23 22:48:19.680819 systemd[1]: Reached target paths.target - Path Units.
Nov 23 22:48:19.681658 systemd[1]: Reached target time-set.target - System Time Set.
Nov 23 22:48:19.682777 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Nov 23 22:48:19.683817 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Nov 23 22:48:19.685021 systemd[1]: Reached target timers.target - Timer Units.
Nov 23 22:48:19.686819 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Nov 23 22:48:19.693772 systemd[1]: Starting docker.socket - Docker Socket for the API...
Nov 23 22:48:19.696997 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Nov 23 22:48:19.698898 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Nov 23 22:48:19.700086 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Nov 23 22:48:19.714428 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Nov 23 22:48:19.715924 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Nov 23 22:48:19.719692 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Nov 23 22:48:19.721290 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Nov 23 22:48:19.722714 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Nov 23 22:48:19.726977 systemd[1]: Reached target sockets.target - Socket Units.
Nov 23 22:48:19.728048 systemd[1]: Reached target basic.target - Basic System.
Nov 23 22:48:19.729104 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Nov 23 22:48:19.729137 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Nov 23 22:48:19.732649 systemd[1]: Starting containerd.service - containerd container runtime...
Nov 23 22:48:19.736745 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Nov 23 22:48:19.739058 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Nov 23 22:48:19.749316 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Nov 23 22:48:19.751665 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Nov 23 22:48:19.752700 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Nov 23 22:48:19.754122 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Nov 23 22:48:19.757021 jq[1501]: false
Nov 23 22:48:19.756590 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Nov 23 22:48:19.758671 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Nov 23 22:48:19.761677 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Nov 23 22:48:19.765121 systemd[1]: Starting systemd-logind.service - User Login Management...
Nov 23 22:48:19.767183 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Nov 23 22:48:19.769612 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Nov 23 22:48:19.771263 systemd[1]: Starting update-engine.service - Update Engine...
Nov 23 22:48:19.771979 extend-filesystems[1502]: Found /dev/vda6
Nov 23 22:48:19.774291 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Nov 23 22:48:19.777324 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Nov 23 22:48:19.779029 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Nov 23 22:48:19.779205 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Nov 23 22:48:19.779440 systemd[1]: motdgen.service: Deactivated successfully.
Nov 23 22:48:19.780141 extend-filesystems[1502]: Found /dev/vda9
Nov 23 22:48:19.780704 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Nov 23 22:48:19.782847 extend-filesystems[1502]: Checking size of /dev/vda9
Nov 23 22:48:19.783529 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Nov 23 22:48:19.783712 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Nov 23 22:48:19.787913 jq[1519]: true
Nov 23 22:48:19.796785 update_engine[1516]: I20251123 22:48:19.796474 1516 main.cc:92] Flatcar Update Engine starting
Nov 23 22:48:19.803923 extend-filesystems[1502]: Resized partition /dev/vda9
Nov 23 22:48:19.807615 extend-filesystems[1539]: resize2fs 1.47.3 (8-Jul-2025)
Nov 23 22:48:19.806743 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 23 22:48:19.811556 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Nov 23 22:48:19.812063 jq[1526]: true
Nov 23 22:48:19.817025 tar[1524]: linux-arm64/LICENSE
Nov 23 22:48:19.817983 tar[1524]: linux-arm64/helm
Nov 23 22:48:19.820986 (ntainerd)[1537]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Nov 23 22:48:19.830862 dbus-daemon[1499]: [system] SELinux support is enabled
Nov 23 22:48:19.831043 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Nov 23 22:48:19.834404 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Nov 23 22:48:19.834435 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Nov 23 22:48:19.836242 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Nov 23 22:48:19.836266 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Nov 23 22:48:19.846613 systemd[1]: Started update-engine.service - Update Engine.
Nov 23 22:48:19.847067 update_engine[1516]: I20251123 22:48:19.846726 1516 update_check_scheduler.cc:74] Next update check in 9m17s
Nov 23 22:48:19.849440 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Nov 23 22:48:19.855539 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Nov 23 22:48:19.872840 extend-filesystems[1539]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Nov 23 22:48:19.872840 extend-filesystems[1539]: old_desc_blocks = 1, new_desc_blocks = 1
Nov 23 22:48:19.872840 extend-filesystems[1539]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Nov 23 22:48:19.876323 systemd-logind[1512]: Watching system buttons on /dev/input/event0 (Power Button)
Nov 23 22:48:19.877983 extend-filesystems[1502]: Resized filesystem in /dev/vda9
Nov 23 22:48:19.876597 systemd[1]: extend-filesystems.service: Deactivated successfully.
Nov 23 22:48:19.878223 systemd-logind[1512]: New seat seat0.
Nov 23 22:48:19.879567 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Nov 23 22:48:19.887702 bash[1562]: Updated "/home/core/.ssh/authorized_keys"
Nov 23 22:48:19.923300 systemd[1]: Started systemd-logind.service - User Login Management.
Nov 23 22:48:19.925805 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 23 22:48:19.927320 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Nov 23 22:48:19.934837 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Nov 23 22:48:19.942793 locksmithd[1553]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Nov 23 22:48:20.020353 containerd[1537]: time="2025-11-23T22:48:20Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Nov 23 22:48:20.021284 containerd[1537]: time="2025-11-23T22:48:20.021248146Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7
Nov 23 22:48:20.030383 containerd[1537]: time="2025-11-23T22:48:20.030330706Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="9.88µs"
Nov 23 22:48:20.030383 containerd[1537]: time="2025-11-23T22:48:20.030370426Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Nov 23 22:48:20.030383 containerd[1537]: time="2025-11-23T22:48:20.030389866Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Nov 23 22:48:20.032519 containerd[1537]: time="2025-11-23T22:48:20.030569506Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Nov 23 22:48:20.032519 containerd[1537]: time="2025-11-23T22:48:20.030591506Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Nov 23 22:48:20.032519 containerd[1537]: time="2025-11-23T22:48:20.030617666Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Nov 23 22:48:20.032519 containerd[1537]: time="2025-11-23T22:48:20.030668106Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Nov 23 22:48:20.032519 containerd[1537]: time="2025-11-23T22:48:20.030679506Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Nov 23 22:48:20.032519 containerd[1537]: time="2025-11-23T22:48:20.030904226Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Nov 23 22:48:20.032519 containerd[1537]: time="2025-11-23T22:48:20.030919346Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Nov 23 22:48:20.032519 containerd[1537]: time="2025-11-23T22:48:20.030929786Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Nov 23 22:48:20.032519 containerd[1537]: time="2025-11-23T22:48:20.030937546Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Nov 23 22:48:20.032519 containerd[1537]: time="2025-11-23T22:48:20.031022746Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Nov 23 22:48:20.032519 containerd[1537]: time="2025-11-23T22:48:20.031208226Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Nov 23 22:48:20.032744 containerd[1537]: time="2025-11-23T22:48:20.031234866Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Nov 23 22:48:20.032744 containerd[1537]: time="2025-11-23T22:48:20.031244746Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Nov 23 22:48:20.032744 containerd[1537]: time="2025-11-23T22:48:20.031289306Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Nov 23 22:48:20.032744 containerd[1537]: time="2025-11-23T22:48:20.031497426Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Nov 23 22:48:20.032744 containerd[1537]: time="2025-11-23T22:48:20.031587626Z" level=info msg="metadata content store policy set" policy=shared
Nov 23 22:48:20.147116 tar[1524]: linux-arm64/README.md
Nov 23 22:48:20.164582 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Nov 23 22:48:20.208184 containerd[1537]: time="2025-11-23T22:48:20.208112466Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Nov 23 22:48:20.208264 containerd[1537]: time="2025-11-23T22:48:20.208206506Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Nov 23 22:48:20.208264 containerd[1537]: time="2025-11-23T22:48:20.208224026Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Nov 23 22:48:20.208264 containerd[1537]: time="2025-11-23T22:48:20.208237586Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Nov 23 22:48:20.208395 containerd[1537]: time="2025-11-23T22:48:20.208362786Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Nov 23 22:48:20.208395 containerd[1537]: time="2025-11-23T22:48:20.208386186Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Nov 23 22:48:20.208439 containerd[1537]: time="2025-11-23T22:48:20.208399666Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Nov 23 22:48:20.208439 containerd[1537]: time="2025-11-23T22:48:20.208419226Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Nov 23 22:48:20.208439 containerd[1537]: time="2025-11-23T22:48:20.208431706Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Nov 23 22:48:20.208491 containerd[1537]: time="2025-11-23T22:48:20.208442346Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Nov 23 22:48:20.208491 containerd[1537]: time="2025-11-23T22:48:20.208452386Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Nov 23 22:48:20.208491 containerd[1537]: time="2025-11-23T22:48:20.208465266Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Nov 23 22:48:20.208688 containerd[1537]: time="2025-11-23T22:48:20.208649746Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Nov 23 22:48:20.208688 containerd[1537]: time="2025-11-23T22:48:20.208683386Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Nov 23 22:48:20.208731 containerd[1537]: time="2025-11-23T22:48:20.208699666Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Nov 23 22:48:20.208731 containerd[1537]: time="2025-11-23T22:48:20.208711986Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Nov 23 22:48:20.208731 containerd[1537]: time="2025-11-23T22:48:20.208723426Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Nov 23 22:48:20.208789 containerd[1537]: time="2025-11-23T22:48:20.208733306Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Nov 23 22:48:20.208789 containerd[1537]: time="2025-11-23T22:48:20.208744106Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Nov 23 22:48:20.208789 containerd[1537]: time="2025-11-23T22:48:20.208753586Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Nov 23 22:48:20.208789 containerd[1537]: time="2025-11-23T22:48:20.208764346Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Nov 23 22:48:20.208789 containerd[1537]: time="2025-11-23T22:48:20.208774426Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Nov 23 22:48:20.208789 containerd[1537]: time="2025-11-23T22:48:20.208785306Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Nov 23 22:48:20.209013 containerd[1537]: time="2025-11-23T22:48:20.208981466Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Nov 23 22:48:20.209013 containerd[1537]: time="2025-11-23T22:48:20.209004786Z" level=info msg="Start snapshots syncer"
Nov 23 22:48:20.209056 containerd[1537]: time="2025-11-23T22:48:20.209032386Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Nov 23 22:48:20.209368 containerd[1537]: time="2025-11-23T22:48:20.209314186Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Nov 23 22:48:20.209459 containerd[1537]: time="2025-11-23T22:48:20.209376266Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Nov 23 22:48:20.209459 containerd[1537]: time="2025-11-23T22:48:20.209428146Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Nov 23 22:48:20.209603 containerd[1537]: time="2025-11-23T22:48:20.209583186Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Nov 23 22:48:20.209643 containerd[1537]: time="2025-11-23T22:48:20.209629946Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Nov 23 22:48:20.209671 containerd[1537]: time="2025-11-23T22:48:20.209646386Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Nov 23 22:48:20.209671 containerd[1537]: time="2025-11-23T22:48:20.209658986Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Nov 23 22:48:20.209713 containerd[1537]: time="2025-11-23T22:48:20.209671866Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Nov 23 22:48:20.209713 containerd[1537]: time="2025-11-23T22:48:20.209682546Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Nov 23 22:48:20.209713 containerd[1537]: time="2025-11-23T22:48:20.209693226Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Nov 23 22:48:20.209760 containerd[1537]: time="2025-11-23T22:48:20.209718346Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Nov 23 22:48:20.209760 containerd[1537]: time="2025-11-23T22:48:20.209730986Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Nov 23 22:48:20.209760 containerd[1537]: time="2025-11-23T22:48:20.209742346Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Nov 23 22:48:20.209811 containerd[1537]: time="2025-11-23T22:48:20.209780026Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Nov 23 22:48:20.209829 containerd[1537]: time="2025-11-23T22:48:20.209814546Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Nov 23 22:48:20.209829 containerd[1537]: time="2025-11-23T22:48:20.209826226Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Nov 23 22:48:20.209863 containerd[1537]: time="2025-11-23T22:48:20.209836226Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Nov 23 22:48:20.209863 containerd[1537]: time="2025-11-23T22:48:20.209844306Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Nov 23 22:48:20.209863 containerd[1537]: time="2025-11-23T22:48:20.209853666Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Nov 23 22:48:20.209913 containerd[1537]: time="2025-11-23T22:48:20.209863786Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
Nov 23 22:48:20.209963 containerd[1537]: time="2025-11-23T22:48:20.209942346Z" level=info msg="runtime interface created"
Nov 23 22:48:20.209963 containerd[1537]: time="2025-11-23T22:48:20.209951346Z" level=info msg="created NRI interface"
Nov 23 22:48:20.210001 containerd[1537]: time="2025-11-23T22:48:20.209969586Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Nov 23 22:48:20.210001 containerd[1537]: time="2025-11-23T22:48:20.209982746Z" level=info msg="Connect containerd service"
Nov 23 22:48:20.210038 containerd[1537]: time="2025-11-23T22:48:20.210005186Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Nov 23 22:48:20.210865 containerd[1537]: time="2025-11-23T22:48:20.210814146Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Nov 23 22:48:20.284094 containerd[1537]: time="2025-11-23T22:48:20.283987066Z" level=info msg="Start subscribing containerd event"
Nov 23 22:48:20.284196 containerd[1537]: time="2025-11-23T22:48:20.284101266Z" level=info msg="Start recovering state"
Nov 23 22:48:20.284619 containerd[1537]: time="2025-11-23T22:48:20.284374106Z" level=info msg="Start event monitor"
Nov 23 22:48:20.284665 containerd[1537]: time="2025-11-23T22:48:20.284626146Z" level=info msg="Start cni network conf syncer for default"
Nov 23 22:48:20.284665 containerd[1537]: time="2025-11-23T22:48:20.284636946Z" level=info msg="Start streaming server"
Nov 23 22:48:20.284665 containerd[1537]: time="2025-11-23T22:48:20.284661786Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
Nov 23 22:48:20.284750 containerd[1537]: time="2025-11-23T22:48:20.284677546Z" level=info msg="runtime interface starting up..."
Nov 23 22:48:20.284750 containerd[1537]: time="2025-11-23T22:48:20.284684786Z" level=info msg="starting plugins..."
Nov 23 22:48:20.284750 containerd[1537]: time="2025-11-23T22:48:20.284706466Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
Nov 23 22:48:20.284880 containerd[1537]: time="2025-11-23T22:48:20.284856426Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Nov 23 22:48:20.284946 containerd[1537]: time="2025-11-23T22:48:20.284932386Z" level=info msg=serving... address=/run/containerd/containerd.sock
Nov 23 22:48:20.285125 containerd[1537]: time="2025-11-23T22:48:20.285096866Z" level=info msg="containerd successfully booted in 0.265151s"
Nov 23 22:48:20.285227 systemd[1]: Started containerd.service - containerd container runtime.
Nov 23 22:48:20.896034 sshd_keygen[1529]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Nov 23 22:48:20.915599 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Nov 23 22:48:20.918791 systemd[1]: Starting issuegen.service - Generate /run/issue...
Nov 23 22:48:20.944099 systemd[1]: issuegen.service: Deactivated successfully.
Nov 23 22:48:20.944310 systemd[1]: Finished issuegen.service - Generate /run/issue.
Nov 23 22:48:20.947006 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Nov 23 22:48:20.966667 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Nov 23 22:48:20.969359 systemd[1]: Started getty@tty1.service - Getty on tty1.
Nov 23 22:48:20.971693 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0.
Nov 23 22:48:20.973092 systemd[1]: Reached target getty.target - Login Prompts.
Nov 23 22:48:21.687728 systemd-networkd[1438]: eth0: Gained IPv6LL
Nov 23 22:48:21.691516 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Nov 23 22:48:21.693153 systemd[1]: Reached target network-online.target - Network is Online.
Nov 23 22:48:21.695554 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Nov 23 22:48:21.698419 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 23 22:48:21.707377 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Nov 23 22:48:21.733884 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Nov 23 22:48:21.737125 systemd[1]: coreos-metadata.service: Deactivated successfully.
Nov 23 22:48:21.738577 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Nov 23 22:48:21.741590 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Nov 23 22:48:22.300805 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 23 22:48:22.302482 systemd[1]: Reached target multi-user.target - Multi-User System.
Nov 23 22:48:22.305206 (kubelet)[1639]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Nov 23 22:48:22.305470 systemd[1]: Startup finished in 2.109s (kernel) + 4.908s (initrd) + 4.383s (userspace) = 11.400s.
Nov 23 22:48:22.636275 kubelet[1639]: E1123 22:48:22.636151 1639 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Nov 23 22:48:22.639641 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Nov 23 22:48:22.639786 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Nov 23 22:48:22.641610 systemd[1]: kubelet.service: Consumed 702ms CPU time, 248.3M memory peak.
Nov 23 22:48:26.266035 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Nov 23 22:48:26.267175 systemd[1]: Started sshd@0-10.0.0.9:22-10.0.0.1:60736.service - OpenSSH per-connection server daemon (10.0.0.1:60736).
Nov 23 22:48:26.346178 sshd[1652]: Accepted publickey for core from 10.0.0.1 port 60736 ssh2: RSA SHA256:QxoOoLvgP9E+zipnRJ4K0FLuuw/ehjwLMaCJR2ynZa8
Nov 23 22:48:26.348159 sshd-session[1652]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 23 22:48:26.354564 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Nov 23 22:48:26.355495 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Nov 23 22:48:26.361201 systemd-logind[1512]: New session 1 of user core.
Nov 23 22:48:26.379589 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Nov 23 22:48:26.382602 systemd[1]: Starting user@500.service - User Manager for UID 500...
Nov 23 22:48:26.402590 (systemd)[1657]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Nov 23 22:48:26.404837 systemd-logind[1512]: New session c1 of user core.
Nov 23 22:48:26.499446 systemd[1657]: Queued start job for default target default.target.
Nov 23 22:48:26.523576 systemd[1657]: Created slice app.slice - User Application Slice.
Nov 23 22:48:26.523607 systemd[1657]: Reached target paths.target - Paths.
Nov 23 22:48:26.523647 systemd[1657]: Reached target timers.target - Timers.
Nov 23 22:48:26.524890 systemd[1657]: Starting dbus.socket - D-Bus User Message Bus Socket...
Nov 23 22:48:26.534665 systemd[1657]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Nov 23 22:48:26.534735 systemd[1657]: Reached target sockets.target - Sockets.
Nov 23 22:48:26.534774 systemd[1657]: Reached target basic.target - Basic System.
Nov 23 22:48:26.534801 systemd[1657]: Reached target default.target - Main User Target.
Nov 23 22:48:26.534832 systemd[1657]: Startup finished in 123ms.
Nov 23 22:48:26.534945 systemd[1]: Started user@500.service - User Manager for UID 500. Nov 23 22:48:26.536438 systemd[1]: Started session-1.scope - Session 1 of User core. Nov 23 22:48:26.605763 systemd[1]: Started sshd@1-10.0.0.9:22-10.0.0.1:60750.service - OpenSSH per-connection server daemon (10.0.0.1:60750). Nov 23 22:48:26.659230 sshd[1668]: Accepted publickey for core from 10.0.0.1 port 60750 ssh2: RSA SHA256:QxoOoLvgP9E+zipnRJ4K0FLuuw/ehjwLMaCJR2ynZa8 Nov 23 22:48:26.660780 sshd-session[1668]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 22:48:26.664776 systemd-logind[1512]: New session 2 of user core. Nov 23 22:48:26.680710 systemd[1]: Started session-2.scope - Session 2 of User core. Nov 23 22:48:26.731800 sshd[1671]: Connection closed by 10.0.0.1 port 60750 Nov 23 22:48:26.732153 sshd-session[1668]: pam_unix(sshd:session): session closed for user core Nov 23 22:48:26.743585 systemd[1]: sshd@1-10.0.0.9:22-10.0.0.1:60750.service: Deactivated successfully. Nov 23 22:48:26.745910 systemd[1]: session-2.scope: Deactivated successfully. Nov 23 22:48:26.746871 systemd-logind[1512]: Session 2 logged out. Waiting for processes to exit. Nov 23 22:48:26.749594 systemd[1]: Started sshd@2-10.0.0.9:22-10.0.0.1:60760.service - OpenSSH per-connection server daemon (10.0.0.1:60760). Nov 23 22:48:26.750532 systemd-logind[1512]: Removed session 2. Nov 23 22:48:26.798425 sshd[1677]: Accepted publickey for core from 10.0.0.1 port 60760 ssh2: RSA SHA256:QxoOoLvgP9E+zipnRJ4K0FLuuw/ehjwLMaCJR2ynZa8 Nov 23 22:48:26.799590 sshd-session[1677]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 22:48:26.803756 systemd-logind[1512]: New session 3 of user core. Nov 23 22:48:26.817742 systemd[1]: Started session-3.scope - Session 3 of User core. 
Nov 23 22:48:26.865672 sshd[1680]: Connection closed by 10.0.0.1 port 60760 Nov 23 22:48:26.866064 sshd-session[1677]: pam_unix(sshd:session): session closed for user core Nov 23 22:48:26.875673 systemd[1]: sshd@2-10.0.0.9:22-10.0.0.1:60760.service: Deactivated successfully. Nov 23 22:48:26.877425 systemd[1]: session-3.scope: Deactivated successfully. Nov 23 22:48:26.879052 systemd-logind[1512]: Session 3 logged out. Waiting for processes to exit. Nov 23 22:48:26.881075 systemd[1]: Started sshd@3-10.0.0.9:22-10.0.0.1:60776.service - OpenSSH per-connection server daemon (10.0.0.1:60776). Nov 23 22:48:26.881577 systemd-logind[1512]: Removed session 3. Nov 23 22:48:26.946887 sshd[1686]: Accepted publickey for core from 10.0.0.1 port 60776 ssh2: RSA SHA256:QxoOoLvgP9E+zipnRJ4K0FLuuw/ehjwLMaCJR2ynZa8 Nov 23 22:48:26.948085 sshd-session[1686]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 22:48:26.952332 systemd-logind[1512]: New session 4 of user core. Nov 23 22:48:26.958692 systemd[1]: Started session-4.scope - Session 4 of User core. Nov 23 22:48:27.009711 sshd[1689]: Connection closed by 10.0.0.1 port 60776 Nov 23 22:48:27.010012 sshd-session[1686]: pam_unix(sshd:session): session closed for user core Nov 23 22:48:27.028426 systemd[1]: sshd@3-10.0.0.9:22-10.0.0.1:60776.service: Deactivated successfully. Nov 23 22:48:27.030006 systemd[1]: session-4.scope: Deactivated successfully. Nov 23 22:48:27.030824 systemd-logind[1512]: Session 4 logged out. Waiting for processes to exit. Nov 23 22:48:27.032609 systemd[1]: Started sshd@4-10.0.0.9:22-10.0.0.1:60780.service - OpenSSH per-connection server daemon (10.0.0.1:60780). Nov 23 22:48:27.033534 systemd-logind[1512]: Removed session 4. 
Nov 23 22:48:27.079853 sshd[1695]: Accepted publickey for core from 10.0.0.1 port 60780 ssh2: RSA SHA256:QxoOoLvgP9E+zipnRJ4K0FLuuw/ehjwLMaCJR2ynZa8 Nov 23 22:48:27.080957 sshd-session[1695]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 22:48:27.084565 systemd-logind[1512]: New session 5 of user core. Nov 23 22:48:27.101677 systemd[1]: Started session-5.scope - Session 5 of User core. Nov 23 22:48:27.158309 sudo[1699]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Nov 23 22:48:27.158605 sudo[1699]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 23 22:48:27.178416 sudo[1699]: pam_unix(sudo:session): session closed for user root Nov 23 22:48:27.180679 sshd[1698]: Connection closed by 10.0.0.1 port 60780 Nov 23 22:48:27.180458 sshd-session[1695]: pam_unix(sshd:session): session closed for user core Nov 23 22:48:27.192564 systemd[1]: sshd@4-10.0.0.9:22-10.0.0.1:60780.service: Deactivated successfully. Nov 23 22:48:27.194827 systemd[1]: session-5.scope: Deactivated successfully. Nov 23 22:48:27.195587 systemd-logind[1512]: Session 5 logged out. Waiting for processes to exit. Nov 23 22:48:27.197884 systemd-logind[1512]: Removed session 5. Nov 23 22:48:27.199081 systemd[1]: Started sshd@5-10.0.0.9:22-10.0.0.1:60786.service - OpenSSH per-connection server daemon (10.0.0.1:60786). Nov 23 22:48:27.260141 sshd[1705]: Accepted publickey for core from 10.0.0.1 port 60786 ssh2: RSA SHA256:QxoOoLvgP9E+zipnRJ4K0FLuuw/ehjwLMaCJR2ynZa8 Nov 23 22:48:27.261411 sshd-session[1705]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 22:48:27.265118 systemd-logind[1512]: New session 6 of user core. Nov 23 22:48:27.272705 systemd[1]: Started session-6.scope - Session 6 of User core. 
Nov 23 22:48:27.323436 sudo[1710]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Nov 23 22:48:27.323715 sudo[1710]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 23 22:48:27.404671 sudo[1710]: pam_unix(sudo:session): session closed for user root Nov 23 22:48:27.409612 sudo[1709]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Nov 23 22:48:27.409863 sudo[1709]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 23 22:48:27.418848 systemd[1]: Starting audit-rules.service - Load Audit Rules... Nov 23 22:48:27.460287 augenrules[1732]: No rules Nov 23 22:48:27.461106 systemd[1]: audit-rules.service: Deactivated successfully. Nov 23 22:48:27.462558 systemd[1]: Finished audit-rules.service - Load Audit Rules. Nov 23 22:48:27.463753 sudo[1709]: pam_unix(sudo:session): session closed for user root Nov 23 22:48:27.465570 sshd[1708]: Connection closed by 10.0.0.1 port 60786 Nov 23 22:48:27.465409 sshd-session[1705]: pam_unix(sshd:session): session closed for user core Nov 23 22:48:27.476495 systemd[1]: sshd@5-10.0.0.9:22-10.0.0.1:60786.service: Deactivated successfully. Nov 23 22:48:27.478892 systemd[1]: session-6.scope: Deactivated successfully. Nov 23 22:48:27.480690 systemd-logind[1512]: Session 6 logged out. Waiting for processes to exit. Nov 23 22:48:27.482762 systemd[1]: Started sshd@6-10.0.0.9:22-10.0.0.1:60788.service - OpenSSH per-connection server daemon (10.0.0.1:60788). Nov 23 22:48:27.483646 systemd-logind[1512]: Removed session 6. Nov 23 22:48:27.539994 sshd[1741]: Accepted publickey for core from 10.0.0.1 port 60788 ssh2: RSA SHA256:QxoOoLvgP9E+zipnRJ4K0FLuuw/ehjwLMaCJR2ynZa8 Nov 23 22:48:27.541179 sshd-session[1741]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 22:48:27.544858 systemd-logind[1512]: New session 7 of user core. 
Nov 23 22:48:27.561694 systemd[1]: Started session-7.scope - Session 7 of User core. Nov 23 22:48:27.611446 sudo[1745]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Nov 23 22:48:27.611748 sudo[1745]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 23 22:48:27.883786 systemd[1]: Starting docker.service - Docker Application Container Engine... Nov 23 22:48:27.897039 (dockerd)[1766]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Nov 23 22:48:28.097140 dockerd[1766]: time="2025-11-23T22:48:28.096755626Z" level=info msg="Starting up" Nov 23 22:48:28.097684 dockerd[1766]: time="2025-11-23T22:48:28.097659066Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Nov 23 22:48:28.108927 dockerd[1766]: time="2025-11-23T22:48:28.108876306Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Nov 23 22:48:28.137566 dockerd[1766]: time="2025-11-23T22:48:28.137431626Z" level=info msg="Loading containers: start." Nov 23 22:48:28.146556 kernel: Initializing XFRM netlink socket Nov 23 22:48:28.342011 systemd-networkd[1438]: docker0: Link UP Nov 23 22:48:28.345967 dockerd[1766]: time="2025-11-23T22:48:28.345922066Z" level=info msg="Loading containers: done." 
Nov 23 22:48:28.361020 dockerd[1766]: time="2025-11-23T22:48:28.360973986Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Nov 23 22:48:28.361177 dockerd[1766]: time="2025-11-23T22:48:28.361073866Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Nov 23 22:48:28.361177 dockerd[1766]: time="2025-11-23T22:48:28.361163986Z" level=info msg="Initializing buildkit" Nov 23 22:48:28.383563 dockerd[1766]: time="2025-11-23T22:48:28.383525786Z" level=info msg="Completed buildkit initialization" Nov 23 22:48:28.390271 dockerd[1766]: time="2025-11-23T22:48:28.390087506Z" level=info msg="Daemon has completed initialization" Nov 23 22:48:28.390373 dockerd[1766]: time="2025-11-23T22:48:28.390166706Z" level=info msg="API listen on /run/docker.sock" Nov 23 22:48:28.390316 systemd[1]: Started docker.service - Docker Application Container Engine. Nov 23 22:48:28.832561 containerd[1537]: time="2025-11-23T22:48:28.832445986Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.2\"" Nov 23 22:48:29.372829 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2007954944.mount: Deactivated successfully. 
Nov 23 22:48:30.181286 containerd[1537]: time="2025-11-23T22:48:30.181224506Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 22:48:30.182112 containerd[1537]: time="2025-11-23T22:48:30.182065146Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.2: active requests=0, bytes read=24563046" Nov 23 22:48:30.183156 containerd[1537]: time="2025-11-23T22:48:30.183113626Z" level=info msg="ImageCreate event name:\"sha256:b178af3d91f80925cd8bec42e1813e7d46370236a811d3380c9c10a02b245ca7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 22:48:30.186014 containerd[1537]: time="2025-11-23T22:48:30.185977026Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:e009ef63deaf797763b5bd423d04a099a2fe414a081bf7d216b43bc9e76b9077\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 22:48:30.187581 containerd[1537]: time="2025-11-23T22:48:30.187540266Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.2\" with image id \"sha256:b178af3d91f80925cd8bec42e1813e7d46370236a811d3380c9c10a02b245ca7\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.2\", repo digest \"registry.k8s.io/kube-apiserver@sha256:e009ef63deaf797763b5bd423d04a099a2fe414a081bf7d216b43bc9e76b9077\", size \"24559643\" in 1.35505336s" Nov 23 22:48:30.187622 containerd[1537]: time="2025-11-23T22:48:30.187581426Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.2\" returns image reference \"sha256:b178af3d91f80925cd8bec42e1813e7d46370236a811d3380c9c10a02b245ca7\"" Nov 23 22:48:30.188085 containerd[1537]: time="2025-11-23T22:48:30.188026026Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.2\"" Nov 23 22:48:31.143005 containerd[1537]: time="2025-11-23T22:48:31.142953266Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.2\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 22:48:31.144417 containerd[1537]: time="2025-11-23T22:48:31.144187266Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.2: active requests=0, bytes read=19134214" Nov 23 22:48:31.145158 containerd[1537]: time="2025-11-23T22:48:31.145128866Z" level=info msg="ImageCreate event name:\"sha256:1b34917560f0916ad0d1e98debeaf98c640b68c5a38f6d87711f0e288e5d7be2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 22:48:31.147681 containerd[1537]: time="2025-11-23T22:48:31.147652866Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:5c3998664b77441c09a4604f1361b230e63f7a6f299fc02fc1ebd1a12c38e3eb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 22:48:31.148805 containerd[1537]: time="2025-11-23T22:48:31.148777266Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.2\" with image id \"sha256:1b34917560f0916ad0d1e98debeaf98c640b68c5a38f6d87711f0e288e5d7be2\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.2\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:5c3998664b77441c09a4604f1361b230e63f7a6f299fc02fc1ebd1a12c38e3eb\", size \"20718696\" in 960.7214ms" Nov 23 22:48:31.148864 containerd[1537]: time="2025-11-23T22:48:31.148808426Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.2\" returns image reference \"sha256:1b34917560f0916ad0d1e98debeaf98c640b68c5a38f6d87711f0e288e5d7be2\"" Nov 23 22:48:31.149222 containerd[1537]: time="2025-11-23T22:48:31.149203266Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.2\"" Nov 23 22:48:32.012543 containerd[1537]: time="2025-11-23T22:48:32.012400466Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 22:48:32.013356 containerd[1537]: time="2025-11-23T22:48:32.013189746Z" 
level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.2: active requests=0, bytes read=14191285" Nov 23 22:48:32.014591 containerd[1537]: time="2025-11-23T22:48:32.014563346Z" level=info msg="ImageCreate event name:\"sha256:4f982e73e768a6ccebb54f8905b83b78d56b3a014e709c0bfe77140db3543949\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 22:48:32.017726 containerd[1537]: time="2025-11-23T22:48:32.017696306Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:44229946c0966b07d5c0791681d803e77258949985e49b4ab0fbdff99d2a48c6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 22:48:32.019278 containerd[1537]: time="2025-11-23T22:48:32.019176426Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.2\" with image id \"sha256:4f982e73e768a6ccebb54f8905b83b78d56b3a014e709c0bfe77140db3543949\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.2\", repo digest \"registry.k8s.io/kube-scheduler@sha256:44229946c0966b07d5c0791681d803e77258949985e49b4ab0fbdff99d2a48c6\", size \"15775785\" in 869.94468ms" Nov 23 22:48:32.019278 containerd[1537]: time="2025-11-23T22:48:32.019204426Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.2\" returns image reference \"sha256:4f982e73e768a6ccebb54f8905b83b78d56b3a014e709c0bfe77140db3543949\"" Nov 23 22:48:32.019694 containerd[1537]: time="2025-11-23T22:48:32.019676066Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.2\"" Nov 23 22:48:32.853812 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Nov 23 22:48:32.855142 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 23 22:48:33.014995 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Nov 23 22:48:33.035830 (kubelet)[2062]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 23 22:48:33.075423 kubelet[2062]: E1123 22:48:33.075334 2062 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 23 22:48:33.078463 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 23 22:48:33.078611 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 23 22:48:33.079117 systemd[1]: kubelet.service: Consumed 148ms CPU time, 107.9M memory peak. Nov 23 22:48:33.139531 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount292528146.mount: Deactivated successfully. Nov 23 22:48:33.366229 containerd[1537]: time="2025-11-23T22:48:33.366167826Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 22:48:33.367273 containerd[1537]: time="2025-11-23T22:48:33.367225586Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.2: active requests=0, bytes read=22803243" Nov 23 22:48:33.368435 containerd[1537]: time="2025-11-23T22:48:33.368403546Z" level=info msg="ImageCreate event name:\"sha256:94bff1bec29fd04573941f362e44a6730b151d46df215613feb3f1167703f786\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 22:48:33.372900 containerd[1537]: time="2025-11-23T22:48:33.372628546Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:d8b843ac8a5e861238df24a4db8c2ddced89948633400c4660464472045276f5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 22:48:33.373360 containerd[1537]: time="2025-11-23T22:48:33.373319026Z" level=info 
msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.2\" with image id \"sha256:94bff1bec29fd04573941f362e44a6730b151d46df215613feb3f1167703f786\", repo tag \"registry.k8s.io/kube-proxy:v1.34.2\", repo digest \"registry.k8s.io/kube-proxy@sha256:d8b843ac8a5e861238df24a4db8c2ddced89948633400c4660464472045276f5\", size \"22802260\" in 1.35355748s" Nov 23 22:48:33.373360 containerd[1537]: time="2025-11-23T22:48:33.373355026Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.2\" returns image reference \"sha256:94bff1bec29fd04573941f362e44a6730b151d46df215613feb3f1167703f786\"" Nov 23 22:48:33.373970 containerd[1537]: time="2025-11-23T22:48:33.373929546Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\"" Nov 23 22:48:33.886319 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3264034107.mount: Deactivated successfully. Nov 23 22:48:34.728519 containerd[1537]: time="2025-11-23T22:48:34.728449146Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 22:48:34.729320 containerd[1537]: time="2025-11-23T22:48:34.729281106Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=20395408" Nov 23 22:48:34.732501 containerd[1537]: time="2025-11-23T22:48:34.732463186Z" level=info msg="ImageCreate event name:\"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 22:48:34.739011 containerd[1537]: time="2025-11-23T22:48:34.738963026Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 22:48:34.741344 containerd[1537]: time="2025-11-23T22:48:34.741298706Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id 
\"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"20392204\" in 1.3673328s" Nov 23 22:48:34.741344 containerd[1537]: time="2025-11-23T22:48:34.741337266Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc\"" Nov 23 22:48:34.741747 containerd[1537]: time="2025-11-23T22:48:34.741716546Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Nov 23 22:48:35.197949 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount578292622.mount: Deactivated successfully. Nov 23 22:48:35.204099 containerd[1537]: time="2025-11-23T22:48:35.204044066Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 22:48:35.204897 containerd[1537]: time="2025-11-23T22:48:35.204684266Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=268711" Nov 23 22:48:35.205796 containerd[1537]: time="2025-11-23T22:48:35.205747866Z" level=info msg="ImageCreate event name:\"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 22:48:35.208404 containerd[1537]: time="2025-11-23T22:48:35.208352986Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 22:48:35.208934 containerd[1537]: time="2025-11-23T22:48:35.208897746Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\", repo tag \"registry.k8s.io/pause:3.10.1\", 
repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"267939\" in 467.14984ms" Nov 23 22:48:35.208934 containerd[1537]: time="2025-11-23T22:48:35.208932386Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\"" Nov 23 22:48:35.210238 containerd[1537]: time="2025-11-23T22:48:35.209599946Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\"" Nov 23 22:48:35.766813 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2826564990.mount: Deactivated successfully. Nov 23 22:48:37.915882 containerd[1537]: time="2025-11-23T22:48:37.915818626Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.4-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 22:48:37.916681 containerd[1537]: time="2025-11-23T22:48:37.916646146Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.4-0: active requests=0, bytes read=98062989" Nov 23 22:48:37.919649 containerd[1537]: time="2025-11-23T22:48:37.919592626Z" level=info msg="ImageCreate event name:\"sha256:a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 22:48:37.923250 containerd[1537]: time="2025-11-23T22:48:37.923212786Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 22:48:37.925083 containerd[1537]: time="2025-11-23T22:48:37.924957306Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.4-0\" with image id \"sha256:a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e\", repo tag \"registry.k8s.io/etcd:3.6.4-0\", repo digest \"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\", size \"98207481\" in 2.71532532s" Nov 23 
22:48:37.925083 containerd[1537]: time="2025-11-23T22:48:37.924993586Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\" returns image reference \"sha256:a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e\"" Nov 23 22:48:41.820414 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 23 22:48:41.820585 systemd[1]: kubelet.service: Consumed 148ms CPU time, 107.9M memory peak. Nov 23 22:48:41.823491 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 23 22:48:41.848487 systemd[1]: Reload requested from client PID 2216 ('systemctl') (unit session-7.scope)... Nov 23 22:48:41.848505 systemd[1]: Reloading... Nov 23 22:48:41.919842 zram_generator::config[2255]: No configuration found. Nov 23 22:48:42.126077 systemd[1]: Reloading finished in 277 ms. Nov 23 22:48:42.172236 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 23 22:48:42.175639 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Nov 23 22:48:42.176823 systemd[1]: kubelet.service: Deactivated successfully. Nov 23 22:48:42.177206 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 23 22:48:42.177250 systemd[1]: kubelet.service: Consumed 97ms CPU time, 95.1M memory peak. Nov 23 22:48:42.179793 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 23 22:48:42.317192 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 23 22:48:42.326883 (kubelet)[2305]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 23 22:48:42.362800 kubelet[2305]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. 
Nov 23 22:48:42.362800 kubelet[2305]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 23 22:48:42.363563 kubelet[2305]: I1123 22:48:42.363500 2305 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 23 22:48:43.227629 kubelet[2305]: I1123 22:48:43.227576 2305 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Nov 23 22:48:43.227629 kubelet[2305]: I1123 22:48:43.227613 2305 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 23 22:48:43.228686 kubelet[2305]: I1123 22:48:43.228657 2305 watchdog_linux.go:95] "Systemd watchdog is not enabled" Nov 23 22:48:43.228686 kubelet[2305]: I1123 22:48:43.228680 2305 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Nov 23 22:48:43.228971 kubelet[2305]: I1123 22:48:43.228943 2305 server.go:956] "Client rotation is on, will bootstrap in background" Nov 23 22:48:43.236711 kubelet[2305]: E1123 22:48:43.236528 2305 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.9:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.9:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Nov 23 22:48:43.237307 kubelet[2305]: I1123 22:48:43.237285 2305 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 23 22:48:43.240705 kubelet[2305]: I1123 22:48:43.240678 2305 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Nov 23 22:48:43.243429 kubelet[2305]: I1123 22:48:43.243410 2305 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Nov 23 22:48:43.243782 kubelet[2305]: I1123 22:48:43.243749 2305 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 23 22:48:43.244086 kubelet[2305]: I1123 22:48:43.243862 2305 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 23 22:48:43.244086 kubelet[2305]: I1123 22:48:43.244033 2305 topology_manager.go:138] "Creating topology manager with none policy" Nov 23 22:48:43.244086 
kubelet[2305]: I1123 22:48:43.244041 2305 container_manager_linux.go:306] "Creating device plugin manager" Nov 23 22:48:43.244331 kubelet[2305]: I1123 22:48:43.244306 2305 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Nov 23 22:48:43.247470 kubelet[2305]: I1123 22:48:43.247442 2305 state_mem.go:36] "Initialized new in-memory state store" Nov 23 22:48:43.249525 kubelet[2305]: I1123 22:48:43.248705 2305 kubelet.go:475] "Attempting to sync node with API server" Nov 23 22:48:43.249525 kubelet[2305]: I1123 22:48:43.248729 2305 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 23 22:48:43.249525 kubelet[2305]: I1123 22:48:43.248753 2305 kubelet.go:387] "Adding apiserver pod source" Nov 23 22:48:43.249525 kubelet[2305]: I1123 22:48:43.248763 2305 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 23 22:48:43.250664 kubelet[2305]: I1123 22:48:43.250644 2305 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Nov 23 22:48:43.251531 kubelet[2305]: I1123 22:48:43.251490 2305 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Nov 23 22:48:43.251642 kubelet[2305]: I1123 22:48:43.251629 2305 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Nov 23 22:48:43.251738 kubelet[2305]: W1123 22:48:43.251727 2305 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Nov 23 22:48:43.252892 kubelet[2305]: E1123 22:48:43.250823 2305 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.9:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.9:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Nov 23 22:48:43.252892 kubelet[2305]: E1123 22:48:43.250734 2305 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.9:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.9:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Nov 23 22:48:43.254553 kubelet[2305]: I1123 22:48:43.254536 2305 server.go:1262] "Started kubelet" Nov 23 22:48:43.254786 kubelet[2305]: I1123 22:48:43.254762 2305 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Nov 23 22:48:43.255084 kubelet[2305]: I1123 22:48:43.255036 2305 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 23 22:48:43.255123 kubelet[2305]: I1123 22:48:43.255091 2305 server_v1.go:49] "podresources" method="list" useActivePods=true Nov 23 22:48:43.255405 kubelet[2305]: I1123 22:48:43.255366 2305 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 23 22:48:43.255405 kubelet[2305]: I1123 22:48:43.255387 2305 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 23 22:48:43.255910 kubelet[2305]: I1123 22:48:43.255891 2305 server.go:310] "Adding debug handlers to kubelet server" Nov 23 22:48:43.258776 kubelet[2305]: I1123 22:48:43.258742 2305 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 23 22:48:43.259314 kubelet[2305]: 
I1123 22:48:43.259292 2305 volume_manager.go:313] "Starting Kubelet Volume Manager" Nov 23 22:48:43.260306 kubelet[2305]: E1123 22:48:43.260275 2305 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 23 22:48:43.260991 kubelet[2305]: I1123 22:48:43.260966 2305 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Nov 23 22:48:43.261117 kubelet[2305]: I1123 22:48:43.261107 2305 reconciler.go:29] "Reconciler: start to sync state" Nov 23 22:48:43.261270 kubelet[2305]: I1123 22:48:43.261229 2305 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 23 22:48:43.261554 kubelet[2305]: E1123 22:48:43.261484 2305 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.9:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.9:6443: connect: connection refused" interval="200ms" Nov 23 22:48:43.261803 kubelet[2305]: E1123 22:48:43.261780 2305 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.9:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.9:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Nov 23 22:48:43.262002 kubelet[2305]: E1123 22:48:43.261981 2305 kubelet.go:1615] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 23 22:48:43.262585 kubelet[2305]: I1123 22:48:43.262553 2305 factory.go:223] Registration of the containerd container factory successfully Nov 23 22:48:43.262585 kubelet[2305]: I1123 22:48:43.262586 2305 factory.go:223] Registration of the systemd container factory successfully Nov 23 22:48:43.263898 kubelet[2305]: E1123 22:48:43.262719 2305 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.9:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.9:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.187ac4609a5a33da default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-11-23 22:48:43.254477786 +0000 UTC m=+0.924492881,LastTimestamp:2025-11-23 22:48:43.254477786 +0000 UTC m=+0.924492881,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Nov 23 22:48:43.271291 kubelet[2305]: I1123 22:48:43.271265 2305 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 23 22:48:43.271291 kubelet[2305]: I1123 22:48:43.271283 2305 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 23 22:48:43.271410 kubelet[2305]: I1123 22:48:43.271304 2305 state_mem.go:36] "Initialized new in-memory state store" Nov 23 22:48:43.273355 kubelet[2305]: I1123 22:48:43.273307 2305 policy_none.go:49] "None policy: Start" Nov 23 22:48:43.273355 kubelet[2305]: I1123 22:48:43.273339 2305 memory_manager.go:187] "Starting memorymanager" policy="None" Nov 23 22:48:43.273355 kubelet[2305]: I1123 22:48:43.273353 2305 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager 
state checkpoint" Nov 23 22:48:43.275247 kubelet[2305]: I1123 22:48:43.274606 2305 policy_none.go:47] "Start" Nov 23 22:48:43.279216 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Nov 23 22:48:43.281418 kubelet[2305]: I1123 22:48:43.281382 2305 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Nov 23 22:48:43.282499 kubelet[2305]: I1123 22:48:43.282464 2305 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Nov 23 22:48:43.282499 kubelet[2305]: I1123 22:48:43.282497 2305 status_manager.go:244] "Starting to sync pod status with apiserver" Nov 23 22:48:43.282609 kubelet[2305]: I1123 22:48:43.282550 2305 kubelet.go:2427] "Starting kubelet main sync loop" Nov 23 22:48:43.282609 kubelet[2305]: E1123 22:48:43.282596 2305 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 23 22:48:43.283072 kubelet[2305]: E1123 22:48:43.283032 2305 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.9:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.9:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Nov 23 22:48:43.290433 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Nov 23 22:48:43.293789 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Nov 23 22:48:43.304684 kubelet[2305]: E1123 22:48:43.304542 2305 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Nov 23 22:48:43.304799 kubelet[2305]: I1123 22:48:43.304777 2305 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 23 22:48:43.304826 kubelet[2305]: I1123 22:48:43.304794 2305 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 23 22:48:43.305224 kubelet[2305]: I1123 22:48:43.305177 2305 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 23 22:48:43.306898 kubelet[2305]: E1123 22:48:43.306872 2305 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Nov 23 22:48:43.306988 kubelet[2305]: E1123 22:48:43.306923 2305 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Nov 23 22:48:43.394119 systemd[1]: Created slice kubepods-burstable-pod47f83fd1945de69dee67dc7f4d962463.slice - libcontainer container kubepods-burstable-pod47f83fd1945de69dee67dc7f4d962463.slice. 
Nov 23 22:48:43.405886 kubelet[2305]: I1123 22:48:43.405853 2305 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 23 22:48:43.406355 kubelet[2305]: E1123 22:48:43.406296 2305 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.9:6443/api/v1/nodes\": dial tcp 10.0.0.9:6443: connect: connection refused" node="localhost" Nov 23 22:48:43.408505 kubelet[2305]: E1123 22:48:43.408479 2305 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 23 22:48:43.410248 systemd[1]: Created slice kubepods-burstable-pod41694572f76b3db8403039f40dd5ea25.slice - libcontainer container kubepods-burstable-pod41694572f76b3db8403039f40dd5ea25.slice. Nov 23 22:48:43.411759 kubelet[2305]: E1123 22:48:43.411738 2305 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 23 22:48:43.428617 systemd[1]: Created slice kubepods-burstable-podf7d0af91d0c9a9742236c44baa5e2751.slice - libcontainer container kubepods-burstable-podf7d0af91d0c9a9742236c44baa5e2751.slice. 
Nov 23 22:48:43.430225 kubelet[2305]: E1123 22:48:43.430184 2305 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 23 22:48:43.461565 kubelet[2305]: I1123 22:48:43.461501 2305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/47f83fd1945de69dee67dc7f4d962463-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"47f83fd1945de69dee67dc7f4d962463\") " pod="kube-system/kube-apiserver-localhost" Nov 23 22:48:43.461955 kubelet[2305]: E1123 22:48:43.461914 2305 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.9:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.9:6443: connect: connection refused" interval="400ms" Nov 23 22:48:43.562541 kubelet[2305]: I1123 22:48:43.562427 2305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/47f83fd1945de69dee67dc7f4d962463-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"47f83fd1945de69dee67dc7f4d962463\") " pod="kube-system/kube-apiserver-localhost" Nov 23 22:48:43.562584 kubelet[2305]: I1123 22:48:43.562499 2305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/41694572f76b3db8403039f40dd5ea25-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"41694572f76b3db8403039f40dd5ea25\") " pod="kube-system/kube-controller-manager-localhost" Nov 23 22:48:43.562584 kubelet[2305]: I1123 22:48:43.562572 2305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/41694572f76b3db8403039f40dd5ea25-k8s-certs\") pod 
\"kube-controller-manager-localhost\" (UID: \"41694572f76b3db8403039f40dd5ea25\") " pod="kube-system/kube-controller-manager-localhost" Nov 23 22:48:43.562629 kubelet[2305]: I1123 22:48:43.562588 2305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/41694572f76b3db8403039f40dd5ea25-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"41694572f76b3db8403039f40dd5ea25\") " pod="kube-system/kube-controller-manager-localhost" Nov 23 22:48:43.562629 kubelet[2305]: I1123 22:48:43.562604 2305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f7d0af91d0c9a9742236c44baa5e2751-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"f7d0af91d0c9a9742236c44baa5e2751\") " pod="kube-system/kube-scheduler-localhost" Nov 23 22:48:43.562713 kubelet[2305]: I1123 22:48:43.562695 2305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/47f83fd1945de69dee67dc7f4d962463-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"47f83fd1945de69dee67dc7f4d962463\") " pod="kube-system/kube-apiserver-localhost" Nov 23 22:48:43.562871 kubelet[2305]: I1123 22:48:43.562719 2305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/41694572f76b3db8403039f40dd5ea25-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"41694572f76b3db8403039f40dd5ea25\") " pod="kube-system/kube-controller-manager-localhost" Nov 23 22:48:43.562871 kubelet[2305]: I1123 22:48:43.562862 2305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/41694572f76b3db8403039f40dd5ea25-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"41694572f76b3db8403039f40dd5ea25\") " pod="kube-system/kube-controller-manager-localhost" Nov 23 22:48:43.607721 kubelet[2305]: I1123 22:48:43.607699 2305 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 23 22:48:43.608117 kubelet[2305]: E1123 22:48:43.608075 2305 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.9:6443/api/v1/nodes\": dial tcp 10.0.0.9:6443: connect: connection refused" node="localhost" Nov 23 22:48:43.711518 containerd[1537]: time="2025-11-23T22:48:43.711473106Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:47f83fd1945de69dee67dc7f4d962463,Namespace:kube-system,Attempt:0,}" Nov 23 22:48:43.715766 containerd[1537]: time="2025-11-23T22:48:43.715726666Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:41694572f76b3db8403039f40dd5ea25,Namespace:kube-system,Attempt:0,}" Nov 23 22:48:43.732915 containerd[1537]: time="2025-11-23T22:48:43.732872546Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:f7d0af91d0c9a9742236c44baa5e2751,Namespace:kube-system,Attempt:0,}" Nov 23 22:48:43.862564 kubelet[2305]: E1123 22:48:43.862447 2305 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.9:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.9:6443: connect: connection refused" interval="800ms" Nov 23 22:48:44.009695 kubelet[2305]: I1123 22:48:44.009665 2305 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 23 22:48:44.010105 kubelet[2305]: E1123 22:48:44.010057 2305 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.9:6443/api/v1/nodes\": dial tcp 
10.0.0.9:6443: connect: connection refused" node="localhost" Nov 23 22:48:44.119702 kubelet[2305]: E1123 22:48:44.119567 2305 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.9:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.9:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Nov 23 22:48:44.185756 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1918812997.mount: Deactivated successfully. Nov 23 22:48:44.191075 containerd[1537]: time="2025-11-23T22:48:44.191022786Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 23 22:48:44.193523 containerd[1537]: time="2025-11-23T22:48:44.193475826Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705" Nov 23 22:48:44.194823 containerd[1537]: time="2025-11-23T22:48:44.194765066Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 23 22:48:44.195595 containerd[1537]: time="2025-11-23T22:48:44.195566986Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 23 22:48:44.196803 containerd[1537]: time="2025-11-23T22:48:44.196559746Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Nov 23 22:48:44.197384 containerd[1537]: time="2025-11-23T22:48:44.197356826Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} 
labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 23 22:48:44.198102 containerd[1537]: time="2025-11-23T22:48:44.198029066Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Nov 23 22:48:44.200526 containerd[1537]: time="2025-11-23T22:48:44.199985386Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 23 22:48:44.200987 containerd[1537]: time="2025-11-23T22:48:44.200965226Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 487.80908ms" Nov 23 22:48:44.201809 containerd[1537]: time="2025-11-23T22:48:44.201592466Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 467.57012ms" Nov 23 22:48:44.202745 kubelet[2305]: E1123 22:48:44.202714 2305 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.9:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.9:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Nov 23 22:48:44.204394 containerd[1537]: time="2025-11-23T22:48:44.204146786Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id 
\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 485.90116ms" Nov 23 22:48:44.224144 containerd[1537]: time="2025-11-23T22:48:44.224093666Z" level=info msg="connecting to shim 33d766f3dcf928a3b11a60f28a5e9fbe2751a95f9b1784cf2be11fa3d65a6da7" address="unix:///run/containerd/s/b367a5629e0d20e6430366403cbf80a50db2f1f8bbc1bbd4292f9a42dbda67d8" namespace=k8s.io protocol=ttrpc version=3 Nov 23 22:48:44.227368 kubelet[2305]: E1123 22:48:44.227324 2305 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.9:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.9:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Nov 23 22:48:44.237929 containerd[1537]: time="2025-11-23T22:48:44.237882786Z" level=info msg="connecting to shim 3315a6ea4b65feaddeccc97cc1d87af5337626ab6f292d19f39c983a9ddde9af" address="unix:///run/containerd/s/f8cf5211d5668abf3d5ea3862b2c7b604c29b9f48deb91fcc36c0b9cfc1ed60b" namespace=k8s.io protocol=ttrpc version=3 Nov 23 22:48:44.240708 containerd[1537]: time="2025-11-23T22:48:44.240672426Z" level=info msg="connecting to shim 5754af00106ea6aa2c315db17b930222a2c7c892e027add4fa1becf2f04a14aa" address="unix:///run/containerd/s/a9ba192a0a22975a89c4ce86074d4941ba6e8ab6637d303e2beb14add8ec223e" namespace=k8s.io protocol=ttrpc version=3 Nov 23 22:48:44.272762 systemd[1]: Started cri-containerd-3315a6ea4b65feaddeccc97cc1d87af5337626ab6f292d19f39c983a9ddde9af.scope - libcontainer container 3315a6ea4b65feaddeccc97cc1d87af5337626ab6f292d19f39c983a9ddde9af. 
Nov 23 22:48:44.273950 systemd[1]: Started cri-containerd-33d766f3dcf928a3b11a60f28a5e9fbe2751a95f9b1784cf2be11fa3d65a6da7.scope - libcontainer container 33d766f3dcf928a3b11a60f28a5e9fbe2751a95f9b1784cf2be11fa3d65a6da7. Nov 23 22:48:44.277765 systemd[1]: Started cri-containerd-5754af00106ea6aa2c315db17b930222a2c7c892e027add4fa1becf2f04a14aa.scope - libcontainer container 5754af00106ea6aa2c315db17b930222a2c7c892e027add4fa1becf2f04a14aa. Nov 23 22:48:44.321678 containerd[1537]: time="2025-11-23T22:48:44.321484626Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:41694572f76b3db8403039f40dd5ea25,Namespace:kube-system,Attempt:0,} returns sandbox id \"3315a6ea4b65feaddeccc97cc1d87af5337626ab6f292d19f39c983a9ddde9af\"" Nov 23 22:48:44.325748 containerd[1537]: time="2025-11-23T22:48:44.325705906Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:47f83fd1945de69dee67dc7f4d962463,Namespace:kube-system,Attempt:0,} returns sandbox id \"33d766f3dcf928a3b11a60f28a5e9fbe2751a95f9b1784cf2be11fa3d65a6da7\"" Nov 23 22:48:44.329132 containerd[1537]: time="2025-11-23T22:48:44.328155866Z" level=info msg="CreateContainer within sandbox \"3315a6ea4b65feaddeccc97cc1d87af5337626ab6f292d19f39c983a9ddde9af\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Nov 23 22:48:44.331681 containerd[1537]: time="2025-11-23T22:48:44.331650266Z" level=info msg="CreateContainer within sandbox \"33d766f3dcf928a3b11a60f28a5e9fbe2751a95f9b1784cf2be11fa3d65a6da7\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Nov 23 22:48:44.333475 containerd[1537]: time="2025-11-23T22:48:44.332857906Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:f7d0af91d0c9a9742236c44baa5e2751,Namespace:kube-system,Attempt:0,} returns sandbox id \"5754af00106ea6aa2c315db17b930222a2c7c892e027add4fa1becf2f04a14aa\"" Nov 23 22:48:44.338007 containerd[1537]: 
time="2025-11-23T22:48:44.337969306Z" level=info msg="Container fc878f290cf6dfcd99b4fa6cf97ef97da6cff5949002f6bc674466c1017fb397: CDI devices from CRI Config.CDIDevices: []" Nov 23 22:48:44.338683 containerd[1537]: time="2025-11-23T22:48:44.338646466Z" level=info msg="CreateContainer within sandbox \"5754af00106ea6aa2c315db17b930222a2c7c892e027add4fa1becf2f04a14aa\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Nov 23 22:48:44.344243 containerd[1537]: time="2025-11-23T22:48:44.344207466Z" level=info msg="Container 8b040ae8562c38020b9a090461b050ec2531381947c29169d8d7a7d0f40bb928: CDI devices from CRI Config.CDIDevices: []" Nov 23 22:48:44.349952 containerd[1537]: time="2025-11-23T22:48:44.349910426Z" level=info msg="Container 90c2fca9ad680b3b9ee8317a6a787ee5ba5b1d746dba28ae9c728837a76e37b2: CDI devices from CRI Config.CDIDevices: []" Nov 23 22:48:44.350289 containerd[1537]: time="2025-11-23T22:48:44.350261666Z" level=info msg="CreateContainer within sandbox \"3315a6ea4b65feaddeccc97cc1d87af5337626ab6f292d19f39c983a9ddde9af\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"fc878f290cf6dfcd99b4fa6cf97ef97da6cff5949002f6bc674466c1017fb397\"" Nov 23 22:48:44.351117 containerd[1537]: time="2025-11-23T22:48:44.351092226Z" level=info msg="StartContainer for \"fc878f290cf6dfcd99b4fa6cf97ef97da6cff5949002f6bc674466c1017fb397\"" Nov 23 22:48:44.352715 containerd[1537]: time="2025-11-23T22:48:44.352650786Z" level=info msg="connecting to shim fc878f290cf6dfcd99b4fa6cf97ef97da6cff5949002f6bc674466c1017fb397" address="unix:///run/containerd/s/f8cf5211d5668abf3d5ea3862b2c7b604c29b9f48deb91fcc36c0b9cfc1ed60b" protocol=ttrpc version=3 Nov 23 22:48:44.354775 containerd[1537]: time="2025-11-23T22:48:44.354740026Z" level=info msg="CreateContainer within sandbox \"33d766f3dcf928a3b11a60f28a5e9fbe2751a95f9b1784cf2be11fa3d65a6da7\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id 
\"8b040ae8562c38020b9a090461b050ec2531381947c29169d8d7a7d0f40bb928\"" Nov 23 22:48:44.355429 containerd[1537]: time="2025-11-23T22:48:44.355404306Z" level=info msg="StartContainer for \"8b040ae8562c38020b9a090461b050ec2531381947c29169d8d7a7d0f40bb928\"" Nov 23 22:48:44.356606 containerd[1537]: time="2025-11-23T22:48:44.356576186Z" level=info msg="connecting to shim 8b040ae8562c38020b9a090461b050ec2531381947c29169d8d7a7d0f40bb928" address="unix:///run/containerd/s/b367a5629e0d20e6430366403cbf80a50db2f1f8bbc1bbd4292f9a42dbda67d8" protocol=ttrpc version=3 Nov 23 22:48:44.359369 containerd[1537]: time="2025-11-23T22:48:44.359331306Z" level=info msg="CreateContainer within sandbox \"5754af00106ea6aa2c315db17b930222a2c7c892e027add4fa1becf2f04a14aa\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"90c2fca9ad680b3b9ee8317a6a787ee5ba5b1d746dba28ae9c728837a76e37b2\"" Nov 23 22:48:44.359922 containerd[1537]: time="2025-11-23T22:48:44.359890666Z" level=info msg="StartContainer for \"90c2fca9ad680b3b9ee8317a6a787ee5ba5b1d746dba28ae9c728837a76e37b2\"" Nov 23 22:48:44.361627 containerd[1537]: time="2025-11-23T22:48:44.361593546Z" level=info msg="connecting to shim 90c2fca9ad680b3b9ee8317a6a787ee5ba5b1d746dba28ae9c728837a76e37b2" address="unix:///run/containerd/s/a9ba192a0a22975a89c4ce86074d4941ba6e8ab6637d303e2beb14add8ec223e" protocol=ttrpc version=3 Nov 23 22:48:44.373726 systemd[1]: Started cri-containerd-fc878f290cf6dfcd99b4fa6cf97ef97da6cff5949002f6bc674466c1017fb397.scope - libcontainer container fc878f290cf6dfcd99b4fa6cf97ef97da6cff5949002f6bc674466c1017fb397. Nov 23 22:48:44.377215 systemd[1]: Started cri-containerd-8b040ae8562c38020b9a090461b050ec2531381947c29169d8d7a7d0f40bb928.scope - libcontainer container 8b040ae8562c38020b9a090461b050ec2531381947c29169d8d7a7d0f40bb928. 
Nov 23 22:48:44.381744 systemd[1]: Started cri-containerd-90c2fca9ad680b3b9ee8317a6a787ee5ba5b1d746dba28ae9c728837a76e37b2.scope - libcontainer container 90c2fca9ad680b3b9ee8317a6a787ee5ba5b1d746dba28ae9c728837a76e37b2. Nov 23 22:48:44.432916 containerd[1537]: time="2025-11-23T22:48:44.432874026Z" level=info msg="StartContainer for \"fc878f290cf6dfcd99b4fa6cf97ef97da6cff5949002f6bc674466c1017fb397\" returns successfully" Nov 23 22:48:44.437034 containerd[1537]: time="2025-11-23T22:48:44.435729306Z" level=info msg="StartContainer for \"90c2fca9ad680b3b9ee8317a6a787ee5ba5b1d746dba28ae9c728837a76e37b2\" returns successfully" Nov 23 22:48:44.437761 containerd[1537]: time="2025-11-23T22:48:44.437729546Z" level=info msg="StartContainer for \"8b040ae8562c38020b9a090461b050ec2531381947c29169d8d7a7d0f40bb928\" returns successfully" Nov 23 22:48:44.814155 kubelet[2305]: I1123 22:48:44.813257 2305 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 23 22:48:45.291213 kubelet[2305]: E1123 22:48:45.291099 2305 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 23 22:48:45.296123 kubelet[2305]: E1123 22:48:45.296092 2305 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 23 22:48:45.300428 kubelet[2305]: E1123 22:48:45.300398 2305 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 23 22:48:46.302437 kubelet[2305]: E1123 22:48:46.302359 2305 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 23 22:48:46.303422 kubelet[2305]: E1123 22:48:46.303287 2305 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" 
err="node \"localhost\" not found" node="localhost" Nov 23 22:48:47.250307 kubelet[2305]: I1123 22:48:47.250257 2305 apiserver.go:52] "Watching apiserver" Nov 23 22:48:47.262849 kubelet[2305]: E1123 22:48:47.262794 2305 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Nov 23 22:48:47.327383 kubelet[2305]: I1123 22:48:47.327288 2305 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Nov 23 22:48:47.327383 kubelet[2305]: E1123 22:48:47.327372 2305 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Nov 23 22:48:47.359395 kubelet[2305]: I1123 22:48:47.359343 2305 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Nov 23 22:48:47.361969 kubelet[2305]: I1123 22:48:47.361924 2305 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Nov 23 22:48:47.365797 kubelet[2305]: E1123 22:48:47.365761 2305 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Nov 23 22:48:47.365797 kubelet[2305]: I1123 22:48:47.365796 2305 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Nov 23 22:48:47.369548 kubelet[2305]: E1123 22:48:47.368380 2305 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Nov 23 22:48:47.369548 kubelet[2305]: I1123 22:48:47.368414 2305 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Nov 23 22:48:47.372787 kubelet[2305]: E1123 22:48:47.372743 2305 kubelet.go:3221] 
"Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Nov 23 22:48:47.585569 kubelet[2305]: I1123 22:48:47.585444 2305 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Nov 23 22:48:47.587647 kubelet[2305]: E1123 22:48:47.587624 2305 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Nov 23 22:48:49.770251 systemd[1]: Reload requested from client PID 2596 ('systemctl') (unit session-7.scope)... Nov 23 22:48:49.770268 systemd[1]: Reloading... Nov 23 22:48:49.854734 zram_generator::config[2639]: No configuration found. Nov 23 22:48:50.031022 systemd[1]: Reloading finished in 260 ms. Nov 23 22:48:50.061908 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Nov 23 22:48:50.078602 systemd[1]: kubelet.service: Deactivated successfully. Nov 23 22:48:50.078856 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 23 22:48:50.078912 systemd[1]: kubelet.service: Consumed 1.272s CPU time, 121.9M memory peak. Nov 23 22:48:50.080633 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 23 22:48:50.252973 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 23 22:48:50.258255 (kubelet)[2681]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 23 22:48:50.308601 kubelet[2681]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 23 22:48:50.308601 kubelet[2681]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 23 22:48:50.308601 kubelet[2681]: I1123 22:48:50.308015 2681 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 23 22:48:50.314238 kubelet[2681]: I1123 22:48:50.314188 2681 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Nov 23 22:48:50.314238 kubelet[2681]: I1123 22:48:50.314224 2681 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 23 22:48:50.314390 kubelet[2681]: I1123 22:48:50.314254 2681 watchdog_linux.go:95] "Systemd watchdog is not enabled" Nov 23 22:48:50.314390 kubelet[2681]: I1123 22:48:50.314260 2681 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Nov 23 22:48:50.316175 kubelet[2681]: I1123 22:48:50.314831 2681 server.go:956] "Client rotation is on, will bootstrap in background" Nov 23 22:48:50.316771 kubelet[2681]: I1123 22:48:50.316744 2681 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Nov 23 22:48:50.319095 kubelet[2681]: I1123 22:48:50.318992 2681 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 23 22:48:50.322699 kubelet[2681]: I1123 22:48:50.322672 2681 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Nov 23 22:48:50.325859 kubelet[2681]: I1123 22:48:50.325827 2681 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Nov 23 22:48:50.326101 kubelet[2681]: I1123 22:48:50.326073 2681 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 23 22:48:50.326287 kubelet[2681]: I1123 22:48:50.326102 2681 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 23 22:48:50.326287 kubelet[2681]: I1123 22:48:50.326276 2681 topology_manager.go:138] "Creating topology manager with none policy" Nov 23 22:48:50.326287 
kubelet[2681]: I1123 22:48:50.326284 2681 container_manager_linux.go:306] "Creating device plugin manager" Nov 23 22:48:50.326422 kubelet[2681]: I1123 22:48:50.326308 2681 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Nov 23 22:48:50.327576 kubelet[2681]: I1123 22:48:50.327544 2681 state_mem.go:36] "Initialized new in-memory state store" Nov 23 22:48:50.327958 kubelet[2681]: I1123 22:48:50.327857 2681 kubelet.go:475] "Attempting to sync node with API server" Nov 23 22:48:50.327958 kubelet[2681]: I1123 22:48:50.327884 2681 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 23 22:48:50.327958 kubelet[2681]: I1123 22:48:50.327908 2681 kubelet.go:387] "Adding apiserver pod source" Nov 23 22:48:50.327958 kubelet[2681]: I1123 22:48:50.327927 2681 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 23 22:48:50.332313 kubelet[2681]: I1123 22:48:50.331635 2681 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Nov 23 22:48:50.332404 kubelet[2681]: I1123 22:48:50.332385 2681 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Nov 23 22:48:50.332451 kubelet[2681]: I1123 22:48:50.332426 2681 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Nov 23 22:48:50.335001 kubelet[2681]: I1123 22:48:50.334763 2681 server.go:1262] "Started kubelet" Nov 23 22:48:50.339520 kubelet[2681]: I1123 22:48:50.337092 2681 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 23 22:48:50.339520 kubelet[2681]: I1123 22:48:50.339266 2681 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 23 22:48:50.339520 kubelet[2681]: I1123 22:48:50.339338 2681 server_v1.go:49] 
"podresources" method="list" useActivePods=true Nov 23 22:48:50.339665 kubelet[2681]: I1123 22:48:50.339599 2681 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 23 22:48:50.342382 kubelet[2681]: I1123 22:48:50.341575 2681 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 23 22:48:50.343575 kubelet[2681]: I1123 22:48:50.343187 2681 volume_manager.go:313] "Starting Kubelet Volume Manager" Nov 23 22:48:50.343575 kubelet[2681]: E1123 22:48:50.343298 2681 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 23 22:48:50.343575 kubelet[2681]: I1123 22:48:50.343312 2681 factory.go:223] Registration of the systemd container factory successfully Nov 23 22:48:50.343575 kubelet[2681]: I1123 22:48:50.343428 2681 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 23 22:48:50.349235 kubelet[2681]: I1123 22:48:50.349177 2681 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Nov 23 22:48:50.349235 kubelet[2681]: I1123 22:48:50.343166 2681 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Nov 23 22:48:50.350202 kubelet[2681]: I1123 22:48:50.350168 2681 server.go:310] "Adding debug handlers to kubelet server" Nov 23 22:48:50.351272 kubelet[2681]: I1123 22:48:50.351224 2681 reconciler.go:29] "Reconciler: start to sync state" Nov 23 22:48:50.353743 kubelet[2681]: I1123 22:48:50.352447 2681 factory.go:223] Registration of the containerd container factory successfully Nov 23 22:48:50.357561 kubelet[2681]: E1123 22:48:50.356613 2681 kubelet.go:1615] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 23 22:48:50.363109 kubelet[2681]: I1123 22:48:50.363064 2681 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Nov 23 22:48:50.364429 kubelet[2681]: I1123 22:48:50.364393 2681 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Nov 23 22:48:50.364429 kubelet[2681]: I1123 22:48:50.364419 2681 status_manager.go:244] "Starting to sync pod status with apiserver" Nov 23 22:48:50.364559 kubelet[2681]: I1123 22:48:50.364441 2681 kubelet.go:2427] "Starting kubelet main sync loop" Nov 23 22:48:50.364559 kubelet[2681]: E1123 22:48:50.364484 2681 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 23 22:48:50.395134 kubelet[2681]: I1123 22:48:50.394796 2681 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 23 22:48:50.395134 kubelet[2681]: I1123 22:48:50.394830 2681 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 23 22:48:50.395134 kubelet[2681]: I1123 22:48:50.394855 2681 state_mem.go:36] "Initialized new in-memory state store" Nov 23 22:48:50.395134 kubelet[2681]: I1123 22:48:50.395005 2681 state_mem.go:88] "Updated default CPUSet" cpuSet="" Nov 23 22:48:50.395134 kubelet[2681]: I1123 22:48:50.395015 2681 state_mem.go:96] "Updated CPUSet assignments" assignments={} Nov 23 22:48:50.395134 kubelet[2681]: I1123 22:48:50.395032 2681 policy_none.go:49] "None policy: Start" Nov 23 22:48:50.395134 kubelet[2681]: I1123 22:48:50.395044 2681 memory_manager.go:187] "Starting memorymanager" policy="None" Nov 23 22:48:50.395134 kubelet[2681]: I1123 22:48:50.395063 2681 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Nov 23 22:48:50.395391 kubelet[2681]: I1123 22:48:50.395165 2681 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state 
checkpoint" Nov 23 22:48:50.395391 kubelet[2681]: I1123 22:48:50.395173 2681 policy_none.go:47] "Start" Nov 23 22:48:50.399887 kubelet[2681]: E1123 22:48:50.399855 2681 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Nov 23 22:48:50.400144 kubelet[2681]: I1123 22:48:50.400074 2681 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 23 22:48:50.400144 kubelet[2681]: I1123 22:48:50.400091 2681 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 23 22:48:50.400393 kubelet[2681]: I1123 22:48:50.400335 2681 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 23 22:48:50.402228 kubelet[2681]: E1123 22:48:50.402032 2681 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Nov 23 22:48:50.466102 kubelet[2681]: I1123 22:48:50.465738 2681 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Nov 23 22:48:50.466389 kubelet[2681]: I1123 22:48:50.465874 2681 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Nov 23 22:48:50.466467 kubelet[2681]: I1123 22:48:50.465902 2681 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Nov 23 22:48:50.504040 kubelet[2681]: I1123 22:48:50.504001 2681 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 23 22:48:50.513406 kubelet[2681]: I1123 22:48:50.513362 2681 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Nov 23 22:48:50.513557 kubelet[2681]: I1123 22:48:50.513450 2681 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Nov 23 22:48:50.552094 kubelet[2681]: I1123 22:48:50.552048 2681 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/47f83fd1945de69dee67dc7f4d962463-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"47f83fd1945de69dee67dc7f4d962463\") " pod="kube-system/kube-apiserver-localhost" Nov 23 22:48:50.552094 kubelet[2681]: I1123 22:48:50.552088 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/47f83fd1945de69dee67dc7f4d962463-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"47f83fd1945de69dee67dc7f4d962463\") " pod="kube-system/kube-apiserver-localhost" Nov 23 22:48:50.552262 kubelet[2681]: I1123 22:48:50.552112 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/41694572f76b3db8403039f40dd5ea25-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"41694572f76b3db8403039f40dd5ea25\") " pod="kube-system/kube-controller-manager-localhost" Nov 23 22:48:50.552262 kubelet[2681]: I1123 22:48:50.552129 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/47f83fd1945de69dee67dc7f4d962463-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"47f83fd1945de69dee67dc7f4d962463\") " pod="kube-system/kube-apiserver-localhost" Nov 23 22:48:50.552262 kubelet[2681]: I1123 22:48:50.552146 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/41694572f76b3db8403039f40dd5ea25-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"41694572f76b3db8403039f40dd5ea25\") " pod="kube-system/kube-controller-manager-localhost" Nov 23 22:48:50.552262 kubelet[2681]: I1123 22:48:50.552159 2681 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/41694572f76b3db8403039f40dd5ea25-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"41694572f76b3db8403039f40dd5ea25\") " pod="kube-system/kube-controller-manager-localhost" Nov 23 22:48:50.552262 kubelet[2681]: I1123 22:48:50.552173 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/41694572f76b3db8403039f40dd5ea25-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"41694572f76b3db8403039f40dd5ea25\") " pod="kube-system/kube-controller-manager-localhost" Nov 23 22:48:50.552364 kubelet[2681]: I1123 22:48:50.552186 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/41694572f76b3db8403039f40dd5ea25-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"41694572f76b3db8403039f40dd5ea25\") " pod="kube-system/kube-controller-manager-localhost" Nov 23 22:48:50.552364 kubelet[2681]: I1123 22:48:50.552214 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f7d0af91d0c9a9742236c44baa5e2751-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"f7d0af91d0c9a9742236c44baa5e2751\") " pod="kube-system/kube-scheduler-localhost" Nov 23 22:48:51.329108 kubelet[2681]: I1123 22:48:51.329069 2681 apiserver.go:52] "Watching apiserver" Nov 23 22:48:51.349600 kubelet[2681]: I1123 22:48:51.349562 2681 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Nov 23 22:48:51.386843 kubelet[2681]: I1123 22:48:51.386678 2681 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Nov 23 22:48:51.386843 kubelet[2681]: I1123 
22:48:51.386828 2681 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Nov 23 22:48:51.391434 kubelet[2681]: E1123 22:48:51.391387 2681 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Nov 23 22:48:51.391948 kubelet[2681]: E1123 22:48:51.391914 2681 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Nov 23 22:48:51.416922 kubelet[2681]: I1123 22:48:51.416847 2681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.416829386 podStartE2EDuration="1.416829386s" podCreationTimestamp="2025-11-23 22:48:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 22:48:51.408144226 +0000 UTC m=+1.146597841" watchObservedRunningTime="2025-11-23 22:48:51.416829386 +0000 UTC m=+1.155282921" Nov 23 22:48:51.427461 kubelet[2681]: I1123 22:48:51.427397 2681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.4273803059999999 podStartE2EDuration="1.427380306s" podCreationTimestamp="2025-11-23 22:48:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 22:48:51.417097506 +0000 UTC m=+1.155551041" watchObservedRunningTime="2025-11-23 22:48:51.427380306 +0000 UTC m=+1.165833881" Nov 23 22:48:51.427643 kubelet[2681]: I1123 22:48:51.427483 2681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.427479826 podStartE2EDuration="1.427479826s" podCreationTimestamp="2025-11-23 22:48:50 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 22:48:51.427470266 +0000 UTC m=+1.165923881" watchObservedRunningTime="2025-11-23 22:48:51.427479826 +0000 UTC m=+1.165933401" Nov 23 22:48:55.538201 kubelet[2681]: I1123 22:48:55.538163 2681 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Nov 23 22:48:55.538565 containerd[1537]: time="2025-11-23T22:48:55.538464849Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Nov 23 22:48:55.538762 kubelet[2681]: I1123 22:48:55.538646 2681 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Nov 23 22:48:56.307145 systemd[1]: Created slice kubepods-besteffort-poda6a5c1b0_84d7_4040_a5ae_e0a4332b6e38.slice - libcontainer container kubepods-besteffort-poda6a5c1b0_84d7_4040_a5ae_e0a4332b6e38.slice. Nov 23 22:48:56.389138 kubelet[2681]: I1123 22:48:56.389095 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/a6a5c1b0-84d7-4040-a5ae-e0a4332b6e38-kube-proxy\") pod \"kube-proxy-6rpj2\" (UID: \"a6a5c1b0-84d7-4040-a5ae-e0a4332b6e38\") " pod="kube-system/kube-proxy-6rpj2" Nov 23 22:48:56.389138 kubelet[2681]: I1123 22:48:56.389142 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a6a5c1b0-84d7-4040-a5ae-e0a4332b6e38-xtables-lock\") pod \"kube-proxy-6rpj2\" (UID: \"a6a5c1b0-84d7-4040-a5ae-e0a4332b6e38\") " pod="kube-system/kube-proxy-6rpj2" Nov 23 22:48:56.389314 kubelet[2681]: I1123 22:48:56.389157 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a6a5c1b0-84d7-4040-a5ae-e0a4332b6e38-lib-modules\") 
pod \"kube-proxy-6rpj2\" (UID: \"a6a5c1b0-84d7-4040-a5ae-e0a4332b6e38\") " pod="kube-system/kube-proxy-6rpj2" Nov 23 22:48:56.389314 kubelet[2681]: I1123 22:48:56.389174 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x4pjb\" (UniqueName: \"kubernetes.io/projected/a6a5c1b0-84d7-4040-a5ae-e0a4332b6e38-kube-api-access-x4pjb\") pod \"kube-proxy-6rpj2\" (UID: \"a6a5c1b0-84d7-4040-a5ae-e0a4332b6e38\") " pod="kube-system/kube-proxy-6rpj2" Nov 23 22:48:56.629726 containerd[1537]: time="2025-11-23T22:48:56.629497457Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-6rpj2,Uid:a6a5c1b0-84d7-4040-a5ae-e0a4332b6e38,Namespace:kube-system,Attempt:0,}" Nov 23 22:48:56.654287 containerd[1537]: time="2025-11-23T22:48:56.654081319Z" level=info msg="connecting to shim baa601df5dd1d384848f7b947322132bf8677ac44bfb2a596a4c29b4317a49d5" address="unix:///run/containerd/s/31274dc34becd2bce4614662c06da09a086b318931bda9d614091223c07cd2a4" namespace=k8s.io protocol=ttrpc version=3 Nov 23 22:48:56.691797 systemd[1]: Started cri-containerd-baa601df5dd1d384848f7b947322132bf8677ac44bfb2a596a4c29b4317a49d5.scope - libcontainer container baa601df5dd1d384848f7b947322132bf8677ac44bfb2a596a4c29b4317a49d5. 
Nov 23 22:48:56.769901 containerd[1537]: time="2025-11-23T22:48:56.769833082Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-6rpj2,Uid:a6a5c1b0-84d7-4040-a5ae-e0a4332b6e38,Namespace:kube-system,Attempt:0,} returns sandbox id \"baa601df5dd1d384848f7b947322132bf8677ac44bfb2a596a4c29b4317a49d5\"" Nov 23 22:48:56.853743 containerd[1537]: time="2025-11-23T22:48:56.853673076Z" level=info msg="CreateContainer within sandbox \"baa601df5dd1d384848f7b947322132bf8677ac44bfb2a596a4c29b4317a49d5\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Nov 23 22:48:56.884539 containerd[1537]: time="2025-11-23T22:48:56.884099976Z" level=info msg="Container 8407e25686300abe887da43645ef3a3fcce88cee4980d1b2155bcb2910115e29: CDI devices from CRI Config.CDIDevices: []" Nov 23 22:48:56.912499 systemd[1]: Created slice kubepods-besteffort-podbf4086e1_7d7e_46ed_80bf_3717e24ee4cd.slice - libcontainer container kubepods-besteffort-podbf4086e1_7d7e_46ed_80bf_3717e24ee4cd.slice. Nov 23 22:48:56.921867 containerd[1537]: time="2025-11-23T22:48:56.921811983Z" level=info msg="CreateContainer within sandbox \"baa601df5dd1d384848f7b947322132bf8677ac44bfb2a596a4c29b4317a49d5\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"8407e25686300abe887da43645ef3a3fcce88cee4980d1b2155bcb2910115e29\"" Nov 23 22:48:56.923017 containerd[1537]: time="2025-11-23T22:48:56.922986015Z" level=info msg="StartContainer for \"8407e25686300abe887da43645ef3a3fcce88cee4980d1b2155bcb2910115e29\"" Nov 23 22:48:56.924385 containerd[1537]: time="2025-11-23T22:48:56.924354085Z" level=info msg="connecting to shim 8407e25686300abe887da43645ef3a3fcce88cee4980d1b2155bcb2910115e29" address="unix:///run/containerd/s/31274dc34becd2bce4614662c06da09a086b318931bda9d614091223c07cd2a4" protocol=ttrpc version=3 Nov 23 22:48:56.941678 systemd[1]: Started cri-containerd-8407e25686300abe887da43645ef3a3fcce88cee4980d1b2155bcb2910115e29.scope - libcontainer container 
8407e25686300abe887da43645ef3a3fcce88cee4980d1b2155bcb2910115e29. Nov 23 22:48:56.993953 kubelet[2681]: I1123 22:48:56.993909 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/bf4086e1-7d7e-46ed-80bf-3717e24ee4cd-var-lib-calico\") pod \"tigera-operator-65cdcdfd6d-hl6xq\" (UID: \"bf4086e1-7d7e-46ed-80bf-3717e24ee4cd\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-hl6xq" Nov 23 22:48:56.993953 kubelet[2681]: I1123 22:48:56.993951 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ndxps\" (UniqueName: \"kubernetes.io/projected/bf4086e1-7d7e-46ed-80bf-3717e24ee4cd-kube-api-access-ndxps\") pod \"tigera-operator-65cdcdfd6d-hl6xq\" (UID: \"bf4086e1-7d7e-46ed-80bf-3717e24ee4cd\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-hl6xq" Nov 23 22:48:57.032024 containerd[1537]: time="2025-11-23T22:48:57.031970561Z" level=info msg="StartContainer for \"8407e25686300abe887da43645ef3a3fcce88cee4980d1b2155bcb2910115e29\" returns successfully" Nov 23 22:48:57.218922 containerd[1537]: time="2025-11-23T22:48:57.218792015Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-hl6xq,Uid:bf4086e1-7d7e-46ed-80bf-3717e24ee4cd,Namespace:tigera-operator,Attempt:0,}" Nov 23 22:48:57.235553 containerd[1537]: time="2025-11-23T22:48:57.235079264Z" level=info msg="connecting to shim d6c6e01daacc076b7d7d48d8bd294474da74a89203163ffa384f0642ebde0d14" address="unix:///run/containerd/s/7ba6ce6bde77c5e1923dc214a9562f87beeba0cce5264166e5f61c8ae92b3e16" namespace=k8s.io protocol=ttrpc version=3 Nov 23 22:48:57.262695 systemd[1]: Started cri-containerd-d6c6e01daacc076b7d7d48d8bd294474da74a89203163ffa384f0642ebde0d14.scope - libcontainer container d6c6e01daacc076b7d7d48d8bd294474da74a89203163ffa384f0642ebde0d14. 
Nov 23 22:48:57.314294 containerd[1537]: time="2025-11-23T22:48:57.314249888Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-hl6xq,Uid:bf4086e1-7d7e-46ed-80bf-3717e24ee4cd,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"d6c6e01daacc076b7d7d48d8bd294474da74a89203163ffa384f0642ebde0d14\"" Nov 23 22:48:57.316502 containerd[1537]: time="2025-11-23T22:48:57.316275634Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Nov 23 22:48:57.415161 kubelet[2681]: I1123 22:48:57.415075 2681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-6rpj2" podStartSLOduration=1.415059045 podStartE2EDuration="1.415059045s" podCreationTimestamp="2025-11-23 22:48:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 22:48:57.414897006 +0000 UTC m=+7.153350581" watchObservedRunningTime="2025-11-23 22:48:57.415059045 +0000 UTC m=+7.153512620" Nov 23 22:48:58.867131 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount646747800.mount: Deactivated successfully. 
Nov 23 22:48:59.183575 containerd[1537]: time="2025-11-23T22:48:59.183449713Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 22:48:59.184467 containerd[1537]: time="2025-11-23T22:48:59.184268748Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=22152004" Nov 23 22:48:59.185408 containerd[1537]: time="2025-11-23T22:48:59.185376622Z" level=info msg="ImageCreate event name:\"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 22:48:59.187639 containerd[1537]: time="2025-11-23T22:48:59.187608728Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 22:48:59.188397 containerd[1537]: time="2025-11-23T22:48:59.188331004Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"22147999\" in 1.87202261s" Nov 23 22:48:59.188397 containerd[1537]: time="2025-11-23T22:48:59.188364324Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\"" Nov 23 22:48:59.199319 containerd[1537]: time="2025-11-23T22:48:59.199283139Z" level=info msg="CreateContainer within sandbox \"d6c6e01daacc076b7d7d48d8bd294474da74a89203163ffa384f0642ebde0d14\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Nov 23 22:48:59.205370 containerd[1537]: time="2025-11-23T22:48:59.205327143Z" level=info msg="Container 
72e30f961661a170bbcd86de6ffc7a94344ef66c4e9fe9f131dad028d4ccdfa3: CDI devices from CRI Config.CDIDevices: []" Nov 23 22:48:59.218721 containerd[1537]: time="2025-11-23T22:48:59.218664784Z" level=info msg="CreateContainer within sandbox \"d6c6e01daacc076b7d7d48d8bd294474da74a89203163ffa384f0642ebde0d14\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"72e30f961661a170bbcd86de6ffc7a94344ef66c4e9fe9f131dad028d4ccdfa3\"" Nov 23 22:48:59.219158 containerd[1537]: time="2025-11-23T22:48:59.219136261Z" level=info msg="StartContainer for \"72e30f961661a170bbcd86de6ffc7a94344ef66c4e9fe9f131dad028d4ccdfa3\"" Nov 23 22:48:59.221337 containerd[1537]: time="2025-11-23T22:48:59.221302808Z" level=info msg="connecting to shim 72e30f961661a170bbcd86de6ffc7a94344ef66c4e9fe9f131dad028d4ccdfa3" address="unix:///run/containerd/s/7ba6ce6bde77c5e1923dc214a9562f87beeba0cce5264166e5f61c8ae92b3e16" protocol=ttrpc version=3 Nov 23 22:48:59.241724 systemd[1]: Started cri-containerd-72e30f961661a170bbcd86de6ffc7a94344ef66c4e9fe9f131dad028d4ccdfa3.scope - libcontainer container 72e30f961661a170bbcd86de6ffc7a94344ef66c4e9fe9f131dad028d4ccdfa3. 
Nov 23 22:48:59.269448 containerd[1537]: time="2025-11-23T22:48:59.269409721Z" level=info msg="StartContainer for \"72e30f961661a170bbcd86de6ffc7a94344ef66c4e9fe9f131dad028d4ccdfa3\" returns successfully" Nov 23 22:48:59.416829 kubelet[2681]: I1123 22:48:59.416760 2681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-65cdcdfd6d-hl6xq" podStartSLOduration=1.542998045 podStartE2EDuration="3.416742564s" podCreationTimestamp="2025-11-23 22:48:56 +0000 UTC" firstStartedPulling="2025-11-23 22:48:57.315571239 +0000 UTC m=+7.054024774" lastFinishedPulling="2025-11-23 22:48:59.189315718 +0000 UTC m=+8.927769293" observedRunningTime="2025-11-23 22:48:59.416149287 +0000 UTC m=+9.154602862" watchObservedRunningTime="2025-11-23 22:48:59.416742564 +0000 UTC m=+9.155196099" Nov 23 22:49:04.871949 sudo[1745]: pam_unix(sudo:session): session closed for user root Nov 23 22:49:04.874975 sshd[1744]: Connection closed by 10.0.0.1 port 60788 Nov 23 22:49:04.875732 sshd-session[1741]: pam_unix(sshd:session): session closed for user core Nov 23 22:49:04.879984 systemd-logind[1512]: Session 7 logged out. Waiting for processes to exit. Nov 23 22:49:04.880117 systemd[1]: sshd@6-10.0.0.9:22-10.0.0.1:60788.service: Deactivated successfully. Nov 23 22:49:04.882869 systemd[1]: session-7.scope: Deactivated successfully. Nov 23 22:49:04.883072 systemd[1]: session-7.scope: Consumed 5.902s CPU time, 223.4M memory peak. Nov 23 22:49:04.884462 systemd-logind[1512]: Removed session 7. Nov 23 22:49:04.904603 update_engine[1516]: I20251123 22:49:04.904537 1516 update_attempter.cc:509] Updating boot flags... Nov 23 22:49:12.052788 systemd[1]: Created slice kubepods-besteffort-pod48b0ea66_eb86_4b97_9837_20957a4549c9.slice - libcontainer container kubepods-besteffort-pod48b0ea66_eb86_4b97_9837_20957a4549c9.slice. 
Nov 23 22:49:12.104179 kubelet[2681]: I1123 22:49:12.104077 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/48b0ea66-eb86-4b97-9837-20957a4549c9-tigera-ca-bundle\") pod \"calico-typha-b9668d594-lxmtm\" (UID: \"48b0ea66-eb86-4b97-9837-20957a4549c9\") " pod="calico-system/calico-typha-b9668d594-lxmtm" Nov 23 22:49:12.104179 kubelet[2681]: I1123 22:49:12.104130 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/48b0ea66-eb86-4b97-9837-20957a4549c9-typha-certs\") pod \"calico-typha-b9668d594-lxmtm\" (UID: \"48b0ea66-eb86-4b97-9837-20957a4549c9\") " pod="calico-system/calico-typha-b9668d594-lxmtm" Nov 23 22:49:12.104179 kubelet[2681]: I1123 22:49:12.104147 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zp54c\" (UniqueName: \"kubernetes.io/projected/48b0ea66-eb86-4b97-9837-20957a4549c9-kube-api-access-zp54c\") pod \"calico-typha-b9668d594-lxmtm\" (UID: \"48b0ea66-eb86-4b97-9837-20957a4549c9\") " pod="calico-system/calico-typha-b9668d594-lxmtm" Nov 23 22:49:12.232710 systemd[1]: Created slice kubepods-besteffort-pod13c92dbe_cfba_46a7_9c9f_0acbcfbd9f84.slice - libcontainer container kubepods-besteffort-pod13c92dbe_cfba_46a7_9c9f_0acbcfbd9f84.slice. 
Nov 23 22:49:12.305066 kubelet[2681]: I1123 22:49:12.304834 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bdmff\" (UniqueName: \"kubernetes.io/projected/13c92dbe-cfba-46a7-9c9f-0acbcfbd9f84-kube-api-access-bdmff\") pod \"calico-node-fdk2p\" (UID: \"13c92dbe-cfba-46a7-9c9f-0acbcfbd9f84\") " pod="calico-system/calico-node-fdk2p" Nov 23 22:49:12.305066 kubelet[2681]: I1123 22:49:12.304881 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/13c92dbe-cfba-46a7-9c9f-0acbcfbd9f84-lib-modules\") pod \"calico-node-fdk2p\" (UID: \"13c92dbe-cfba-46a7-9c9f-0acbcfbd9f84\") " pod="calico-system/calico-node-fdk2p" Nov 23 22:49:12.305066 kubelet[2681]: I1123 22:49:12.304897 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/13c92dbe-cfba-46a7-9c9f-0acbcfbd9f84-tigera-ca-bundle\") pod \"calico-node-fdk2p\" (UID: \"13c92dbe-cfba-46a7-9c9f-0acbcfbd9f84\") " pod="calico-system/calico-node-fdk2p" Nov 23 22:49:12.305066 kubelet[2681]: I1123 22:49:12.304911 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/13c92dbe-cfba-46a7-9c9f-0acbcfbd9f84-var-run-calico\") pod \"calico-node-fdk2p\" (UID: \"13c92dbe-cfba-46a7-9c9f-0acbcfbd9f84\") " pod="calico-system/calico-node-fdk2p" Nov 23 22:49:12.305066 kubelet[2681]: I1123 22:49:12.304931 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/13c92dbe-cfba-46a7-9c9f-0acbcfbd9f84-xtables-lock\") pod \"calico-node-fdk2p\" (UID: \"13c92dbe-cfba-46a7-9c9f-0acbcfbd9f84\") " pod="calico-system/calico-node-fdk2p" Nov 23 22:49:12.305325 kubelet[2681]: I1123 
22:49:12.304999 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/13c92dbe-cfba-46a7-9c9f-0acbcfbd9f84-var-lib-calico\") pod \"calico-node-fdk2p\" (UID: \"13c92dbe-cfba-46a7-9c9f-0acbcfbd9f84\") " pod="calico-system/calico-node-fdk2p" Nov 23 22:49:12.305325 kubelet[2681]: I1123 22:49:12.305075 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/13c92dbe-cfba-46a7-9c9f-0acbcfbd9f84-cni-net-dir\") pod \"calico-node-fdk2p\" (UID: \"13c92dbe-cfba-46a7-9c9f-0acbcfbd9f84\") " pod="calico-system/calico-node-fdk2p" Nov 23 22:49:12.305325 kubelet[2681]: I1123 22:49:12.305122 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/13c92dbe-cfba-46a7-9c9f-0acbcfbd9f84-policysync\") pod \"calico-node-fdk2p\" (UID: \"13c92dbe-cfba-46a7-9c9f-0acbcfbd9f84\") " pod="calico-system/calico-node-fdk2p" Nov 23 22:49:12.305325 kubelet[2681]: I1123 22:49:12.305140 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/13c92dbe-cfba-46a7-9c9f-0acbcfbd9f84-flexvol-driver-host\") pod \"calico-node-fdk2p\" (UID: \"13c92dbe-cfba-46a7-9c9f-0acbcfbd9f84\") " pod="calico-system/calico-node-fdk2p" Nov 23 22:49:12.305325 kubelet[2681]: I1123 22:49:12.305161 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/13c92dbe-cfba-46a7-9c9f-0acbcfbd9f84-node-certs\") pod \"calico-node-fdk2p\" (UID: \"13c92dbe-cfba-46a7-9c9f-0acbcfbd9f84\") " pod="calico-system/calico-node-fdk2p" Nov 23 22:49:12.305604 kubelet[2681]: I1123 22:49:12.305229 2681 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/13c92dbe-cfba-46a7-9c9f-0acbcfbd9f84-cni-log-dir\") pod \"calico-node-fdk2p\" (UID: \"13c92dbe-cfba-46a7-9c9f-0acbcfbd9f84\") " pod="calico-system/calico-node-fdk2p" Nov 23 22:49:12.305604 kubelet[2681]: I1123 22:49:12.305246 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/13c92dbe-cfba-46a7-9c9f-0acbcfbd9f84-cni-bin-dir\") pod \"calico-node-fdk2p\" (UID: \"13c92dbe-cfba-46a7-9c9f-0acbcfbd9f84\") " pod="calico-system/calico-node-fdk2p" Nov 23 22:49:12.361742 containerd[1537]: time="2025-11-23T22:49:12.361688751Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-b9668d594-lxmtm,Uid:48b0ea66-eb86-4b97-9837-20957a4549c9,Namespace:calico-system,Attempt:0,}" Nov 23 22:49:12.417745 kubelet[2681]: E1123 22:49:12.417711 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:49:12.417745 kubelet[2681]: W1123 22:49:12.417734 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:49:12.418153 kubelet[2681]: E1123 22:49:12.417758 2681 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 22:49:12.418193 containerd[1537]: time="2025-11-23T22:49:12.417151328Z" level=info msg="connecting to shim ed0d57123324e31f75e8022f0709b0a01447939082bf944e2fe66a1fb94f735b" address="unix:///run/containerd/s/015593d8a545d21c9ecb1058f2fbb2102902174f50a4fcf8865dafd7081296b6" namespace=k8s.io protocol=ttrpc version=3 Nov 23 22:49:12.421587 kubelet[2681]: E1123 22:49:12.421549 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:49:12.421587 kubelet[2681]: W1123 22:49:12.421572 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:49:12.421587 kubelet[2681]: E1123 22:49:12.421616 2681 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:49:12.433994 kubelet[2681]: E1123 22:49:12.433220 2681 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gdnbl" podUID="12a478e1-2715-41a3-b494-6659c8d5a00c" Nov 23 22:49:12.448154 kubelet[2681]: E1123 22:49:12.448081 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:49:12.448154 kubelet[2681]: W1123 22:49:12.448105 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:49:12.448154 kubelet[2681]: E1123 22:49:12.448124 2681 plugins.go:697] "Error dynamically probing plugins" err="error 
creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:49:12.477478 kubelet[2681]: E1123 22:49:12.477429 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:49:12.477478 kubelet[2681]: W1123 22:49:12.477453 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:49:12.477478 kubelet[2681]: E1123 22:49:12.477487 2681 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:49:12.477673 kubelet[2681]: E1123 22:49:12.477658 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:49:12.477718 kubelet[2681]: W1123 22:49:12.477666 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:49:12.477718 kubelet[2681]: E1123 22:49:12.477714 2681 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 22:49:12.477878 kubelet[2681]: E1123 22:49:12.477864 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:49:12.477878 kubelet[2681]: W1123 22:49:12.477875 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:49:12.477935 kubelet[2681]: E1123 22:49:12.477882 2681 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:49:12.478032 kubelet[2681]: E1123 22:49:12.478014 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:49:12.478032 kubelet[2681]: W1123 22:49:12.478025 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:49:12.478087 kubelet[2681]: E1123 22:49:12.478034 2681 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 22:49:12.478203 kubelet[2681]: E1123 22:49:12.478185 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:49:12.478203 kubelet[2681]: W1123 22:49:12.478197 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:49:12.478203 kubelet[2681]: E1123 22:49:12.478204 2681 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:49:12.478329 kubelet[2681]: E1123 22:49:12.478318 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:49:12.478329 kubelet[2681]: W1123 22:49:12.478328 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:49:12.478399 kubelet[2681]: E1123 22:49:12.478335 2681 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 22:49:12.478486 kubelet[2681]: E1123 22:49:12.478471 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:49:12.478486 kubelet[2681]: W1123 22:49:12.478481 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:49:12.478486 kubelet[2681]: E1123 22:49:12.478489 2681 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:49:12.478655 kubelet[2681]: E1123 22:49:12.478642 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:49:12.478655 kubelet[2681]: W1123 22:49:12.478653 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:49:12.478730 kubelet[2681]: E1123 22:49:12.478660 2681 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 22:49:12.479087 kubelet[2681]: E1123 22:49:12.479064 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:49:12.479087 kubelet[2681]: W1123 22:49:12.479077 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:49:12.479087 kubelet[2681]: E1123 22:49:12.479087 2681 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:49:12.479899 kubelet[2681]: E1123 22:49:12.479877 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:49:12.479899 kubelet[2681]: W1123 22:49:12.479891 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:49:12.479899 kubelet[2681]: E1123 22:49:12.479903 2681 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 22:49:12.480216 kubelet[2681]: E1123 22:49:12.480203 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:49:12.480216 kubelet[2681]: W1123 22:49:12.480216 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:49:12.480275 kubelet[2681]: E1123 22:49:12.480226 2681 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:49:12.480385 kubelet[2681]: E1123 22:49:12.480371 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:49:12.480385 kubelet[2681]: W1123 22:49:12.480382 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:49:12.480436 kubelet[2681]: E1123 22:49:12.480391 2681 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 22:49:12.480675 kubelet[2681]: E1123 22:49:12.480661 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:49:12.480675 kubelet[2681]: W1123 22:49:12.480674 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:49:12.480730 kubelet[2681]: E1123 22:49:12.480685 2681 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:49:12.481050 kubelet[2681]: E1123 22:49:12.481030 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:49:12.481050 kubelet[2681]: W1123 22:49:12.481044 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:49:12.481162 kubelet[2681]: E1123 22:49:12.481056 2681 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 22:49:12.483265 kubelet[2681]: E1123 22:49:12.483234 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:49:12.483265 kubelet[2681]: W1123 22:49:12.483259 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:49:12.483353 kubelet[2681]: E1123 22:49:12.483273 2681 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:49:12.484541 kubelet[2681]: E1123 22:49:12.484491 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:49:12.484541 kubelet[2681]: W1123 22:49:12.484524 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:49:12.484541 kubelet[2681]: E1123 22:49:12.484537 2681 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 22:49:12.486638 kubelet[2681]: E1123 22:49:12.486144 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:49:12.486638 kubelet[2681]: W1123 22:49:12.486168 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:49:12.486638 kubelet[2681]: E1123 22:49:12.486182 2681 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:49:12.489209 kubelet[2681]: E1123 22:49:12.489181 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:49:12.489209 kubelet[2681]: W1123 22:49:12.489204 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:49:12.489313 kubelet[2681]: E1123 22:49:12.489225 2681 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 22:49:12.491955 kubelet[2681]: E1123 22:49:12.491433 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:49:12.491955 kubelet[2681]: W1123 22:49:12.491947 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:49:12.492032 kubelet[2681]: E1123 22:49:12.491967 2681 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:49:12.492819 kubelet[2681]: E1123 22:49:12.492794 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:49:12.492819 kubelet[2681]: W1123 22:49:12.492812 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:49:12.492899 kubelet[2681]: E1123 22:49:12.492826 2681 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:49:12.496757 systemd[1]: Started cri-containerd-ed0d57123324e31f75e8022f0709b0a01447939082bf944e2fe66a1fb94f735b.scope - libcontainer container ed0d57123324e31f75e8022f0709b0a01447939082bf944e2fe66a1fb94f735b. 
Nov 23 22:49:12.507210 kubelet[2681]: E1123 22:49:12.507178 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:49:12.507210 kubelet[2681]: W1123 22:49:12.507200 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:49:12.507210 kubelet[2681]: E1123 22:49:12.507219 2681 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:49:12.507424 kubelet[2681]: I1123 22:49:12.507245 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/12a478e1-2715-41a3-b494-6659c8d5a00c-kubelet-dir\") pod \"csi-node-driver-gdnbl\" (UID: \"12a478e1-2715-41a3-b494-6659c8d5a00c\") " pod="calico-system/csi-node-driver-gdnbl" Nov 23 22:49:12.507615 kubelet[2681]: E1123 22:49:12.507593 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:49:12.507615 kubelet[2681]: W1123 22:49:12.507611 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:49:12.507696 kubelet[2681]: E1123 22:49:12.507627 2681 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 22:49:12.507696 kubelet[2681]: I1123 22:49:12.507650 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/12a478e1-2715-41a3-b494-6659c8d5a00c-registration-dir\") pod \"csi-node-driver-gdnbl\" (UID: \"12a478e1-2715-41a3-b494-6659c8d5a00c\") " pod="calico-system/csi-node-driver-gdnbl" Nov 23 22:49:12.508629 kubelet[2681]: E1123 22:49:12.508610 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:49:12.508629 kubelet[2681]: W1123 22:49:12.508629 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:49:12.508684 kubelet[2681]: E1123 22:49:12.508643 2681 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:49:12.509344 kubelet[2681]: E1123 22:49:12.509328 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:49:12.509344 kubelet[2681]: W1123 22:49:12.509342 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:49:12.509344 kubelet[2681]: E1123 22:49:12.509354 2681 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 22:49:12.510690 kubelet[2681]: E1123 22:49:12.510669 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:49:12.510690 kubelet[2681]: W1123 22:49:12.510685 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:49:12.510787 kubelet[2681]: E1123 22:49:12.510698 2681 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:49:12.510908 kubelet[2681]: E1123 22:49:12.510894 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:49:12.510908 kubelet[2681]: W1123 22:49:12.510906 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:49:12.510951 kubelet[2681]: E1123 22:49:12.510916 2681 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 22:49:12.511589 kubelet[2681]: I1123 22:49:12.511558 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/12a478e1-2715-41a3-b494-6659c8d5a00c-socket-dir\") pod \"csi-node-driver-gdnbl\" (UID: \"12a478e1-2715-41a3-b494-6659c8d5a00c\") " pod="calico-system/csi-node-driver-gdnbl" Nov 23 22:49:12.512483 kubelet[2681]: E1123 22:49:12.512454 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:49:12.512483 kubelet[2681]: W1123 22:49:12.512478 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:49:12.512557 kubelet[2681]: E1123 22:49:12.512494 2681 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:49:12.512870 kubelet[2681]: E1123 22:49:12.512854 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:49:12.512870 kubelet[2681]: W1123 22:49:12.512869 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:49:12.512940 kubelet[2681]: E1123 22:49:12.512882 2681 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 22:49:12.512940 kubelet[2681]: I1123 22:49:12.512904 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lbsln\" (UniqueName: \"kubernetes.io/projected/12a478e1-2715-41a3-b494-6659c8d5a00c-kube-api-access-lbsln\") pod \"csi-node-driver-gdnbl\" (UID: \"12a478e1-2715-41a3-b494-6659c8d5a00c\") " pod="calico-system/csi-node-driver-gdnbl" Nov 23 22:49:12.514652 kubelet[2681]: E1123 22:49:12.514622 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:49:12.514652 kubelet[2681]: W1123 22:49:12.514647 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:49:12.514736 kubelet[2681]: E1123 22:49:12.514664 2681 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 22:49:12.514736 kubelet[2681]: I1123 22:49:12.514691 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/12a478e1-2715-41a3-b494-6659c8d5a00c-varrun\") pod \"csi-node-driver-gdnbl\" (UID: \"12a478e1-2715-41a3-b494-6659c8d5a00c\") " pod="calico-system/csi-node-driver-gdnbl" Nov 23 22:49:12.515250 kubelet[2681]: E1123 22:49:12.515232 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:49:12.515282 kubelet[2681]: W1123 22:49:12.515249 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:49:12.515282 kubelet[2681]: E1123 22:49:12.515265 2681 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:49:12.516664 kubelet[2681]: E1123 22:49:12.516642 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:49:12.516664 kubelet[2681]: W1123 22:49:12.516662 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:49:12.516744 kubelet[2681]: E1123 22:49:12.516676 2681 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 22:49:12.516958 kubelet[2681]: E1123 22:49:12.516940 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:49:12.516958 kubelet[2681]: W1123 22:49:12.516955 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:49:12.517031 kubelet[2681]: E1123 22:49:12.516970 2681 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:49:12.518660 kubelet[2681]: E1123 22:49:12.518636 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:49:12.518660 kubelet[2681]: W1123 22:49:12.518658 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:49:12.518733 kubelet[2681]: E1123 22:49:12.518671 2681 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 22:49:12.518901 kubelet[2681]: E1123 22:49:12.518886 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:49:12.518901 kubelet[2681]: W1123 22:49:12.518899 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:49:12.518965 kubelet[2681]: E1123 22:49:12.518908 2681 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:49:12.519656 kubelet[2681]: E1123 22:49:12.519628 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:49:12.519656 kubelet[2681]: W1123 22:49:12.519655 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:49:12.519728 kubelet[2681]: E1123 22:49:12.519667 2681 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 22:49:12.541282 containerd[1537]: time="2025-11-23T22:49:12.541174169Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-fdk2p,Uid:13c92dbe-cfba-46a7-9c9f-0acbcfbd9f84,Namespace:calico-system,Attempt:0,}" Nov 23 22:49:12.573883 containerd[1537]: time="2025-11-23T22:49:12.573753645Z" level=info msg="connecting to shim 3aa77adaf4cbe80af7028a339004e60a455e4d5f3edca3f8827defd8c1137069" address="unix:///run/containerd/s/9cd280fa9a31558abf94d6b596fe34c461863dc8bc152a1bff5e836205ac0ce0" namespace=k8s.io protocol=ttrpc version=3 Nov 23 22:49:12.613747 systemd[1]: Started cri-containerd-3aa77adaf4cbe80af7028a339004e60a455e4d5f3edca3f8827defd8c1137069.scope - libcontainer container 3aa77adaf4cbe80af7028a339004e60a455e4d5f3edca3f8827defd8c1137069. Nov 23 22:49:12.616226 kubelet[2681]: E1123 22:49:12.616117 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:49:12.616226 kubelet[2681]: W1123 22:49:12.616168 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:49:12.616602 kubelet[2681]: E1123 22:49:12.616528 2681 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 22:49:12.618342 kubelet[2681]: E1123 22:49:12.618318 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:49:12.618886 kubelet[2681]: W1123 22:49:12.618443 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:49:12.619078 kubelet[2681]: E1123 22:49:12.618992 2681 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:49:12.619634 kubelet[2681]: E1123 22:49:12.619594 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:49:12.619634 kubelet[2681]: W1123 22:49:12.619617 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:49:12.619634 kubelet[2681]: E1123 22:49:12.619631 2681 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 22:49:12.621903 kubelet[2681]: E1123 22:49:12.621311 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:49:12.621903 kubelet[2681]: W1123 22:49:12.621332 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:49:12.621903 kubelet[2681]: E1123 22:49:12.621351 2681 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:49:12.622901 kubelet[2681]: E1123 22:49:12.622876 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:49:12.622901 kubelet[2681]: W1123 22:49:12.622899 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:49:12.623002 kubelet[2681]: E1123 22:49:12.622915 2681 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 22:49:12.624397 kubelet[2681]: E1123 22:49:12.624360 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:49:12.624397 kubelet[2681]: W1123 22:49:12.624380 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:49:12.624397 kubelet[2681]: E1123 22:49:12.624395 2681 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:49:12.624677 kubelet[2681]: E1123 22:49:12.624607 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:49:12.624677 kubelet[2681]: W1123 22:49:12.624616 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:49:12.624677 kubelet[2681]: E1123 22:49:12.624626 2681 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 22:49:12.624941 kubelet[2681]: E1123 22:49:12.624913 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:49:12.624941 kubelet[2681]: W1123 22:49:12.624928 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:49:12.624941 kubelet[2681]: E1123 22:49:12.624939 2681 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:49:12.626396 kubelet[2681]: E1123 22:49:12.626371 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:49:12.626396 kubelet[2681]: W1123 22:49:12.626389 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:49:12.626396 kubelet[2681]: E1123 22:49:12.626404 2681 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 22:49:12.627304 kubelet[2681]: E1123 22:49:12.627279 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:49:12.627304 kubelet[2681]: W1123 22:49:12.627296 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:49:12.627502 kubelet[2681]: E1123 22:49:12.627315 2681 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:49:12.628024 kubelet[2681]: E1123 22:49:12.628002 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:49:12.628024 kubelet[2681]: W1123 22:49:12.628020 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:49:12.628126 kubelet[2681]: E1123 22:49:12.628032 2681 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 22:49:12.628474 kubelet[2681]: E1123 22:49:12.628454 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:49:12.628474 kubelet[2681]: W1123 22:49:12.628469 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:49:12.628665 kubelet[2681]: E1123 22:49:12.628545 2681 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:49:12.628850 kubelet[2681]: E1123 22:49:12.628820 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:49:12.628850 kubelet[2681]: W1123 22:49:12.628845 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:49:12.628944 kubelet[2681]: E1123 22:49:12.628857 2681 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 22:49:12.629372 kubelet[2681]: E1123 22:49:12.629354 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:49:12.629372 kubelet[2681]: W1123 22:49:12.629365 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:49:12.629372 kubelet[2681]: E1123 22:49:12.629375 2681 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:49:12.630273 kubelet[2681]: E1123 22:49:12.630191 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:49:12.630273 kubelet[2681]: W1123 22:49:12.630212 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:49:12.630273 kubelet[2681]: E1123 22:49:12.630226 2681 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 22:49:12.630475 kubelet[2681]: E1123 22:49:12.630460 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:49:12.630475 kubelet[2681]: W1123 22:49:12.630473 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:49:12.630649 kubelet[2681]: E1123 22:49:12.630484 2681 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:49:12.631211 kubelet[2681]: E1123 22:49:12.631189 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:49:12.631211 kubelet[2681]: W1123 22:49:12.631207 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:49:12.631297 kubelet[2681]: E1123 22:49:12.631220 2681 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 22:49:12.631666 kubelet[2681]: E1123 22:49:12.631647 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:49:12.631815 kubelet[2681]: W1123 22:49:12.631662 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:49:12.631815 kubelet[2681]: E1123 22:49:12.631702 2681 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:49:12.632216 kubelet[2681]: E1123 22:49:12.632192 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:49:12.632216 kubelet[2681]: W1123 22:49:12.632207 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:49:12.632216 kubelet[2681]: E1123 22:49:12.632219 2681 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 22:49:12.633524 kubelet[2681]: E1123 22:49:12.633484 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:49:12.633524 kubelet[2681]: W1123 22:49:12.633518 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:49:12.633524 kubelet[2681]: E1123 22:49:12.633532 2681 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:49:12.634075 kubelet[2681]: E1123 22:49:12.634044 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:49:12.634075 kubelet[2681]: W1123 22:49:12.634067 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:49:12.634311 kubelet[2681]: E1123 22:49:12.634083 2681 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 22:49:12.634433 kubelet[2681]: E1123 22:49:12.634413 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:49:12.634433 kubelet[2681]: W1123 22:49:12.634429 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:49:12.634496 kubelet[2681]: E1123 22:49:12.634439 2681 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:49:12.634748 kubelet[2681]: E1123 22:49:12.634723 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:49:12.634748 kubelet[2681]: W1123 22:49:12.634740 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:49:12.634748 kubelet[2681]: E1123 22:49:12.634751 2681 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 22:49:12.634984 kubelet[2681]: E1123 22:49:12.634969 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:49:12.634984 kubelet[2681]: W1123 22:49:12.634981 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:49:12.634984 kubelet[2681]: E1123 22:49:12.634990 2681 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:49:12.635176 kubelet[2681]: E1123 22:49:12.635162 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:49:12.635176 kubelet[2681]: W1123 22:49:12.635173 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:49:12.635235 kubelet[2681]: E1123 22:49:12.635181 2681 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 22:49:12.649048 kubelet[2681]: E1123 22:49:12.649018 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:49:12.649048 kubelet[2681]: W1123 22:49:12.649039 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:49:12.649048 kubelet[2681]: E1123 22:49:12.649057 2681 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:49:12.702719 containerd[1537]: time="2025-11-23T22:49:12.702592794Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-b9668d594-lxmtm,Uid:48b0ea66-eb86-4b97-9837-20957a4549c9,Namespace:calico-system,Attempt:0,} returns sandbox id \"ed0d57123324e31f75e8022f0709b0a01447939082bf944e2fe66a1fb94f735b\"" Nov 23 22:49:12.705022 containerd[1537]: time="2025-11-23T22:49:12.704951708Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Nov 23 22:49:12.705455 containerd[1537]: time="2025-11-23T22:49:12.705360906Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-fdk2p,Uid:13c92dbe-cfba-46a7-9c9f-0acbcfbd9f84,Namespace:calico-system,Attempt:0,} returns sandbox id \"3aa77adaf4cbe80af7028a339004e60a455e4d5f3edca3f8827defd8c1137069\"" Nov 23 22:49:13.648247 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1487188755.mount: Deactivated successfully. 
Nov 23 22:49:14.368640 kubelet[2681]: E1123 22:49:14.368594 2681 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gdnbl" podUID="12a478e1-2715-41a3-b494-6659c8d5a00c" Nov 23 22:49:14.516043 containerd[1537]: time="2025-11-23T22:49:14.515966446Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 22:49:14.516554 containerd[1537]: time="2025-11-23T22:49:14.516523885Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=33090687" Nov 23 22:49:14.517458 containerd[1537]: time="2025-11-23T22:49:14.517407403Z" level=info msg="ImageCreate event name:\"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 22:49:14.519313 containerd[1537]: time="2025-11-23T22:49:14.519284559Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 22:49:14.520190 containerd[1537]: time="2025-11-23T22:49:14.520030357Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"33090541\" in 1.81501505s" Nov 23 22:49:14.520190 containerd[1537]: time="2025-11-23T22:49:14.520064277Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference 
\"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\"" Nov 23 22:49:14.521178 containerd[1537]: time="2025-11-23T22:49:14.521151914Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Nov 23 22:49:14.542068 containerd[1537]: time="2025-11-23T22:49:14.542012627Z" level=info msg="CreateContainer within sandbox \"ed0d57123324e31f75e8022f0709b0a01447939082bf944e2fe66a1fb94f735b\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Nov 23 22:49:14.552748 containerd[1537]: time="2025-11-23T22:49:14.551663845Z" level=info msg="Container 948b94759d261446f324388909e6289e74965bfb46652a1d899ea3557d723c54: CDI devices from CRI Config.CDIDevices: []" Nov 23 22:49:14.559505 containerd[1537]: time="2025-11-23T22:49:14.559434028Z" level=info msg="CreateContainer within sandbox \"ed0d57123324e31f75e8022f0709b0a01447939082bf944e2fe66a1fb94f735b\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"948b94759d261446f324388909e6289e74965bfb46652a1d899ea3557d723c54\"" Nov 23 22:49:14.560728 containerd[1537]: time="2025-11-23T22:49:14.560700545Z" level=info msg="StartContainer for \"948b94759d261446f324388909e6289e74965bfb46652a1d899ea3557d723c54\"" Nov 23 22:49:14.570706 containerd[1537]: time="2025-11-23T22:49:14.570636842Z" level=info msg="connecting to shim 948b94759d261446f324388909e6289e74965bfb46652a1d899ea3557d723c54" address="unix:///run/containerd/s/015593d8a545d21c9ecb1058f2fbb2102902174f50a4fcf8865dafd7081296b6" protocol=ttrpc version=3 Nov 23 22:49:14.605986 systemd[1]: Started cri-containerd-948b94759d261446f324388909e6289e74965bfb46652a1d899ea3557d723c54.scope - libcontainer container 948b94759d261446f324388909e6289e74965bfb46652a1d899ea3557d723c54. 
Nov 23 22:49:14.677884 containerd[1537]: time="2025-11-23T22:49:14.677304401Z" level=info msg="StartContainer for \"948b94759d261446f324388909e6289e74965bfb46652a1d899ea3557d723c54\" returns successfully" Nov 23 22:49:15.509635 kubelet[2681]: E1123 22:49:15.509591 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:49:15.509635 kubelet[2681]: W1123 22:49:15.509616 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:49:15.509635 kubelet[2681]: E1123 22:49:15.509638 2681 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:49:15.510098 kubelet[2681]: E1123 22:49:15.509814 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:49:15.510098 kubelet[2681]: W1123 22:49:15.509821 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:49:15.510098 kubelet[2681]: E1123 22:49:15.509830 2681 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 22:49:15.510098 kubelet[2681]: E1123 22:49:15.509977 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:49:15.510098 kubelet[2681]: W1123 22:49:15.509985 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:49:15.510098 kubelet[2681]: E1123 22:49:15.509993 2681 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:49:15.510230 kubelet[2681]: E1123 22:49:15.510138 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:49:15.510230 kubelet[2681]: W1123 22:49:15.510145 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:49:15.510230 kubelet[2681]: E1123 22:49:15.510161 2681 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 22:49:15.510462 kubelet[2681]: E1123 22:49:15.510436 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:49:15.510462 kubelet[2681]: W1123 22:49:15.510454 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:49:15.510534 kubelet[2681]: E1123 22:49:15.510464 2681 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:49:15.510699 kubelet[2681]: E1123 22:49:15.510685 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:49:15.510699 kubelet[2681]: W1123 22:49:15.510698 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:49:15.510755 kubelet[2681]: E1123 22:49:15.510709 2681 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 22:49:15.510920 kubelet[2681]: E1123 22:49:15.510905 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:49:15.510920 kubelet[2681]: W1123 22:49:15.510917 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:49:15.510973 kubelet[2681]: E1123 22:49:15.510926 2681 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:49:15.511123 kubelet[2681]: E1123 22:49:15.511110 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:49:15.511123 kubelet[2681]: W1123 22:49:15.511120 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:49:15.511170 kubelet[2681]: E1123 22:49:15.511128 2681 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 22:49:15.511367 kubelet[2681]: E1123 22:49:15.511333 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:49:15.511367 kubelet[2681]: W1123 22:49:15.511344 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:49:15.511367 kubelet[2681]: E1123 22:49:15.511352 2681 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:49:15.511542 kubelet[2681]: E1123 22:49:15.511524 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:49:15.511542 kubelet[2681]: W1123 22:49:15.511541 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:49:15.511605 kubelet[2681]: E1123 22:49:15.511554 2681 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 22:49:15.511734 kubelet[2681]: E1123 22:49:15.511708 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:49:15.511734 kubelet[2681]: W1123 22:49:15.511720 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:49:15.511734 kubelet[2681]: E1123 22:49:15.511727 2681 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:49:15.511978 kubelet[2681]: E1123 22:49:15.511965 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:49:15.511978 kubelet[2681]: W1123 22:49:15.511977 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:49:15.512022 kubelet[2681]: E1123 22:49:15.511985 2681 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 22:49:15.512162 kubelet[2681]: E1123 22:49:15.512151 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:49:15.512187 kubelet[2681]: W1123 22:49:15.512162 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:49:15.512187 kubelet[2681]: E1123 22:49:15.512170 2681 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:49:15.512301 kubelet[2681]: E1123 22:49:15.512290 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:49:15.512323 kubelet[2681]: W1123 22:49:15.512300 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:49:15.512323 kubelet[2681]: E1123 22:49:15.512308 2681 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 22:49:15.563198 kubelet[2681]: E1123 22:49:15.563184 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:49:15.563198 kubelet[2681]: W1123 22:49:15.563195 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:49:15.563253 kubelet[2681]: E1123 22:49:15.563205 2681 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:49:15.781576 containerd[1537]: time="2025-11-23T22:49:15.781391734Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 22:49:15.783358 containerd[1537]: time="2025-11-23T22:49:15.782031212Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4266741" Nov 23 22:49:15.783358 containerd[1537]: time="2025-11-23T22:49:15.783131370Z" level=info msg="ImageCreate event name:\"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 22:49:15.787432 containerd[1537]: time="2025-11-23T22:49:15.787361001Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 22:49:15.789364 containerd[1537]: time="2025-11-23T22:49:15.789322997Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\", repo tag 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5636392\" in 1.268138963s" Nov 23 22:49:15.789364 containerd[1537]: time="2025-11-23T22:49:15.789361917Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\"" Nov 23 22:49:15.794099 containerd[1537]: time="2025-11-23T22:49:15.793971507Z" level=info msg="CreateContainer within sandbox \"3aa77adaf4cbe80af7028a339004e60a455e4d5f3edca3f8827defd8c1137069\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Nov 23 22:49:15.801267 containerd[1537]: time="2025-11-23T22:49:15.801219291Z" level=info msg="Container 3d7d65d1887f6d5c3ffe30cfa4664be7d52bb7870ce4122d5e685f36cf3d1c95: CDI devices from CRI Config.CDIDevices: []" Nov 23 22:49:15.816866 containerd[1537]: time="2025-11-23T22:49:15.816800138Z" level=info msg="CreateContainer within sandbox \"3aa77adaf4cbe80af7028a339004e60a455e4d5f3edca3f8827defd8c1137069\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"3d7d65d1887f6d5c3ffe30cfa4664be7d52bb7870ce4122d5e685f36cf3d1c95\"" Nov 23 22:49:15.817404 containerd[1537]: time="2025-11-23T22:49:15.817383977Z" level=info msg="StartContainer for \"3d7d65d1887f6d5c3ffe30cfa4664be7d52bb7870ce4122d5e685f36cf3d1c95\"" Nov 23 22:49:15.820943 containerd[1537]: time="2025-11-23T22:49:15.820890210Z" level=info msg="connecting to shim 3d7d65d1887f6d5c3ffe30cfa4664be7d52bb7870ce4122d5e685f36cf3d1c95" address="unix:///run/containerd/s/9cd280fa9a31558abf94d6b596fe34c461863dc8bc152a1bff5e836205ac0ce0" protocol=ttrpc version=3 Nov 23 22:49:15.852764 systemd[1]: Started cri-containerd-3d7d65d1887f6d5c3ffe30cfa4664be7d52bb7870ce4122d5e685f36cf3d1c95.scope - libcontainer container 
3d7d65d1887f6d5c3ffe30cfa4664be7d52bb7870ce4122d5e685f36cf3d1c95. Nov 23 22:49:15.954015 containerd[1537]: time="2025-11-23T22:49:15.953885648Z" level=info msg="StartContainer for \"3d7d65d1887f6d5c3ffe30cfa4664be7d52bb7870ce4122d5e685f36cf3d1c95\" returns successfully" Nov 23 22:49:15.967366 systemd[1]: cri-containerd-3d7d65d1887f6d5c3ffe30cfa4664be7d52bb7870ce4122d5e685f36cf3d1c95.scope: Deactivated successfully. Nov 23 22:49:15.967695 systemd[1]: cri-containerd-3d7d65d1887f6d5c3ffe30cfa4664be7d52bb7870ce4122d5e685f36cf3d1c95.scope: Consumed 36ms CPU time, 6.3M memory peak, 4.5M written to disk. Nov 23 22:49:15.986357 containerd[1537]: time="2025-11-23T22:49:15.986288259Z" level=info msg="received container exit event container_id:\"3d7d65d1887f6d5c3ffe30cfa4664be7d52bb7870ce4122d5e685f36cf3d1c95\" id:\"3d7d65d1887f6d5c3ffe30cfa4664be7d52bb7870ce4122d5e685f36cf3d1c95\" pid:3389 exited_at:{seconds:1763938155 nanos:979762553}" Nov 23 22:49:16.035905 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3d7d65d1887f6d5c3ffe30cfa4664be7d52bb7870ce4122d5e685f36cf3d1c95-rootfs.mount: Deactivated successfully. 
Nov 23 22:49:16.365009 kubelet[2681]: E1123 22:49:16.364964 2681 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gdnbl" podUID="12a478e1-2715-41a3-b494-6659c8d5a00c" Nov 23 22:49:16.461252 kubelet[2681]: I1123 22:49:16.461222 2681 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 23 22:49:16.462478 containerd[1537]: time="2025-11-23T22:49:16.462449870Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Nov 23 22:49:16.480619 kubelet[2681]: I1123 22:49:16.480548 2681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-b9668d594-lxmtm" podStartSLOduration=2.664039508 podStartE2EDuration="4.480531914s" podCreationTimestamp="2025-11-23 22:49:12 +0000 UTC" firstStartedPulling="2025-11-23 22:49:12.704465629 +0000 UTC m=+22.442919164" lastFinishedPulling="2025-11-23 22:49:14.520957995 +0000 UTC m=+24.259411570" observedRunningTime="2025-11-23 22:49:15.47668434 +0000 UTC m=+25.215137955" watchObservedRunningTime="2025-11-23 22:49:16.480531914 +0000 UTC m=+26.218985529" Nov 23 22:49:18.081979 containerd[1537]: time="2025-11-23T22:49:18.081926354Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 22:49:18.082929 containerd[1537]: time="2025-11-23T22:49:18.082711952Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=65925816" Nov 23 22:49:18.084318 containerd[1537]: time="2025-11-23T22:49:18.084274710Z" level=info msg="ImageCreate event name:\"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 22:49:18.087339 containerd[1537]: 
time="2025-11-23T22:49:18.087302784Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 22:49:18.088079 containerd[1537]: time="2025-11-23T22:49:18.088021943Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"67295507\" in 1.625532193s" Nov 23 22:49:18.088169 containerd[1537]: time="2025-11-23T22:49:18.088154903Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\"" Nov 23 22:49:18.093477 containerd[1537]: time="2025-11-23T22:49:18.093381534Z" level=info msg="CreateContainer within sandbox \"3aa77adaf4cbe80af7028a339004e60a455e4d5f3edca3f8827defd8c1137069\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Nov 23 22:49:18.103720 containerd[1537]: time="2025-11-23T22:49:18.103670516Z" level=info msg="Container d1a87d3ad92ca7be03aefe29bd3d1ddabb1a3495f51ad79fe532608fb914c990: CDI devices from CRI Config.CDIDevices: []" Nov 23 22:49:18.117281 containerd[1537]: time="2025-11-23T22:49:18.117232852Z" level=info msg="CreateContainer within sandbox \"3aa77adaf4cbe80af7028a339004e60a455e4d5f3edca3f8827defd8c1137069\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"d1a87d3ad92ca7be03aefe29bd3d1ddabb1a3495f51ad79fe532608fb914c990\"" Nov 23 22:49:18.117937 containerd[1537]: time="2025-11-23T22:49:18.117890251Z" level=info msg="StartContainer for \"d1a87d3ad92ca7be03aefe29bd3d1ddabb1a3495f51ad79fe532608fb914c990\"" Nov 23 22:49:18.119697 containerd[1537]: time="2025-11-23T22:49:18.119668928Z" 
level=info msg="connecting to shim d1a87d3ad92ca7be03aefe29bd3d1ddabb1a3495f51ad79fe532608fb914c990" address="unix:///run/containerd/s/9cd280fa9a31558abf94d6b596fe34c461863dc8bc152a1bff5e836205ac0ce0" protocol=ttrpc version=3 Nov 23 22:49:18.147749 systemd[1]: Started cri-containerd-d1a87d3ad92ca7be03aefe29bd3d1ddabb1a3495f51ad79fe532608fb914c990.scope - libcontainer container d1a87d3ad92ca7be03aefe29bd3d1ddabb1a3495f51ad79fe532608fb914c990. Nov 23 22:49:18.234545 containerd[1537]: time="2025-11-23T22:49:18.234264887Z" level=info msg="StartContainer for \"d1a87d3ad92ca7be03aefe29bd3d1ddabb1a3495f51ad79fe532608fb914c990\" returns successfully" Nov 23 22:49:18.365079 kubelet[2681]: E1123 22:49:18.364984 2681 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gdnbl" podUID="12a478e1-2715-41a3-b494-6659c8d5a00c" Nov 23 22:49:18.951254 systemd[1]: cri-containerd-d1a87d3ad92ca7be03aefe29bd3d1ddabb1a3495f51ad79fe532608fb914c990.scope: Deactivated successfully. Nov 23 22:49:18.951829 systemd[1]: cri-containerd-d1a87d3ad92ca7be03aefe29bd3d1ddabb1a3495f51ad79fe532608fb914c990.scope: Consumed 460ms CPU time, 177.7M memory peak, 2.2M read from disk, 165.9M written to disk. Nov 23 22:49:18.965684 containerd[1537]: time="2025-11-23T22:49:18.965636289Z" level=info msg="received container exit event container_id:\"d1a87d3ad92ca7be03aefe29bd3d1ddabb1a3495f51ad79fe532608fb914c990\" id:\"d1a87d3ad92ca7be03aefe29bd3d1ddabb1a3495f51ad79fe532608fb914c990\" pid:3448 exited_at:{seconds:1763938158 nanos:965370010}" Nov 23 22:49:18.996186 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d1a87d3ad92ca7be03aefe29bd3d1ddabb1a3495f51ad79fe532608fb914c990-rootfs.mount: Deactivated successfully. 
Nov 23 22:49:19.033900 kubelet[2681]: I1123 22:49:19.033862 2681 kubelet_node_status.go:439] "Fast updating node status as it just became ready" Nov 23 22:49:19.077047 systemd[1]: Created slice kubepods-besteffort-pod39d3d1e2_118e_4d70_b8bc_8d53aea2707c.slice - libcontainer container kubepods-besteffort-pod39d3d1e2_118e_4d70_b8bc_8d53aea2707c.slice. Nov 23 22:49:19.087156 systemd[1]: Created slice kubepods-besteffort-pod9b3997f9_79ca_4cb6_accf_cb8679793167.slice - libcontainer container kubepods-besteffort-pod9b3997f9_79ca_4cb6_accf_cb8679793167.slice. Nov 23 22:49:19.096721 systemd[1]: Created slice kubepods-burstable-pod2bae50f8_5c67_4c93_8d30_44e790923f61.slice - libcontainer container kubepods-burstable-pod2bae50f8_5c67_4c93_8d30_44e790923f61.slice. Nov 23 22:49:19.108372 systemd[1]: Created slice kubepods-burstable-pod68e110ca_ba29_43c4_bb8a_7769dd4f462e.slice - libcontainer container kubepods-burstable-pod68e110ca_ba29_43c4_bb8a_7769dd4f462e.slice. Nov 23 22:49:19.112698 systemd[1]: Created slice kubepods-besteffort-pod02c22706_9c4d_4a45_b31f_a84083423193.slice - libcontainer container kubepods-besteffort-pod02c22706_9c4d_4a45_b31f_a84083423193.slice. Nov 23 22:49:19.118419 systemd[1]: Created slice kubepods-besteffort-pod677210a8_7e3f_4eb7_b133_c4888088b528.slice - libcontainer container kubepods-besteffort-pod677210a8_7e3f_4eb7_b133_c4888088b528.slice. Nov 23 22:49:19.124567 systemd[1]: Created slice kubepods-besteffort-pod927e7ec4_bea4_44ce_a267_dd04bc352b11.slice - libcontainer container kubepods-besteffort-pod927e7ec4_bea4_44ce_a267_dd04bc352b11.slice. 
Nov 23 22:49:19.185190 kubelet[2681]: I1123 22:49:19.185149 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/02c22706-9c4d-4a45-b31f-a84083423193-calico-apiserver-certs\") pod \"calico-apiserver-5f859fc9fb-gcdhb\" (UID: \"02c22706-9c4d-4a45-b31f-a84083423193\") " pod="calico-apiserver/calico-apiserver-5f859fc9fb-gcdhb" Nov 23 22:49:19.185190 kubelet[2681]: I1123 22:49:19.185195 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/677210a8-7e3f-4eb7-b133-c4888088b528-goldmane-ca-bundle\") pod \"goldmane-7c778bb748-8r8zz\" (UID: \"677210a8-7e3f-4eb7-b133-c4888088b528\") " pod="calico-system/goldmane-7c778bb748-8r8zz" Nov 23 22:49:19.185359 kubelet[2681]: I1123 22:49:19.185221 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2bae50f8-5c67-4c93-8d30-44e790923f61-config-volume\") pod \"coredns-66bc5c9577-msstk\" (UID: \"2bae50f8-5c67-4c93-8d30-44e790923f61\") " pod="kube-system/coredns-66bc5c9577-msstk" Nov 23 22:49:19.185359 kubelet[2681]: I1123 22:49:19.185267 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-777p2\" (UniqueName: \"kubernetes.io/projected/927e7ec4-bea4-44ce-a267-dd04bc352b11-kube-api-access-777p2\") pod \"calico-kube-controllers-f7b67686c-c7z45\" (UID: \"927e7ec4-bea4-44ce-a267-dd04bc352b11\") " pod="calico-system/calico-kube-controllers-f7b67686c-c7z45" Nov 23 22:49:19.185413 kubelet[2681]: I1123 22:49:19.185380 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/39d3d1e2-118e-4d70-b8bc-8d53aea2707c-whisker-ca-bundle\") pod \"whisker-5df6c89f4-ddqwg\" 
(UID: \"39d3d1e2-118e-4d70-b8bc-8d53aea2707c\") " pod="calico-system/whisker-5df6c89f4-ddqwg" Nov 23 22:49:19.185440 kubelet[2681]: I1123 22:49:19.185411 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rvc4d\" (UniqueName: \"kubernetes.io/projected/39d3d1e2-118e-4d70-b8bc-8d53aea2707c-kube-api-access-rvc4d\") pod \"whisker-5df6c89f4-ddqwg\" (UID: \"39d3d1e2-118e-4d70-b8bc-8d53aea2707c\") " pod="calico-system/whisker-5df6c89f4-ddqwg" Nov 23 22:49:19.185440 kubelet[2681]: I1123 22:49:19.185430 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wrtz9\" (UniqueName: \"kubernetes.io/projected/02c22706-9c4d-4a45-b31f-a84083423193-kube-api-access-wrtz9\") pod \"calico-apiserver-5f859fc9fb-gcdhb\" (UID: \"02c22706-9c4d-4a45-b31f-a84083423193\") " pod="calico-apiserver/calico-apiserver-5f859fc9fb-gcdhb" Nov 23 22:49:19.185489 kubelet[2681]: I1123 22:49:19.185462 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/677210a8-7e3f-4eb7-b133-c4888088b528-config\") pod \"goldmane-7c778bb748-8r8zz\" (UID: \"677210a8-7e3f-4eb7-b133-c4888088b528\") " pod="calico-system/goldmane-7c778bb748-8r8zz" Nov 23 22:49:19.185543 kubelet[2681]: I1123 22:49:19.185526 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/9b3997f9-79ca-4cb6-accf-cb8679793167-calico-apiserver-certs\") pod \"calico-apiserver-5f859fc9fb-rjhsm\" (UID: \"9b3997f9-79ca-4cb6-accf-cb8679793167\") " pod="calico-apiserver/calico-apiserver-5f859fc9fb-rjhsm" Nov 23 22:49:19.185587 kubelet[2681]: I1123 22:49:19.185563 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: 
\"kubernetes.io/secret/39d3d1e2-118e-4d70-b8bc-8d53aea2707c-whisker-backend-key-pair\") pod \"whisker-5df6c89f4-ddqwg\" (UID: \"39d3d1e2-118e-4d70-b8bc-8d53aea2707c\") " pod="calico-system/whisker-5df6c89f4-ddqwg" Nov 23 22:49:19.185619 kubelet[2681]: I1123 22:49:19.185591 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dgj8g\" (UniqueName: \"kubernetes.io/projected/2bae50f8-5c67-4c93-8d30-44e790923f61-kube-api-access-dgj8g\") pod \"coredns-66bc5c9577-msstk\" (UID: \"2bae50f8-5c67-4c93-8d30-44e790923f61\") " pod="kube-system/coredns-66bc5c9577-msstk" Nov 23 22:49:19.185644 kubelet[2681]: I1123 22:49:19.185624 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/927e7ec4-bea4-44ce-a267-dd04bc352b11-tigera-ca-bundle\") pod \"calico-kube-controllers-f7b67686c-c7z45\" (UID: \"927e7ec4-bea4-44ce-a267-dd04bc352b11\") " pod="calico-system/calico-kube-controllers-f7b67686c-c7z45" Nov 23 22:49:19.185644 kubelet[2681]: I1123 22:49:19.185641 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/68e110ca-ba29-43c4-bb8a-7769dd4f462e-config-volume\") pod \"coredns-66bc5c9577-rhv7d\" (UID: \"68e110ca-ba29-43c4-bb8a-7769dd4f462e\") " pod="kube-system/coredns-66bc5c9577-rhv7d" Nov 23 22:49:19.185711 kubelet[2681]: I1123 22:49:19.185656 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5knb7\" (UniqueName: \"kubernetes.io/projected/68e110ca-ba29-43c4-bb8a-7769dd4f462e-kube-api-access-5knb7\") pod \"coredns-66bc5c9577-rhv7d\" (UID: \"68e110ca-ba29-43c4-bb8a-7769dd4f462e\") " pod="kube-system/coredns-66bc5c9577-rhv7d" Nov 23 22:49:19.185711 kubelet[2681]: I1123 22:49:19.185671 2681 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/677210a8-7e3f-4eb7-b133-c4888088b528-goldmane-key-pair\") pod \"goldmane-7c778bb748-8r8zz\" (UID: \"677210a8-7e3f-4eb7-b133-c4888088b528\") " pod="calico-system/goldmane-7c778bb748-8r8zz" Nov 23 22:49:19.185711 kubelet[2681]: I1123 22:49:19.185703 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6xwbn\" (UniqueName: \"kubernetes.io/projected/677210a8-7e3f-4eb7-b133-c4888088b528-kube-api-access-6xwbn\") pod \"goldmane-7c778bb748-8r8zz\" (UID: \"677210a8-7e3f-4eb7-b133-c4888088b528\") " pod="calico-system/goldmane-7c778bb748-8r8zz" Nov 23 22:49:19.185796 kubelet[2681]: I1123 22:49:19.185729 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hnvb5\" (UniqueName: \"kubernetes.io/projected/9b3997f9-79ca-4cb6-accf-cb8679793167-kube-api-access-hnvb5\") pod \"calico-apiserver-5f859fc9fb-rjhsm\" (UID: \"9b3997f9-79ca-4cb6-accf-cb8679793167\") " pod="calico-apiserver/calico-apiserver-5f859fc9fb-rjhsm" Nov 23 22:49:19.384779 containerd[1537]: time="2025-11-23T22:49:19.384731479Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5df6c89f4-ddqwg,Uid:39d3d1e2-118e-4d70-b8bc-8d53aea2707c,Namespace:calico-system,Attempt:0,}" Nov 23 22:49:19.395826 containerd[1537]: time="2025-11-23T22:49:19.393967864Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5f859fc9fb-rjhsm,Uid:9b3997f9-79ca-4cb6-accf-cb8679793167,Namespace:calico-apiserver,Attempt:0,}" Nov 23 22:49:19.409379 containerd[1537]: time="2025-11-23T22:49:19.409283959Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-msstk,Uid:2bae50f8-5c67-4c93-8d30-44e790923f61,Namespace:kube-system,Attempt:0,}" Nov 23 22:49:19.417611 containerd[1537]: time="2025-11-23T22:49:19.417562185Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-rhv7d,Uid:68e110ca-ba29-43c4-bb8a-7769dd4f462e,Namespace:kube-system,Attempt:0,}" Nov 23 22:49:19.419874 containerd[1537]: time="2025-11-23T22:49:19.419821461Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5f859fc9fb-gcdhb,Uid:02c22706-9c4d-4a45-b31f-a84083423193,Namespace:calico-apiserver,Attempt:0,}" Nov 23 22:49:19.424717 containerd[1537]: time="2025-11-23T22:49:19.424166334Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-8r8zz,Uid:677210a8-7e3f-4eb7-b133-c4888088b528,Namespace:calico-system,Attempt:0,}" Nov 23 22:49:19.431262 containerd[1537]: time="2025-11-23T22:49:19.431197803Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-f7b67686c-c7z45,Uid:927e7ec4-bea4-44ce-a267-dd04bc352b11,Namespace:calico-system,Attempt:0,}" Nov 23 22:49:19.485830 containerd[1537]: time="2025-11-23T22:49:19.485789633Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Nov 23 22:49:19.539740 containerd[1537]: time="2025-11-23T22:49:19.539686105Z" level=error msg="Failed to destroy network for sandbox \"d7fe8bd86669e773eb562555871af44156f75abe6171a7298451b2e9af61179e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 22:49:19.544917 containerd[1537]: time="2025-11-23T22:49:19.544857937Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-msstk,Uid:2bae50f8-5c67-4c93-8d30-44e790923f61,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d7fe8bd86669e773eb562555871af44156f75abe6171a7298451b2e9af61179e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Nov 23 22:49:19.545418 kubelet[2681]: E1123 22:49:19.545365 2681 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d7fe8bd86669e773eb562555871af44156f75abe6171a7298451b2e9af61179e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 22:49:19.545773 kubelet[2681]: E1123 22:49:19.545437 2681 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d7fe8bd86669e773eb562555871af44156f75abe6171a7298451b2e9af61179e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-msstk" Nov 23 22:49:19.545773 kubelet[2681]: E1123 22:49:19.545458 2681 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d7fe8bd86669e773eb562555871af44156f75abe6171a7298451b2e9af61179e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-msstk" Nov 23 22:49:19.545773 kubelet[2681]: E1123 22:49:19.545523 2681 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-msstk_kube-system(2bae50f8-5c67-4c93-8d30-44e790923f61)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-msstk_kube-system(2bae50f8-5c67-4c93-8d30-44e790923f61)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d7fe8bd86669e773eb562555871af44156f75abe6171a7298451b2e9af61179e\\\": plugin type=\\\"calico\\\" 
failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-msstk" podUID="2bae50f8-5c67-4c93-8d30-44e790923f61" Nov 23 22:49:19.555839 containerd[1537]: time="2025-11-23T22:49:19.555747599Z" level=error msg="Failed to destroy network for sandbox \"dddd2472364afb9c661513b0a66a30afe1da903a3ea560fa4a09da4608e368b4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 22:49:19.559296 containerd[1537]: time="2025-11-23T22:49:19.559211913Z" level=error msg="Failed to destroy network for sandbox \"7cfe9a7d9ef7bb0c009de2e2d6b141fcc244154535b529c49a93e69a135a5dea\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 22:49:19.559798 containerd[1537]: time="2025-11-23T22:49:19.559604792Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5df6c89f4-ddqwg,Uid:39d3d1e2-118e-4d70-b8bc-8d53aea2707c,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"dddd2472364afb9c661513b0a66a30afe1da903a3ea560fa4a09da4608e368b4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 22:49:19.559936 kubelet[2681]: E1123 22:49:19.559867 2681 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dddd2472364afb9c661513b0a66a30afe1da903a3ea560fa4a09da4608e368b4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" Nov 23 22:49:19.559936 kubelet[2681]: E1123 22:49:19.559919 2681 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dddd2472364afb9c661513b0a66a30afe1da903a3ea560fa4a09da4608e368b4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5df6c89f4-ddqwg" Nov 23 22:49:19.559996 kubelet[2681]: E1123 22:49:19.559941 2681 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dddd2472364afb9c661513b0a66a30afe1da903a3ea560fa4a09da4608e368b4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5df6c89f4-ddqwg" Nov 23 22:49:19.560086 kubelet[2681]: E1123 22:49:19.560019 2681 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-5df6c89f4-ddqwg_calico-system(39d3d1e2-118e-4d70-b8bc-8d53aea2707c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-5df6c89f4-ddqwg_calico-system(39d3d1e2-118e-4d70-b8bc-8d53aea2707c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"dddd2472364afb9c661513b0a66a30afe1da903a3ea560fa4a09da4608e368b4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-5df6c89f4-ddqwg" podUID="39d3d1e2-118e-4d70-b8bc-8d53aea2707c" Nov 23 22:49:19.560596 containerd[1537]: time="2025-11-23T22:49:19.560548871Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-66bc5c9577-rhv7d,Uid:68e110ca-ba29-43c4-bb8a-7769dd4f462e,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"7cfe9a7d9ef7bb0c009de2e2d6b141fcc244154535b529c49a93e69a135a5dea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 22:49:19.560748 kubelet[2681]: E1123 22:49:19.560718 2681 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7cfe9a7d9ef7bb0c009de2e2d6b141fcc244154535b529c49a93e69a135a5dea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 22:49:19.560785 kubelet[2681]: E1123 22:49:19.560758 2681 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7cfe9a7d9ef7bb0c009de2e2d6b141fcc244154535b529c49a93e69a135a5dea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-rhv7d" Nov 23 22:49:19.560812 kubelet[2681]: E1123 22:49:19.560775 2681 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7cfe9a7d9ef7bb0c009de2e2d6b141fcc244154535b529c49a93e69a135a5dea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-rhv7d" Nov 23 22:49:19.560901 kubelet[2681]: E1123 22:49:19.560824 2681 pod_workers.go:1324] "Error syncing pod, 
skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-rhv7d_kube-system(68e110ca-ba29-43c4-bb8a-7769dd4f462e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-rhv7d_kube-system(68e110ca-ba29-43c4-bb8a-7769dd4f462e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7cfe9a7d9ef7bb0c009de2e2d6b141fcc244154535b529c49a93e69a135a5dea\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-rhv7d" podUID="68e110ca-ba29-43c4-bb8a-7769dd4f462e" Nov 23 22:49:19.563685 containerd[1537]: time="2025-11-23T22:49:19.563640546Z" level=error msg="Failed to destroy network for sandbox \"15ad28a64bd9fc9175757a457c3c197305b63f380c894622c3b06e683e076e30\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 22:49:19.566210 containerd[1537]: time="2025-11-23T22:49:19.566162942Z" level=error msg="Failed to destroy network for sandbox \"c67104a44622486685eebb91350e5e2c1af0083a19001bf12e5ab9cc9cf9bf16\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 22:49:19.567234 containerd[1537]: time="2025-11-23T22:49:19.567136620Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5f859fc9fb-rjhsm,Uid:9b3997f9-79ca-4cb6-accf-cb8679793167,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"15ad28a64bd9fc9175757a457c3c197305b63f380c894622c3b06e683e076e30\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Nov 23 22:49:19.567666 kubelet[2681]: E1123 22:49:19.567627 2681 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"15ad28a64bd9fc9175757a457c3c197305b63f380c894622c3b06e683e076e30\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 22:49:19.567729 kubelet[2681]: E1123 22:49:19.567685 2681 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"15ad28a64bd9fc9175757a457c3c197305b63f380c894622c3b06e683e076e30\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5f859fc9fb-rjhsm" Nov 23 22:49:19.567729 kubelet[2681]: E1123 22:49:19.567705 2681 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"15ad28a64bd9fc9175757a457c3c197305b63f380c894622c3b06e683e076e30\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5f859fc9fb-rjhsm" Nov 23 22:49:19.567802 kubelet[2681]: E1123 22:49:19.567773 2681 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5f859fc9fb-rjhsm_calico-apiserver(9b3997f9-79ca-4cb6-accf-cb8679793167)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5f859fc9fb-rjhsm_calico-apiserver(9b3997f9-79ca-4cb6-accf-cb8679793167)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"15ad28a64bd9fc9175757a457c3c197305b63f380c894622c3b06e683e076e30\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5f859fc9fb-rjhsm" podUID="9b3997f9-79ca-4cb6-accf-cb8679793167" Nov 23 22:49:19.568408 containerd[1537]: time="2025-11-23T22:49:19.568367498Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-8r8zz,Uid:677210a8-7e3f-4eb7-b133-c4888088b528,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c67104a44622486685eebb91350e5e2c1af0083a19001bf12e5ab9cc9cf9bf16\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 22:49:19.568892 kubelet[2681]: E1123 22:49:19.568627 2681 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c67104a44622486685eebb91350e5e2c1af0083a19001bf12e5ab9cc9cf9bf16\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 22:49:19.568892 kubelet[2681]: E1123 22:49:19.568740 2681 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c67104a44622486685eebb91350e5e2c1af0083a19001bf12e5ab9cc9cf9bf16\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-8r8zz" Nov 23 22:49:19.568892 kubelet[2681]: E1123 22:49:19.568763 2681 kuberuntime_manager.go:1343] "CreatePodSandbox for pod 
failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c67104a44622486685eebb91350e5e2c1af0083a19001bf12e5ab9cc9cf9bf16\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-8r8zz" Nov 23 22:49:19.568995 kubelet[2681]: E1123 22:49:19.568811 2681 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7c778bb748-8r8zz_calico-system(677210a8-7e3f-4eb7-b133-c4888088b528)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7c778bb748-8r8zz_calico-system(677210a8-7e3f-4eb7-b133-c4888088b528)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c67104a44622486685eebb91350e5e2c1af0083a19001bf12e5ab9cc9cf9bf16\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7c778bb748-8r8zz" podUID="677210a8-7e3f-4eb7-b133-c4888088b528" Nov 23 22:49:19.569272 containerd[1537]: time="2025-11-23T22:49:19.569224017Z" level=error msg="Failed to destroy network for sandbox \"df0da3c0f45c3fc55de309482c84bbb1c2cfa76c9f31219f80b054caea5a40f6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 22:49:19.570561 containerd[1537]: time="2025-11-23T22:49:19.570485415Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-f7b67686c-c7z45,Uid:927e7ec4-bea4-44ce-a267-dd04bc352b11,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"df0da3c0f45c3fc55de309482c84bbb1c2cfa76c9f31219f80b054caea5a40f6\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 22:49:19.570751 kubelet[2681]: E1123 22:49:19.570716 2681 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"df0da3c0f45c3fc55de309482c84bbb1c2cfa76c9f31219f80b054caea5a40f6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 22:49:19.570797 kubelet[2681]: E1123 22:49:19.570768 2681 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"df0da3c0f45c3fc55de309482c84bbb1c2cfa76c9f31219f80b054caea5a40f6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-f7b67686c-c7z45" Nov 23 22:49:19.570797 kubelet[2681]: E1123 22:49:19.570791 2681 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"df0da3c0f45c3fc55de309482c84bbb1c2cfa76c9f31219f80b054caea5a40f6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-f7b67686c-c7z45" Nov 23 22:49:19.570911 kubelet[2681]: E1123 22:49:19.570845 2681 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-f7b67686c-c7z45_calico-system(927e7ec4-bea4-44ce-a267-dd04bc352b11)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"calico-kube-controllers-f7b67686c-c7z45_calico-system(927e7ec4-bea4-44ce-a267-dd04bc352b11)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"df0da3c0f45c3fc55de309482c84bbb1c2cfa76c9f31219f80b054caea5a40f6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-f7b67686c-c7z45" podUID="927e7ec4-bea4-44ce-a267-dd04bc352b11" Nov 23 22:49:19.573791 containerd[1537]: time="2025-11-23T22:49:19.573751249Z" level=error msg="Failed to destroy network for sandbox \"0b01e81080a43c2105864093861c4fe97f0c8776846d22758bd6bf1d3a7954c6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 22:49:19.575073 containerd[1537]: time="2025-11-23T22:49:19.575021167Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5f859fc9fb-gcdhb,Uid:02c22706-9c4d-4a45-b31f-a84083423193,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"0b01e81080a43c2105864093861c4fe97f0c8776846d22758bd6bf1d3a7954c6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 22:49:19.575609 kubelet[2681]: E1123 22:49:19.575527 2681 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0b01e81080a43c2105864093861c4fe97f0c8776846d22758bd6bf1d3a7954c6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 22:49:19.575609 kubelet[2681]: E1123 
22:49:19.575589 2681 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0b01e81080a43c2105864093861c4fe97f0c8776846d22758bd6bf1d3a7954c6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5f859fc9fb-gcdhb" Nov 23 22:49:19.575609 kubelet[2681]: E1123 22:49:19.575612 2681 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0b01e81080a43c2105864093861c4fe97f0c8776846d22758bd6bf1d3a7954c6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5f859fc9fb-gcdhb" Nov 23 22:49:19.575875 kubelet[2681]: E1123 22:49:19.575665 2681 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5f859fc9fb-gcdhb_calico-apiserver(02c22706-9c4d-4a45-b31f-a84083423193)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5f859fc9fb-gcdhb_calico-apiserver(02c22706-9c4d-4a45-b31f-a84083423193)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0b01e81080a43c2105864093861c4fe97f0c8776846d22758bd6bf1d3a7954c6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5f859fc9fb-gcdhb" podUID="02c22706-9c4d-4a45-b31f-a84083423193" Nov 23 22:49:20.393442 systemd[1]: Created slice kubepods-besteffort-pod12a478e1_2715_41a3_b494_6659c8d5a00c.slice - libcontainer container 
kubepods-besteffort-pod12a478e1_2715_41a3_b494_6659c8d5a00c.slice. Nov 23 22:49:20.403853 containerd[1537]: time="2025-11-23T22:49:20.403802731Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gdnbl,Uid:12a478e1-2715-41a3-b494-6659c8d5a00c,Namespace:calico-system,Attempt:0,}" Nov 23 22:49:20.480540 containerd[1537]: time="2025-11-23T22:49:20.480469893Z" level=error msg="Failed to destroy network for sandbox \"322b260a7bc8605edff2713091a30618b15531671dc70e6575de55f689444f59\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 22:49:20.482738 systemd[1]: run-netns-cni\x2d4603f395\x2d650e\x2daedb\x2d261c\x2d9cb25d207fd3.mount: Deactivated successfully. Nov 23 22:49:20.506630 containerd[1537]: time="2025-11-23T22:49:20.506505213Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gdnbl,Uid:12a478e1-2715-41a3-b494-6659c8d5a00c,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"322b260a7bc8605edff2713091a30618b15531671dc70e6575de55f689444f59\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 22:49:20.507122 kubelet[2681]: E1123 22:49:20.507049 2681 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"322b260a7bc8605edff2713091a30618b15531671dc70e6575de55f689444f59\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 22:49:20.507122 kubelet[2681]: E1123 22:49:20.507135 2681 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = 
Unknown desc = failed to setup network for sandbox \"322b260a7bc8605edff2713091a30618b15531671dc70e6575de55f689444f59\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-gdnbl" Nov 23 22:49:20.507237 kubelet[2681]: E1123 22:49:20.507154 2681 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"322b260a7bc8605edff2713091a30618b15531671dc70e6575de55f689444f59\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-gdnbl" Nov 23 22:49:20.507237 kubelet[2681]: E1123 22:49:20.507211 2681 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-gdnbl_calico-system(12a478e1-2715-41a3-b494-6659c8d5a00c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-gdnbl_calico-system(12a478e1-2715-41a3-b494-6659c8d5a00c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"322b260a7bc8605edff2713091a30618b15531671dc70e6575de55f689444f59\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-gdnbl" podUID="12a478e1-2715-41a3-b494-6659c8d5a00c" Nov 23 22:49:22.522698 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount286102755.mount: Deactivated successfully. 
Nov 23 22:49:22.776489 containerd[1537]: time="2025-11-23T22:49:22.775815847Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 22:49:22.777303 containerd[1537]: time="2025-11-23T22:49:22.776764526Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=150934562" Nov 23 22:49:22.778004 containerd[1537]: time="2025-11-23T22:49:22.777974684Z" level=info msg="ImageCreate event name:\"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 22:49:22.780217 containerd[1537]: time="2025-11-23T22:49:22.780152562Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 22:49:22.781144 containerd[1537]: time="2025-11-23T22:49:22.781011680Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"150934424\" in 3.295185847s" Nov 23 22:49:22.781144 containerd[1537]: time="2025-11-23T22:49:22.781052400Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\"" Nov 23 22:49:22.831025 containerd[1537]: time="2025-11-23T22:49:22.830982773Z" level=info msg="CreateContainer within sandbox \"3aa77adaf4cbe80af7028a339004e60a455e4d5f3edca3f8827defd8c1137069\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Nov 23 22:49:22.875549 containerd[1537]: time="2025-11-23T22:49:22.875222273Z" level=info msg="Container 
0428a200e98414d6552950082c36f6076d046186932f1a09de20d4ab6e1e02a0: CDI devices from CRI Config.CDIDevices: []" Nov 23 22:49:22.885601 containerd[1537]: time="2025-11-23T22:49:22.885548779Z" level=info msg="CreateContainer within sandbox \"3aa77adaf4cbe80af7028a339004e60a455e4d5f3edca3f8827defd8c1137069\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"0428a200e98414d6552950082c36f6076d046186932f1a09de20d4ab6e1e02a0\"" Nov 23 22:49:22.886352 containerd[1537]: time="2025-11-23T22:49:22.886319178Z" level=info msg="StartContainer for \"0428a200e98414d6552950082c36f6076d046186932f1a09de20d4ab6e1e02a0\"" Nov 23 22:49:22.888393 containerd[1537]: time="2025-11-23T22:49:22.888362215Z" level=info msg="connecting to shim 0428a200e98414d6552950082c36f6076d046186932f1a09de20d4ab6e1e02a0" address="unix:///run/containerd/s/9cd280fa9a31558abf94d6b596fe34c461863dc8bc152a1bff5e836205ac0ce0" protocol=ttrpc version=3 Nov 23 22:49:22.925743 systemd[1]: Started cri-containerd-0428a200e98414d6552950082c36f6076d046186932f1a09de20d4ab6e1e02a0.scope - libcontainer container 0428a200e98414d6552950082c36f6076d046186932f1a09de20d4ab6e1e02a0. Nov 23 22:49:23.018056 containerd[1537]: time="2025-11-23T22:49:23.017975922Z" level=info msg="StartContainer for \"0428a200e98414d6552950082c36f6076d046186932f1a09de20d4ab6e1e02a0\" returns successfully" Nov 23 22:49:23.157298 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Nov 23 22:49:23.157811 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Nov 23 22:49:23.409796 kubelet[2681]: I1123 22:49:23.409571 2681 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/39d3d1e2-118e-4d70-b8bc-8d53aea2707c-whisker-ca-bundle\") pod \"39d3d1e2-118e-4d70-b8bc-8d53aea2707c\" (UID: \"39d3d1e2-118e-4d70-b8bc-8d53aea2707c\") " Nov 23 22:49:23.410484 kubelet[2681]: I1123 22:49:23.409917 2681 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/39d3d1e2-118e-4d70-b8bc-8d53aea2707c-whisker-backend-key-pair\") pod \"39d3d1e2-118e-4d70-b8bc-8d53aea2707c\" (UID: \"39d3d1e2-118e-4d70-b8bc-8d53aea2707c\") " Nov 23 22:49:23.410484 kubelet[2681]: I1123 22:49:23.409942 2681 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rvc4d\" (UniqueName: \"kubernetes.io/projected/39d3d1e2-118e-4d70-b8bc-8d53aea2707c-kube-api-access-rvc4d\") pod \"39d3d1e2-118e-4d70-b8bc-8d53aea2707c\" (UID: \"39d3d1e2-118e-4d70-b8bc-8d53aea2707c\") " Nov 23 22:49:23.420006 kubelet[2681]: I1123 22:49:23.419954 2681 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/39d3d1e2-118e-4d70-b8bc-8d53aea2707c-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "39d3d1e2-118e-4d70-b8bc-8d53aea2707c" (UID: "39d3d1e2-118e-4d70-b8bc-8d53aea2707c"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Nov 23 22:49:23.420703 kubelet[2681]: I1123 22:49:23.420675 2681 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/39d3d1e2-118e-4d70-b8bc-8d53aea2707c-kube-api-access-rvc4d" (OuterVolumeSpecName: "kube-api-access-rvc4d") pod "39d3d1e2-118e-4d70-b8bc-8d53aea2707c" (UID: "39d3d1e2-118e-4d70-b8bc-8d53aea2707c"). InnerVolumeSpecName "kube-api-access-rvc4d". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 23 22:49:23.420857 kubelet[2681]: I1123 22:49:23.420822 2681 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39d3d1e2-118e-4d70-b8bc-8d53aea2707c-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "39d3d1e2-118e-4d70-b8bc-8d53aea2707c" (UID: "39d3d1e2-118e-4d70-b8bc-8d53aea2707c"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Nov 23 22:49:23.511706 kubelet[2681]: I1123 22:49:23.511668 2681 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/39d3d1e2-118e-4d70-b8bc-8d53aea2707c-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Nov 23 22:49:23.511706 kubelet[2681]: I1123 22:49:23.511700 2681 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-rvc4d\" (UniqueName: \"kubernetes.io/projected/39d3d1e2-118e-4d70-b8bc-8d53aea2707c-kube-api-access-rvc4d\") on node \"localhost\" DevicePath \"\"" Nov 23 22:49:23.511706 kubelet[2681]: I1123 22:49:23.511708 2681 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/39d3d1e2-118e-4d70-b8bc-8d53aea2707c-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Nov 23 22:49:23.516329 systemd[1]: Removed slice kubepods-besteffort-pod39d3d1e2_118e_4d70_b8bc_8d53aea2707c.slice - libcontainer container kubepods-besteffort-pod39d3d1e2_118e_4d70_b8bc_8d53aea2707c.slice. Nov 23 22:49:23.524114 systemd[1]: var-lib-kubelet-pods-39d3d1e2\x2d118e\x2d4d70\x2db8bc\x2d8d53aea2707c-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2drvc4d.mount: Deactivated successfully. Nov 23 22:49:23.524206 systemd[1]: var-lib-kubelet-pods-39d3d1e2\x2d118e\x2d4d70\x2db8bc\x2d8d53aea2707c-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Nov 23 22:49:23.589700 kubelet[2681]: I1123 22:49:23.589638 2681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-fdk2p" podStartSLOduration=1.521001321 podStartE2EDuration="11.586219243s" podCreationTimestamp="2025-11-23 22:49:12 +0000 UTC" firstStartedPulling="2025-11-23 22:49:12.716733557 +0000 UTC m=+22.455187132" lastFinishedPulling="2025-11-23 22:49:22.781951479 +0000 UTC m=+32.520405054" observedRunningTime="2025-11-23 22:49:23.538752023 +0000 UTC m=+33.277205598" watchObservedRunningTime="2025-11-23 22:49:23.586219243 +0000 UTC m=+33.324672818" Nov 23 22:49:23.632559 systemd[1]: Created slice kubepods-besteffort-pod47c39918_1558_4b3b_ba95_44f9fa641dd2.slice - libcontainer container kubepods-besteffort-pod47c39918_1558_4b3b_ba95_44f9fa641dd2.slice. Nov 23 22:49:23.713993 kubelet[2681]: I1123 22:49:23.713405 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/47c39918-1558-4b3b-ba95-44f9fa641dd2-whisker-ca-bundle\") pod \"whisker-c67458676-t5v8h\" (UID: \"47c39918-1558-4b3b-ba95-44f9fa641dd2\") " pod="calico-system/whisker-c67458676-t5v8h" Nov 23 22:49:23.713993 kubelet[2681]: I1123 22:49:23.713465 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/47c39918-1558-4b3b-ba95-44f9fa641dd2-whisker-backend-key-pair\") pod \"whisker-c67458676-t5v8h\" (UID: \"47c39918-1558-4b3b-ba95-44f9fa641dd2\") " pod="calico-system/whisker-c67458676-t5v8h" Nov 23 22:49:23.713993 kubelet[2681]: I1123 22:49:23.713498 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ctf6g\" (UniqueName: \"kubernetes.io/projected/47c39918-1558-4b3b-ba95-44f9fa641dd2-kube-api-access-ctf6g\") pod \"whisker-c67458676-t5v8h\" (UID: 
\"47c39918-1558-4b3b-ba95-44f9fa641dd2\") " pod="calico-system/whisker-c67458676-t5v8h" Nov 23 22:49:23.938838 containerd[1537]: time="2025-11-23T22:49:23.938786117Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-c67458676-t5v8h,Uid:47c39918-1558-4b3b-ba95-44f9fa641dd2,Namespace:calico-system,Attempt:0,}" Nov 23 22:49:24.123112 systemd-networkd[1438]: cali1b06b70de05: Link UP Nov 23 22:49:24.123867 systemd-networkd[1438]: cali1b06b70de05: Gained carrier Nov 23 22:49:24.139378 containerd[1537]: 2025-11-23 22:49:23.964 [INFO][3821] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 23 22:49:24.139378 containerd[1537]: 2025-11-23 22:49:24.004 [INFO][3821] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--c67458676--t5v8h-eth0 whisker-c67458676- calico-system 47c39918-1558-4b3b-ba95-44f9fa641dd2 866 0 2025-11-23 22:49:23 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:c67458676 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-c67458676-t5v8h eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali1b06b70de05 [] [] }} ContainerID="7c55aa2dc8592e7a00c915725e5a9689744b732cdbbe24ea0d2a0362e6a99d97" Namespace="calico-system" Pod="whisker-c67458676-t5v8h" WorkloadEndpoint="localhost-k8s-whisker--c67458676--t5v8h-" Nov 23 22:49:24.139378 containerd[1537]: 2025-11-23 22:49:24.004 [INFO][3821] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7c55aa2dc8592e7a00c915725e5a9689744b732cdbbe24ea0d2a0362e6a99d97" Namespace="calico-system" Pod="whisker-c67458676-t5v8h" WorkloadEndpoint="localhost-k8s-whisker--c67458676--t5v8h-eth0" Nov 23 22:49:24.139378 containerd[1537]: 2025-11-23 22:49:24.071 [INFO][3836] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="7c55aa2dc8592e7a00c915725e5a9689744b732cdbbe24ea0d2a0362e6a99d97" HandleID="k8s-pod-network.7c55aa2dc8592e7a00c915725e5a9689744b732cdbbe24ea0d2a0362e6a99d97" Workload="localhost-k8s-whisker--c67458676--t5v8h-eth0" Nov 23 22:49:24.139661 containerd[1537]: 2025-11-23 22:49:24.071 [INFO][3836] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="7c55aa2dc8592e7a00c915725e5a9689744b732cdbbe24ea0d2a0362e6a99d97" HandleID="k8s-pod-network.7c55aa2dc8592e7a00c915725e5a9689744b732cdbbe24ea0d2a0362e6a99d97" Workload="localhost-k8s-whisker--c67458676--t5v8h-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000117b40), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-c67458676-t5v8h", "timestamp":"2025-11-23 22:49:24.071079075 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 23 22:49:24.139661 containerd[1537]: 2025-11-23 22:49:24.071 [INFO][3836] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 23 22:49:24.139661 containerd[1537]: 2025-11-23 22:49:24.071 [INFO][3836] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 23 22:49:24.139661 containerd[1537]: 2025-11-23 22:49:24.071 [INFO][3836] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 23 22:49:24.139661 containerd[1537]: 2025-11-23 22:49:24.082 [INFO][3836] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7c55aa2dc8592e7a00c915725e5a9689744b732cdbbe24ea0d2a0362e6a99d97" host="localhost" Nov 23 22:49:24.139661 containerd[1537]: 2025-11-23 22:49:24.090 [INFO][3836] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 23 22:49:24.139661 containerd[1537]: 2025-11-23 22:49:24.095 [INFO][3836] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 23 22:49:24.139661 containerd[1537]: 2025-11-23 22:49:24.097 [INFO][3836] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 23 22:49:24.139661 containerd[1537]: 2025-11-23 22:49:24.099 [INFO][3836] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 23 22:49:24.139661 containerd[1537]: 2025-11-23 22:49:24.100 [INFO][3836] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.7c55aa2dc8592e7a00c915725e5a9689744b732cdbbe24ea0d2a0362e6a99d97" host="localhost" Nov 23 22:49:24.139855 containerd[1537]: 2025-11-23 22:49:24.101 [INFO][3836] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.7c55aa2dc8592e7a00c915725e5a9689744b732cdbbe24ea0d2a0362e6a99d97 Nov 23 22:49:24.139855 containerd[1537]: 2025-11-23 22:49:24.107 [INFO][3836] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.7c55aa2dc8592e7a00c915725e5a9689744b732cdbbe24ea0d2a0362e6a99d97" host="localhost" Nov 23 22:49:24.139855 containerd[1537]: 2025-11-23 22:49:24.112 [INFO][3836] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 
handle="k8s-pod-network.7c55aa2dc8592e7a00c915725e5a9689744b732cdbbe24ea0d2a0362e6a99d97" host="localhost" Nov 23 22:49:24.139855 containerd[1537]: 2025-11-23 22:49:24.112 [INFO][3836] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.7c55aa2dc8592e7a00c915725e5a9689744b732cdbbe24ea0d2a0362e6a99d97" host="localhost" Nov 23 22:49:24.139855 containerd[1537]: 2025-11-23 22:49:24.112 [INFO][3836] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 23 22:49:24.139855 containerd[1537]: 2025-11-23 22:49:24.112 [INFO][3836] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="7c55aa2dc8592e7a00c915725e5a9689744b732cdbbe24ea0d2a0362e6a99d97" HandleID="k8s-pod-network.7c55aa2dc8592e7a00c915725e5a9689744b732cdbbe24ea0d2a0362e6a99d97" Workload="localhost-k8s-whisker--c67458676--t5v8h-eth0" Nov 23 22:49:24.139959 containerd[1537]: 2025-11-23 22:49:24.115 [INFO][3821] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7c55aa2dc8592e7a00c915725e5a9689744b732cdbbe24ea0d2a0362e6a99d97" Namespace="calico-system" Pod="whisker-c67458676-t5v8h" WorkloadEndpoint="localhost-k8s-whisker--c67458676--t5v8h-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--c67458676--t5v8h-eth0", GenerateName:"whisker-c67458676-", Namespace:"calico-system", SelfLink:"", UID:"47c39918-1558-4b3b-ba95-44f9fa641dd2", ResourceVersion:"866", Generation:0, CreationTimestamp:time.Date(2025, time.November, 23, 22, 49, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"c67458676", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-c67458676-t5v8h", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali1b06b70de05", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 23 22:49:24.139959 containerd[1537]: 2025-11-23 22:49:24.115 [INFO][3821] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="7c55aa2dc8592e7a00c915725e5a9689744b732cdbbe24ea0d2a0362e6a99d97" Namespace="calico-system" Pod="whisker-c67458676-t5v8h" WorkloadEndpoint="localhost-k8s-whisker--c67458676--t5v8h-eth0" Nov 23 22:49:24.140025 containerd[1537]: 2025-11-23 22:49:24.115 [INFO][3821] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1b06b70de05 ContainerID="7c55aa2dc8592e7a00c915725e5a9689744b732cdbbe24ea0d2a0362e6a99d97" Namespace="calico-system" Pod="whisker-c67458676-t5v8h" WorkloadEndpoint="localhost-k8s-whisker--c67458676--t5v8h-eth0" Nov 23 22:49:24.140025 containerd[1537]: 2025-11-23 22:49:24.124 [INFO][3821] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7c55aa2dc8592e7a00c915725e5a9689744b732cdbbe24ea0d2a0362e6a99d97" Namespace="calico-system" Pod="whisker-c67458676-t5v8h" WorkloadEndpoint="localhost-k8s-whisker--c67458676--t5v8h-eth0" Nov 23 22:49:24.140077 containerd[1537]: 2025-11-23 22:49:24.124 [INFO][3821] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7c55aa2dc8592e7a00c915725e5a9689744b732cdbbe24ea0d2a0362e6a99d97" Namespace="calico-system" Pod="whisker-c67458676-t5v8h" 
WorkloadEndpoint="localhost-k8s-whisker--c67458676--t5v8h-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--c67458676--t5v8h-eth0", GenerateName:"whisker-c67458676-", Namespace:"calico-system", SelfLink:"", UID:"47c39918-1558-4b3b-ba95-44f9fa641dd2", ResourceVersion:"866", Generation:0, CreationTimestamp:time.Date(2025, time.November, 23, 22, 49, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"c67458676", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7c55aa2dc8592e7a00c915725e5a9689744b732cdbbe24ea0d2a0362e6a99d97", Pod:"whisker-c67458676-t5v8h", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali1b06b70de05", MAC:"56:81:8b:e6:93:c2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 23 22:49:24.140129 containerd[1537]: 2025-11-23 22:49:24.137 [INFO][3821] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7c55aa2dc8592e7a00c915725e5a9689744b732cdbbe24ea0d2a0362e6a99d97" Namespace="calico-system" Pod="whisker-c67458676-t5v8h" WorkloadEndpoint="localhost-k8s-whisker--c67458676--t5v8h-eth0" Nov 23 22:49:24.193769 containerd[1537]: time="2025-11-23T22:49:24.193699369Z" level=info msg="connecting to shim 
7c55aa2dc8592e7a00c915725e5a9689744b732cdbbe24ea0d2a0362e6a99d97" address="unix:///run/containerd/s/7cada22d19a1158e50c0aaf36af4466054d2be01da9dbb7de1e7a804320f8f8d" namespace=k8s.io protocol=ttrpc version=3 Nov 23 22:49:24.232717 systemd[1]: Started cri-containerd-7c55aa2dc8592e7a00c915725e5a9689744b732cdbbe24ea0d2a0362e6a99d97.scope - libcontainer container 7c55aa2dc8592e7a00c915725e5a9689744b732cdbbe24ea0d2a0362e6a99d97. Nov 23 22:49:24.246340 systemd-resolved[1353]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 23 22:49:24.270579 containerd[1537]: time="2025-11-23T22:49:24.270535838Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-c67458676-t5v8h,Uid:47c39918-1558-4b3b-ba95-44f9fa641dd2,Namespace:calico-system,Attempt:0,} returns sandbox id \"7c55aa2dc8592e7a00c915725e5a9689744b732cdbbe24ea0d2a0362e6a99d97\"" Nov 23 22:49:24.272543 containerd[1537]: time="2025-11-23T22:49:24.272455316Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 23 22:49:24.372485 kubelet[2681]: I1123 22:49:24.372428 2681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="39d3d1e2-118e-4d70-b8bc-8d53aea2707c" path="/var/lib/kubelet/pods/39d3d1e2-118e-4d70-b8bc-8d53aea2707c/volumes" Nov 23 22:49:24.479836 containerd[1537]: time="2025-11-23T22:49:24.479706630Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 22:49:24.481317 containerd[1537]: time="2025-11-23T22:49:24.481259908Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 23 22:49:24.481317 containerd[1537]: time="2025-11-23T22:49:24.481301348Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: 
active requests=0, bytes read=73" Nov 23 22:49:24.481534 kubelet[2681]: E1123 22:49:24.481476 2681 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 23 22:49:24.482542 kubelet[2681]: E1123 22:49:24.482477 2681 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 23 22:49:24.487004 kubelet[2681]: E1123 22:49:24.486943 2681 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-c67458676-t5v8h_calico-system(47c39918-1558-4b3b-ba95-44f9fa641dd2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 23 22:49:24.500974 containerd[1537]: time="2025-11-23T22:49:24.500933965Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 23 22:49:24.514461 kubelet[2681]: I1123 22:49:24.514409 2681 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 23 22:49:24.713000 containerd[1537]: time="2025-11-23T22:49:24.712942753Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 22:49:24.721638 containerd[1537]: time="2025-11-23T22:49:24.721564743Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to 
pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 23 22:49:24.721870 containerd[1537]: time="2025-11-23T22:49:24.721686223Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 23 22:49:24.722602 kubelet[2681]: E1123 22:49:24.722132 2681 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 23 22:49:24.722602 kubelet[2681]: E1123 22:49:24.722197 2681 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 23 22:49:24.722602 kubelet[2681]: E1123 22:49:24.722263 2681 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-c67458676-t5v8h_calico-system(47c39918-1558-4b3b-ba95-44f9fa641dd2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 23 22:49:24.722848 kubelet[2681]: E1123 22:49:24.722301 2681 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc 
error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-c67458676-t5v8h" podUID="47c39918-1558-4b3b-ba95-44f9fa641dd2" Nov 23 22:49:25.519338 kubelet[2681]: E1123 22:49:25.519181 2681 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-c67458676-t5v8h" podUID="47c39918-1558-4b3b-ba95-44f9fa641dd2" Nov 23 22:49:25.623671 systemd-networkd[1438]: cali1b06b70de05: Gained IPv6LL Nov 23 22:49:30.371765 containerd[1537]: time="2025-11-23T22:49:30.371664684Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-kube-controllers-f7b67686c-c7z45,Uid:927e7ec4-bea4-44ce-a267-dd04bc352b11,Namespace:calico-system,Attempt:0,}" Nov 23 22:49:30.375940 containerd[1537]: time="2025-11-23T22:49:30.375891761Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-msstk,Uid:2bae50f8-5c67-4c93-8d30-44e790923f61,Namespace:kube-system,Attempt:0,}" Nov 23 22:49:30.553231 systemd-networkd[1438]: cali2cc17e73141: Link UP Nov 23 22:49:30.553435 systemd-networkd[1438]: cali2cc17e73141: Gained carrier Nov 23 22:49:30.569591 containerd[1537]: 2025-11-23 22:49:30.440 [INFO][4133] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 23 22:49:30.569591 containerd[1537]: 2025-11-23 22:49:30.460 [INFO][4133] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--f7b67686c--c7z45-eth0 calico-kube-controllers-f7b67686c- calico-system 927e7ec4-bea4-44ce-a267-dd04bc352b11 808 0 2025-11-23 22:49:12 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:f7b67686c projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-f7b67686c-c7z45 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali2cc17e73141 [] [] }} ContainerID="98a23be4e468ca79942465bb14074bb5980622a287d206f1b9a7a58779dd7ee0" Namespace="calico-system" Pod="calico-kube-controllers-f7b67686c-c7z45" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--f7b67686c--c7z45-" Nov 23 22:49:30.569591 containerd[1537]: 2025-11-23 22:49:30.460 [INFO][4133] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="98a23be4e468ca79942465bb14074bb5980622a287d206f1b9a7a58779dd7ee0" Namespace="calico-system" Pod="calico-kube-controllers-f7b67686c-c7z45" 
WorkloadEndpoint="localhost-k8s-calico--kube--controllers--f7b67686c--c7z45-eth0" Nov 23 22:49:30.569591 containerd[1537]: 2025-11-23 22:49:30.491 [INFO][4154] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="98a23be4e468ca79942465bb14074bb5980622a287d206f1b9a7a58779dd7ee0" HandleID="k8s-pod-network.98a23be4e468ca79942465bb14074bb5980622a287d206f1b9a7a58779dd7ee0" Workload="localhost-k8s-calico--kube--controllers--f7b67686c--c7z45-eth0" Nov 23 22:49:30.569821 containerd[1537]: 2025-11-23 22:49:30.492 [INFO][4154] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="98a23be4e468ca79942465bb14074bb5980622a287d206f1b9a7a58779dd7ee0" HandleID="k8s-pod-network.98a23be4e468ca79942465bb14074bb5980622a287d206f1b9a7a58779dd7ee0" Workload="localhost-k8s-calico--kube--controllers--f7b67686c--c7z45-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40001376e0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-f7b67686c-c7z45", "timestamp":"2025-11-23 22:49:30.491912867 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 23 22:49:30.569821 containerd[1537]: 2025-11-23 22:49:30.492 [INFO][4154] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 23 22:49:30.569821 containerd[1537]: 2025-11-23 22:49:30.492 [INFO][4154] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 23 22:49:30.569821 containerd[1537]: 2025-11-23 22:49:30.492 [INFO][4154] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 23 22:49:30.569821 containerd[1537]: 2025-11-23 22:49:30.507 [INFO][4154] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.98a23be4e468ca79942465bb14074bb5980622a287d206f1b9a7a58779dd7ee0" host="localhost" Nov 23 22:49:30.569821 containerd[1537]: 2025-11-23 22:49:30.516 [INFO][4154] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 23 22:49:30.569821 containerd[1537]: 2025-11-23 22:49:30.524 [INFO][4154] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 23 22:49:30.569821 containerd[1537]: 2025-11-23 22:49:30.527 [INFO][4154] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 23 22:49:30.569821 containerd[1537]: 2025-11-23 22:49:30.531 [INFO][4154] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 23 22:49:30.569821 containerd[1537]: 2025-11-23 22:49:30.531 [INFO][4154] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.98a23be4e468ca79942465bb14074bb5980622a287d206f1b9a7a58779dd7ee0" host="localhost" Nov 23 22:49:30.570037 containerd[1537]: 2025-11-23 22:49:30.534 [INFO][4154] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.98a23be4e468ca79942465bb14074bb5980622a287d206f1b9a7a58779dd7ee0 Nov 23 22:49:30.570037 containerd[1537]: 2025-11-23 22:49:30.538 [INFO][4154] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.98a23be4e468ca79942465bb14074bb5980622a287d206f1b9a7a58779dd7ee0" host="localhost" Nov 23 22:49:30.570037 containerd[1537]: 2025-11-23 22:49:30.545 [INFO][4154] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 
handle="k8s-pod-network.98a23be4e468ca79942465bb14074bb5980622a287d206f1b9a7a58779dd7ee0" host="localhost" Nov 23 22:49:30.570037 containerd[1537]: 2025-11-23 22:49:30.545 [INFO][4154] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.98a23be4e468ca79942465bb14074bb5980622a287d206f1b9a7a58779dd7ee0" host="localhost" Nov 23 22:49:30.570037 containerd[1537]: 2025-11-23 22:49:30.545 [INFO][4154] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 23 22:49:30.570037 containerd[1537]: 2025-11-23 22:49:30.545 [INFO][4154] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="98a23be4e468ca79942465bb14074bb5980622a287d206f1b9a7a58779dd7ee0" HandleID="k8s-pod-network.98a23be4e468ca79942465bb14074bb5980622a287d206f1b9a7a58779dd7ee0" Workload="localhost-k8s-calico--kube--controllers--f7b67686c--c7z45-eth0" Nov 23 22:49:30.570148 containerd[1537]: 2025-11-23 22:49:30.548 [INFO][4133] cni-plugin/k8s.go 418: Populated endpoint ContainerID="98a23be4e468ca79942465bb14074bb5980622a287d206f1b9a7a58779dd7ee0" Namespace="calico-system" Pod="calico-kube-controllers-f7b67686c-c7z45" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--f7b67686c--c7z45-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--f7b67686c--c7z45-eth0", GenerateName:"calico-kube-controllers-f7b67686c-", Namespace:"calico-system", SelfLink:"", UID:"927e7ec4-bea4-44ce-a267-dd04bc352b11", ResourceVersion:"808", Generation:0, CreationTimestamp:time.Date(2025, time.November, 23, 22, 49, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"f7b67686c", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-f7b67686c-c7z45", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali2cc17e73141", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 23 22:49:30.570192 containerd[1537]: 2025-11-23 22:49:30.548 [INFO][4133] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="98a23be4e468ca79942465bb14074bb5980622a287d206f1b9a7a58779dd7ee0" Namespace="calico-system" Pod="calico-kube-controllers-f7b67686c-c7z45" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--f7b67686c--c7z45-eth0" Nov 23 22:49:30.570192 containerd[1537]: 2025-11-23 22:49:30.548 [INFO][4133] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2cc17e73141 ContainerID="98a23be4e468ca79942465bb14074bb5980622a287d206f1b9a7a58779dd7ee0" Namespace="calico-system" Pod="calico-kube-controllers-f7b67686c-c7z45" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--f7b67686c--c7z45-eth0" Nov 23 22:49:30.570192 containerd[1537]: 2025-11-23 22:49:30.553 [INFO][4133] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="98a23be4e468ca79942465bb14074bb5980622a287d206f1b9a7a58779dd7ee0" Namespace="calico-system" Pod="calico-kube-controllers-f7b67686c-c7z45" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--f7b67686c--c7z45-eth0" Nov 23 22:49:30.570250 containerd[1537]: 2025-11-23 
22:49:30.553 [INFO][4133] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="98a23be4e468ca79942465bb14074bb5980622a287d206f1b9a7a58779dd7ee0" Namespace="calico-system" Pod="calico-kube-controllers-f7b67686c-c7z45" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--f7b67686c--c7z45-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--f7b67686c--c7z45-eth0", GenerateName:"calico-kube-controllers-f7b67686c-", Namespace:"calico-system", SelfLink:"", UID:"927e7ec4-bea4-44ce-a267-dd04bc352b11", ResourceVersion:"808", Generation:0, CreationTimestamp:time.Date(2025, time.November, 23, 22, 49, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"f7b67686c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"98a23be4e468ca79942465bb14074bb5980622a287d206f1b9a7a58779dd7ee0", Pod:"calico-kube-controllers-f7b67686c-c7z45", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali2cc17e73141", MAC:"9e:79:8f:a1:37:f2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 23 22:49:30.570297 containerd[1537]: 2025-11-23 
22:49:30.567 [INFO][4133] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="98a23be4e468ca79942465bb14074bb5980622a287d206f1b9a7a58779dd7ee0" Namespace="calico-system" Pod="calico-kube-controllers-f7b67686c-c7z45" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--f7b67686c--c7z45-eth0" Nov 23 22:49:30.596548 containerd[1537]: time="2025-11-23T22:49:30.595880823Z" level=info msg="connecting to shim 98a23be4e468ca79942465bb14074bb5980622a287d206f1b9a7a58779dd7ee0" address="unix:///run/containerd/s/5d7d09c5158acbf609b0ad3944680e2e3086d5c8885438e16ae4b1043a84b061" namespace=k8s.io protocol=ttrpc version=3 Nov 23 22:49:30.627006 systemd[1]: Started cri-containerd-98a23be4e468ca79942465bb14074bb5980622a287d206f1b9a7a58779dd7ee0.scope - libcontainer container 98a23be4e468ca79942465bb14074bb5980622a287d206f1b9a7a58779dd7ee0. Nov 23 22:49:30.658732 systemd-resolved[1353]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 23 22:49:30.661684 systemd-networkd[1438]: calic0420bc7b07: Link UP Nov 23 22:49:30.661893 systemd-networkd[1438]: calic0420bc7b07: Gained carrier Nov 23 22:49:30.680821 containerd[1537]: 2025-11-23 22:49:30.456 [INFO][4139] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 23 22:49:30.680821 containerd[1537]: 2025-11-23 22:49:30.478 [INFO][4139] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--66bc5c9577--msstk-eth0 coredns-66bc5c9577- kube-system 2bae50f8-5c67-4c93-8d30-44e790923f61 806 0 2025-11-23 22:48:56 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-66bc5c9577-msstk eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calic0420bc7b07 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } 
{liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="234aae9bd14f083e3bd6c82345a726fbb7a0f3ae15ca948bb3f91c9301c26782" Namespace="kube-system" Pod="coredns-66bc5c9577-msstk" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--msstk-" Nov 23 22:49:30.680821 containerd[1537]: 2025-11-23 22:49:30.478 [INFO][4139] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="234aae9bd14f083e3bd6c82345a726fbb7a0f3ae15ca948bb3f91c9301c26782" Namespace="kube-system" Pod="coredns-66bc5c9577-msstk" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--msstk-eth0" Nov 23 22:49:30.680821 containerd[1537]: 2025-11-23 22:49:30.521 [INFO][4162] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="234aae9bd14f083e3bd6c82345a726fbb7a0f3ae15ca948bb3f91c9301c26782" HandleID="k8s-pod-network.234aae9bd14f083e3bd6c82345a726fbb7a0f3ae15ca948bb3f91c9301c26782" Workload="localhost-k8s-coredns--66bc5c9577--msstk-eth0" Nov 23 22:49:30.681151 containerd[1537]: 2025-11-23 22:49:30.521 [INFO][4162] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="234aae9bd14f083e3bd6c82345a726fbb7a0f3ae15ca948bb3f91c9301c26782" HandleID="k8s-pod-network.234aae9bd14f083e3bd6c82345a726fbb7a0f3ae15ca948bb3f91c9301c26782" Workload="localhost-k8s-coredns--66bc5c9577--msstk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024afd0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-66bc5c9577-msstk", "timestamp":"2025-11-23 22:49:30.521493083 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 23 22:49:30.681151 containerd[1537]: 2025-11-23 22:49:30.521 [INFO][4162] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Nov 23 22:49:30.681151 containerd[1537]: 2025-11-23 22:49:30.545 [INFO][4162] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 23 22:49:30.681151 containerd[1537]: 2025-11-23 22:49:30.545 [INFO][4162] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 23 22:49:30.681151 containerd[1537]: 2025-11-23 22:49:30.607 [INFO][4162] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.234aae9bd14f083e3bd6c82345a726fbb7a0f3ae15ca948bb3f91c9301c26782" host="localhost" Nov 23 22:49:30.681151 containerd[1537]: 2025-11-23 22:49:30.617 [INFO][4162] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 23 22:49:30.681151 containerd[1537]: 2025-11-23 22:49:30.624 [INFO][4162] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 23 22:49:30.681151 containerd[1537]: 2025-11-23 22:49:30.626 [INFO][4162] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 23 22:49:30.681151 containerd[1537]: 2025-11-23 22:49:30.636 [INFO][4162] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 23 22:49:30.681151 containerd[1537]: 2025-11-23 22:49:30.636 [INFO][4162] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.234aae9bd14f083e3bd6c82345a726fbb7a0f3ae15ca948bb3f91c9301c26782" host="localhost" Nov 23 22:49:30.681404 containerd[1537]: 2025-11-23 22:49:30.639 [INFO][4162] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.234aae9bd14f083e3bd6c82345a726fbb7a0f3ae15ca948bb3f91c9301c26782 Nov 23 22:49:30.681404 containerd[1537]: 2025-11-23 22:49:30.644 [INFO][4162] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.234aae9bd14f083e3bd6c82345a726fbb7a0f3ae15ca948bb3f91c9301c26782" host="localhost" Nov 23 22:49:30.681404 containerd[1537]: 2025-11-23 22:49:30.654 [INFO][4162] 
ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.234aae9bd14f083e3bd6c82345a726fbb7a0f3ae15ca948bb3f91c9301c26782" host="localhost" Nov 23 22:49:30.681404 containerd[1537]: 2025-11-23 22:49:30.654 [INFO][4162] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.234aae9bd14f083e3bd6c82345a726fbb7a0f3ae15ca948bb3f91c9301c26782" host="localhost" Nov 23 22:49:30.681404 containerd[1537]: 2025-11-23 22:49:30.654 [INFO][4162] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 23 22:49:30.681404 containerd[1537]: 2025-11-23 22:49:30.654 [INFO][4162] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="234aae9bd14f083e3bd6c82345a726fbb7a0f3ae15ca948bb3f91c9301c26782" HandleID="k8s-pod-network.234aae9bd14f083e3bd6c82345a726fbb7a0f3ae15ca948bb3f91c9301c26782" Workload="localhost-k8s-coredns--66bc5c9577--msstk-eth0" Nov 23 22:49:30.681611 containerd[1537]: 2025-11-23 22:49:30.659 [INFO][4139] cni-plugin/k8s.go 418: Populated endpoint ContainerID="234aae9bd14f083e3bd6c82345a726fbb7a0f3ae15ca948bb3f91c9301c26782" Namespace="kube-system" Pod="coredns-66bc5c9577-msstk" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--msstk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--msstk-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"2bae50f8-5c67-4c93-8d30-44e790923f61", ResourceVersion:"806", Generation:0, CreationTimestamp:time.Date(2025, time.November, 23, 22, 48, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-66bc5c9577-msstk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic0420bc7b07", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 23 22:49:30.681611 containerd[1537]: 2025-11-23 22:49:30.659 [INFO][4139] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="234aae9bd14f083e3bd6c82345a726fbb7a0f3ae15ca948bb3f91c9301c26782" Namespace="kube-system" Pod="coredns-66bc5c9577-msstk" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--msstk-eth0" Nov 23 22:49:30.681611 containerd[1537]: 2025-11-23 22:49:30.659 [INFO][4139] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic0420bc7b07 ContainerID="234aae9bd14f083e3bd6c82345a726fbb7a0f3ae15ca948bb3f91c9301c26782" 
Namespace="kube-system" Pod="coredns-66bc5c9577-msstk" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--msstk-eth0" Nov 23 22:49:30.681611 containerd[1537]: 2025-11-23 22:49:30.662 [INFO][4139] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="234aae9bd14f083e3bd6c82345a726fbb7a0f3ae15ca948bb3f91c9301c26782" Namespace="kube-system" Pod="coredns-66bc5c9577-msstk" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--msstk-eth0" Nov 23 22:49:30.681611 containerd[1537]: 2025-11-23 22:49:30.662 [INFO][4139] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="234aae9bd14f083e3bd6c82345a726fbb7a0f3ae15ca948bb3f91c9301c26782" Namespace="kube-system" Pod="coredns-66bc5c9577-msstk" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--msstk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--msstk-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"2bae50f8-5c67-4c93-8d30-44e790923f61", ResourceVersion:"806", Generation:0, CreationTimestamp:time.Date(2025, time.November, 23, 22, 48, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"234aae9bd14f083e3bd6c82345a726fbb7a0f3ae15ca948bb3f91c9301c26782", Pod:"coredns-66bc5c9577-msstk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic0420bc7b07", MAC:"b2:71:b1:0f:08:5a", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 23 22:49:30.681611 containerd[1537]: 2025-11-23 22:49:30.675 [INFO][4139] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="234aae9bd14f083e3bd6c82345a726fbb7a0f3ae15ca948bb3f91c9301c26782" Namespace="kube-system" Pod="coredns-66bc5c9577-msstk" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--msstk-eth0" Nov 23 22:49:30.691504 containerd[1537]: time="2025-11-23T22:49:30.691451146Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-f7b67686c-c7z45,Uid:927e7ec4-bea4-44ce-a267-dd04bc352b11,Namespace:calico-system,Attempt:0,} returns sandbox id \"98a23be4e468ca79942465bb14074bb5980622a287d206f1b9a7a58779dd7ee0\"" Nov 23 22:49:30.695639 containerd[1537]: time="2025-11-23T22:49:30.695217343Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 23 22:49:30.832989 containerd[1537]: time="2025-11-23T22:49:30.832917112Z" level=info msg="connecting to shim 234aae9bd14f083e3bd6c82345a726fbb7a0f3ae15ca948bb3f91c9301c26782" 
address="unix:///run/containerd/s/8bcef07436f6ecfa82df49c5386bf775f0fa4289df70adc122b9d12404468c2c" namespace=k8s.io protocol=ttrpc version=3 Nov 23 22:49:30.872763 systemd[1]: Started cri-containerd-234aae9bd14f083e3bd6c82345a726fbb7a0f3ae15ca948bb3f91c9301c26782.scope - libcontainer container 234aae9bd14f083e3bd6c82345a726fbb7a0f3ae15ca948bb3f91c9301c26782. Nov 23 22:49:30.887650 systemd-resolved[1353]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 23 22:49:30.904043 containerd[1537]: time="2025-11-23T22:49:30.903853935Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 22:49:30.905181 containerd[1537]: time="2025-11-23T22:49:30.905135134Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 23 22:49:30.905374 containerd[1537]: time="2025-11-23T22:49:30.905184374Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 23 22:49:30.906186 kubelet[2681]: E1123 22:49:30.906146 2681 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 23 22:49:30.906719 kubelet[2681]: E1123 22:49:30.906539 2681 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 23 22:49:30.906719 kubelet[2681]: E1123 22:49:30.906639 2681 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-f7b67686c-c7z45_calico-system(927e7ec4-bea4-44ce-a267-dd04bc352b11): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 23 22:49:30.906719 kubelet[2681]: E1123 22:49:30.906678 2681 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-f7b67686c-c7z45" podUID="927e7ec4-bea4-44ce-a267-dd04bc352b11" Nov 23 22:49:30.919000 containerd[1537]: time="2025-11-23T22:49:30.918960443Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-msstk,Uid:2bae50f8-5c67-4c93-8d30-44e790923f61,Namespace:kube-system,Attempt:0,} returns sandbox id \"234aae9bd14f083e3bd6c82345a726fbb7a0f3ae15ca948bb3f91c9301c26782\"" Nov 23 22:49:30.925402 containerd[1537]: time="2025-11-23T22:49:30.925363518Z" level=info msg="CreateContainer within sandbox \"234aae9bd14f083e3bd6c82345a726fbb7a0f3ae15ca948bb3f91c9301c26782\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 23 22:49:30.939197 containerd[1537]: time="2025-11-23T22:49:30.939132667Z" level=info msg="Container 
abf673f2c895baa646d8b85c8f0a7cfaac5c412b269bbb055d2b48dcb5a26fb7: CDI devices from CRI Config.CDIDevices: []" Nov 23 22:49:31.199686 containerd[1537]: time="2025-11-23T22:49:31.199272267Z" level=info msg="CreateContainer within sandbox \"234aae9bd14f083e3bd6c82345a726fbb7a0f3ae15ca948bb3f91c9301c26782\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"abf673f2c895baa646d8b85c8f0a7cfaac5c412b269bbb055d2b48dcb5a26fb7\"" Nov 23 22:49:31.200298 containerd[1537]: time="2025-11-23T22:49:31.200236106Z" level=info msg="StartContainer for \"abf673f2c895baa646d8b85c8f0a7cfaac5c412b269bbb055d2b48dcb5a26fb7\"" Nov 23 22:49:31.201642 containerd[1537]: time="2025-11-23T22:49:31.201600145Z" level=info msg="connecting to shim abf673f2c895baa646d8b85c8f0a7cfaac5c412b269bbb055d2b48dcb5a26fb7" address="unix:///run/containerd/s/8bcef07436f6ecfa82df49c5386bf775f0fa4289df70adc122b9d12404468c2c" protocol=ttrpc version=3 Nov 23 22:49:31.235700 systemd[1]: Started cri-containerd-abf673f2c895baa646d8b85c8f0a7cfaac5c412b269bbb055d2b48dcb5a26fb7.scope - libcontainer container abf673f2c895baa646d8b85c8f0a7cfaac5c412b269bbb055d2b48dcb5a26fb7. Nov 23 22:49:31.313587 containerd[1537]: time="2025-11-23T22:49:31.313477181Z" level=info msg="StartContainer for \"abf673f2c895baa646d8b85c8f0a7cfaac5c412b269bbb055d2b48dcb5a26fb7\" returns successfully" Nov 23 22:49:31.319568 systemd[1]: Started sshd@7-10.0.0.9:22-10.0.0.1:48120.service - OpenSSH per-connection server daemon (10.0.0.1:48120). 
Nov 23 22:49:31.379595 containerd[1537]: time="2025-11-23T22:49:31.379068291Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5f859fc9fb-gcdhb,Uid:02c22706-9c4d-4a45-b31f-a84083423193,Namespace:calico-apiserver,Attempt:0,}" Nov 23 22:49:31.397034 containerd[1537]: time="2025-11-23T22:49:31.396987878Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5f859fc9fb-rjhsm,Uid:9b3997f9-79ca-4cb6-accf-cb8679793167,Namespace:calico-apiserver,Attempt:0,}" Nov 23 22:49:31.415714 sshd[4332]: Accepted publickey for core from 10.0.0.1 port 48120 ssh2: RSA SHA256:QxoOoLvgP9E+zipnRJ4K0FLuuw/ehjwLMaCJR2ynZa8 Nov 23 22:49:31.417310 sshd-session[4332]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 22:49:31.430824 systemd-logind[1512]: New session 8 of user core. Nov 23 22:49:31.440795 systemd[1]: Started session-8.scope - Session 8 of User core. Nov 23 22:49:31.537956 kubelet[2681]: E1123 22:49:31.537314 2681 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-f7b67686c-c7z45" podUID="927e7ec4-bea4-44ce-a267-dd04bc352b11" Nov 23 22:49:31.577073 systemd-networkd[1438]: cali6a48bf3f6f2: Link UP Nov 23 22:49:31.579744 systemd-networkd[1438]: cali6a48bf3f6f2: Gained carrier Nov 23 22:49:31.610630 kubelet[2681]: I1123 22:49:31.610569 2681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-msstk" podStartSLOduration=35.610549677 
podStartE2EDuration="35.610549677s" podCreationTimestamp="2025-11-23 22:48:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 22:49:31.601615123 +0000 UTC m=+41.340068738" watchObservedRunningTime="2025-11-23 22:49:31.610549677 +0000 UTC m=+41.349003212" Nov 23 22:49:31.616114 containerd[1537]: 2025-11-23 22:49:31.431 [INFO][4341] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 23 22:49:31.616114 containerd[1537]: 2025-11-23 22:49:31.452 [INFO][4341] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--5f859fc9fb--gcdhb-eth0 calico-apiserver-5f859fc9fb- calico-apiserver 02c22706-9c4d-4a45-b31f-a84083423193 804 0 2025-11-23 22:49:05 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5f859fc9fb projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-5f859fc9fb-gcdhb eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali6a48bf3f6f2 [] [] }} ContainerID="2361e06f59e15dea59f77aeda361baf2716bd0a6346ccae7343e66fe99469e54" Namespace="calico-apiserver" Pod="calico-apiserver-5f859fc9fb-gcdhb" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f859fc9fb--gcdhb-" Nov 23 22:49:31.616114 containerd[1537]: 2025-11-23 22:49:31.452 [INFO][4341] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2361e06f59e15dea59f77aeda361baf2716bd0a6346ccae7343e66fe99469e54" Namespace="calico-apiserver" Pod="calico-apiserver-5f859fc9fb-gcdhb" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f859fc9fb--gcdhb-eth0" Nov 23 22:49:31.616114 containerd[1537]: 2025-11-23 22:49:31.494 [INFO][4370] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 
IPv6=0 ContainerID="2361e06f59e15dea59f77aeda361baf2716bd0a6346ccae7343e66fe99469e54" HandleID="k8s-pod-network.2361e06f59e15dea59f77aeda361baf2716bd0a6346ccae7343e66fe99469e54" Workload="localhost-k8s-calico--apiserver--5f859fc9fb--gcdhb-eth0" Nov 23 22:49:31.616114 containerd[1537]: 2025-11-23 22:49:31.494 [INFO][4370] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="2361e06f59e15dea59f77aeda361baf2716bd0a6346ccae7343e66fe99469e54" HandleID="k8s-pod-network.2361e06f59e15dea59f77aeda361baf2716bd0a6346ccae7343e66fe99469e54" Workload="localhost-k8s-calico--apiserver--5f859fc9fb--gcdhb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002c3580), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-5f859fc9fb-gcdhb", "timestamp":"2025-11-23 22:49:31.494290004 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 23 22:49:31.616114 containerd[1537]: 2025-11-23 22:49:31.494 [INFO][4370] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 23 22:49:31.616114 containerd[1537]: 2025-11-23 22:49:31.494 [INFO][4370] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 23 22:49:31.616114 containerd[1537]: 2025-11-23 22:49:31.494 [INFO][4370] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 23 22:49:31.616114 containerd[1537]: 2025-11-23 22:49:31.505 [INFO][4370] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2361e06f59e15dea59f77aeda361baf2716bd0a6346ccae7343e66fe99469e54" host="localhost" Nov 23 22:49:31.616114 containerd[1537]: 2025-11-23 22:49:31.511 [INFO][4370] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 23 22:49:31.616114 containerd[1537]: 2025-11-23 22:49:31.518 [INFO][4370] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 23 22:49:31.616114 containerd[1537]: 2025-11-23 22:49:31.523 [INFO][4370] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 23 22:49:31.616114 containerd[1537]: 2025-11-23 22:49:31.528 [INFO][4370] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 23 22:49:31.616114 containerd[1537]: 2025-11-23 22:49:31.529 [INFO][4370] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.2361e06f59e15dea59f77aeda361baf2716bd0a6346ccae7343e66fe99469e54" host="localhost" Nov 23 22:49:31.616114 containerd[1537]: 2025-11-23 22:49:31.533 [INFO][4370] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.2361e06f59e15dea59f77aeda361baf2716bd0a6346ccae7343e66fe99469e54 Nov 23 22:49:31.616114 containerd[1537]: 2025-11-23 22:49:31.544 [INFO][4370] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.2361e06f59e15dea59f77aeda361baf2716bd0a6346ccae7343e66fe99469e54" host="localhost" Nov 23 22:49:31.616114 containerd[1537]: 2025-11-23 22:49:31.566 [INFO][4370] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 
handle="k8s-pod-network.2361e06f59e15dea59f77aeda361baf2716bd0a6346ccae7343e66fe99469e54" host="localhost" Nov 23 22:49:31.616114 containerd[1537]: 2025-11-23 22:49:31.567 [INFO][4370] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.2361e06f59e15dea59f77aeda361baf2716bd0a6346ccae7343e66fe99469e54" host="localhost" Nov 23 22:49:31.616114 containerd[1537]: 2025-11-23 22:49:31.567 [INFO][4370] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 23 22:49:31.616114 containerd[1537]: 2025-11-23 22:49:31.567 [INFO][4370] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="2361e06f59e15dea59f77aeda361baf2716bd0a6346ccae7343e66fe99469e54" HandleID="k8s-pod-network.2361e06f59e15dea59f77aeda361baf2716bd0a6346ccae7343e66fe99469e54" Workload="localhost-k8s-calico--apiserver--5f859fc9fb--gcdhb-eth0" Nov 23 22:49:31.616695 containerd[1537]: 2025-11-23 22:49:31.571 [INFO][4341] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2361e06f59e15dea59f77aeda361baf2716bd0a6346ccae7343e66fe99469e54" Namespace="calico-apiserver" Pod="calico-apiserver-5f859fc9fb-gcdhb" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f859fc9fb--gcdhb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5f859fc9fb--gcdhb-eth0", GenerateName:"calico-apiserver-5f859fc9fb-", Namespace:"calico-apiserver", SelfLink:"", UID:"02c22706-9c4d-4a45-b31f-a84083423193", ResourceVersion:"804", Generation:0, CreationTimestamp:time.Date(2025, time.November, 23, 22, 49, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5f859fc9fb", "projectcalico.org/namespace":"calico-apiserver", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-5f859fc9fb-gcdhb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6a48bf3f6f2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 23 22:49:31.616695 containerd[1537]: 2025-11-23 22:49:31.571 [INFO][4341] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="2361e06f59e15dea59f77aeda361baf2716bd0a6346ccae7343e66fe99469e54" Namespace="calico-apiserver" Pod="calico-apiserver-5f859fc9fb-gcdhb" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f859fc9fb--gcdhb-eth0" Nov 23 22:49:31.616695 containerd[1537]: 2025-11-23 22:49:31.571 [INFO][4341] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6a48bf3f6f2 ContainerID="2361e06f59e15dea59f77aeda361baf2716bd0a6346ccae7343e66fe99469e54" Namespace="calico-apiserver" Pod="calico-apiserver-5f859fc9fb-gcdhb" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f859fc9fb--gcdhb-eth0" Nov 23 22:49:31.616695 containerd[1537]: 2025-11-23 22:49:31.581 [INFO][4341] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2361e06f59e15dea59f77aeda361baf2716bd0a6346ccae7343e66fe99469e54" Namespace="calico-apiserver" Pod="calico-apiserver-5f859fc9fb-gcdhb" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f859fc9fb--gcdhb-eth0" Nov 23 22:49:31.616695 containerd[1537]: 2025-11-23 22:49:31.581 [INFO][4341] cni-plugin/k8s.go 446: Added 
Mac, interface name, and active container ID to endpoint ContainerID="2361e06f59e15dea59f77aeda361baf2716bd0a6346ccae7343e66fe99469e54" Namespace="calico-apiserver" Pod="calico-apiserver-5f859fc9fb-gcdhb" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f859fc9fb--gcdhb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5f859fc9fb--gcdhb-eth0", GenerateName:"calico-apiserver-5f859fc9fb-", Namespace:"calico-apiserver", SelfLink:"", UID:"02c22706-9c4d-4a45-b31f-a84083423193", ResourceVersion:"804", Generation:0, CreationTimestamp:time.Date(2025, time.November, 23, 22, 49, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5f859fc9fb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2361e06f59e15dea59f77aeda361baf2716bd0a6346ccae7343e66fe99469e54", Pod:"calico-apiserver-5f859fc9fb-gcdhb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6a48bf3f6f2", MAC:"b6:5b:44:83:6d:be", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 23 22:49:31.616695 containerd[1537]: 2025-11-23 22:49:31.611 [INFO][4341] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="2361e06f59e15dea59f77aeda361baf2716bd0a6346ccae7343e66fe99469e54" Namespace="calico-apiserver" Pod="calico-apiserver-5f859fc9fb-gcdhb" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f859fc9fb--gcdhb-eth0" Nov 23 22:49:31.663131 sshd[4363]: Connection closed by 10.0.0.1 port 48120 Nov 23 22:49:31.663679 sshd-session[4332]: pam_unix(sshd:session): session closed for user core Nov 23 22:49:31.672476 containerd[1537]: time="2025-11-23T22:49:31.672338670Z" level=info msg="connecting to shim 2361e06f59e15dea59f77aeda361baf2716bd0a6346ccae7343e66fe99469e54" address="unix:///run/containerd/s/f59dd8e72be729b9d434c93a8e47b081be4f6ceca89c6b4ccfccd65801bcf6d7" namespace=k8s.io protocol=ttrpc version=3 Nov 23 22:49:31.672889 systemd-networkd[1438]: cali25a8b9ac5d1: Link UP Nov 23 22:49:31.675006 systemd[1]: sshd@7-10.0.0.9:22-10.0.0.1:48120.service: Deactivated successfully. Nov 23 22:49:31.680170 systemd-networkd[1438]: cali25a8b9ac5d1: Gained carrier Nov 23 22:49:31.688636 systemd[1]: session-8.scope: Deactivated successfully. Nov 23 22:49:31.694462 systemd-logind[1512]: Session 8 logged out. Waiting for processes to exit. Nov 23 22:49:31.698586 systemd-logind[1512]: Removed session 8. 
Nov 23 22:49:31.708008 containerd[1537]: 2025-11-23 22:49:31.482 [INFO][4356] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 23 22:49:31.708008 containerd[1537]: 2025-11-23 22:49:31.501 [INFO][4356] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--5f859fc9fb--rjhsm-eth0 calico-apiserver-5f859fc9fb- calico-apiserver 9b3997f9-79ca-4cb6-accf-cb8679793167 805 0 2025-11-23 22:49:05 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5f859fc9fb projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-5f859fc9fb-rjhsm eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali25a8b9ac5d1 [] [] }} ContainerID="b155b537ff028b8f44782a2272797762f34675386dac4f569967a4312f3de810" Namespace="calico-apiserver" Pod="calico-apiserver-5f859fc9fb-rjhsm" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f859fc9fb--rjhsm-" Nov 23 22:49:31.708008 containerd[1537]: 2025-11-23 22:49:31.502 [INFO][4356] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b155b537ff028b8f44782a2272797762f34675386dac4f569967a4312f3de810" Namespace="calico-apiserver" Pod="calico-apiserver-5f859fc9fb-rjhsm" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f859fc9fb--rjhsm-eth0" Nov 23 22:49:31.708008 containerd[1537]: 2025-11-23 22:49:31.538 [INFO][4389] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b155b537ff028b8f44782a2272797762f34675386dac4f569967a4312f3de810" HandleID="k8s-pod-network.b155b537ff028b8f44782a2272797762f34675386dac4f569967a4312f3de810" Workload="localhost-k8s-calico--apiserver--5f859fc9fb--rjhsm-eth0" Nov 23 22:49:31.708008 containerd[1537]: 2025-11-23 22:49:31.539 [INFO][4389] ipam/ipam_plugin.go 275: Auto 
assigning IP ContainerID="b155b537ff028b8f44782a2272797762f34675386dac4f569967a4312f3de810" HandleID="k8s-pod-network.b155b537ff028b8f44782a2272797762f34675386dac4f569967a4312f3de810" Workload="localhost-k8s-calico--apiserver--5f859fc9fb--rjhsm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000137640), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-5f859fc9fb-rjhsm", "timestamp":"2025-11-23 22:49:31.538894371 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 23 22:49:31.708008 containerd[1537]: 2025-11-23 22:49:31.539 [INFO][4389] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 23 22:49:31.708008 containerd[1537]: 2025-11-23 22:49:31.567 [INFO][4389] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 23 22:49:31.708008 containerd[1537]: 2025-11-23 22:49:31.567 [INFO][4389] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 23 22:49:31.708008 containerd[1537]: 2025-11-23 22:49:31.605 [INFO][4389] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b155b537ff028b8f44782a2272797762f34675386dac4f569967a4312f3de810" host="localhost" Nov 23 22:49:31.708008 containerd[1537]: 2025-11-23 22:49:31.621 [INFO][4389] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 23 22:49:31.708008 containerd[1537]: 2025-11-23 22:49:31.634 [INFO][4389] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 23 22:49:31.708008 containerd[1537]: 2025-11-23 22:49:31.638 [INFO][4389] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 23 22:49:31.708008 containerd[1537]: 2025-11-23 22:49:31.642 [INFO][4389] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 23 22:49:31.708008 containerd[1537]: 2025-11-23 22:49:31.643 [INFO][4389] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.b155b537ff028b8f44782a2272797762f34675386dac4f569967a4312f3de810" host="localhost" Nov 23 22:49:31.708008 containerd[1537]: 2025-11-23 22:49:31.645 [INFO][4389] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.b155b537ff028b8f44782a2272797762f34675386dac4f569967a4312f3de810 Nov 23 22:49:31.708008 containerd[1537]: 2025-11-23 22:49:31.652 [INFO][4389] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.b155b537ff028b8f44782a2272797762f34675386dac4f569967a4312f3de810" host="localhost" Nov 23 22:49:31.708008 containerd[1537]: 2025-11-23 22:49:31.665 [INFO][4389] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 
handle="k8s-pod-network.b155b537ff028b8f44782a2272797762f34675386dac4f569967a4312f3de810" host="localhost" Nov 23 22:49:31.708008 containerd[1537]: 2025-11-23 22:49:31.665 [INFO][4389] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.b155b537ff028b8f44782a2272797762f34675386dac4f569967a4312f3de810" host="localhost" Nov 23 22:49:31.708008 containerd[1537]: 2025-11-23 22:49:31.665 [INFO][4389] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 23 22:49:31.708008 containerd[1537]: 2025-11-23 22:49:31.665 [INFO][4389] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="b155b537ff028b8f44782a2272797762f34675386dac4f569967a4312f3de810" HandleID="k8s-pod-network.b155b537ff028b8f44782a2272797762f34675386dac4f569967a4312f3de810" Workload="localhost-k8s-calico--apiserver--5f859fc9fb--rjhsm-eth0" Nov 23 22:49:31.708818 containerd[1537]: 2025-11-23 22:49:31.669 [INFO][4356] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b155b537ff028b8f44782a2272797762f34675386dac4f569967a4312f3de810" Namespace="calico-apiserver" Pod="calico-apiserver-5f859fc9fb-rjhsm" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f859fc9fb--rjhsm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5f859fc9fb--rjhsm-eth0", GenerateName:"calico-apiserver-5f859fc9fb-", Namespace:"calico-apiserver", SelfLink:"", UID:"9b3997f9-79ca-4cb6-accf-cb8679793167", ResourceVersion:"805", Generation:0, CreationTimestamp:time.Date(2025, time.November, 23, 22, 49, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5f859fc9fb", "projectcalico.org/namespace":"calico-apiserver", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-5f859fc9fb-rjhsm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali25a8b9ac5d1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 23 22:49:31.708818 containerd[1537]: 2025-11-23 22:49:31.669 [INFO][4356] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="b155b537ff028b8f44782a2272797762f34675386dac4f569967a4312f3de810" Namespace="calico-apiserver" Pod="calico-apiserver-5f859fc9fb-rjhsm" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f859fc9fb--rjhsm-eth0" Nov 23 22:49:31.708818 containerd[1537]: 2025-11-23 22:49:31.669 [INFO][4356] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali25a8b9ac5d1 ContainerID="b155b537ff028b8f44782a2272797762f34675386dac4f569967a4312f3de810" Namespace="calico-apiserver" Pod="calico-apiserver-5f859fc9fb-rjhsm" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f859fc9fb--rjhsm-eth0" Nov 23 22:49:31.708818 containerd[1537]: 2025-11-23 22:49:31.676 [INFO][4356] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b155b537ff028b8f44782a2272797762f34675386dac4f569967a4312f3de810" Namespace="calico-apiserver" Pod="calico-apiserver-5f859fc9fb-rjhsm" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f859fc9fb--rjhsm-eth0" Nov 23 22:49:31.708818 containerd[1537]: 2025-11-23 22:49:31.685 [INFO][4356] cni-plugin/k8s.go 446: Added 
Mac, interface name, and active container ID to endpoint ContainerID="b155b537ff028b8f44782a2272797762f34675386dac4f569967a4312f3de810" Namespace="calico-apiserver" Pod="calico-apiserver-5f859fc9fb-rjhsm" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f859fc9fb--rjhsm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5f859fc9fb--rjhsm-eth0", GenerateName:"calico-apiserver-5f859fc9fb-", Namespace:"calico-apiserver", SelfLink:"", UID:"9b3997f9-79ca-4cb6-accf-cb8679793167", ResourceVersion:"805", Generation:0, CreationTimestamp:time.Date(2025, time.November, 23, 22, 49, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5f859fc9fb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b155b537ff028b8f44782a2272797762f34675386dac4f569967a4312f3de810", Pod:"calico-apiserver-5f859fc9fb-rjhsm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali25a8b9ac5d1", MAC:"d2:ac:e2:d9:24:73", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 23 22:49:31.708818 containerd[1537]: 2025-11-23 22:49:31.705 [INFO][4356] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="b155b537ff028b8f44782a2272797762f34675386dac4f569967a4312f3de810" Namespace="calico-apiserver" Pod="calico-apiserver-5f859fc9fb-rjhsm" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f859fc9fb--rjhsm-eth0" Nov 23 22:49:31.710781 systemd[1]: Started cri-containerd-2361e06f59e15dea59f77aeda361baf2716bd0a6346ccae7343e66fe99469e54.scope - libcontainer container 2361e06f59e15dea59f77aeda361baf2716bd0a6346ccae7343e66fe99469e54. Nov 23 22:49:31.726160 systemd-resolved[1353]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 23 22:49:31.740395 containerd[1537]: time="2025-11-23T22:49:31.740345219Z" level=info msg="connecting to shim b155b537ff028b8f44782a2272797762f34675386dac4f569967a4312f3de810" address="unix:///run/containerd/s/6c1b2cd3aabcfad13b2f985d96e9e8b09eea308f6ba2776b94271bf5a92e39d8" namespace=k8s.io protocol=ttrpc version=3 Nov 23 22:49:31.750384 containerd[1537]: time="2025-11-23T22:49:31.750335811Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5f859fc9fb-gcdhb,Uid:02c22706-9c4d-4a45-b31f-a84083423193,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"2361e06f59e15dea59f77aeda361baf2716bd0a6346ccae7343e66fe99469e54\"" Nov 23 22:49:31.752191 containerd[1537]: time="2025-11-23T22:49:31.752145450Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 23 22:49:31.772789 systemd[1]: Started cri-containerd-b155b537ff028b8f44782a2272797762f34675386dac4f569967a4312f3de810.scope - libcontainer container b155b537ff028b8f44782a2272797762f34675386dac4f569967a4312f3de810. 
Nov 23 22:49:31.786323 systemd-resolved[1353]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 23 22:49:31.817161 containerd[1537]: time="2025-11-23T22:49:31.817025081Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5f859fc9fb-rjhsm,Uid:9b3997f9-79ca-4cb6-accf-cb8679793167,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"b155b537ff028b8f44782a2272797762f34675386dac4f569967a4312f3de810\"" Nov 23 22:49:31.958463 containerd[1537]: time="2025-11-23T22:49:31.958379734Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 22:49:31.965052 containerd[1537]: time="2025-11-23T22:49:31.964901249Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 23 22:49:31.965052 containerd[1537]: time="2025-11-23T22:49:31.964958849Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 23 22:49:31.965186 kubelet[2681]: E1123 22:49:31.965143 2681 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 23 22:49:31.965480 kubelet[2681]: E1123 22:49:31.965195 2681 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 23 22:49:31.965480 kubelet[2681]: E1123 22:49:31.965363 2681 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-5f859fc9fb-gcdhb_calico-apiserver(02c22706-9c4d-4a45-b31f-a84083423193): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 23 22:49:31.965480 kubelet[2681]: E1123 22:49:31.965399 2681 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5f859fc9fb-gcdhb" podUID="02c22706-9c4d-4a45-b31f-a84083423193" Nov 23 22:49:31.965786 containerd[1537]: time="2025-11-23T22:49:31.965751608Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 23 22:49:32.151652 systemd-networkd[1438]: cali2cc17e73141: Gained IPv6LL Nov 23 22:49:32.166643 containerd[1537]: time="2025-11-23T22:49:32.166584224Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 22:49:32.168980 containerd[1537]: time="2025-11-23T22:49:32.168926063Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 23 22:49:32.169101 containerd[1537]: time="2025-11-23T22:49:32.168948463Z" 
level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 23 22:49:32.169189 kubelet[2681]: E1123 22:49:32.169151 2681 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 23 22:49:32.169234 kubelet[2681]: E1123 22:49:32.169199 2681 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 23 22:49:32.169288 kubelet[2681]: E1123 22:49:32.169268 2681 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-5f859fc9fb-rjhsm_calico-apiserver(9b3997f9-79ca-4cb6-accf-cb8679793167): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 23 22:49:32.169328 kubelet[2681]: E1123 22:49:32.169306 2681 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5f859fc9fb-rjhsm" 
podUID="9b3997f9-79ca-4cb6-accf-cb8679793167" Nov 23 22:49:32.374171 containerd[1537]: time="2025-11-23T22:49:32.374123278Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-rhv7d,Uid:68e110ca-ba29-43c4-bb8a-7769dd4f462e,Namespace:kube-system,Attempt:0,}" Nov 23 22:49:32.407683 systemd-networkd[1438]: calic0420bc7b07: Gained IPv6LL Nov 23 22:49:32.517624 systemd-networkd[1438]: cali5f0e12b45e2: Link UP Nov 23 22:49:32.517867 systemd-networkd[1438]: cali5f0e12b45e2: Gained carrier Nov 23 22:49:32.532685 containerd[1537]: 2025-11-23 22:49:32.400 [INFO][4541] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 23 22:49:32.532685 containerd[1537]: 2025-11-23 22:49:32.417 [INFO][4541] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--66bc5c9577--rhv7d-eth0 coredns-66bc5c9577- kube-system 68e110ca-ba29-43c4-bb8a-7769dd4f462e 802 0 2025-11-23 22:48:56 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-66bc5c9577-rhv7d eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali5f0e12b45e2 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="45e37bf79639f56116526c88d752c141dd14702aa67c172e094a0c572fad0684" Namespace="kube-system" Pod="coredns-66bc5c9577-rhv7d" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--rhv7d-" Nov 23 22:49:32.532685 containerd[1537]: 2025-11-23 22:49:32.417 [INFO][4541] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="45e37bf79639f56116526c88d752c141dd14702aa67c172e094a0c572fad0684" Namespace="kube-system" Pod="coredns-66bc5c9577-rhv7d" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--rhv7d-eth0" Nov 23 22:49:32.532685 containerd[1537]: 2025-11-23 
22:49:32.447 [INFO][4556] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="45e37bf79639f56116526c88d752c141dd14702aa67c172e094a0c572fad0684" HandleID="k8s-pod-network.45e37bf79639f56116526c88d752c141dd14702aa67c172e094a0c572fad0684" Workload="localhost-k8s-coredns--66bc5c9577--rhv7d-eth0" Nov 23 22:49:32.532685 containerd[1537]: 2025-11-23 22:49:32.447 [INFO][4556] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="45e37bf79639f56116526c88d752c141dd14702aa67c172e094a0c572fad0684" HandleID="k8s-pod-network.45e37bf79639f56116526c88d752c141dd14702aa67c172e094a0c572fad0684" Workload="localhost-k8s-coredns--66bc5c9577--rhv7d-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40001373f0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-66bc5c9577-rhv7d", "timestamp":"2025-11-23 22:49:32.447262106 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 23 22:49:32.532685 containerd[1537]: 2025-11-23 22:49:32.449 [INFO][4556] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 23 22:49:32.532685 containerd[1537]: 2025-11-23 22:49:32.449 [INFO][4556] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 23 22:49:32.532685 containerd[1537]: 2025-11-23 22:49:32.449 [INFO][4556] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 23 22:49:32.532685 containerd[1537]: 2025-11-23 22:49:32.460 [INFO][4556] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.45e37bf79639f56116526c88d752c141dd14702aa67c172e094a0c572fad0684" host="localhost" Nov 23 22:49:32.532685 containerd[1537]: 2025-11-23 22:49:32.467 [INFO][4556] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 23 22:49:32.532685 containerd[1537]: 2025-11-23 22:49:32.474 [INFO][4556] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 23 22:49:32.532685 containerd[1537]: 2025-11-23 22:49:32.476 [INFO][4556] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 23 22:49:32.532685 containerd[1537]: 2025-11-23 22:49:32.479 [INFO][4556] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 23 22:49:32.532685 containerd[1537]: 2025-11-23 22:49:32.479 [INFO][4556] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.45e37bf79639f56116526c88d752c141dd14702aa67c172e094a0c572fad0684" host="localhost" Nov 23 22:49:32.532685 containerd[1537]: 2025-11-23 22:49:32.481 [INFO][4556] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.45e37bf79639f56116526c88d752c141dd14702aa67c172e094a0c572fad0684 Nov 23 22:49:32.532685 containerd[1537]: 2025-11-23 22:49:32.491 [INFO][4556] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.45e37bf79639f56116526c88d752c141dd14702aa67c172e094a0c572fad0684" host="localhost" Nov 23 22:49:32.532685 containerd[1537]: 2025-11-23 22:49:32.511 [INFO][4556] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 
handle="k8s-pod-network.45e37bf79639f56116526c88d752c141dd14702aa67c172e094a0c572fad0684" host="localhost" Nov 23 22:49:32.532685 containerd[1537]: 2025-11-23 22:49:32.511 [INFO][4556] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.45e37bf79639f56116526c88d752c141dd14702aa67c172e094a0c572fad0684" host="localhost" Nov 23 22:49:32.532685 containerd[1537]: 2025-11-23 22:49:32.511 [INFO][4556] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 23 22:49:32.532685 containerd[1537]: 2025-11-23 22:49:32.511 [INFO][4556] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="45e37bf79639f56116526c88d752c141dd14702aa67c172e094a0c572fad0684" HandleID="k8s-pod-network.45e37bf79639f56116526c88d752c141dd14702aa67c172e094a0c572fad0684" Workload="localhost-k8s-coredns--66bc5c9577--rhv7d-eth0" Nov 23 22:49:32.533840 containerd[1537]: 2025-11-23 22:49:32.514 [INFO][4541] cni-plugin/k8s.go 418: Populated endpoint ContainerID="45e37bf79639f56116526c88d752c141dd14702aa67c172e094a0c572fad0684" Namespace="kube-system" Pod="coredns-66bc5c9577-rhv7d" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--rhv7d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--rhv7d-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"68e110ca-ba29-43c4-bb8a-7769dd4f462e", ResourceVersion:"802", Generation:0, CreationTimestamp:time.Date(2025, time.November, 23, 22, 48, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-66bc5c9577-rhv7d", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5f0e12b45e2", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 23 22:49:32.533840 containerd[1537]: 2025-11-23 22:49:32.514 [INFO][4541] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="45e37bf79639f56116526c88d752c141dd14702aa67c172e094a0c572fad0684" Namespace="kube-system" Pod="coredns-66bc5c9577-rhv7d" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--rhv7d-eth0" Nov 23 22:49:32.533840 containerd[1537]: 2025-11-23 22:49:32.514 [INFO][4541] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5f0e12b45e2 ContainerID="45e37bf79639f56116526c88d752c141dd14702aa67c172e094a0c572fad0684" Namespace="kube-system" Pod="coredns-66bc5c9577-rhv7d" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--rhv7d-eth0" Nov 23 
22:49:32.533840 containerd[1537]: 2025-11-23 22:49:32.517 [INFO][4541] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="45e37bf79639f56116526c88d752c141dd14702aa67c172e094a0c572fad0684" Namespace="kube-system" Pod="coredns-66bc5c9577-rhv7d" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--rhv7d-eth0" Nov 23 22:49:32.533840 containerd[1537]: 2025-11-23 22:49:32.517 [INFO][4541] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="45e37bf79639f56116526c88d752c141dd14702aa67c172e094a0c572fad0684" Namespace="kube-system" Pod="coredns-66bc5c9577-rhv7d" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--rhv7d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--rhv7d-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"68e110ca-ba29-43c4-bb8a-7769dd4f462e", ResourceVersion:"802", Generation:0, CreationTimestamp:time.Date(2025, time.November, 23, 22, 48, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"45e37bf79639f56116526c88d752c141dd14702aa67c172e094a0c572fad0684", Pod:"coredns-66bc5c9577-rhv7d", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5f0e12b45e2", 
MAC:"f2:5b:28:62:90:0c", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 23 22:49:32.533840 containerd[1537]: 2025-11-23 22:49:32.529 [INFO][4541] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="45e37bf79639f56116526c88d752c141dd14702aa67c172e094a0c572fad0684" Namespace="kube-system" Pod="coredns-66bc5c9577-rhv7d" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--rhv7d-eth0" Nov 23 22:49:32.545696 kubelet[2681]: E1123 22:49:32.545596 2681 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5f859fc9fb-gcdhb" podUID="02c22706-9c4d-4a45-b31f-a84083423193" Nov 23 22:49:32.549152 kubelet[2681]: E1123 22:49:32.549103 2681 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5f859fc9fb-rjhsm" podUID="9b3997f9-79ca-4cb6-accf-cb8679793167" Nov 23 22:49:32.549551 kubelet[2681]: E1123 22:49:32.549276 2681 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-f7b67686c-c7z45" podUID="927e7ec4-bea4-44ce-a267-dd04bc352b11" Nov 23 22:49:32.569377 containerd[1537]: time="2025-11-23T22:49:32.569331899Z" level=info msg="connecting to shim 45e37bf79639f56116526c88d752c141dd14702aa67c172e094a0c572fad0684" address="unix:///run/containerd/s/bb0de75afd6d74e38230786f7a5e95bccb3a61de1893d8e3ecc52c992cdc598c" namespace=k8s.io protocol=ttrpc version=3 Nov 23 22:49:32.616738 systemd[1]: Started cri-containerd-45e37bf79639f56116526c88d752c141dd14702aa67c172e094a0c572fad0684.scope - libcontainer container 45e37bf79639f56116526c88d752c141dd14702aa67c172e094a0c572fad0684. 
Nov 23 22:49:32.629340 systemd-resolved[1353]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 23 22:49:32.653646 containerd[1537]: time="2025-11-23T22:49:32.653598680Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-rhv7d,Uid:68e110ca-ba29-43c4-bb8a-7769dd4f462e,Namespace:kube-system,Attempt:0,} returns sandbox id \"45e37bf79639f56116526c88d752c141dd14702aa67c172e094a0c572fad0684\"" Nov 23 22:49:32.660606 containerd[1537]: time="2025-11-23T22:49:32.660477315Z" level=info msg="CreateContainer within sandbox \"45e37bf79639f56116526c88d752c141dd14702aa67c172e094a0c572fad0684\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 23 22:49:32.675332 containerd[1537]: time="2025-11-23T22:49:32.675285264Z" level=info msg="Container 13b039edec88787b3ab29f1eccd9936a376c61b8e0654cbee8cf573eb617faea: CDI devices from CRI Config.CDIDevices: []" Nov 23 22:49:32.683647 containerd[1537]: time="2025-11-23T22:49:32.683602098Z" level=info msg="CreateContainer within sandbox \"45e37bf79639f56116526c88d752c141dd14702aa67c172e094a0c572fad0684\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"13b039edec88787b3ab29f1eccd9936a376c61b8e0654cbee8cf573eb617faea\"" Nov 23 22:49:32.684362 containerd[1537]: time="2025-11-23T22:49:32.684313978Z" level=info msg="StartContainer for \"13b039edec88787b3ab29f1eccd9936a376c61b8e0654cbee8cf573eb617faea\"" Nov 23 22:49:32.685895 containerd[1537]: time="2025-11-23T22:49:32.685858897Z" level=info msg="connecting to shim 13b039edec88787b3ab29f1eccd9936a376c61b8e0654cbee8cf573eb617faea" address="unix:///run/containerd/s/bb0de75afd6d74e38230786f7a5e95bccb3a61de1893d8e3ecc52c992cdc598c" protocol=ttrpc version=3 Nov 23 22:49:32.709739 systemd[1]: Started cri-containerd-13b039edec88787b3ab29f1eccd9936a376c61b8e0654cbee8cf573eb617faea.scope - libcontainer container 13b039edec88787b3ab29f1eccd9936a376c61b8e0654cbee8cf573eb617faea. 
Nov 23 22:49:32.768071 containerd[1537]: time="2025-11-23T22:49:32.767961679Z" level=info msg="StartContainer for \"13b039edec88787b3ab29f1eccd9936a376c61b8e0654cbee8cf573eb617faea\" returns successfully" Nov 23 22:49:32.919768 systemd-networkd[1438]: cali6a48bf3f6f2: Gained IPv6LL Nov 23 22:49:33.175732 systemd-networkd[1438]: cali25a8b9ac5d1: Gained IPv6LL Nov 23 22:49:33.379263 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2962346135.mount: Deactivated successfully. Nov 23 22:49:33.552775 kubelet[2681]: E1123 22:49:33.552649 2681 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5f859fc9fb-rjhsm" podUID="9b3997f9-79ca-4cb6-accf-cb8679793167" Nov 23 22:49:33.555479 kubelet[2681]: E1123 22:49:33.555159 2681 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5f859fc9fb-gcdhb" podUID="02c22706-9c4d-4a45-b31f-a84083423193" Nov 23 22:49:33.567780 kubelet[2681]: I1123 22:49:33.567678 2681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-rhv7d" podStartSLOduration=37.567659698 
podStartE2EDuration="37.567659698s" podCreationTimestamp="2025-11-23 22:48:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 22:49:33.567552938 +0000 UTC m=+43.306006513" watchObservedRunningTime="2025-11-23 22:49:33.567659698 +0000 UTC m=+43.306113273" Nov 23 22:49:34.129776 kubelet[2681]: I1123 22:49:34.129646 2681 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 23 22:49:34.327690 systemd-networkd[1438]: cali5f0e12b45e2: Gained IPv6LL Nov 23 22:49:34.370751 containerd[1537]: time="2025-11-23T22:49:34.370710700Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gdnbl,Uid:12a478e1-2715-41a3-b494-6659c8d5a00c,Namespace:calico-system,Attempt:0,}" Nov 23 22:49:34.375473 containerd[1537]: time="2025-11-23T22:49:34.375403257Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-8r8zz,Uid:677210a8-7e3f-4eb7-b133-c4888088b528,Namespace:calico-system,Attempt:0,}" Nov 23 22:49:34.540074 systemd-networkd[1438]: cali46406bff9e6: Link UP Nov 23 22:49:34.540783 systemd-networkd[1438]: cali46406bff9e6: Gained carrier Nov 23 22:49:34.554881 containerd[1537]: 2025-11-23 22:49:34.422 [INFO][4748] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 23 22:49:34.554881 containerd[1537]: 2025-11-23 22:49:34.443 [INFO][4748] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--7c778bb748--8r8zz-eth0 goldmane-7c778bb748- calico-system 677210a8-7e3f-4eb7-b133-c4888088b528 807 0 2025-11-23 22:49:09 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:7c778bb748 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-7c778bb748-8r8zz eth0 goldmane [] [] [kns.calico-system 
ksa.calico-system.goldmane] cali46406bff9e6 [] [] }} ContainerID="5d745085e9120c26e49e37ac3e881d4195d873a79fb9f21db317c60b7d9158be" Namespace="calico-system" Pod="goldmane-7c778bb748-8r8zz" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--8r8zz-" Nov 23 22:49:34.554881 containerd[1537]: 2025-11-23 22:49:34.443 [INFO][4748] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5d745085e9120c26e49e37ac3e881d4195d873a79fb9f21db317c60b7d9158be" Namespace="calico-system" Pod="goldmane-7c778bb748-8r8zz" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--8r8zz-eth0" Nov 23 22:49:34.554881 containerd[1537]: 2025-11-23 22:49:34.479 [INFO][4783] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5d745085e9120c26e49e37ac3e881d4195d873a79fb9f21db317c60b7d9158be" HandleID="k8s-pod-network.5d745085e9120c26e49e37ac3e881d4195d873a79fb9f21db317c60b7d9158be" Workload="localhost-k8s-goldmane--7c778bb748--8r8zz-eth0" Nov 23 22:49:34.554881 containerd[1537]: 2025-11-23 22:49:34.479 [INFO][4783] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="5d745085e9120c26e49e37ac3e881d4195d873a79fb9f21db317c60b7d9158be" HandleID="k8s-pod-network.5d745085e9120c26e49e37ac3e881d4195d873a79fb9f21db317c60b7d9158be" Workload="localhost-k8s-goldmane--7c778bb748--8r8zz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002c2fd0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-7c778bb748-8r8zz", "timestamp":"2025-11-23 22:49:34.479326752 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 23 22:49:34.554881 containerd[1537]: 2025-11-23 22:49:34.479 [INFO][4783] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Nov 23 22:49:34.554881 containerd[1537]: 2025-11-23 22:49:34.479 [INFO][4783] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 23 22:49:34.554881 containerd[1537]: 2025-11-23 22:49:34.479 [INFO][4783] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 23 22:49:34.554881 containerd[1537]: 2025-11-23 22:49:34.496 [INFO][4783] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5d745085e9120c26e49e37ac3e881d4195d873a79fb9f21db317c60b7d9158be" host="localhost" Nov 23 22:49:34.554881 containerd[1537]: 2025-11-23 22:49:34.502 [INFO][4783] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 23 22:49:34.554881 containerd[1537]: 2025-11-23 22:49:34.515 [INFO][4783] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 23 22:49:34.554881 containerd[1537]: 2025-11-23 22:49:34.517 [INFO][4783] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 23 22:49:34.554881 containerd[1537]: 2025-11-23 22:49:34.520 [INFO][4783] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 23 22:49:34.554881 containerd[1537]: 2025-11-23 22:49:34.520 [INFO][4783] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.5d745085e9120c26e49e37ac3e881d4195d873a79fb9f21db317c60b7d9158be" host="localhost" Nov 23 22:49:34.554881 containerd[1537]: 2025-11-23 22:49:34.521 [INFO][4783] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.5d745085e9120c26e49e37ac3e881d4195d873a79fb9f21db317c60b7d9158be Nov 23 22:49:34.554881 containerd[1537]: 2025-11-23 22:49:34.528 [INFO][4783] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.5d745085e9120c26e49e37ac3e881d4195d873a79fb9f21db317c60b7d9158be" host="localhost" Nov 23 22:49:34.554881 containerd[1537]: 2025-11-23 22:49:34.534 [INFO][4783] 
ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.5d745085e9120c26e49e37ac3e881d4195d873a79fb9f21db317c60b7d9158be" host="localhost" Nov 23 22:49:34.554881 containerd[1537]: 2025-11-23 22:49:34.534 [INFO][4783] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.5d745085e9120c26e49e37ac3e881d4195d873a79fb9f21db317c60b7d9158be" host="localhost" Nov 23 22:49:34.554881 containerd[1537]: 2025-11-23 22:49:34.535 [INFO][4783] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 23 22:49:34.554881 containerd[1537]: 2025-11-23 22:49:34.535 [INFO][4783] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="5d745085e9120c26e49e37ac3e881d4195d873a79fb9f21db317c60b7d9158be" HandleID="k8s-pod-network.5d745085e9120c26e49e37ac3e881d4195d873a79fb9f21db317c60b7d9158be" Workload="localhost-k8s-goldmane--7c778bb748--8r8zz-eth0" Nov 23 22:49:34.555400 containerd[1537]: 2025-11-23 22:49:34.537 [INFO][4748] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5d745085e9120c26e49e37ac3e881d4195d873a79fb9f21db317c60b7d9158be" Namespace="calico-system" Pod="goldmane-7c778bb748-8r8zz" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--8r8zz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7c778bb748--8r8zz-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"677210a8-7e3f-4eb7-b133-c4888088b528", ResourceVersion:"807", Generation:0, CreationTimestamp:time.Date(2025, time.November, 23, 22, 49, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-7c778bb748-8r8zz", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali46406bff9e6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 23 22:49:34.555400 containerd[1537]: 2025-11-23 22:49:34.537 [INFO][4748] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="5d745085e9120c26e49e37ac3e881d4195d873a79fb9f21db317c60b7d9158be" Namespace="calico-system" Pod="goldmane-7c778bb748-8r8zz" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--8r8zz-eth0" Nov 23 22:49:34.555400 containerd[1537]: 2025-11-23 22:49:34.537 [INFO][4748] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali46406bff9e6 ContainerID="5d745085e9120c26e49e37ac3e881d4195d873a79fb9f21db317c60b7d9158be" Namespace="calico-system" Pod="goldmane-7c778bb748-8r8zz" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--8r8zz-eth0" Nov 23 22:49:34.555400 containerd[1537]: 2025-11-23 22:49:34.540 [INFO][4748] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5d745085e9120c26e49e37ac3e881d4195d873a79fb9f21db317c60b7d9158be" Namespace="calico-system" Pod="goldmane-7c778bb748-8r8zz" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--8r8zz-eth0" Nov 23 22:49:34.555400 containerd[1537]: 2025-11-23 22:49:34.540 [INFO][4748] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="5d745085e9120c26e49e37ac3e881d4195d873a79fb9f21db317c60b7d9158be" Namespace="calico-system" Pod="goldmane-7c778bb748-8r8zz" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--8r8zz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7c778bb748--8r8zz-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"677210a8-7e3f-4eb7-b133-c4888088b528", ResourceVersion:"807", Generation:0, CreationTimestamp:time.Date(2025, time.November, 23, 22, 49, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5d745085e9120c26e49e37ac3e881d4195d873a79fb9f21db317c60b7d9158be", Pod:"goldmane-7c778bb748-8r8zz", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali46406bff9e6", MAC:"ae:1e:dd:a6:af:e4", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 23 22:49:34.555400 containerd[1537]: 2025-11-23 22:49:34.551 [INFO][4748] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5d745085e9120c26e49e37ac3e881d4195d873a79fb9f21db317c60b7d9158be" Namespace="calico-system" Pod="goldmane-7c778bb748-8r8zz" 
WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--8r8zz-eth0" Nov 23 22:49:34.588198 containerd[1537]: time="2025-11-23T22:49:34.588100724Z" level=info msg="connecting to shim 5d745085e9120c26e49e37ac3e881d4195d873a79fb9f21db317c60b7d9158be" address="unix:///run/containerd/s/782cf3427edeca1d7769e3cbee5c47483a380bf98af1bbe3ba7bf708b22c7871" namespace=k8s.io protocol=ttrpc version=3 Nov 23 22:49:34.622763 systemd[1]: Started cri-containerd-5d745085e9120c26e49e37ac3e881d4195d873a79fb9f21db317c60b7d9158be.scope - libcontainer container 5d745085e9120c26e49e37ac3e881d4195d873a79fb9f21db317c60b7d9158be. Nov 23 22:49:34.639682 systemd-networkd[1438]: cali2c2f9e419a9: Link UP Nov 23 22:49:34.640361 systemd-resolved[1353]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 23 22:49:34.640454 systemd-networkd[1438]: cali2c2f9e419a9: Gained carrier Nov 23 22:49:34.664678 containerd[1537]: 2025-11-23 22:49:34.426 [INFO][4747] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 23 22:49:34.664678 containerd[1537]: 2025-11-23 22:49:34.446 [INFO][4747] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--gdnbl-eth0 csi-node-driver- calico-system 12a478e1-2715-41a3-b494-6659c8d5a00c 706 0 2025-11-23 22:49:12 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:9d99788f7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-gdnbl eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali2c2f9e419a9 [] [] }} ContainerID="bb8ebc758a1b8f7315931a08b15b6e5825c059a7dda17b9c0222af3d0903fef8" Namespace="calico-system" Pod="csi-node-driver-gdnbl" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--gdnbl-" Nov 23 22:49:34.664678 containerd[1537]: 2025-11-23 22:49:34.446 [INFO][4747] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="bb8ebc758a1b8f7315931a08b15b6e5825c059a7dda17b9c0222af3d0903fef8" Namespace="calico-system" Pod="csi-node-driver-gdnbl" WorkloadEndpoint="localhost-k8s-csi--node--driver--gdnbl-eth0" Nov 23 22:49:34.664678 containerd[1537]: 2025-11-23 22:49:34.483 [INFO][4790] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="bb8ebc758a1b8f7315931a08b15b6e5825c059a7dda17b9c0222af3d0903fef8" HandleID="k8s-pod-network.bb8ebc758a1b8f7315931a08b15b6e5825c059a7dda17b9c0222af3d0903fef8" Workload="localhost-k8s-csi--node--driver--gdnbl-eth0" Nov 23 22:49:34.664678 containerd[1537]: 2025-11-23 22:49:34.483 [INFO][4790] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="bb8ebc758a1b8f7315931a08b15b6e5825c059a7dda17b9c0222af3d0903fef8" HandleID="k8s-pod-network.bb8ebc758a1b8f7315931a08b15b6e5825c059a7dda17b9c0222af3d0903fef8" Workload="localhost-k8s-csi--node--driver--gdnbl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002c3ae0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-gdnbl", "timestamp":"2025-11-23 22:49:34.483650709 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 23 22:49:34.664678 containerd[1537]: 2025-11-23 22:49:34.483 [INFO][4790] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 23 22:49:34.664678 containerd[1537]: 2025-11-23 22:49:34.535 [INFO][4790] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 23 22:49:34.664678 containerd[1537]: 2025-11-23 22:49:34.535 [INFO][4790] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 23 22:49:34.664678 containerd[1537]: 2025-11-23 22:49:34.598 [INFO][4790] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.bb8ebc758a1b8f7315931a08b15b6e5825c059a7dda17b9c0222af3d0903fef8" host="localhost" Nov 23 22:49:34.664678 containerd[1537]: 2025-11-23 22:49:34.603 [INFO][4790] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 23 22:49:34.664678 containerd[1537]: 2025-11-23 22:49:34.612 [INFO][4790] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 23 22:49:34.664678 containerd[1537]: 2025-11-23 22:49:34.615 [INFO][4790] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 23 22:49:34.664678 containerd[1537]: 2025-11-23 22:49:34.617 [INFO][4790] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 23 22:49:34.664678 containerd[1537]: 2025-11-23 22:49:34.618 [INFO][4790] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.bb8ebc758a1b8f7315931a08b15b6e5825c059a7dda17b9c0222af3d0903fef8" host="localhost" Nov 23 22:49:34.664678 containerd[1537]: 2025-11-23 22:49:34.619 [INFO][4790] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.bb8ebc758a1b8f7315931a08b15b6e5825c059a7dda17b9c0222af3d0903fef8 Nov 23 22:49:34.664678 containerd[1537]: 2025-11-23 22:49:34.624 [INFO][4790] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.bb8ebc758a1b8f7315931a08b15b6e5825c059a7dda17b9c0222af3d0903fef8" host="localhost" Nov 23 22:49:34.664678 containerd[1537]: 2025-11-23 22:49:34.632 [INFO][4790] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 
handle="k8s-pod-network.bb8ebc758a1b8f7315931a08b15b6e5825c059a7dda17b9c0222af3d0903fef8" host="localhost" Nov 23 22:49:34.664678 containerd[1537]: 2025-11-23 22:49:34.632 [INFO][4790] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.bb8ebc758a1b8f7315931a08b15b6e5825c059a7dda17b9c0222af3d0903fef8" host="localhost" Nov 23 22:49:34.664678 containerd[1537]: 2025-11-23 22:49:34.632 [INFO][4790] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 23 22:49:34.664678 containerd[1537]: 2025-11-23 22:49:34.632 [INFO][4790] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="bb8ebc758a1b8f7315931a08b15b6e5825c059a7dda17b9c0222af3d0903fef8" HandleID="k8s-pod-network.bb8ebc758a1b8f7315931a08b15b6e5825c059a7dda17b9c0222af3d0903fef8" Workload="localhost-k8s-csi--node--driver--gdnbl-eth0" Nov 23 22:49:34.665339 containerd[1537]: 2025-11-23 22:49:34.636 [INFO][4747] cni-plugin/k8s.go 418: Populated endpoint ContainerID="bb8ebc758a1b8f7315931a08b15b6e5825c059a7dda17b9c0222af3d0903fef8" Namespace="calico-system" Pod="csi-node-driver-gdnbl" WorkloadEndpoint="localhost-k8s-csi--node--driver--gdnbl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--gdnbl-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"12a478e1-2715-41a3-b494-6659c8d5a00c", ResourceVersion:"706", Generation:0, CreationTimestamp:time.Date(2025, time.November, 23, 22, 49, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-gdnbl", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali2c2f9e419a9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 23 22:49:34.665339 containerd[1537]: 2025-11-23 22:49:34.637 [INFO][4747] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="bb8ebc758a1b8f7315931a08b15b6e5825c059a7dda17b9c0222af3d0903fef8" Namespace="calico-system" Pod="csi-node-driver-gdnbl" WorkloadEndpoint="localhost-k8s-csi--node--driver--gdnbl-eth0" Nov 23 22:49:34.665339 containerd[1537]: 2025-11-23 22:49:34.637 [INFO][4747] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2c2f9e419a9 ContainerID="bb8ebc758a1b8f7315931a08b15b6e5825c059a7dda17b9c0222af3d0903fef8" Namespace="calico-system" Pod="csi-node-driver-gdnbl" WorkloadEndpoint="localhost-k8s-csi--node--driver--gdnbl-eth0" Nov 23 22:49:34.665339 containerd[1537]: 2025-11-23 22:49:34.642 [INFO][4747] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="bb8ebc758a1b8f7315931a08b15b6e5825c059a7dda17b9c0222af3d0903fef8" Namespace="calico-system" Pod="csi-node-driver-gdnbl" WorkloadEndpoint="localhost-k8s-csi--node--driver--gdnbl-eth0" Nov 23 22:49:34.665339 containerd[1537]: 2025-11-23 22:49:34.643 [INFO][4747] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="bb8ebc758a1b8f7315931a08b15b6e5825c059a7dda17b9c0222af3d0903fef8" 
Namespace="calico-system" Pod="csi-node-driver-gdnbl" WorkloadEndpoint="localhost-k8s-csi--node--driver--gdnbl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--gdnbl-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"12a478e1-2715-41a3-b494-6659c8d5a00c", ResourceVersion:"706", Generation:0, CreationTimestamp:time.Date(2025, time.November, 23, 22, 49, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"bb8ebc758a1b8f7315931a08b15b6e5825c059a7dda17b9c0222af3d0903fef8", Pod:"csi-node-driver-gdnbl", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali2c2f9e419a9", MAC:"36:63:74:e1:c0:87", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 23 22:49:34.665339 containerd[1537]: 2025-11-23 22:49:34.661 [INFO][4747] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="bb8ebc758a1b8f7315931a08b15b6e5825c059a7dda17b9c0222af3d0903fef8" Namespace="calico-system" Pod="csi-node-driver-gdnbl" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--gdnbl-eth0" Nov 23 22:49:34.671350 containerd[1537]: time="2025-11-23T22:49:34.671313073Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-8r8zz,Uid:677210a8-7e3f-4eb7-b133-c4888088b528,Namespace:calico-system,Attempt:0,} returns sandbox id \"5d745085e9120c26e49e37ac3e881d4195d873a79fb9f21db317c60b7d9158be\"" Nov 23 22:49:34.673148 containerd[1537]: time="2025-11-23T22:49:34.673034032Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 23 22:49:34.687416 containerd[1537]: time="2025-11-23T22:49:34.687362943Z" level=info msg="connecting to shim bb8ebc758a1b8f7315931a08b15b6e5825c059a7dda17b9c0222af3d0903fef8" address="unix:///run/containerd/s/17528cfb06dad85ad1fb233dd94535c4c5fe710ae74af0928508daaa8899d3d9" namespace=k8s.io protocol=ttrpc version=3 Nov 23 22:49:34.712810 systemd[1]: Started cri-containerd-bb8ebc758a1b8f7315931a08b15b6e5825c059a7dda17b9c0222af3d0903fef8.scope - libcontainer container bb8ebc758a1b8f7315931a08b15b6e5825c059a7dda17b9c0222af3d0903fef8. 
Nov 23 22:49:34.723531 systemd-resolved[1353]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 23 22:49:34.736938 containerd[1537]: time="2025-11-23T22:49:34.736902512Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gdnbl,Uid:12a478e1-2715-41a3-b494-6659c8d5a00c,Namespace:calico-system,Attempt:0,} returns sandbox id \"bb8ebc758a1b8f7315931a08b15b6e5825c059a7dda17b9c0222af3d0903fef8\"" Nov 23 22:49:34.882598 containerd[1537]: time="2025-11-23T22:49:34.882544581Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 22:49:34.883833 containerd[1537]: time="2025-11-23T22:49:34.883766181Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 23 22:49:34.883833 containerd[1537]: time="2025-11-23T22:49:34.883771820Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 23 22:49:34.884258 kubelet[2681]: E1123 22:49:34.884167 2681 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 23 22:49:34.884258 kubelet[2681]: E1123 22:49:34.884218 2681 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 23 22:49:34.884645 kubelet[2681]: E1123 22:49:34.884556 2681 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-8r8zz_calico-system(677210a8-7e3f-4eb7-b133-c4888088b528): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 23 22:49:34.884645 kubelet[2681]: E1123 22:49:34.884594 2681 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-8r8zz" podUID="677210a8-7e3f-4eb7-b133-c4888088b528" Nov 23 22:49:34.884928 containerd[1537]: time="2025-11-23T22:49:34.884725900Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 23 22:49:35.096984 containerd[1537]: time="2025-11-23T22:49:35.096918972Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 22:49:35.098033 containerd[1537]: time="2025-11-23T22:49:35.097995891Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 23 22:49:35.098123 containerd[1537]: time="2025-11-23T22:49:35.098076291Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 23 22:49:35.098291 kubelet[2681]: E1123 
22:49:35.098235 2681 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 23 22:49:35.098291 kubelet[2681]: E1123 22:49:35.098285 2681 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 23 22:49:35.098384 kubelet[2681]: E1123 22:49:35.098361 2681 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-gdnbl_calico-system(12a478e1-2715-41a3-b494-6659c8d5a00c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 23 22:49:35.099498 containerd[1537]: time="2025-11-23T22:49:35.099465050Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 23 22:49:35.323857 containerd[1537]: time="2025-11-23T22:49:35.323725399Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 22:49:35.325308 containerd[1537]: time="2025-11-23T22:49:35.325257358Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 23 
22:49:35.325380 containerd[1537]: time="2025-11-23T22:49:35.325299398Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 23 22:49:35.325621 kubelet[2681]: E1123 22:49:35.325570 2681 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 23 22:49:35.325621 kubelet[2681]: E1123 22:49:35.325619 2681 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 23 22:49:35.325800 kubelet[2681]: E1123 22:49:35.325762 2681 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-gdnbl_calico-system(12a478e1-2715-41a3-b494-6659c8d5a00c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 23 22:49:35.325870 kubelet[2681]: E1123 22:49:35.325813 2681 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-gdnbl" podUID="12a478e1-2715-41a3-b494-6659c8d5a00c" Nov 23 22:49:35.563915 kubelet[2681]: E1123 22:49:35.563846 2681 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-8r8zz" podUID="677210a8-7e3f-4eb7-b133-c4888088b528" Nov 23 22:49:35.570307 kubelet[2681]: E1123 22:49:35.570203 2681 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to 
resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-gdnbl" podUID="12a478e1-2715-41a3-b494-6659c8d5a00c" Nov 23 22:49:35.863689 systemd-networkd[1438]: cali46406bff9e6: Gained IPv6LL Nov 23 22:49:36.209685 kubelet[2681]: I1123 22:49:36.209576 2681 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 23 22:49:36.247640 systemd-networkd[1438]: cali2c2f9e419a9: Gained IPv6LL Nov 23 22:49:36.568498 kubelet[2681]: E1123 22:49:36.568349 2681 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-8r8zz" podUID="677210a8-7e3f-4eb7-b133-c4888088b528" Nov 23 22:49:36.569577 kubelet[2681]: E1123 22:49:36.569491 2681 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-gdnbl" podUID="12a478e1-2715-41a3-b494-6659c8d5a00c" Nov 23 22:49:36.678894 systemd[1]: Started sshd@8-10.0.0.9:22-10.0.0.1:48122.service - OpenSSH per-connection server daemon (10.0.0.1:48122). Nov 23 22:49:36.751888 sshd[4961]: Accepted publickey for core from 10.0.0.1 port 48122 ssh2: RSA SHA256:QxoOoLvgP9E+zipnRJ4K0FLuuw/ehjwLMaCJR2ynZa8 Nov 23 22:49:36.754063 sshd-session[4961]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 22:49:36.759580 systemd-logind[1512]: New session 9 of user core. Nov 23 22:49:36.768198 systemd[1]: Started session-9.scope - Session 9 of User core. Nov 23 22:49:36.923764 sshd[4984]: Connection closed by 10.0.0.1 port 48122 Nov 23 22:49:36.924288 sshd-session[4961]: pam_unix(sshd:session): session closed for user core Nov 23 22:49:36.928062 systemd-logind[1512]: Session 9 logged out. Waiting for processes to exit. Nov 23 22:49:36.928112 systemd[1]: sshd@8-10.0.0.9:22-10.0.0.1:48122.service: Deactivated successfully. Nov 23 22:49:36.930917 systemd[1]: session-9.scope: Deactivated successfully. Nov 23 22:49:36.932211 systemd-logind[1512]: Removed session 9. 
Nov 23 22:49:37.368620 containerd[1537]: time="2025-11-23T22:49:37.368119549Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 23 22:49:37.371853 systemd-networkd[1438]: vxlan.calico: Link UP Nov 23 22:49:37.371860 systemd-networkd[1438]: vxlan.calico: Gained carrier Nov 23 22:49:37.579261 containerd[1537]: time="2025-11-23T22:49:37.579203041Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 22:49:37.679498 containerd[1537]: time="2025-11-23T22:49:37.679354669Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 23 22:49:37.679498 containerd[1537]: time="2025-11-23T22:49:37.679457429Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 23 22:49:37.679874 kubelet[2681]: E1123 22:49:37.679827 2681 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 23 22:49:37.680157 kubelet[2681]: E1123 22:49:37.679875 2681 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 23 22:49:37.680157 kubelet[2681]: E1123 22:49:37.680112 2681 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod 
whisker-c67458676-t5v8h_calico-system(47c39918-1558-4b3b-ba95-44f9fa641dd2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 23 22:49:37.681931 containerd[1537]: time="2025-11-23T22:49:37.681848348Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 23 22:49:37.885208 containerd[1537]: time="2025-11-23T22:49:37.885163964Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 22:49:37.900464 containerd[1537]: time="2025-11-23T22:49:37.900312956Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 23 22:49:37.900634 containerd[1537]: time="2025-11-23T22:49:37.900468236Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 23 22:49:37.900742 kubelet[2681]: E1123 22:49:37.900681 2681 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 23 22:49:37.900811 kubelet[2681]: E1123 22:49:37.900755 2681 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 23 22:49:37.900857 kubelet[2681]: E1123 22:49:37.900836 2681 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-c67458676-t5v8h_calico-system(47c39918-1558-4b3b-ba95-44f9fa641dd2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 23 22:49:37.900933 kubelet[2681]: E1123 22:49:37.900888 2681 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-c67458676-t5v8h" podUID="47c39918-1558-4b3b-ba95-44f9fa641dd2" Nov 23 22:49:38.423789 systemd-networkd[1438]: vxlan.calico: Gained IPv6LL Nov 23 22:49:41.942820 systemd[1]: Started sshd@9-10.0.0.9:22-10.0.0.1:39612.service - OpenSSH per-connection server daemon (10.0.0.1:39612). 
Nov 23 22:49:42.009138 sshd[5107]: Accepted publickey for core from 10.0.0.1 port 39612 ssh2: RSA SHA256:QxoOoLvgP9E+zipnRJ4K0FLuuw/ehjwLMaCJR2ynZa8 Nov 23 22:49:42.010732 sshd-session[5107]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 22:49:42.015274 systemd-logind[1512]: New session 10 of user core. Nov 23 22:49:42.022689 systemd[1]: Started session-10.scope - Session 10 of User core. Nov 23 22:49:42.186684 sshd[5110]: Connection closed by 10.0.0.1 port 39612 Nov 23 22:49:42.187475 sshd-session[5107]: pam_unix(sshd:session): session closed for user core Nov 23 22:49:42.197048 systemd[1]: sshd@9-10.0.0.9:22-10.0.0.1:39612.service: Deactivated successfully. Nov 23 22:49:42.199013 systemd[1]: session-10.scope: Deactivated successfully. Nov 23 22:49:42.202724 systemd-logind[1512]: Session 10 logged out. Waiting for processes to exit. Nov 23 22:49:42.205928 systemd[1]: Started sshd@10-10.0.0.9:22-10.0.0.1:39622.service - OpenSSH per-connection server daemon (10.0.0.1:39622). Nov 23 22:49:42.207817 systemd-logind[1512]: Removed session 10. Nov 23 22:49:42.268004 sshd[5125]: Accepted publickey for core from 10.0.0.1 port 39622 ssh2: RSA SHA256:QxoOoLvgP9E+zipnRJ4K0FLuuw/ehjwLMaCJR2ynZa8 Nov 23 22:49:42.269423 sshd-session[5125]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 22:49:42.273465 systemd-logind[1512]: New session 11 of user core. Nov 23 22:49:42.286820 systemd[1]: Started session-11.scope - Session 11 of User core. Nov 23 22:49:42.515558 sshd[5128]: Connection closed by 10.0.0.1 port 39622 Nov 23 22:49:42.515323 sshd-session[5125]: pam_unix(sshd:session): session closed for user core Nov 23 22:49:42.528071 systemd[1]: sshd@10-10.0.0.9:22-10.0.0.1:39622.service: Deactivated successfully. Nov 23 22:49:42.531111 systemd[1]: session-11.scope: Deactivated successfully. Nov 23 22:49:42.535541 systemd-logind[1512]: Session 11 logged out. Waiting for processes to exit. 
Nov 23 22:49:42.541122 systemd[1]: Started sshd@11-10.0.0.9:22-10.0.0.1:39636.service - OpenSSH per-connection server daemon (10.0.0.1:39636). Nov 23 22:49:42.542462 systemd-logind[1512]: Removed session 11. Nov 23 22:49:42.592628 sshd[5144]: Accepted publickey for core from 10.0.0.1 port 39636 ssh2: RSA SHA256:QxoOoLvgP9E+zipnRJ4K0FLuuw/ehjwLMaCJR2ynZa8 Nov 23 22:49:42.594053 sshd-session[5144]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 22:49:42.598481 systemd-logind[1512]: New session 12 of user core. Nov 23 22:49:42.604716 systemd[1]: Started session-12.scope - Session 12 of User core. Nov 23 22:49:42.736977 sshd[5147]: Connection closed by 10.0.0.1 port 39636 Nov 23 22:49:42.737584 sshd-session[5144]: pam_unix(sshd:session): session closed for user core Nov 23 22:49:42.741249 systemd[1]: sshd@11-10.0.0.9:22-10.0.0.1:39636.service: Deactivated successfully. Nov 23 22:49:42.743213 systemd[1]: session-12.scope: Deactivated successfully. Nov 23 22:49:42.744205 systemd-logind[1512]: Session 12 logged out. Waiting for processes to exit. Nov 23 22:49:42.745296 systemd-logind[1512]: Removed session 12. 
Nov 23 22:49:43.366561 containerd[1537]: time="2025-11-23T22:49:43.366471696Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 23 22:49:43.574799 containerd[1537]: time="2025-11-23T22:49:43.574742743Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 22:49:43.575726 containerd[1537]: time="2025-11-23T22:49:43.575671823Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 23 22:49:43.575794 containerd[1537]: time="2025-11-23T22:49:43.575751223Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 23 22:49:43.575981 kubelet[2681]: E1123 22:49:43.575920 2681 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 23 22:49:43.575981 kubelet[2681]: E1123 22:49:43.575979 2681 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 23 22:49:43.576338 kubelet[2681]: E1123 22:49:43.576057 2681 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod 
calico-kube-controllers-f7b67686c-c7z45_calico-system(927e7ec4-bea4-44ce-a267-dd04bc352b11): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 23 22:49:43.576338 kubelet[2681]: E1123 22:49:43.576090 2681 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-f7b67686c-c7z45" podUID="927e7ec4-bea4-44ce-a267-dd04bc352b11" Nov 23 22:49:44.366904 containerd[1537]: time="2025-11-23T22:49:44.366854956Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 23 22:49:44.619053 containerd[1537]: time="2025-11-23T22:49:44.618917913Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 22:49:44.619990 containerd[1537]: time="2025-11-23T22:49:44.619950033Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 23 22:49:44.620067 containerd[1537]: time="2025-11-23T22:49:44.620022113Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 23 22:49:44.620201 kubelet[2681]: E1123 22:49:44.620157 2681 log.go:32] "PullImage from image service failed" err="rpc error: 
code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 23 22:49:44.620496 kubelet[2681]: E1123 22:49:44.620211 2681 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 23 22:49:44.620496 kubelet[2681]: E1123 22:49:44.620297 2681 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-5f859fc9fb-rjhsm_calico-apiserver(9b3997f9-79ca-4cb6-accf-cb8679793167): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 23 22:49:44.620496 kubelet[2681]: E1123 22:49:44.620330 2681 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5f859fc9fb-rjhsm" podUID="9b3997f9-79ca-4cb6-accf-cb8679793167" Nov 23 22:49:45.367034 containerd[1537]: time="2025-11-23T22:49:45.366960317Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 23 22:49:45.588290 containerd[1537]: time="2025-11-23T22:49:45.588216169Z" 
level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 22:49:45.601643 containerd[1537]: time="2025-11-23T22:49:45.601480685Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 23 22:49:45.601643 containerd[1537]: time="2025-11-23T22:49:45.601554365Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 23 22:49:45.601941 kubelet[2681]: E1123 22:49:45.601881 2681 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 23 22:49:45.601988 kubelet[2681]: E1123 22:49:45.601948 2681 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 23 22:49:45.602059 kubelet[2681]: E1123 22:49:45.602039 2681 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-5f859fc9fb-gcdhb_calico-apiserver(02c22706-9c4d-4a45-b31f-a84083423193): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" 
logger="UnhandledError" Nov 23 22:49:45.602622 kubelet[2681]: E1123 22:49:45.602586 2681 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5f859fc9fb-gcdhb" podUID="02c22706-9c4d-4a45-b31f-a84083423193" Nov 23 22:49:47.749543 systemd[1]: Started sshd@12-10.0.0.9:22-10.0.0.1:39650.service - OpenSSH per-connection server daemon (10.0.0.1:39650). Nov 23 22:49:47.798542 sshd[5178]: Accepted publickey for core from 10.0.0.1 port 39650 ssh2: RSA SHA256:QxoOoLvgP9E+zipnRJ4K0FLuuw/ehjwLMaCJR2ynZa8 Nov 23 22:49:47.799972 sshd-session[5178]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 22:49:47.804136 systemd-logind[1512]: New session 13 of user core. Nov 23 22:49:47.812733 systemd[1]: Started session-13.scope - Session 13 of User core. Nov 23 22:49:47.980476 sshd[5181]: Connection closed by 10.0.0.1 port 39650 Nov 23 22:49:47.981379 sshd-session[5178]: pam_unix(sshd:session): session closed for user core Nov 23 22:49:47.990351 systemd[1]: sshd@12-10.0.0.9:22-10.0.0.1:39650.service: Deactivated successfully. Nov 23 22:49:47.994033 systemd[1]: session-13.scope: Deactivated successfully. Nov 23 22:49:47.995749 systemd-logind[1512]: Session 13 logged out. Waiting for processes to exit. Nov 23 22:49:47.999879 systemd[1]: Started sshd@13-10.0.0.9:22-10.0.0.1:39660.service - OpenSSH per-connection server daemon (10.0.0.1:39660). Nov 23 22:49:48.000778 systemd-logind[1512]: Removed session 13. 
Nov 23 22:49:48.067427 sshd[5194]: Accepted publickey for core from 10.0.0.1 port 39660 ssh2: RSA SHA256:QxoOoLvgP9E+zipnRJ4K0FLuuw/ehjwLMaCJR2ynZa8 Nov 23 22:49:48.069012 sshd-session[5194]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 22:49:48.073712 systemd-logind[1512]: New session 14 of user core. Nov 23 22:49:48.087872 systemd[1]: Started session-14.scope - Session 14 of User core. Nov 23 22:49:48.318346 sshd[5197]: Connection closed by 10.0.0.1 port 39660 Nov 23 22:49:48.318739 sshd-session[5194]: pam_unix(sshd:session): session closed for user core Nov 23 22:49:48.334068 systemd[1]: sshd@13-10.0.0.9:22-10.0.0.1:39660.service: Deactivated successfully. Nov 23 22:49:48.336328 systemd[1]: session-14.scope: Deactivated successfully. Nov 23 22:49:48.337848 systemd-logind[1512]: Session 14 logged out. Waiting for processes to exit. Nov 23 22:49:48.343728 systemd[1]: Started sshd@14-10.0.0.9:22-10.0.0.1:39674.service - OpenSSH per-connection server daemon (10.0.0.1:39674). Nov 23 22:49:48.344503 systemd-logind[1512]: Removed session 14. Nov 23 22:49:48.405089 sshd[5209]: Accepted publickey for core from 10.0.0.1 port 39674 ssh2: RSA SHA256:QxoOoLvgP9E+zipnRJ4K0FLuuw/ehjwLMaCJR2ynZa8 Nov 23 22:49:48.406637 sshd-session[5209]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 22:49:48.412852 systemd-logind[1512]: New session 15 of user core. Nov 23 22:49:48.427782 systemd[1]: Started session-15.scope - Session 15 of User core. Nov 23 22:49:49.028002 sshd[5212]: Connection closed by 10.0.0.1 port 39674 Nov 23 22:49:49.028746 sshd-session[5209]: pam_unix(sshd:session): session closed for user core Nov 23 22:49:49.037183 systemd[1]: sshd@14-10.0.0.9:22-10.0.0.1:39674.service: Deactivated successfully. Nov 23 22:49:49.040817 systemd[1]: session-15.scope: Deactivated successfully. Nov 23 22:49:49.043246 systemd-logind[1512]: Session 15 logged out. Waiting for processes to exit. 
Nov 23 22:49:49.048546 systemd[1]: Started sshd@15-10.0.0.9:22-10.0.0.1:39676.service - OpenSSH per-connection server daemon (10.0.0.1:39676). Nov 23 22:49:49.052679 systemd-logind[1512]: Removed session 15. Nov 23 22:49:49.105949 sshd[5231]: Accepted publickey for core from 10.0.0.1 port 39676 ssh2: RSA SHA256:QxoOoLvgP9E+zipnRJ4K0FLuuw/ehjwLMaCJR2ynZa8 Nov 23 22:49:49.108087 sshd-session[5231]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 22:49:49.114525 systemd-logind[1512]: New session 16 of user core. Nov 23 22:49:49.122737 systemd[1]: Started session-16.scope - Session 16 of User core. Nov 23 22:49:49.397333 sshd[5234]: Connection closed by 10.0.0.1 port 39676 Nov 23 22:49:49.398688 sshd-session[5231]: pam_unix(sshd:session): session closed for user core Nov 23 22:49:49.407793 systemd[1]: sshd@15-10.0.0.9:22-10.0.0.1:39676.service: Deactivated successfully. Nov 23 22:49:49.411211 systemd[1]: session-16.scope: Deactivated successfully. Nov 23 22:49:49.412720 systemd-logind[1512]: Session 16 logged out. Waiting for processes to exit. Nov 23 22:49:49.415543 systemd[1]: Started sshd@16-10.0.0.9:22-10.0.0.1:47606.service - OpenSSH per-connection server daemon (10.0.0.1:47606). Nov 23 22:49:49.417670 systemd-logind[1512]: Removed session 16. Nov 23 22:49:49.481011 sshd[5245]: Accepted publickey for core from 10.0.0.1 port 47606 ssh2: RSA SHA256:QxoOoLvgP9E+zipnRJ4K0FLuuw/ehjwLMaCJR2ynZa8 Nov 23 22:49:49.483049 sshd-session[5245]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 22:49:49.488466 systemd-logind[1512]: New session 17 of user core. Nov 23 22:49:49.496727 systemd[1]: Started session-17.scope - Session 17 of User core. 
Nov 23 22:49:49.632321 sshd[5248]: Connection closed by 10.0.0.1 port 47606
Nov 23 22:49:49.632154 sshd-session[5245]: pam_unix(sshd:session): session closed for user core
Nov 23 22:49:49.636533 systemd[1]: sshd@16-10.0.0.9:22-10.0.0.1:47606.service: Deactivated successfully.
Nov 23 22:49:49.639270 systemd[1]: session-17.scope: Deactivated successfully.
Nov 23 22:49:49.640279 systemd-logind[1512]: Session 17 logged out. Waiting for processes to exit.
Nov 23 22:49:49.641431 systemd-logind[1512]: Removed session 17.
Nov 23 22:49:50.367056 containerd[1537]: time="2025-11-23T22:49:50.366948077Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\""
Nov 23 22:49:50.598958 containerd[1537]: time="2025-11-23T22:49:50.598915546Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Nov 23 22:49:50.599973 containerd[1537]: time="2025-11-23T22:49:50.599927746Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found"
Nov 23 22:49:50.600165 containerd[1537]: time="2025-11-23T22:49:50.600144826Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77"
Nov 23 22:49:50.600384 kubelet[2681]: E1123 22:49:50.600323 2681 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Nov 23 22:49:50.600949 kubelet[2681]: E1123 22:49:50.600757 2681 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Nov 23 22:49:50.600949 kubelet[2681]: E1123 22:49:50.600867 2681 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-8r8zz_calico-system(677210a8-7e3f-4eb7-b133-c4888088b528): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError"
Nov 23 22:49:50.600949 kubelet[2681]: E1123 22:49:50.600915 2681 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-8r8zz" podUID="677210a8-7e3f-4eb7-b133-c4888088b528"
Nov 23 22:49:51.368788 containerd[1537]: time="2025-11-23T22:49:51.368678820Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\""
Nov 23 22:49:51.369886 kubelet[2681]: E1123 22:49:51.369814 2681 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-c67458676-t5v8h" podUID="47c39918-1558-4b3b-ba95-44f9fa641dd2"
Nov 23 22:49:51.555439 containerd[1537]: time="2025-11-23T22:49:51.555382342Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Nov 23 22:49:51.556472 containerd[1537]: time="2025-11-23T22:49:51.556385062Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found"
Nov 23 22:49:51.556472 containerd[1537]: time="2025-11-23T22:49:51.556437501Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69"
Nov 23 22:49:51.556668 kubelet[2681]: E1123 22:49:51.556615 2681 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Nov 23 22:49:51.556668 kubelet[2681]: E1123 22:49:51.556663 2681 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Nov 23 22:49:51.556748 kubelet[2681]: E1123 22:49:51.556735 2681 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-gdnbl_calico-system(12a478e1-2715-41a3-b494-6659c8d5a00c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError"
Nov 23 22:49:51.557824 containerd[1537]: time="2025-11-23T22:49:51.557792261Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\""
Nov 23 22:49:51.772379 containerd[1537]: time="2025-11-23T22:49:51.772252737Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Nov 23 22:49:51.773410 containerd[1537]: time="2025-11-23T22:49:51.773357936Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found"
Nov 23 22:49:51.773505 containerd[1537]: time="2025-11-23T22:49:51.773375976Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93"
Nov 23 22:49:51.773709 kubelet[2681]: E1123 22:49:51.773654 2681 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Nov 23 22:49:51.773709 kubelet[2681]: E1123 22:49:51.773708 2681 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Nov 23 22:49:51.774020 kubelet[2681]: E1123 22:49:51.773822 2681 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-gdnbl_calico-system(12a478e1-2715-41a3-b494-6659c8d5a00c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError"
Nov 23 22:49:51.774020 kubelet[2681]: E1123 22:49:51.773877 2681 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-gdnbl" podUID="12a478e1-2715-41a3-b494-6659c8d5a00c"
Nov 23 22:49:54.645190 systemd[1]: Started sshd@17-10.0.0.9:22-10.0.0.1:47608.service - OpenSSH per-connection server daemon (10.0.0.1:47608).
Nov 23 22:49:54.720468 sshd[5269]: Accepted publickey for core from 10.0.0.1 port 47608 ssh2: RSA SHA256:QxoOoLvgP9E+zipnRJ4K0FLuuw/ehjwLMaCJR2ynZa8
Nov 23 22:49:54.721525 sshd-session[5269]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 23 22:49:54.726150 systemd-logind[1512]: New session 18 of user core.
Nov 23 22:49:54.738802 systemd[1]: Started session-18.scope - Session 18 of User core.
Nov 23 22:49:54.889979 sshd[5272]: Connection closed by 10.0.0.1 port 47608
Nov 23 22:49:54.890583 sshd-session[5269]: pam_unix(sshd:session): session closed for user core
Nov 23 22:49:54.895032 systemd[1]: sshd@17-10.0.0.9:22-10.0.0.1:47608.service: Deactivated successfully.
Nov 23 22:49:54.897324 systemd[1]: session-18.scope: Deactivated successfully.
Nov 23 22:49:54.900132 systemd-logind[1512]: Session 18 logged out. Waiting for processes to exit.
Nov 23 22:49:54.901372 systemd-logind[1512]: Removed session 18.
Nov 23 22:49:55.367227 kubelet[2681]: E1123 22:49:55.367084 2681 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5f859fc9fb-rjhsm" podUID="9b3997f9-79ca-4cb6-accf-cb8679793167"
Nov 23 22:49:56.366881 kubelet[2681]: E1123 22:49:56.366818 2681 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5f859fc9fb-gcdhb" podUID="02c22706-9c4d-4a45-b31f-a84083423193"
Nov 23 22:49:57.366889 kubelet[2681]: E1123 22:49:57.366749 2681 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-f7b67686c-c7z45" podUID="927e7ec4-bea4-44ce-a267-dd04bc352b11"
Nov 23 22:49:59.908331 systemd[1]: Started sshd@18-10.0.0.9:22-10.0.0.1:51736.service - OpenSSH per-connection server daemon (10.0.0.1:51736).
Nov 23 22:49:59.982975 sshd[5295]: Accepted publickey for core from 10.0.0.1 port 51736 ssh2: RSA SHA256:QxoOoLvgP9E+zipnRJ4K0FLuuw/ehjwLMaCJR2ynZa8
Nov 23 22:49:59.984760 sshd-session[5295]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 23 22:49:59.992708 systemd-logind[1512]: New session 19 of user core.
Nov 23 22:50:00.002744 systemd[1]: Started session-19.scope - Session 19 of User core.
Nov 23 22:50:00.145208 sshd[5298]: Connection closed by 10.0.0.1 port 51736
Nov 23 22:50:00.145689 sshd-session[5295]: pam_unix(sshd:session): session closed for user core
Nov 23 22:50:00.150445 systemd[1]: sshd@18-10.0.0.9:22-10.0.0.1:51736.service: Deactivated successfully.
Nov 23 22:50:00.153692 systemd[1]: session-19.scope: Deactivated successfully.
Nov 23 22:50:00.155263 systemd-logind[1512]: Session 19 logged out. Waiting for processes to exit.
Nov 23 22:50:00.157502 systemd-logind[1512]: Removed session 19.