Nov 7 23:57:17.395338 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Nov 7 23:57:17.395365 kernel: Linux version 6.12.54-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT Fri Nov 7 22:24:06 -00 2025
Nov 7 23:57:17.395375 kernel: KASLR enabled
Nov 7 23:57:17.395381 kernel: efi: EFI v2.7 by EDK II
Nov 7 23:57:17.395387 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdb832018 ACPI 2.0=0xdbfd0018 RNG=0xdbfd0a18 MEMRESERVE=0xdb838218
Nov 7 23:57:17.395395 kernel: random: crng init done
Nov 7 23:57:17.395403 kernel: secureboot: Secure boot disabled
Nov 7 23:57:17.395409 kernel: ACPI: Early table checksum verification disabled
Nov 7 23:57:17.395417 kernel: ACPI: RSDP 0x00000000DBFD0018 000024 (v02 BOCHS )
Nov 7 23:57:17.395424 kernel: ACPI: XSDT 0x00000000DBFD0F18 000064 (v01 BOCHS BXPC 00000001 01000013)
Nov 7 23:57:17.395430 kernel: ACPI: FACP 0x00000000DBFD0B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Nov 7 23:57:17.395436 kernel: ACPI: DSDT 0x00000000DBF0E018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Nov 7 23:57:17.395443 kernel: ACPI: APIC 0x00000000DBFD0C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Nov 7 23:57:17.395449 kernel: ACPI: PPTT 0x00000000DBFD0098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Nov 7 23:57:17.395458 kernel: ACPI: GTDT 0x00000000DBFD0818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Nov 7 23:57:17.395466 kernel: ACPI: MCFG 0x00000000DBFD0A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 7 23:57:17.395473 kernel: ACPI: SPCR 0x00000000DBFD0918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Nov 7 23:57:17.395480 kernel: ACPI: DBG2 0x00000000DBFD0998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Nov 7 23:57:17.395487 kernel: ACPI: IORT 0x00000000DBFD0198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Nov 7 23:57:17.395494 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Nov 7 23:57:17.395501 kernel: ACPI: Use ACPI SPCR as default console: No
Nov 7 23:57:17.395507 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Nov 7 23:57:17.395522 kernel: NODE_DATA(0) allocated [mem 0xdc965a00-0xdc96cfff]
Nov 7 23:57:17.395529 kernel: Zone ranges:
Nov 7 23:57:17.395536 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Nov 7 23:57:17.395542 kernel: DMA32 empty
Nov 7 23:57:17.395548 kernel: Normal empty
Nov 7 23:57:17.395555 kernel: Device empty
Nov 7 23:57:17.395562 kernel: Movable zone start for each node
Nov 7 23:57:17.395568 kernel: Early memory node ranges
Nov 7 23:57:17.395575 kernel: node 0: [mem 0x0000000040000000-0x00000000db81ffff]
Nov 7 23:57:17.395581 kernel: node 0: [mem 0x00000000db820000-0x00000000db82ffff]
Nov 7 23:57:17.395588 kernel: node 0: [mem 0x00000000db830000-0x00000000dc09ffff]
Nov 7 23:57:17.395595 kernel: node 0: [mem 0x00000000dc0a0000-0x00000000dc2dffff]
Nov 7 23:57:17.395604 kernel: node 0: [mem 0x00000000dc2e0000-0x00000000dc36ffff]
Nov 7 23:57:17.395610 kernel: node 0: [mem 0x00000000dc370000-0x00000000dc45ffff]
Nov 7 23:57:17.395617 kernel: node 0: [mem 0x00000000dc460000-0x00000000dc52ffff]
Nov 7 23:57:17.395624 kernel: node 0: [mem 0x00000000dc530000-0x00000000dc5cffff]
Nov 7 23:57:17.395631 kernel: node 0: [mem 0x00000000dc5d0000-0x00000000dce1ffff]
Nov 7 23:57:17.395638 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Nov 7 23:57:17.395649 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Nov 7 23:57:17.395656 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Nov 7 23:57:17.395663 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Nov 7 23:57:17.395670 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Nov 7 23:57:17.395677 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Nov 7 23:57:17.395683 kernel: cma: Reserved 16 MiB at 0x00000000d8000000 on node -1
Nov 7 23:57:17.395690 kernel: psci: probing for conduit method from ACPI.
Nov 7 23:57:17.395697 kernel: psci: PSCIv1.1 detected in firmware.
Nov 7 23:57:17.395706 kernel: psci: Using standard PSCI v0.2 function IDs
Nov 7 23:57:17.395713 kernel: psci: Trusted OS migration not required
Nov 7 23:57:17.395728 kernel: psci: SMC Calling Convention v1.1
Nov 7 23:57:17.395735 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Nov 7 23:57:17.395742 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168
Nov 7 23:57:17.395749 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096
Nov 7 23:57:17.395756 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Nov 7 23:57:17.395763 kernel: Detected PIPT I-cache on CPU0
Nov 7 23:57:17.395770 kernel: CPU features: detected: GIC system register CPU interface
Nov 7 23:57:17.395777 kernel: CPU features: detected: Spectre-v4
Nov 7 23:57:17.395784 kernel: CPU features: detected: Spectre-BHB
Nov 7 23:57:17.395792 kernel: CPU features: kernel page table isolation forced ON by KASLR
Nov 7 23:57:17.395809 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Nov 7 23:57:17.395816 kernel: CPU features: detected: ARM erratum 1418040
Nov 7 23:57:17.395823 kernel: CPU features: detected: SSBS not fully self-synchronizing
Nov 7 23:57:17.395830 kernel: alternatives: applying boot alternatives
Nov 7 23:57:17.395838 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=8bfefa4d5bf8d825e537335d2d0fa0f6d70ecdd5bfc7a28e4bcd37bbf7abce90
Nov 7 23:57:17.395845 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Nov 7 23:57:17.395853 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Nov 7 23:57:17.395860 kernel: Fallback order for Node 0: 0
Nov 7 23:57:17.395867 kernel: Built 1 zonelists, mobility grouping on. Total pages: 643072
Nov 7 23:57:17.395876 kernel: Policy zone: DMA
Nov 7 23:57:17.395883 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Nov 7 23:57:17.395889 kernel: software IO TLB: SWIOTLB bounce buffer size adjusted to 2MB
Nov 7 23:57:17.395897 kernel: software IO TLB: area num 4.
Nov 7 23:57:17.395904 kernel: software IO TLB: SWIOTLB bounce buffer size roundup to 4MB
Nov 7 23:57:17.395912 kernel: software IO TLB: mapped [mem 0x00000000d7c00000-0x00000000d8000000] (4MB)
Nov 7 23:57:17.395919 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Nov 7 23:57:17.395926 kernel: rcu: Preemptible hierarchical RCU implementation.
Nov 7 23:57:17.395934 kernel: rcu: RCU event tracing is enabled.
Nov 7 23:57:17.395941 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Nov 7 23:57:17.395948 kernel: Trampoline variant of Tasks RCU enabled.
Nov 7 23:57:17.395957 kernel: Tracing variant of Tasks RCU enabled.
Nov 7 23:57:17.395964 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Nov 7 23:57:17.395971 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Nov 7 23:57:17.395978 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Nov 7 23:57:17.395985 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Nov 7 23:57:17.395993 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Nov 7 23:57:17.396000 kernel: GICv3: 256 SPIs implemented
Nov 7 23:57:17.396007 kernel: GICv3: 0 Extended SPIs implemented
Nov 7 23:57:17.396014 kernel: Root IRQ handler: gic_handle_irq
Nov 7 23:57:17.396021 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Nov 7 23:57:17.396028 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0
Nov 7 23:57:17.396037 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Nov 7 23:57:17.396044 kernel: ITS [mem 0x08080000-0x0809ffff]
Nov 7 23:57:17.396051 kernel: ITS@0x0000000008080000: allocated 8192 Devices @40110000 (indirect, esz 8, psz 64K, shr 1)
Nov 7 23:57:17.396059 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @40120000 (flat, esz 8, psz 64K, shr 1)
Nov 7 23:57:17.396066 kernel: GICv3: using LPI property table @0x0000000040130000
Nov 7 23:57:17.396073 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040140000
Nov 7 23:57:17.396079 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Nov 7 23:57:17.396086 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Nov 7 23:57:17.396093 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Nov 7 23:57:17.396100 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Nov 7 23:57:17.396108 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Nov 7 23:57:17.396117 kernel: arm-pv: using stolen time PV
Nov 7 23:57:17.396124 kernel: Console: colour dummy device 80x25
Nov 7 23:57:17.396132 kernel: ACPI: Core revision 20240827
Nov 7 23:57:17.396139 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Nov 7 23:57:17.396147 kernel: pid_max: default: 32768 minimum: 301
Nov 7 23:57:17.396154 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Nov 7 23:57:17.396161 kernel: landlock: Up and running.
Nov 7 23:57:17.396168 kernel: SELinux: Initializing.
Nov 7 23:57:17.396176 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Nov 7 23:57:17.396183 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Nov 7 23:57:17.396191 kernel: rcu: Hierarchical SRCU implementation.
Nov 7 23:57:17.396198 kernel: rcu: Max phase no-delay instances is 400.
Nov 7 23:57:17.396205 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Nov 7 23:57:17.396212 kernel: Remapping and enabling EFI services.
Nov 7 23:57:17.396220 kernel: smp: Bringing up secondary CPUs ...
Nov 7 23:57:17.396228 kernel: Detected PIPT I-cache on CPU1
Nov 7 23:57:17.396240 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Nov 7 23:57:17.396249 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040150000
Nov 7 23:57:17.396257 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Nov 7 23:57:17.396265 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Nov 7 23:57:17.396272 kernel: Detected PIPT I-cache on CPU2
Nov 7 23:57:17.396280 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Nov 7 23:57:17.396289 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040160000
Nov 7 23:57:17.396297 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Nov 7 23:57:17.396304 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Nov 7 23:57:17.396312 kernel: Detected PIPT I-cache on CPU3
Nov 7 23:57:17.396319 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Nov 7 23:57:17.396327 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040170000
Nov 7 23:57:17.396335 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Nov 7 23:57:17.396344 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Nov 7 23:57:17.396352 kernel: smp: Brought up 1 node, 4 CPUs
Nov 7 23:57:17.396359 kernel: SMP: Total of 4 processors activated.
Nov 7 23:57:17.396367 kernel: CPU: All CPU(s) started at EL1
Nov 7 23:57:17.396374 kernel: CPU features: detected: 32-bit EL0 Support
Nov 7 23:57:17.396382 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Nov 7 23:57:17.396390 kernel: CPU features: detected: Common not Private translations
Nov 7 23:57:17.396399 kernel: CPU features: detected: CRC32 instructions
Nov 7 23:57:17.396407 kernel: CPU features: detected: Enhanced Virtualization Traps
Nov 7 23:57:17.396414 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Nov 7 23:57:17.396422 kernel: CPU features: detected: LSE atomic instructions
Nov 7 23:57:17.396430 kernel: CPU features: detected: Privileged Access Never
Nov 7 23:57:17.396437 kernel: CPU features: detected: RAS Extension Support
Nov 7 23:57:17.396445 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Nov 7 23:57:17.396452 kernel: alternatives: applying system-wide alternatives
Nov 7 23:57:17.396461 kernel: CPU features: detected: Hardware dirty bit management on CPU0-3
Nov 7 23:57:17.396469 kernel: Memory: 2450272K/2572288K available (11136K kernel code, 2456K rwdata, 9084K rodata, 13120K init, 1038K bss, 99680K reserved, 16384K cma-reserved)
Nov 7 23:57:17.396477 kernel: devtmpfs: initialized
Nov 7 23:57:17.396484 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Nov 7 23:57:17.396492 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Nov 7 23:57:17.396500 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Nov 7 23:57:17.396508 kernel: 0 pages in range for non-PLT usage
Nov 7 23:57:17.396516 kernel: 515024 pages in range for PLT usage
Nov 7 23:57:17.396524 kernel: pinctrl core: initialized pinctrl subsystem
Nov 7 23:57:17.396531 kernel: SMBIOS 3.0.0 present.
Nov 7 23:57:17.396539 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022
Nov 7 23:57:17.396547 kernel: DMI: Memory slots populated: 1/1
Nov 7 23:57:17.396554 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Nov 7 23:57:17.396568 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Nov 7 23:57:17.396577 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Nov 7 23:57:17.396585 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Nov 7 23:57:17.396593 kernel: audit: initializing netlink subsys (disabled)
Nov 7 23:57:17.396601 kernel: audit: type=2000 audit(0.021:1): state=initialized audit_enabled=0 res=1
Nov 7 23:57:17.396609 kernel: thermal_sys: Registered thermal governor 'step_wise'
Nov 7 23:57:17.396617 kernel: cpuidle: using governor menu
Nov 7 23:57:17.396625 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Nov 7 23:57:17.396634 kernel: ASID allocator initialised with 32768 entries
Nov 7 23:57:17.396641 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Nov 7 23:57:17.396649 kernel: Serial: AMBA PL011 UART driver
Nov 7 23:57:17.396657 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Nov 7 23:57:17.396665 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Nov 7 23:57:17.396672 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Nov 7 23:57:17.396680 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Nov 7 23:57:17.396688 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Nov 7 23:57:17.396696 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Nov 7 23:57:17.396704 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Nov 7 23:57:17.396711 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Nov 7 23:57:17.396723 kernel: ACPI: Added _OSI(Module Device)
Nov 7 23:57:17.396731 kernel: ACPI: Added _OSI(Processor Device)
Nov 7 23:57:17.396738 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Nov 7 23:57:17.396746 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Nov 7 23:57:17.396754 kernel: ACPI: Interpreter enabled
Nov 7 23:57:17.396762 kernel: ACPI: Using GIC for interrupt routing
Nov 7 23:57:17.396770 kernel: ACPI: MCFG table detected, 1 entries
Nov 7 23:57:17.396778 kernel: ACPI: CPU0 has been hot-added
Nov 7 23:57:17.396785 kernel: ACPI: CPU1 has been hot-added
Nov 7 23:57:17.396793 kernel: ACPI: CPU2 has been hot-added
Nov 7 23:57:17.396817 kernel: ACPI: CPU3 has been hot-added
Nov 7 23:57:17.396827 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Nov 7 23:57:17.396834 kernel: printk: legacy console [ttyAMA0] enabled
Nov 7 23:57:17.396842 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Nov 7 23:57:17.397029 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Nov 7 23:57:17.397117 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Nov 7 23:57:17.397199 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Nov 7 23:57:17.397284 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Nov 7 23:57:17.397368 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Nov 7 23:57:17.397378 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Nov 7 23:57:17.397386 kernel: PCI host bridge to bus 0000:00
Nov 7 23:57:17.397477 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Nov 7 23:57:17.397571 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Nov 7 23:57:17.397660 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Nov 7 23:57:17.397742 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Nov 7 23:57:17.397863 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 conventional PCI endpoint
Nov 7 23:57:17.397962 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Nov 7 23:57:17.398048 kernel: pci 0000:00:01.0: BAR 0 [io 0x0000-0x001f]
Nov 7 23:57:17.398127 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]
Nov 7 23:57:17.398211 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]
Nov 7 23:57:17.398293 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]: assigned
Nov 7 23:57:17.398376 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]: assigned
Nov 7 23:57:17.398460 kernel: pci 0000:00:01.0: BAR 0 [io 0x1000-0x101f]: assigned
Nov 7 23:57:17.398540 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Nov 7 23:57:17.398616 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Nov 7 23:57:17.398697 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Nov 7 23:57:17.398707 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Nov 7 23:57:17.398723 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Nov 7 23:57:17.398733 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Nov 7 23:57:17.398741 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Nov 7 23:57:17.398749 kernel: iommu: Default domain type: Translated
Nov 7 23:57:17.398761 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Nov 7 23:57:17.398770 kernel: efivars: Registered efivars operations
Nov 7 23:57:17.398778 kernel: vgaarb: loaded
Nov 7 23:57:17.398786 kernel: clocksource: Switched to clocksource arch_sys_counter
Nov 7 23:57:17.398814 kernel: VFS: Disk quotas dquot_6.6.0
Nov 7 23:57:17.398825 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Nov 7 23:57:17.398841 kernel: pnp: PnP ACPI init
Nov 7 23:57:17.398957 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Nov 7 23:57:17.398969 kernel: pnp: PnP ACPI: found 1 devices
Nov 7 23:57:17.398977 kernel: NET: Registered PF_INET protocol family
Nov 7 23:57:17.398985 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Nov 7 23:57:17.398993 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Nov 7 23:57:17.399002 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Nov 7 23:57:17.399010 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Nov 7 23:57:17.399020 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Nov 7 23:57:17.399028 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Nov 7 23:57:17.399036 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Nov 7 23:57:17.399044 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Nov 7 23:57:17.399052 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Nov 7 23:57:17.399061 kernel: PCI: CLS 0 bytes, default 64
Nov 7 23:57:17.399068 kernel: kvm [1]: HYP mode not available
Nov 7 23:57:17.399078 kernel: Initialise system trusted keyrings
Nov 7 23:57:17.399086 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Nov 7 23:57:17.399094 kernel: Key type asymmetric registered
Nov 7 23:57:17.399101 kernel: Asymmetric key parser 'x509' registered
Nov 7 23:57:17.399109 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Nov 7 23:57:17.399117 kernel: io scheduler mq-deadline registered
Nov 7 23:57:17.399125 kernel: io scheduler kyber registered
Nov 7 23:57:17.399134 kernel: io scheduler bfq registered
Nov 7 23:57:17.399142 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Nov 7 23:57:17.399150 kernel: ACPI: button: Power Button [PWRB]
Nov 7 23:57:17.399159 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Nov 7 23:57:17.399248 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Nov 7 23:57:17.399259 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Nov 7 23:57:17.399267 kernel: thunder_xcv, ver 1.0
Nov 7 23:57:17.399277 kernel: thunder_bgx, ver 1.0
Nov 7 23:57:17.399285 kernel: nicpf, ver 1.0
Nov 7 23:57:17.399292 kernel: nicvf, ver 1.0
Nov 7 23:57:17.399388 kernel: rtc-efi rtc-efi.0: registered as rtc0
Nov 7 23:57:17.399474 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-11-07T23:57:16 UTC (1762559836)
Nov 7 23:57:17.399485 kernel: hid: raw HID events driver (C) Jiri Kosina
Nov 7 23:57:17.399493 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available
Nov 7 23:57:17.399503 kernel: watchdog: NMI not fully supported
Nov 7 23:57:17.399512 kernel: watchdog: Hard watchdog permanently disabled
Nov 7 23:57:17.399520 kernel: NET: Registered PF_INET6 protocol family
Nov 7 23:57:17.399527 kernel: Segment Routing with IPv6
Nov 7 23:57:17.399535 kernel: In-situ OAM (IOAM) with IPv6
Nov 7 23:57:17.399543 kernel: NET: Registered PF_PACKET protocol family
Nov 7 23:57:17.399551 kernel: Key type dns_resolver registered
Nov 7 23:57:17.399561 kernel: registered taskstats version 1
Nov 7 23:57:17.399569 kernel: Loading compiled-in X.509 certificates
Nov 7 23:57:17.399577 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.54-flatcar: ebe7e9737da4c34f192c530d79f3cb246d03fd74'
Nov 7 23:57:17.399585 kernel: Demotion targets for Node 0: null
Nov 7 23:57:17.399593 kernel: Key type .fscrypt registered
Nov 7 23:57:17.399601 kernel: Key type fscrypt-provisioning registered
Nov 7 23:57:17.399614 kernel: ima: No TPM chip found, activating TPM-bypass!
Nov 7 23:57:17.399625 kernel: ima: Allocated hash algorithm: sha1
Nov 7 23:57:17.399635 kernel: ima: No architecture policies found
Nov 7 23:57:17.399646 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Nov 7 23:57:17.399656 kernel: clk: Disabling unused clocks
Nov 7 23:57:17.399666 kernel: PM: genpd: Disabling unused power domains
Nov 7 23:57:17.399674 kernel: Freeing unused kernel memory: 13120K
Nov 7 23:57:17.399682 kernel: Run /init as init process
Nov 7 23:57:17.399691 kernel: with arguments:
Nov 7 23:57:17.399699 kernel: /init
Nov 7 23:57:17.399706 kernel: with environment:
Nov 7 23:57:17.399714 kernel: HOME=/
Nov 7 23:57:17.399729 kernel: TERM=linux
Nov 7 23:57:17.399858 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Nov 7 23:57:17.399947 kernel: virtio_blk virtio1: [vda] 27000832 512-byte logical blocks (13.8 GB/12.9 GiB)
Nov 7 23:57:17.399960 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Nov 7 23:57:17.399969 kernel: GPT:16515071 != 27000831
Nov 7 23:57:17.399977 kernel: GPT:Alternate GPT header not at the end of the disk.
Nov 7 23:57:17.399985 kernel: GPT:16515071 != 27000831
Nov 7 23:57:17.399992 kernel: GPT: Use GNU Parted to correct GPT errors.
Nov 7 23:57:17.400000 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Nov 7 23:57:17.400010 kernel: SCSI subsystem initialized
Nov 7 23:57:17.400019 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Nov 7 23:57:17.400027 kernel: device-mapper: uevent: version 1.0.3
Nov 7 23:57:17.400035 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Nov 7 23:57:17.400043 kernel: device-mapper: verity: sha256 using shash "sha256-ce"
Nov 7 23:57:17.400051 kernel: raid6: neonx8 gen() 15733 MB/s
Nov 7 23:57:17.400059 kernel: raid6: neonx4 gen() 15776 MB/s
Nov 7 23:57:17.400068 kernel: raid6: neonx2 gen() 13204 MB/s
Nov 7 23:57:17.400076 kernel: raid6: neonx1 gen() 10527 MB/s
Nov 7 23:57:17.400084 kernel: raid6: int64x8 gen() 6896 MB/s
Nov 7 23:57:17.400093 kernel: raid6: int64x4 gen() 7322 MB/s
Nov 7 23:57:17.400101 kernel: raid6: int64x2 gen() 6084 MB/s
Nov 7 23:57:17.400109 kernel: raid6: int64x1 gen() 5039 MB/s
Nov 7 23:57:17.400116 kernel: raid6: using algorithm neonx4 gen() 15776 MB/s
Nov 7 23:57:17.400124 kernel: raid6: .... xor() 12232 MB/s, rmw enabled
Nov 7 23:57:17.400134 kernel: raid6: using neon recovery algorithm
Nov 7 23:57:17.400141 kernel: xor: measuring software checksum speed
Nov 7 23:57:17.400149 kernel: 8regs : 20619 MB/sec
Nov 7 23:57:17.400157 kernel: 32regs : 21590 MB/sec
Nov 7 23:57:17.400165 kernel: arm64_neon : 27785 MB/sec
Nov 7 23:57:17.400172 kernel: xor: using function: arm64_neon (27785 MB/sec)
Nov 7 23:57:17.400180 kernel: Btrfs loaded, zoned=no, fsverity=no
Nov 7 23:57:17.400190 kernel: BTRFS: device fsid 55631b0a-1ca9-4494-9c87-5a8b2623813a devid 1 transid 38 /dev/mapper/usr (253:0) scanned by mount (206)
Nov 7 23:57:17.400198 kernel: BTRFS info (device dm-0): first mount of filesystem 55631b0a-1ca9-4494-9c87-5a8b2623813a
Nov 7 23:57:17.400207 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Nov 7 23:57:17.400216 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Nov 7 23:57:17.400224 kernel: BTRFS info (device dm-0): enabling free space tree
Nov 7 23:57:17.400231 kernel: loop: module loaded
Nov 7 23:57:17.400239 kernel: loop0: detected capacity change from 0 to 91464
Nov 7 23:57:17.400249 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Nov 7 23:57:17.400258 systemd[1]: Successfully made /usr/ read-only.
Nov 7 23:57:17.400269 systemd[1]: systemd 257.7 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Nov 7 23:57:17.400278 systemd[1]: Detected virtualization kvm.
Nov 7 23:57:17.400286 systemd[1]: Detected architecture arm64.
Nov 7 23:57:17.400294 systemd[1]: Running in initrd.
Nov 7 23:57:17.400304 systemd[1]: No hostname configured, using default hostname.
Nov 7 23:57:17.400313 systemd[1]: Hostname set to .
Nov 7 23:57:17.400321 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID.
Nov 7 23:57:17.400330 systemd[1]: Queued start job for default target initrd.target.
Nov 7 23:57:17.400339 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr.
Nov 7 23:57:17.400347 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 7 23:57:17.400357 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 7 23:57:17.400367 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Nov 7 23:57:17.400376 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Nov 7 23:57:17.400386 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Nov 7 23:57:17.400394 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Nov 7 23:57:17.400405 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 7 23:57:17.400413 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Nov 7 23:57:17.400422 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Nov 7 23:57:17.400431 systemd[1]: Reached target paths.target - Path Units.
Nov 7 23:57:17.400439 systemd[1]: Reached target slices.target - Slice Units.
Nov 7 23:57:17.400448 systemd[1]: Reached target swap.target - Swaps.
Nov 7 23:57:17.400456 systemd[1]: Reached target timers.target - Timer Units.
Nov 7 23:57:17.400466 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Nov 7 23:57:17.400475 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Nov 7 23:57:17.400484 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Nov 7 23:57:17.400493 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Nov 7 23:57:17.400508 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Nov 7 23:57:17.400521 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Nov 7 23:57:17.400534 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 7 23:57:17.400543 systemd[1]: Reached target sockets.target - Socket Units.
Nov 7 23:57:17.400552 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Nov 7 23:57:17.400561 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Nov 7 23:57:17.400570 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Nov 7 23:57:17.400578 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Nov 7 23:57:17.400588 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Nov 7 23:57:17.400597 systemd[1]: Starting systemd-fsck-usr.service...
Nov 7 23:57:17.400606 systemd[1]: Starting systemd-journald.service - Journal Service...
Nov 7 23:57:17.400615 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Nov 7 23:57:17.400624 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 7 23:57:17.400634 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Nov 7 23:57:17.400643 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 7 23:57:17.400652 systemd[1]: Finished systemd-fsck-usr.service.
Nov 7 23:57:17.400682 systemd-journald[345]: Collecting audit messages is disabled.
Nov 7 23:57:17.400704 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Nov 7 23:57:17.400714 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Nov 7 23:57:17.400731 kernel: Bridge firewalling registered
Nov 7 23:57:17.400741 systemd-journald[345]: Journal started
Nov 7 23:57:17.400762 systemd-journald[345]: Runtime Journal (/run/log/journal/7cd6f2838be141d7b96c1dfcc09ae13e) is 6M, max 48.5M, 42.4M free.
Nov 7 23:57:17.400470 systemd-modules-load[347]: Inserted module 'br_netfilter'
Nov 7 23:57:17.411031 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Nov 7 23:57:17.414707 systemd[1]: Started systemd-journald.service - Journal Service.
Nov 7 23:57:17.416911 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 7 23:57:17.419643 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Nov 7 23:57:17.423709 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 7 23:57:17.425932 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Nov 7 23:57:17.428968 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Nov 7 23:57:17.436721 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Nov 7 23:57:17.445621 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 7 23:57:17.449258 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Nov 7 23:57:17.452170 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Nov 7 23:57:17.453691 systemd-tmpfiles[371]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Nov 7 23:57:17.458200 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 7 23:57:17.459924 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 7 23:57:17.467526 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Nov 7 23:57:17.503806 dracut-cmdline[390]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=8bfefa4d5bf8d825e537335d2d0fa0f6d70ecdd5bfc7a28e4bcd37bbf7abce90
Nov 7 23:57:17.516218 systemd-resolved[384]: Positive Trust Anchors:
Nov 7 23:57:17.516236 systemd-resolved[384]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Nov 7 23:57:17.516239 systemd-resolved[384]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16
Nov 7 23:57:17.516271 systemd-resolved[384]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Nov 7 23:57:17.539834 systemd-resolved[384]: Defaulting to hostname 'linux'.
Nov 7 23:57:17.541147 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Nov 7 23:57:17.542467 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Nov 7 23:57:17.607821 kernel: Loading iSCSI transport class v2.0-870.
Nov 7 23:57:17.618816 kernel: iscsi: registered transport (tcp)
Nov 7 23:57:17.633846 kernel: iscsi: registered transport (qla4xxx)
Nov 7 23:57:17.633879 kernel: QLogic iSCSI HBA Driver
Nov 7 23:57:17.656582 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Nov 7 23:57:17.678701 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Nov 7 23:57:17.681821 systemd[1]: Reached target network-pre.target - Preparation for Network.
Nov 7 23:57:17.733199 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Nov 7 23:57:17.736201 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Nov 7 23:57:17.738167 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Nov 7 23:57:17.775019 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Nov 7 23:57:17.777828 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 7 23:57:17.814368 systemd-udevd[629]: Using default interface naming scheme 'v257'.
Nov 7 23:57:17.822732 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 7 23:57:17.827382 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Nov 7 23:57:17.855394 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Nov 7 23:57:17.859556 dracut-pre-trigger[699]: rd.md=0: removing MD RAID activation
Nov 7 23:57:17.859793 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Nov 7 23:57:17.888426 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Nov 7 23:57:17.890903 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Nov 7 23:57:17.907819 systemd-networkd[740]: lo: Link UP
Nov 7 23:57:17.907830 systemd-networkd[740]: lo: Gained carrier
Nov 7 23:57:17.908432 systemd[1]: Started systemd-networkd.service - Network Configuration.
Nov 7 23:57:17.910085 systemd[1]: Reached target network.target - Network.
Nov 7 23:57:17.950034 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 7 23:57:17.953510 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Nov 7 23:57:18.011261 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Nov 7 23:57:18.019335 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Nov 7 23:57:18.025999 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Nov 7 23:57:18.034288 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Nov 7 23:57:18.036575 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Nov 7 23:57:18.053684 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 7 23:57:18.053850 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 7 23:57:18.055199 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Nov 7 23:57:18.061262 systemd-networkd[740]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Nov 7 23:57:18.061275 systemd-networkd[740]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Nov 7 23:57:18.061397 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 7 23:57:18.070733 disk-uuid[802]: Primary Header is updated.
Nov 7 23:57:18.070733 disk-uuid[802]: Secondary Entries is updated.
Nov 7 23:57:18.070733 disk-uuid[802]: Secondary Header is updated.
Nov 7 23:57:18.061908 systemd-networkd[740]: eth0: Link UP
Nov 7 23:57:18.062164 systemd-networkd[740]: eth0: Gained carrier
Nov 7 23:57:18.062176 systemd-networkd[740]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Nov 7 23:57:18.078887 systemd-networkd[740]: eth0: DHCPv4 address 10.0.0.69/16, gateway 10.0.0.1 acquired from 10.0.0.1
Nov 7 23:57:18.102585 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 7 23:57:18.138549 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Nov 7 23:57:18.140024 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Nov 7 23:57:18.143489 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 7 23:57:18.144874 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Nov 7 23:57:18.147999 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Nov 7 23:57:18.188868 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Nov 7 23:57:18.353079 systemd-resolved[384]: Detected conflict on linux IN A 10.0.0.69
Nov 7 23:57:18.353096 systemd-resolved[384]: Hostname conflict, changing published hostname from 'linux' to 'linux8'.
Nov 7 23:57:19.104639 disk-uuid[803]: Warning: The kernel is still using the old partition table.
Nov 7 23:57:19.104639 disk-uuid[803]: The new table will be used at the next reboot or after you
Nov 7 23:57:19.104639 disk-uuid[803]: run partprobe(8) or kpartx(8)
Nov 7 23:57:19.104639 disk-uuid[803]: The operation has completed successfully.
Nov 7 23:57:19.112915 systemd-networkd[740]: eth0: Gained IPv6LL
Nov 7 23:57:19.115904 systemd[1]: disk-uuid.service: Deactivated successfully.
Nov 7 23:57:19.117071 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Nov 7 23:57:19.119400 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Nov 7 23:57:19.146054 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (832)
Nov 7 23:57:19.146111 kernel: BTRFS info (device vda6): first mount of filesystem c876c121-698c-4fc0-9477-04b409cf288e
Nov 7 23:57:19.147535 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Nov 7 23:57:19.150329 kernel: BTRFS info (device vda6): turning on async discard
Nov 7 23:57:19.150378 kernel: BTRFS info (device vda6): enabling free space tree
Nov 7 23:57:19.156812 kernel: BTRFS info (device vda6): last unmount of filesystem c876c121-698c-4fc0-9477-04b409cf288e
Nov 7 23:57:19.157047 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Nov 7 23:57:19.159540 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Nov 7 23:57:19.252382 ignition[851]: Ignition 2.22.0
Nov 7 23:57:19.252399 ignition[851]: Stage: fetch-offline
Nov 7 23:57:19.252437 ignition[851]: no configs at "/usr/lib/ignition/base.d"
Nov 7 23:57:19.252447 ignition[851]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 7 23:57:19.252527 ignition[851]: parsed url from cmdline: ""
Nov 7 23:57:19.252530 ignition[851]: no config URL provided
Nov 7 23:57:19.252534 ignition[851]: reading system config file "/usr/lib/ignition/user.ign"
Nov 7 23:57:19.252543 ignition[851]: no config at "/usr/lib/ignition/user.ign"
Nov 7 23:57:19.252584 ignition[851]: op(1): [started] loading QEMU firmware config module
Nov 7 23:57:19.252588 ignition[851]: op(1): executing: "modprobe" "qemu_fw_cfg"
Nov 7 23:57:19.258023 ignition[851]: op(1): [finished] loading QEMU firmware config module
Nov 7 23:57:19.304357 ignition[851]: parsing config with SHA512: 94ce29515dbf9973d2fe8049d1b00aaebdd7684d80de1d4744ac91b3523388faf323310ebccbb6633ed4f200ff67c3151461cae1620fe2881fea6b7ec0e97567
Nov 7 23:57:19.309534 unknown[851]: fetched base config from "system"
Nov 7 23:57:19.309546 unknown[851]: fetched user config from "qemu"
Nov 7 23:57:19.310011 ignition[851]: fetch-offline: fetch-offline passed
Nov 7 23:57:19.310072 ignition[851]: Ignition finished successfully
Nov 7 23:57:19.312315 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Nov 7 23:57:19.314247 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Nov 7 23:57:19.315127 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Nov 7 23:57:19.352963 ignition[862]: Ignition 2.22.0
Nov 7 23:57:19.352981 ignition[862]: Stage: kargs
Nov 7 23:57:19.353122 ignition[862]: no configs at "/usr/lib/ignition/base.d"
Nov 7 23:57:19.353129 ignition[862]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 7 23:57:19.353935 ignition[862]: kargs: kargs passed
Nov 7 23:57:19.353982 ignition[862]: Ignition finished successfully
Nov 7 23:57:19.358887 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Nov 7 23:57:19.361704 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Nov 7 23:57:19.393414 ignition[869]: Ignition 2.22.0
Nov 7 23:57:19.393433 ignition[869]: Stage: disks
Nov 7 23:57:19.393581 ignition[869]: no configs at "/usr/lib/ignition/base.d"
Nov 7 23:57:19.393590 ignition[869]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 7 23:57:19.396515 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Nov 7 23:57:19.394812 ignition[869]: disks: disks passed
Nov 7 23:57:19.398409 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Nov 7 23:57:19.394889 ignition[869]: Ignition finished successfully
Nov 7 23:57:19.401013 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Nov 7 23:57:19.403806 systemd[1]: Reached target local-fs.target - Local File Systems.
Nov 7 23:57:19.405685 systemd[1]: Reached target sysinit.target - System Initialization.
Nov 7 23:57:19.408027 systemd[1]: Reached target basic.target - Basic System.
Nov 7 23:57:19.410899 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Nov 7 23:57:19.451899 systemd-fsck[879]: ROOT: clean, 15/456736 files, 38230/456704 blocks
Nov 7 23:57:19.466886 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Nov 7 23:57:19.470284 systemd[1]: Mounting sysroot.mount - /sysroot...
Nov 7 23:57:19.551829 kernel: EXT4-fs (vda9): mounted filesystem 12d1c98d-1cd5-4af6-bfe4-c8600a1c2a61 r/w with ordered data mode. Quota mode: none.
Nov 7 23:57:19.552498 systemd[1]: Mounted sysroot.mount - /sysroot.
Nov 7 23:57:19.553994 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Nov 7 23:57:19.556929 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Nov 7 23:57:19.558849 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Nov 7 23:57:19.560042 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Nov 7 23:57:19.560095 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Nov 7 23:57:19.560133 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Nov 7 23:57:19.569784 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Nov 7 23:57:19.573953 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Nov 7 23:57:19.576617 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (887)
Nov 7 23:57:19.579575 kernel: BTRFS info (device vda6): first mount of filesystem c876c121-698c-4fc0-9477-04b409cf288e
Nov 7 23:57:19.579597 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Nov 7 23:57:19.582816 kernel: BTRFS info (device vda6): turning on async discard
Nov 7 23:57:19.582846 kernel: BTRFS info (device vda6): enabling free space tree
Nov 7 23:57:19.584232 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Nov 7 23:57:19.622910 initrd-setup-root[911]: cut: /sysroot/etc/passwd: No such file or directory
Nov 7 23:57:19.626655 initrd-setup-root[918]: cut: /sysroot/etc/group: No such file or directory
Nov 7 23:57:19.631823 initrd-setup-root[925]: cut: /sysroot/etc/shadow: No such file or directory
Nov 7 23:57:19.635490 initrd-setup-root[932]: cut: /sysroot/etc/gshadow: No such file or directory
Nov 7 23:57:19.721130 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Nov 7 23:57:19.723749 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Nov 7 23:57:19.725597 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Nov 7 23:57:19.744073 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Nov 7 23:57:19.749808 kernel: BTRFS info (device vda6): last unmount of filesystem c876c121-698c-4fc0-9477-04b409cf288e
Nov 7 23:57:19.761331 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Nov 7 23:57:19.782220 ignition[1001]: INFO : Ignition 2.22.0
Nov 7 23:57:19.782220 ignition[1001]: INFO : Stage: mount
Nov 7 23:57:19.784007 ignition[1001]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 7 23:57:19.784007 ignition[1001]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 7 23:57:19.784007 ignition[1001]: INFO : mount: mount passed
Nov 7 23:57:19.784007 ignition[1001]: INFO : Ignition finished successfully
Nov 7 23:57:19.786056 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Nov 7 23:57:19.788614 systemd[1]: Starting ignition-files.service - Ignition (files)...
Nov 7 23:57:20.554735 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Nov 7 23:57:20.585836 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1013)
Nov 7 23:57:20.585902 kernel: BTRFS info (device vda6): first mount of filesystem c876c121-698c-4fc0-9477-04b409cf288e
Nov 7 23:57:20.589789 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Nov 7 23:57:20.595809 kernel: BTRFS info (device vda6): turning on async discard
Nov 7 23:57:20.595892 kernel: BTRFS info (device vda6): enabling free space tree
Nov 7 23:57:20.597423 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Nov 7 23:57:20.630952 ignition[1030]: INFO : Ignition 2.22.0
Nov 7 23:57:20.630952 ignition[1030]: INFO : Stage: files
Nov 7 23:57:20.633225 ignition[1030]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 7 23:57:20.633225 ignition[1030]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 7 23:57:20.633225 ignition[1030]: DEBUG : files: compiled without relabeling support, skipping
Nov 7 23:57:20.637899 ignition[1030]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Nov 7 23:57:20.637899 ignition[1030]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Nov 7 23:57:20.641642 ignition[1030]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Nov 7 23:57:20.644912 ignition[1030]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Nov 7 23:57:20.646897 ignition[1030]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Nov 7 23:57:20.645477 unknown[1030]: wrote ssh authorized keys file for user: core
Nov 7 23:57:20.652418 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Nov 7 23:57:20.652418 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1
Nov 7 23:57:20.714742 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Nov 7 23:57:20.875895 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Nov 7 23:57:20.875895 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Nov 7 23:57:20.875895 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Nov 7 23:57:20.875895 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Nov 7 23:57:20.875895 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Nov 7 23:57:20.875895 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Nov 7 23:57:20.875895 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Nov 7 23:57:20.875895 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Nov 7 23:57:20.893926 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Nov 7 23:57:20.893926 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Nov 7 23:57:20.893926 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Nov 7 23:57:20.893926 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Nov 7 23:57:20.893926 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Nov 7 23:57:20.893926 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Nov 7 23:57:20.893926 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-arm64.raw: attempt #1
Nov 7 23:57:21.246069 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Nov 7 23:57:21.615052 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Nov 7 23:57:21.615052 ignition[1030]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Nov 7 23:57:21.618947 ignition[1030]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Nov 7 23:57:21.706520 ignition[1030]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Nov 7 23:57:21.706520 ignition[1030]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Nov 7 23:57:21.706520 ignition[1030]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Nov 7 23:57:21.706520 ignition[1030]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Nov 7 23:57:21.714093 ignition[1030]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Nov 7 23:57:21.714093 ignition[1030]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Nov 7 23:57:21.714093 ignition[1030]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Nov 7 23:57:21.726447 ignition[1030]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Nov 7 23:57:21.729710 ignition[1030]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Nov 7 23:57:21.731381 ignition[1030]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Nov 7 23:57:21.731381 ignition[1030]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Nov 7 23:57:21.731381 ignition[1030]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Nov 7 23:57:21.731381 ignition[1030]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Nov 7 23:57:21.731381 ignition[1030]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Nov 7 23:57:21.731381 ignition[1030]: INFO : files: files passed
Nov 7 23:57:21.731381 ignition[1030]: INFO : Ignition finished successfully
Nov 7 23:57:21.732092 systemd[1]: Finished ignition-files.service - Ignition (files).
Nov 7 23:57:21.735129 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Nov 7 23:57:21.737394 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Nov 7 23:57:21.752928 systemd[1]: ignition-quench.service: Deactivated successfully.
Nov 7 23:57:21.753013 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Nov 7 23:57:21.757553 initrd-setup-root-after-ignition[1061]: grep: /sysroot/oem/oem-release: No such file or directory
Nov 7 23:57:21.761551 initrd-setup-root-after-ignition[1063]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Nov 7 23:57:21.763355 initrd-setup-root-after-ignition[1067]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Nov 7 23:57:21.764969 initrd-setup-root-after-ignition[1063]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Nov 7 23:57:21.764114 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Nov 7 23:57:21.766339 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Nov 7 23:57:21.769684 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Nov 7 23:57:21.815915 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Nov 7 23:57:21.816042 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Nov 7 23:57:21.820118 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Nov 7 23:57:21.822059 systemd[1]: Reached target initrd.target - Initrd Default Target.
Nov 7 23:57:21.824241 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Nov 7 23:57:21.825129 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Nov 7 23:57:21.856056 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Nov 7 23:57:21.858665 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Nov 7 23:57:21.877117 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr.
Nov 7 23:57:21.877266 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Nov 7 23:57:21.879727 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 7 23:57:21.884340 systemd[1]: Stopped target timers.target - Timer Units. Nov 7 23:57:21.886267 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Nov 7 23:57:21.886408 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 7 23:57:21.889072 systemd[1]: Stopped target initrd.target - Initrd Default Target. Nov 7 23:57:21.891250 systemd[1]: Stopped target basic.target - Basic System. Nov 7 23:57:21.893033 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Nov 7 23:57:21.896099 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Nov 7 23:57:21.898260 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Nov 7 23:57:21.900648 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Nov 7 23:57:21.902961 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Nov 7 23:57:21.905019 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Nov 7 23:57:21.907262 systemd[1]: Stopped target sysinit.target - System Initialization. Nov 7 23:57:21.909377 systemd[1]: Stopped target local-fs.target - Local File Systems. Nov 7 23:57:21.911291 systemd[1]: Stopped target swap.target - Swaps. Nov 7 23:57:21.912914 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Nov 7 23:57:21.913053 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Nov 7 23:57:21.915556 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Nov 7 23:57:21.917764 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 7 23:57:21.919897 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Nov 7 23:57:21.920914 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 7 23:57:21.923151 systemd[1]: dracut-initqueue.service: Deactivated successfully. Nov 7 23:57:21.923285 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Nov 7 23:57:21.926381 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Nov 7 23:57:21.926508 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Nov 7 23:57:21.928713 systemd[1]: Stopped target paths.target - Path Units. Nov 7 23:57:21.930410 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Nov 7 23:57:21.935859 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 7 23:57:21.937273 systemd[1]: Stopped target slices.target - Slice Units. Nov 7 23:57:21.939513 systemd[1]: Stopped target sockets.target - Socket Units. Nov 7 23:57:21.941335 systemd[1]: iscsid.socket: Deactivated successfully. Nov 7 23:57:21.941431 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Nov 7 23:57:21.943045 systemd[1]: iscsiuio.socket: Deactivated successfully. Nov 7 23:57:21.943133 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 7 23:57:21.944782 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Nov 7 23:57:21.944923 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 7 23:57:21.946972 systemd[1]: ignition-files.service: Deactivated successfully. Nov 7 23:57:21.947081 systemd[1]: Stopped ignition-files.service - Ignition (files). Nov 7 23:57:21.950950 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... 
Nov 7 23:57:21.952638 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Nov 7 23:57:21.952791 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Nov 7 23:57:21.962430 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Nov 7 23:57:21.963379 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Nov 7 23:57:21.963520 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 7 23:57:21.965899 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Nov 7 23:57:21.966023 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Nov 7 23:57:21.968347 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Nov 7 23:57:21.968452 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Nov 7 23:57:21.975385 systemd[1]: initrd-cleanup.service: Deactivated successfully. Nov 7 23:57:21.976843 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Nov 7 23:57:21.981047 systemd[1]: sysroot-boot.mount: Deactivated successfully. Nov 7 23:57:21.982022 ignition[1088]: INFO : Ignition 2.22.0 Nov 7 23:57:21.982022 ignition[1088]: INFO : Stage: umount Nov 7 23:57:21.982022 ignition[1088]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 7 23:57:21.982022 ignition[1088]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 7 23:57:21.989832 ignition[1088]: INFO : umount: umount passed Nov 7 23:57:21.989832 ignition[1088]: INFO : Ignition finished successfully Nov 7 23:57:21.986645 systemd[1]: ignition-mount.service: Deactivated successfully. Nov 7 23:57:21.986764 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Nov 7 23:57:21.988830 systemd[1]: Stopped target network.target - Network. Nov 7 23:57:21.990710 systemd[1]: ignition-disks.service: Deactivated successfully. Nov 7 23:57:21.990838 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Nov 7 23:57:21.992869 systemd[1]: ignition-kargs.service: Deactivated successfully. Nov 7 23:57:21.992931 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Nov 7 23:57:21.994846 systemd[1]: ignition-setup.service: Deactivated successfully. Nov 7 23:57:21.994900 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Nov 7 23:57:21.996882 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Nov 7 23:57:21.996928 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Nov 7 23:57:21.999113 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Nov 7 23:57:22.001014 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Nov 7 23:57:22.007542 systemd[1]: systemd-resolved.service: Deactivated successfully. Nov 7 23:57:22.007669 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Nov 7 23:57:22.019636 systemd[1]: systemd-networkd.service: Deactivated successfully. Nov 7 23:57:22.019809 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Nov 7 23:57:22.024456 systemd[1]: sysroot-boot.service: Deactivated successfully. Nov 7 23:57:22.024577 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Nov 7 23:57:22.027133 systemd[1]: Stopped target network-pre.target - Preparation for Network. Nov 7 23:57:22.028512 systemd[1]: systemd-networkd.socket: Deactivated successfully. Nov 7 23:57:22.028553 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. 
Nov 7 23:57:22.030623 systemd[1]: initrd-setup-root.service: Deactivated successfully. Nov 7 23:57:22.030681 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Nov 7 23:57:22.033377 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Nov 7 23:57:22.034554 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Nov 7 23:57:22.034633 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 7 23:57:22.036894 systemd[1]: systemd-sysctl.service: Deactivated successfully. Nov 7 23:57:22.036943 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Nov 7 23:57:22.038905 systemd[1]: systemd-modules-load.service: Deactivated successfully. Nov 7 23:57:22.038951 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Nov 7 23:57:22.040934 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 7 23:57:22.058302 systemd[1]: systemd-udevd.service: Deactivated successfully. Nov 7 23:57:22.058460 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 7 23:57:22.061034 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Nov 7 23:57:22.061076 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Nov 7 23:57:22.063009 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Nov 7 23:57:22.063042 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Nov 7 23:57:22.064996 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Nov 7 23:57:22.065048 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Nov 7 23:57:22.067980 systemd[1]: dracut-cmdline.service: Deactivated successfully. Nov 7 23:57:22.068038 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Nov 7 23:57:22.070831 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Nov 7 23:57:22.070955 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 7 23:57:22.076427 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Nov 7 23:57:22.078593 systemd[1]: systemd-network-generator.service: Deactivated successfully. Nov 7 23:57:22.078656 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Nov 7 23:57:22.081107 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Nov 7 23:57:22.081156 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 7 23:57:22.084028 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 7 23:57:22.084084 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 7 23:57:22.087109 systemd[1]: network-cleanup.service: Deactivated successfully. Nov 7 23:57:22.091950 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Nov 7 23:57:22.097870 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Nov 7 23:57:22.098002 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Nov 7 23:57:22.100321 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Nov 7 23:57:22.102971 systemd[1]: Starting initrd-switch-root.service - Switch Root... Nov 7 23:57:22.124870 systemd[1]: Switching root. Nov 7 23:57:22.149593 systemd-journald[345]: Journal stopped Nov 7 23:57:23.109261 systemd-journald[345]: Received SIGTERM from PID 1 (systemd). 
Nov 7 23:57:23.109311 kernel: SELinux: policy capability network_peer_controls=1 Nov 7 23:57:23.109326 kernel: SELinux: policy capability open_perms=1 Nov 7 23:57:23.109338 kernel: SELinux: policy capability extended_socket_class=1 Nov 7 23:57:23.109351 kernel: SELinux: policy capability always_check_network=0 Nov 7 23:57:23.109363 kernel: SELinux: policy capability cgroup_seclabel=1 Nov 7 23:57:23.109373 kernel: SELinux: policy capability nnp_nosuid_transition=1 Nov 7 23:57:23.109384 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Nov 7 23:57:23.109393 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Nov 7 23:57:23.109403 kernel: SELinux: policy capability userspace_initial_context=0 Nov 7 23:57:23.109415 kernel: audit: type=1403 audit(1762559842.378:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Nov 7 23:57:23.109426 systemd[1]: Successfully loaded SELinux policy in 60.995ms. Nov 7 23:57:23.109444 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 5.738ms. Nov 7 23:57:23.109455 systemd[1]: systemd 257.7 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Nov 7 23:57:23.109468 systemd[1]: Detected virtualization kvm. Nov 7 23:57:23.109479 systemd[1]: Detected architecture arm64. Nov 7 23:57:23.109490 systemd[1]: Detected first boot. Nov 7 23:57:23.109500 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Nov 7 23:57:23.109511 zram_generator::config[1135]: No configuration found. Nov 7 23:57:23.109522 kernel: NET: Registered PF_VSOCK protocol family Nov 7 23:57:23.109533 systemd[1]: Populated /etc with preset unit settings. Nov 7 23:57:23.109544 systemd[1]: initrd-switch-root.service: Deactivated successfully. Nov 7 23:57:23.109554 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Nov 7 23:57:23.109565 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Nov 7 23:57:23.109576 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Nov 7 23:57:23.109587 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Nov 7 23:57:23.109597 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Nov 7 23:57:23.109611 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Nov 7 23:57:23.109622 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Nov 7 23:57:23.109632 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Nov 7 23:57:23.109643 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Nov 7 23:57:23.109653 systemd[1]: Created slice user.slice - User and Session Slice. Nov 7 23:57:23.109664 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 7 23:57:23.109675 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 7 23:57:23.109685 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Nov 7 23:57:23.109697 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. 
Nov 7 23:57:23.109708 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Nov 7 23:57:23.109728 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Nov 7 23:57:23.109741 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Nov 7 23:57:23.109751 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 7 23:57:23.109762 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 7 23:57:23.109774 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Nov 7 23:57:23.109786 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Nov 7 23:57:23.109935 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Nov 7 23:57:23.109955 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Nov 7 23:57:23.109967 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 7 23:57:23.109977 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 7 23:57:23.109988 systemd[1]: Reached target slices.target - Slice Units. Nov 7 23:57:23.110003 systemd[1]: Reached target swap.target - Swaps. Nov 7 23:57:23.110014 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Nov 7 23:57:23.110025 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Nov 7 23:57:23.110035 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Nov 7 23:57:23.110045 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Nov 7 23:57:23.110056 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 7 23:57:23.110067 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Nov 7 23:57:23.110080 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Nov 7 23:57:23.110090 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Nov 7 23:57:23.110101 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Nov 7 23:57:23.110111 systemd[1]: Mounting media.mount - External Media Directory... Nov 7 23:57:23.110122 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Nov 7 23:57:23.110134 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Nov 7 23:57:23.110145 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Nov 7 23:57:23.110157 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Nov 7 23:57:23.110168 systemd[1]: Reached target machines.target - Containers. Nov 7 23:57:23.110179 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Nov 7 23:57:23.110190 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 7 23:57:23.110200 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Nov 7 23:57:23.110211 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Nov 7 23:57:23.110222 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 7 23:57:23.110234 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... 
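
Device units such as dev-disk-by\x2dlabel-OEM.device above are systemd's escaped form of /dev/disk/by-label/OEM: "/" maps to "-", and characters not valid in a unit name (including a literal "-") are hex-escaped as \xNN. Below is a simplified sketch of that transform; the authoritative rules live in systemd.unit(5) and systemd-escape(1) and cover corner cases (empty paths, leading dots in later components) that this toy version skips.

```python
def systemd_escape_path(path: str) -> str:
    """Simplified sketch of systemd's path escaping (systemd-escape --path).

    Real systemd handles more corner cases (empty path, trailing slashes,
    non-ASCII); this covers the common case seen in the journal above.
    """
    trimmed = path.strip("/")
    out = []
    for i, ch in enumerate(trimmed):
        if ch == "/":
            out.append("-")                  # path separators become dashes
        elif ch.isalnum() or ch in ":_" or (ch == "." and i > 0):
            out.append(ch)                   # allowed verbatim in unit names
        else:
            out.append("\\x%02x" % ord(ch))  # everything else, incl. "-", is hex-escaped
    return "".join(out)

# Matches the device unit in the journal above:
assert systemd_escape_path("/dev/disk/by-label/OEM") + ".device" \
    == "dev-disk-by\\x2dlabel-OEM.device"
```
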
Nov 7 23:57:23.110245 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 7 23:57:23.110255 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Nov 7 23:57:23.110267 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 7 23:57:23.110278 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Nov 7 23:57:23.110289 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Nov 7 23:57:23.110300 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Nov 7 23:57:23.110311 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Nov 7 23:57:23.110321 systemd[1]: Stopped systemd-fsck-usr.service. Nov 7 23:57:23.110332 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 7 23:57:23.110343 kernel: fuse: init (API version 7.41) Nov 7 23:57:23.110354 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 7 23:57:23.110365 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Nov 7 23:57:23.110376 kernel: ACPI: bus type drm_connector registered Nov 7 23:57:23.110387 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Nov 7 23:57:23.110412 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Nov 7 23:57:23.110422 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Nov 7 23:57:23.110433 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Nov 7 23:57:23.110470 systemd-journald[1217]: Collecting audit messages is disabled. Nov 7 23:57:23.110494 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Nov 7 23:57:23.110505 systemd-journald[1217]: Journal started Nov 7 23:57:23.110528 systemd-journald[1217]: Runtime Journal (/run/log/journal/7cd6f2838be141d7b96c1dfcc09ae13e) is 6M, max 48.5M, 42.4M free. Nov 7 23:57:22.871149 systemd[1]: Queued start job for default target multi-user.target. Nov 7 23:57:22.891905 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Nov 7 23:57:22.892371 systemd[1]: systemd-journald.service: Deactivated successfully. Nov 7 23:57:23.113211 systemd[1]: Started systemd-journald.service - Journal Service. Nov 7 23:57:23.114163 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Nov 7 23:57:23.115618 systemd[1]: Mounted media.mount - External Media Directory. Nov 7 23:57:23.116868 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Nov 7 23:57:23.118243 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Nov 7 23:57:23.119574 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Nov 7 23:57:23.120969 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Nov 7 23:57:23.122535 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Nov 7 23:57:23.124197 systemd[1]: modprobe@configfs.service: Deactivated successfully. Nov 7 23:57:23.124365 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Nov 7 23:57:23.125961 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. 
Nov 7 23:57:23.126868 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 7 23:57:23.128333 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 7 23:57:23.128500 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 7 23:57:23.129984 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 7 23:57:23.130142 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 7 23:57:23.131887 systemd[1]: modprobe@fuse.service: Deactivated successfully. Nov 7 23:57:23.132066 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Nov 7 23:57:23.133574 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 7 23:57:23.134842 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 7 23:57:23.136584 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 7 23:57:23.138228 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Nov 7 23:57:23.141855 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Nov 7 23:57:23.143778 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Nov 7 23:57:23.156661 systemd[1]: Reached target network-pre.target - Preparation for Network. Nov 7 23:57:23.158672 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket. Nov 7 23:57:23.161203 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Nov 7 23:57:23.163404 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Nov 7 23:57:23.164733 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Nov 7 23:57:23.164774 systemd[1]: Reached target local-fs.target - Local File Systems. Nov 7 23:57:23.167146 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Nov 7 23:57:23.168677 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 7 23:57:23.177655 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Nov 7 23:57:23.180573 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Nov 7 23:57:23.182065 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 7 23:57:23.183039 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Nov 7 23:57:23.184341 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 7 23:57:23.185315 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 7 23:57:23.191418 systemd-journald[1217]: Time spent on flushing to /var/log/journal/7cd6f2838be141d7b96c1dfcc09ae13e is 12.369ms for 870 entries. Nov 7 23:57:23.191418 systemd-journald[1217]: System Journal (/var/log/journal/7cd6f2838be141d7b96c1dfcc09ae13e) is 8M, max 163.5M, 155.5M free. Nov 7 23:57:23.212380 systemd-journald[1217]: Received client request to flush runtime journal. Nov 7 23:57:23.212415 kernel: loop1: detected capacity change from 0 to 211168 Nov 7 23:57:23.191479 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... 
Nov 7 23:57:23.195571 systemd[1]: Starting systemd-sysusers.service - Create System Users... Nov 7 23:57:23.200081 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 7 23:57:23.202431 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Nov 7 23:57:23.203918 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Nov 7 23:57:23.205483 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Nov 7 23:57:23.209138 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Nov 7 23:57:23.214743 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Nov 7 23:57:23.216742 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Nov 7 23:57:23.218895 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 7 23:57:23.240823 kernel: loop2: detected capacity change from 0 to 119832 Nov 7 23:57:23.240775 systemd[1]: Finished systemd-sysusers.service - Create System Users. Nov 7 23:57:23.242618 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Nov 7 23:57:23.247569 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Nov 7 23:57:23.250253 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 7 23:57:23.261363 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Nov 7 23:57:23.273848 kernel: loop3: detected capacity change from 0 to 100624 Nov 7 23:57:23.279007 systemd-tmpfiles[1270]: ACLs are not supported, ignoring. Nov 7 23:57:23.279022 systemd-tmpfiles[1270]: ACLs are not supported, ignoring. Nov 7 23:57:23.283983 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 7 23:57:23.294129 systemd[1]: Started systemd-userdbd.service - User Database Manager. Nov 7 23:57:23.301825 kernel: loop4: detected capacity change from 0 to 211168 Nov 7 23:57:23.314843 kernel: loop5: detected capacity change from 0 to 119832 Nov 7 23:57:23.324827 kernel: loop6: detected capacity change from 0 to 100624 Nov 7 23:57:23.334835 (sd-merge)[1280]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw'. Nov 7 23:57:23.337681 (sd-merge)[1280]: Merged extensions into '/usr'. Nov 7 23:57:23.341458 systemd[1]: Reload requested from client PID 1252 ('systemd-sysext') (unit systemd-sysext.service)... Nov 7 23:57:23.341475 systemd[1]: Reloading... Nov 7 23:57:23.364846 systemd-resolved[1269]: Positive Trust Anchors: Nov 7 23:57:23.364868 systemd-resolved[1269]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 7 23:57:23.364872 systemd-resolved[1269]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Nov 7 23:57:23.364903 systemd-resolved[1269]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 7 23:57:23.372415 systemd-resolved[1269]: Defaulting to hostname 'linux'. 
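
The (sd-merge) records above are systemd-sysext at work: it found 'containerd-flatcar.raw', 'docker-flatcar.raw' and 'kubernetes.raw' and overlaid them onto /usr, the kubernetes image being reachable through the /etc/extensions symlink Ignition wrote earlier. Here is a toy discovery pass, assuming the search directories documented in systemd-sysext(8); the actual merge (an overlayfs mount over /usr and /opt) is not reproduced.

```python
from pathlib import Path

# Assumed search path, per systemd-sysext(8); precedence handling in real
# systemd is more involved than this first-name-wins toy version.
SEARCH_DIRS = ["/etc/extensions", "/run/extensions", "/var/lib/extensions"]

def discover_extensions() -> dict[str, Path]:
    """List candidate sysext images, resolving symlinks such as
    /etc/extensions/kubernetes.raw -> /opt/extensions/kubernetes/...raw."""
    images: dict[str, Path] = {}
    for directory in map(Path, SEARCH_DIRS):
        if not directory.is_dir():
            continue
        for entry in sorted(directory.glob("*.raw")):
            images.setdefault(entry.name, entry.resolve())
    return images

for name, target in discover_extensions().items():
    print(f"{name} -> {target}")
```
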
Nov 7 23:57:23.378944 zram_generator::config[1310]: No configuration found. Nov 7 23:57:23.557459 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Nov 7 23:57:23.557598 systemd[1]: Reloading finished in 215 ms. Nov 7 23:57:23.586482 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 7 23:57:23.588253 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Nov 7 23:57:23.591727 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Nov 7 23:57:23.610243 systemd[1]: Starting ensure-sysext.service... Nov 7 23:57:23.612712 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 7 23:57:23.621712 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Nov 7 23:57:23.628171 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 7 23:57:23.628583 systemd-tmpfiles[1344]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Nov 7 23:57:23.628616 systemd-tmpfiles[1344]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Nov 7 23:57:23.628918 systemd-tmpfiles[1344]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Nov 7 23:57:23.629126 systemd-tmpfiles[1344]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Nov 7 23:57:23.629786 systemd-tmpfiles[1344]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Nov 7 23:57:23.629937 systemd[1]: Reload requested from client PID 1343 ('systemctl') (unit ensure-sysext.service)... Nov 7 23:57:23.629947 systemd[1]: Reloading... Nov 7 23:57:23.630007 systemd-tmpfiles[1344]: ACLs are not supported, ignoring. Nov 7 23:57:23.630066 systemd-tmpfiles[1344]: ACLs are not supported, ignoring. Nov 7 23:57:23.634005 systemd-tmpfiles[1344]: Detected autofs mount point /boot during canonicalization of boot. Nov 7 23:57:23.634446 systemd-tmpfiles[1344]: Skipping /boot Nov 7 23:57:23.642056 systemd-tmpfiles[1344]: Detected autofs mount point /boot during canonicalization of boot. Nov 7 23:57:23.642074 systemd-tmpfiles[1344]: Skipping /boot Nov 7 23:57:23.660351 systemd-udevd[1347]: Using default interface naming scheme 'v257'. Nov 7 23:57:23.685248 zram_generator::config[1375]: No configuration found. Nov 7 23:57:23.884435 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Nov 7 23:57:23.886214 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Nov 7 23:57:23.886372 systemd[1]: Reloading finished in 256 ms. Nov 7 23:57:23.913940 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 7 23:57:23.931958 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 7 23:57:23.953064 systemd[1]: Finished ensure-sysext.service. Nov 7 23:57:23.969384 systemd[1]: Starting audit-rules.service - Load Audit Rules... Nov 7 23:57:23.971917 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Nov 7 23:57:23.973297 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 7 23:57:23.996728 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... 
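
The systemd-tmpfiles[1344] warnings above ('Duplicate line for path "/root", ignoring') reflect first-match-wins handling: once a tmpfiles.d line has claimed a path, later lines naming the same path are skipped with a warning. A toy model of that pass follows, with hypothetical file contents standing in for the real Flatcar snippets; the genuine parser also handles specifiers, globs, and /etc-over-/usr override ordering.

```python
# Toy model of systemd-tmpfiles duplicate-path handling: first line wins,
# later lines for the same path produce a warning and are ignored.
# File contents here are hypothetical stand-ins, not the real Flatcar ones.
configs = {
    "/usr/lib/tmpfiles.d/provision.conf": "d /root 0700 root root -\n",
    "/usr/lib/tmpfiles.d/extra.conf":     "d /root 0750 root root -\n",
}

seen = {}
for fname, text in configs.items():
    for lineno, raw in enumerate(text.splitlines(), start=1):
        line = raw.strip()
        if not line or line.startswith("#"):
            continue                      # skip blanks and comments
        entry_type, path = line.split()[:2]
        if path in seen:
            print(f'{fname}:{lineno}: Duplicate line for path "{path}", ignoring.')
            continue
        seen[path] = (fname, lineno, entry_type)
```
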
Nov 7 23:57:23.999497 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 7 23:57:24.001958 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 7 23:57:24.004949 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 7 23:57:24.007498 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 7 23:57:24.009215 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 7 23:57:24.014840 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Nov 7 23:57:24.016886 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 7 23:57:24.018144 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Nov 7 23:57:24.022460 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 7 23:57:24.026477 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Nov 7 23:57:24.031686 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Nov 7 23:57:24.040357 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 7 23:57:24.045115 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 7 23:57:24.045296 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 7 23:57:24.047273 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 7 23:57:24.047747 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 7 23:57:24.052983 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 7 23:57:24.053253 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 7 23:57:24.056290 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 7 23:57:24.056475 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 7 23:57:24.063858 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Nov 7 23:57:24.071116 augenrules[1489]: No rules Nov 7 23:57:24.073643 systemd[1]: audit-rules.service: Deactivated successfully. Nov 7 23:57:24.077033 systemd[1]: Finished audit-rules.service - Load Audit Rules. Nov 7 23:57:24.079121 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Nov 7 23:57:24.086775 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 7 23:57:24.087470 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 7 23:57:24.092042 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Nov 7 23:57:24.094946 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Nov 7 23:57:24.097314 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Nov 7 23:57:24.099001 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Nov 7 23:57:24.134736 systemd-networkd[1474]: lo: Link UP Nov 7 23:57:24.135115 systemd-networkd[1474]: lo: Gained carrier Nov 7 23:57:24.136246 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 7 23:57:24.136989 systemd-networkd[1474]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Nov 7 23:57:24.137061 systemd-networkd[1474]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 7 23:57:24.137846 systemd-networkd[1474]: eth0: Link UP Nov 7 23:57:24.137948 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Nov 7 23:57:24.138192 systemd-networkd[1474]: eth0: Gained carrier Nov 7 23:57:24.138263 systemd-networkd[1474]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Nov 7 23:57:24.139670 systemd[1]: Reached target network.target - Network. Nov 7 23:57:24.140955 systemd[1]: Reached target time-set.target - System Time Set. Nov 7 23:57:24.143558 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Nov 7 23:57:24.146425 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Nov 7 23:57:24.159873 systemd-networkd[1474]: eth0: DHCPv4 address 10.0.0.69/16, gateway 10.0.0.1 acquired from 10.0.0.1 Nov 7 23:57:24.161907 systemd-timesyncd[1475]: Network configuration changed, trying to establish connection. Nov 7 23:57:23.722320 systemd-resolved[1269]: Clock change detected. Flushing caches. Nov 7 23:57:23.729470 systemd-journald[1217]: Time jumped backwards, rotating. Nov 7 23:57:23.722349 systemd-timesyncd[1475]: Contacted time server 10.0.0.1:123 (10.0.0.1). Nov 7 23:57:23.722404 systemd-timesyncd[1475]: Initial clock synchronization to Fri 2025-11-07 23:57:23.722247 UTC. Nov 7 23:57:23.725718 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Nov 7 23:57:23.853877 ldconfig[1453]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Nov 7 23:57:23.858794 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Nov 7 23:57:23.861690 systemd[1]: Starting systemd-update-done.service - Update is Completed... Nov 7 23:57:23.887918 systemd[1]: Finished systemd-update-done.service - Update is Completed. Nov 7 23:57:23.889541 systemd[1]: Reached target sysinit.target - System Initialization. Nov 7 23:57:23.890900 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Nov 7 23:57:23.892467 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Nov 7 23:57:23.894093 systemd[1]: Started logrotate.timer - Daily rotation of log files. Nov 7 23:57:23.895524 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Nov 7 23:57:23.896979 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Nov 7 23:57:23.898352 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Nov 7 23:57:23.898388 systemd[1]: Reached target paths.target - Path Units. Nov 7 23:57:23.899541 systemd[1]: Reached target timers.target - Timer Units. 
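
The journal timestamps above run backwards (…24.161907, then …23.722320) because systemd-timesyncd stepped the clock after its first exchange with 10.0.0.1, which is exactly why journald logs "Time jumped backwards, rotating" and resolved flushes its caches. The size of the step can be read off the two adjacent records:

```python
from datetime import datetime

# Timestamps of the two adjacent journal records around the clock step.
before = datetime.strptime("23:57:24.161907", "%H:%M:%S.%f")
after  = datetime.strptime("23:57:23.722320", "%H:%M:%S.%f")

step = (after - before).total_seconds()
print(f"clock stepped by {step:+.6f} s")   # -> clock stepped by -0.439587 s
```
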
Nov 7 23:57:23.901938 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Nov 7 23:57:23.904622 systemd[1]: Starting docker.socket - Docker Socket for the API... Nov 7 23:57:23.908207 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Nov 7 23:57:23.909883 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Nov 7 23:57:23.911366 systemd[1]: Reached target ssh-access.target - SSH Access Available. Nov 7 23:57:23.914898 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Nov 7 23:57:23.916510 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Nov 7 23:57:23.918601 systemd[1]: Listening on docker.socket - Docker Socket for the API. Nov 7 23:57:23.919927 systemd[1]: Reached target sockets.target - Socket Units. Nov 7 23:57:23.921003 systemd[1]: Reached target basic.target - Basic System. Nov 7 23:57:23.922229 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Nov 7 23:57:23.922264 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Nov 7 23:57:23.923467 systemd[1]: Starting containerd.service - containerd container runtime... Nov 7 23:57:23.925717 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Nov 7 23:57:23.927761 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Nov 7 23:57:23.930017 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Nov 7 23:57:23.932179 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Nov 7 23:57:23.933350 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Nov 7 23:57:23.934356 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Nov 7 23:57:23.938047 jq[1524]: false Nov 7 23:57:23.938630 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Nov 7 23:57:23.941290 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Nov 7 23:57:23.944454 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Nov 7 23:57:23.949296 extend-filesystems[1525]: Found /dev/vda6 Nov 7 23:57:23.950398 systemd[1]: Starting systemd-logind.service - User Login Management... Nov 7 23:57:23.951691 extend-filesystems[1525]: Found /dev/vda9 Nov 7 23:57:23.951994 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Nov 7 23:57:23.952588 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Nov 7 23:57:23.953896 extend-filesystems[1525]: Checking size of /dev/vda9 Nov 7 23:57:23.953373 systemd[1]: Starting update-engine.service - Update Engine... Nov 7 23:57:23.960294 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Nov 7 23:57:23.966225 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Nov 7 23:57:23.967274 extend-filesystems[1525]: Resized partition /dev/vda9 Nov 7 23:57:23.969382 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. 
Nov 7 23:57:23.969612 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Nov 7 23:57:23.969903 systemd[1]: motdgen.service: Deactivated successfully. Nov 7 23:57:23.969961 extend-filesystems[1551]: resize2fs 1.47.3 (8-Jul-2025) Nov 7 23:57:23.970088 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Nov 7 23:57:23.976664 kernel: EXT4-fs (vda9): resizing filesystem from 456704 to 1784827 blocks Nov 7 23:57:23.977398 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Nov 7 23:57:23.977597 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Nov 7 23:57:23.983408 jq[1544]: true Nov 7 23:57:23.994007 (ntainerd)[1559]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Nov 7 23:57:23.996118 update_engine[1539]: I20251107 23:57:23.995846 1539 main.cc:92] Flatcar Update Engine starting Nov 7 23:57:24.016752 jq[1565]: true Nov 7 23:57:24.028156 kernel: EXT4-fs (vda9): resized filesystem to 1784827 Nov 7 23:57:24.032340 dbus-daemon[1522]: [system] SELinux support is enabled Nov 7 23:57:24.033779 systemd[1]: Started dbus.service - D-Bus System Message Bus. Nov 7 23:57:24.042502 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Nov 7 23:57:24.051284 update_engine[1539]: I20251107 23:57:24.039202 1539 update_check_scheduler.cc:74] Next update check in 8m47s Nov 7 23:57:24.051312 tar[1553]: linux-arm64/LICENSE Nov 7 23:57:24.051312 tar[1553]: linux-arm64/helm Nov 7 23:57:24.042532 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Nov 7 23:57:24.044218 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Nov 7 23:57:24.044237 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Nov 7 23:57:24.048044 systemd[1]: Started update-engine.service - Update Engine. Nov 7 23:57:24.051024 systemd-logind[1536]: Watching system buttons on /dev/input/event0 (Power Button) Nov 7 23:57:24.051387 systemd[1]: Started locksmithd.service - Cluster reboot manager. Nov 7 23:57:24.052438 systemd-logind[1536]: New seat seat0. Nov 7 23:57:24.054792 extend-filesystems[1551]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Nov 7 23:57:24.054792 extend-filesystems[1551]: old_desc_blocks = 1, new_desc_blocks = 1 Nov 7 23:57:24.054792 extend-filesystems[1551]: The filesystem on /dev/vda9 is now 1784827 (4k) blocks long. Nov 7 23:57:24.062752 extend-filesystems[1525]: Resized filesystem in /dev/vda9 Nov 7 23:57:24.057663 systemd[1]: extend-filesystems.service: Deactivated successfully. Nov 7 23:57:24.058017 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Nov 7 23:57:24.062388 systemd[1]: Started systemd-logind.service - User Login Management. Nov 7 23:57:24.066480 bash[1588]: Updated "/home/core/.ssh/authorized_keys" Nov 7 23:57:24.071262 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Nov 7 23:57:24.074077 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. 
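
The extend-filesystems/resize2fs records above grow the root filesystem on /dev/vda9 from 456704 to 1784827 blocks of 4 KiB, i.e. roughly 1.74 GiB to 6.81 GiB; the arithmetic, using figures straight from the log:

```python
BLOCK = 4096  # 4 KiB blocks, per the EXT4-fs messages above
GIB = 1024 ** 3

old_blocks, new_blocks = 456_704, 1_784_827
print(f"before: {old_blocks * BLOCK / GIB:.2f} GiB")  # ~1.74 GiB
print(f"after:  {new_blocks * BLOCK / GIB:.2f} GiB")  # ~6.81 GiB
```
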
Nov 7 23:57:24.133796 locksmithd[1589]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Nov 7 23:57:24.171143 containerd[1559]: time="2025-11-07T23:57:24Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Nov 7 23:57:24.171671 containerd[1559]: time="2025-11-07T23:57:24.171632744Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7 Nov 7 23:57:24.181144 containerd[1559]: time="2025-11-07T23:57:24.180995464Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="9.36µs" Nov 7 23:57:24.181144 containerd[1559]: time="2025-11-07T23:57:24.181045224Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Nov 7 23:57:24.181144 containerd[1559]: time="2025-11-07T23:57:24.181080704Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Nov 7 23:57:24.181304 containerd[1559]: time="2025-11-07T23:57:24.181276624Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Nov 7 23:57:24.181343 containerd[1559]: time="2025-11-07T23:57:24.181304504Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Nov 7 23:57:24.181343 containerd[1559]: time="2025-11-07T23:57:24.181336704Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Nov 7 23:57:24.182146 containerd[1559]: time="2025-11-07T23:57:24.181404824Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Nov 7 23:57:24.182146 containerd[1559]: time="2025-11-07T23:57:24.181426784Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Nov 7 23:57:24.182146 containerd[1559]: time="2025-11-07T23:57:24.181730584Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Nov 7 23:57:24.182146 containerd[1559]: time="2025-11-07T23:57:24.181749304Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Nov 7 23:57:24.182146 containerd[1559]: time="2025-11-07T23:57:24.181762504Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Nov 7 23:57:24.182146 containerd[1559]: time="2025-11-07T23:57:24.181770504Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Nov 7 23:57:24.182146 containerd[1559]: time="2025-11-07T23:57:24.181857304Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Nov 7 23:57:24.182146 containerd[1559]: time="2025-11-07T23:57:24.182047664Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Nov 7 23:57:24.182146 containerd[1559]: time="2025-11-07T23:57:24.182120784Z" level=info msg="skip loading plugin" error="lstat 
/var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Nov 7 23:57:24.182292 containerd[1559]: time="2025-11-07T23:57:24.182134824Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Nov 7 23:57:24.182292 containerd[1559]: time="2025-11-07T23:57:24.182198064Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Nov 7 23:57:24.182615 containerd[1559]: time="2025-11-07T23:57:24.182592744Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Nov 7 23:57:24.182685 containerd[1559]: time="2025-11-07T23:57:24.182667984Z" level=info msg="metadata content store policy set" policy=shared Nov 7 23:57:24.189233 containerd[1559]: time="2025-11-07T23:57:24.189190744Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Nov 7 23:57:24.189285 containerd[1559]: time="2025-11-07T23:57:24.189270024Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Nov 7 23:57:24.189308 containerd[1559]: time="2025-11-07T23:57:24.189291824Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Nov 7 23:57:24.189342 containerd[1559]: time="2025-11-07T23:57:24.189306584Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Nov 7 23:57:24.189366 containerd[1559]: time="2025-11-07T23:57:24.189345064Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Nov 7 23:57:24.189366 containerd[1559]: time="2025-11-07T23:57:24.189357144Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Nov 7 23:57:24.189411 containerd[1559]: time="2025-11-07T23:57:24.189371384Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Nov 7 23:57:24.189411 containerd[1559]: time="2025-11-07T23:57:24.189383984Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Nov 7 23:57:24.189411 containerd[1559]: time="2025-11-07T23:57:24.189397224Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Nov 7 23:57:24.189411 containerd[1559]: time="2025-11-07T23:57:24.189407384Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Nov 7 23:57:24.189474 containerd[1559]: time="2025-11-07T23:57:24.189417984Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Nov 7 23:57:24.189474 containerd[1559]: time="2025-11-07T23:57:24.189431384Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Nov 7 23:57:24.189601 containerd[1559]: time="2025-11-07T23:57:24.189580904Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Nov 7 23:57:24.189627 containerd[1559]: time="2025-11-07T23:57:24.189608864Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Nov 7 23:57:24.189627 containerd[1559]: time="2025-11-07T23:57:24.189624144Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Nov 7 23:57:24.189678 
containerd[1559]: time="2025-11-07T23:57:24.189637944Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Nov 7 23:57:24.189678 containerd[1559]: time="2025-11-07T23:57:24.189648904Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Nov 7 23:57:24.189713 containerd[1559]: time="2025-11-07T23:57:24.189683544Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Nov 7 23:57:24.189713 containerd[1559]: time="2025-11-07T23:57:24.189695904Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Nov 7 23:57:24.189713 containerd[1559]: time="2025-11-07T23:57:24.189706704Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Nov 7 23:57:24.189789 containerd[1559]: time="2025-11-07T23:57:24.189723344Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Nov 7 23:57:24.189789 containerd[1559]: time="2025-11-07T23:57:24.189735064Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Nov 7 23:57:24.189789 containerd[1559]: time="2025-11-07T23:57:24.189746104Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Nov 7 23:57:24.189997 containerd[1559]: time="2025-11-07T23:57:24.189940784Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Nov 7 23:57:24.190023 containerd[1559]: time="2025-11-07T23:57:24.189999704Z" level=info msg="Start snapshots syncer" Nov 7 23:57:24.190041 containerd[1559]: time="2025-11-07T23:57:24.190026144Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Nov 7 23:57:24.190341 containerd[1559]: time="2025-11-07T23:57:24.190293224Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Nov 7 23:57:24.190453 containerd[1559]: time="2025-11-07T23:57:24.190344384Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Nov 7 23:57:24.190453 containerd[1559]: time="2025-11-07T23:57:24.190389104Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Nov 7 23:57:24.190519 containerd[1559]: time="2025-11-07T23:57:24.190498144Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Nov 7 23:57:24.190543 containerd[1559]: time="2025-11-07T23:57:24.190530104Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Nov 7 23:57:24.190560 containerd[1559]: time="2025-11-07T23:57:24.190543784Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Nov 7 23:57:24.190560 containerd[1559]: time="2025-11-07T23:57:24.190556784Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Nov 7 23:57:24.190596 containerd[1559]: time="2025-11-07T23:57:24.190569224Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Nov 7 23:57:24.190596 containerd[1559]: time="2025-11-07T23:57:24.190580824Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Nov 7 23:57:24.190596 containerd[1559]: time="2025-11-07T23:57:24.190592024Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Nov 7 23:57:24.190641 containerd[1559]: time="2025-11-07T23:57:24.190616464Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Nov 7 23:57:24.190659 containerd[1559]: 
time="2025-11-07T23:57:24.190643904Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Nov 7 23:57:24.190659 containerd[1559]: time="2025-11-07T23:57:24.190655624Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Nov 7 23:57:24.190963 containerd[1559]: time="2025-11-07T23:57:24.190688144Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Nov 7 23:57:24.190963 containerd[1559]: time="2025-11-07T23:57:24.190708384Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Nov 7 23:57:24.190963 containerd[1559]: time="2025-11-07T23:57:24.190717824Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Nov 7 23:57:24.190963 containerd[1559]: time="2025-11-07T23:57:24.190726824Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Nov 7 23:57:24.190963 containerd[1559]: time="2025-11-07T23:57:24.190803504Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Nov 7 23:57:24.190963 containerd[1559]: time="2025-11-07T23:57:24.190817104Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Nov 7 23:57:24.190963 containerd[1559]: time="2025-11-07T23:57:24.190828744Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Nov 7 23:57:24.191107 containerd[1559]: time="2025-11-07T23:57:24.191055664Z" level=info msg="runtime interface created" Nov 7 23:57:24.191107 containerd[1559]: time="2025-11-07T23:57:24.191071304Z" level=info msg="created NRI interface" Nov 7 23:57:24.191107 containerd[1559]: time="2025-11-07T23:57:24.191080744Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Nov 7 23:57:24.191107 containerd[1559]: time="2025-11-07T23:57:24.191095024Z" level=info msg="Connect containerd service" Nov 7 23:57:24.191193 containerd[1559]: time="2025-11-07T23:57:24.191116344Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Nov 7 23:57:24.191997 containerd[1559]: time="2025-11-07T23:57:24.191956184Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 7 23:57:24.261969 containerd[1559]: time="2025-11-07T23:57:24.261519224Z" level=info msg="Start subscribing containerd event" Nov 7 23:57:24.261969 containerd[1559]: time="2025-11-07T23:57:24.261604824Z" level=info msg="Start recovering state" Nov 7 23:57:24.261969 containerd[1559]: time="2025-11-07T23:57:24.261707504Z" level=info msg="Start event monitor" Nov 7 23:57:24.261969 containerd[1559]: time="2025-11-07T23:57:24.261720144Z" level=info msg="Start cni network conf syncer for default" Nov 7 23:57:24.261969 containerd[1559]: time="2025-11-07T23:57:24.261733664Z" level=info msg="Start streaming server" Nov 7 23:57:24.261969 containerd[1559]: time="2025-11-07T23:57:24.261743144Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Nov 7 23:57:24.261969 containerd[1559]: time="2025-11-07T23:57:24.261751584Z" level=info 
msg="runtime interface starting up..." Nov 7 23:57:24.261969 containerd[1559]: time="2025-11-07T23:57:24.261757464Z" level=info msg="starting plugins..." Nov 7 23:57:24.261969 containerd[1559]: time="2025-11-07T23:57:24.261771144Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Nov 7 23:57:24.262258 containerd[1559]: time="2025-11-07T23:57:24.262005744Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Nov 7 23:57:24.262258 containerd[1559]: time="2025-11-07T23:57:24.262072504Z" level=info msg=serving... address=/run/containerd/containerd.sock Nov 7 23:57:24.264362 systemd[1]: Started containerd.service - containerd container runtime. Nov 7 23:57:24.268549 containerd[1559]: time="2025-11-07T23:57:24.268476024Z" level=info msg="containerd successfully booted in 0.097938s" Nov 7 23:57:24.353307 tar[1553]: linux-arm64/README.md Nov 7 23:57:24.372995 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Nov 7 23:57:24.587013 sshd_keygen[1557]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Nov 7 23:57:24.606849 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Nov 7 23:57:24.610025 systemd[1]: Starting issuegen.service - Generate /run/issue... Nov 7 23:57:24.629830 systemd[1]: issuegen.service: Deactivated successfully. Nov 7 23:57:24.630093 systemd[1]: Finished issuegen.service - Generate /run/issue. Nov 7 23:57:24.633025 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Nov 7 23:57:24.668375 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Nov 7 23:57:24.671464 systemd[1]: Started getty@tty1.service - Getty on tty1. Nov 7 23:57:24.673892 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Nov 7 23:57:24.675560 systemd[1]: Reached target getty.target - Login Prompts. Nov 7 23:57:25.582276 systemd-networkd[1474]: eth0: Gained IPv6LL Nov 7 23:57:25.584555 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Nov 7 23:57:25.586443 systemd[1]: Reached target network-online.target - Network is Online. Nov 7 23:57:25.589007 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Nov 7 23:57:25.591638 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 7 23:57:25.605153 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Nov 7 23:57:25.629675 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Nov 7 23:57:25.632549 systemd[1]: coreos-metadata.service: Deactivated successfully. Nov 7 23:57:25.634467 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Nov 7 23:57:25.637865 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Nov 7 23:57:26.187734 (kubelet)[1661]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 7 23:57:26.188076 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 7 23:57:26.190641 systemd[1]: Reached target multi-user.target - Multi-User System. Nov 7 23:57:26.192027 systemd[1]: Startup finished in 1.247s (kernel) + 5.252s (initrd) + 4.316s (userspace) = 10.816s. 
Nov 7 23:57:26.567827 kubelet[1661]: E1107 23:57:26.567769 1661 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 7 23:57:26.570284 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 7 23:57:26.571413 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 7 23:57:26.571779 systemd[1]: kubelet.service: Consumed 769ms CPU time, 259.5M memory peak. Nov 7 23:57:28.599802 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Nov 7 23:57:28.601118 systemd[1]: Started sshd@0-10.0.0.69:22-10.0.0.1:56050.service - OpenSSH per-connection server daemon (10.0.0.1:56050). Nov 7 23:57:28.665049 sshd[1675]: Accepted publickey for core from 10.0.0.1 port 56050 ssh2: RSA SHA256:FAVExuDlYq3gF2W1zNPEB/OEHrl6bpWJ51XPtNkFj+Y Nov 7 23:57:28.666981 sshd-session[1675]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 7 23:57:28.679337 systemd-logind[1536]: New session 1 of user core. Nov 7 23:57:28.680268 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Nov 7 23:57:28.681279 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Nov 7 23:57:28.708371 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Nov 7 23:57:28.710962 systemd[1]: Starting user@500.service - User Manager for UID 500... Nov 7 23:57:28.730804 (systemd)[1680]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Nov 7 23:57:28.733694 systemd-logind[1536]: New session c1 of user core. Nov 7 23:57:28.845805 systemd[1680]: Queued start job for default target default.target. Nov 7 23:57:28.865184 systemd[1680]: Created slice app.slice - User Application Slice. Nov 7 23:57:28.865230 systemd[1680]: Reached target paths.target - Paths. Nov 7 23:57:28.865270 systemd[1680]: Reached target timers.target - Timers. Nov 7 23:57:28.866468 systemd[1680]: Starting dbus.socket - D-Bus User Message Bus Socket... Nov 7 23:57:28.877211 systemd[1680]: Listening on dbus.socket - D-Bus User Message Bus Socket. Nov 7 23:57:28.877368 systemd[1680]: Reached target sockets.target - Sockets. Nov 7 23:57:28.877416 systemd[1680]: Reached target basic.target - Basic System. Nov 7 23:57:28.877444 systemd[1680]: Reached target default.target - Main User Target. Nov 7 23:57:28.877470 systemd[1680]: Startup finished in 137ms. Nov 7 23:57:28.877674 systemd[1]: Started user@500.service - User Manager for UID 500. Nov 7 23:57:28.879298 systemd[1]: Started session-1.scope - Session 1 of User core. Nov 7 23:57:28.946665 systemd[1]: Started sshd@1-10.0.0.69:22-10.0.0.1:56052.service - OpenSSH per-connection server daemon (10.0.0.1:56052). Nov 7 23:57:29.009498 sshd[1691]: Accepted publickey for core from 10.0.0.1 port 56052 ssh2: RSA SHA256:FAVExuDlYq3gF2W1zNPEB/OEHrl6bpWJ51XPtNkFj+Y Nov 7 23:57:29.010995 sshd-session[1691]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 7 23:57:29.015201 systemd-logind[1536]: New session 2 of user core. Nov 7 23:57:29.025381 systemd[1]: Started session-2.scope - Session 2 of User core. 
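The kubelet exit above is expected on first boot: /var/lib/kubelet/config.yaml is only written later (on kubeadm-provisioned nodes, by kubeadm init/join), so the unit fails and systemd keeps restarting it. A trivial Go sketch that reproduces the same file check the startup fails on, purely for illustration:

```go
package main

import (
	"errors"
	"fmt"
	"io/fs"
	"os"
)

func main() {
	// The path from the "command failed" error above.
	const path = "/var/lib/kubelet/config.yaml"

	_, err := os.Stat(path)
	switch {
	case err == nil:
		fmt.Println(path, "exists; kubelet should be able to load its config")
	case errors.Is(err, fs.ErrNotExist):
		// Matches "open ...: no such file or directory" in the log.
		fmt.Println(path, "missing; expect the kubelet restart loop")
	default:
		fmt.Println("stat failed:", err)
	}
}
```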
Nov 7 23:57:29.078093 sshd[1694]: Connection closed by 10.0.0.1 port 56052 Nov 7 23:57:29.080228 sshd-session[1691]: pam_unix(sshd:session): session closed for user core Nov 7 23:57:29.088646 systemd[1]: sshd@1-10.0.0.69:22-10.0.0.1:56052.service: Deactivated successfully. Nov 7 23:57:29.091624 systemd[1]: session-2.scope: Deactivated successfully. Nov 7 23:57:29.092453 systemd-logind[1536]: Session 2 logged out. Waiting for processes to exit. Nov 7 23:57:29.094896 systemd[1]: Started sshd@2-10.0.0.69:22-10.0.0.1:56068.service - OpenSSH per-connection server daemon (10.0.0.1:56068). Nov 7 23:57:29.096223 systemd-logind[1536]: Removed session 2. Nov 7 23:57:29.164946 sshd[1700]: Accepted publickey for core from 10.0.0.1 port 56068 ssh2: RSA SHA256:FAVExuDlYq3gF2W1zNPEB/OEHrl6bpWJ51XPtNkFj+Y Nov 7 23:57:29.166411 sshd-session[1700]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 7 23:57:29.172112 systemd-logind[1536]: New session 3 of user core. Nov 7 23:57:29.176331 systemd[1]: Started session-3.scope - Session 3 of User core. Nov 7 23:57:29.225240 sshd[1703]: Connection closed by 10.0.0.1 port 56068 Nov 7 23:57:29.226365 sshd-session[1700]: pam_unix(sshd:session): session closed for user core Nov 7 23:57:29.247020 systemd[1]: sshd@2-10.0.0.69:22-10.0.0.1:56068.service: Deactivated successfully. Nov 7 23:57:29.250740 systemd[1]: session-3.scope: Deactivated successfully. Nov 7 23:57:29.251696 systemd-logind[1536]: Session 3 logged out. Waiting for processes to exit. Nov 7 23:57:29.254408 systemd[1]: Started sshd@3-10.0.0.69:22-10.0.0.1:49940.service - OpenSSH per-connection server daemon (10.0.0.1:49940). Nov 7 23:57:29.256774 systemd-logind[1536]: Removed session 3. Nov 7 23:57:29.312345 sshd[1709]: Accepted publickey for core from 10.0.0.1 port 49940 ssh2: RSA SHA256:FAVExuDlYq3gF2W1zNPEB/OEHrl6bpWJ51XPtNkFj+Y Nov 7 23:57:29.313545 sshd-session[1709]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 7 23:57:29.319028 systemd-logind[1536]: New session 4 of user core. Nov 7 23:57:29.329372 systemd[1]: Started session-4.scope - Session 4 of User core. Nov 7 23:57:29.384281 sshd[1713]: Connection closed by 10.0.0.1 port 49940 Nov 7 23:57:29.384643 sshd-session[1709]: pam_unix(sshd:session): session closed for user core Nov 7 23:57:29.405002 systemd[1]: sshd@3-10.0.0.69:22-10.0.0.1:49940.service: Deactivated successfully. Nov 7 23:57:29.406812 systemd[1]: session-4.scope: Deactivated successfully. Nov 7 23:57:29.408834 systemd-logind[1536]: Session 4 logged out. Waiting for processes to exit. Nov 7 23:57:29.411834 systemd[1]: Started sshd@4-10.0.0.69:22-10.0.0.1:49942.service - OpenSSH per-connection server daemon (10.0.0.1:49942). Nov 7 23:57:29.415264 systemd-logind[1536]: Removed session 4. Nov 7 23:57:29.492409 sshd[1719]: Accepted publickey for core from 10.0.0.1 port 49942 ssh2: RSA SHA256:FAVExuDlYq3gF2W1zNPEB/OEHrl6bpWJ51XPtNkFj+Y Nov 7 23:57:29.496734 sshd-session[1719]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 7 23:57:29.502428 systemd-logind[1536]: New session 5 of user core. Nov 7 23:57:29.516383 systemd[1]: Started session-5.scope - Session 5 of User core. 
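Each accepted login above is recorded with the client key's SHA256 fingerprint. A short sketch, assuming golang.org/x/crypto/ssh and an example key path, that prints the same "SHA256:..." format sshd logs:

```go
package main

import (
	"fmt"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Example path; any authorized_keys-style public key line works.
	raw, err := os.ReadFile(os.ExpandEnv("$HOME/.ssh/id_rsa.pub"))
	if err != nil {
		log.Fatal(err)
	}
	pub, _, _, _, err := ssh.ParseAuthorizedKey(raw)
	if err != nil {
		log.Fatal(err)
	}
	// Same format as "Accepted publickey ... SHA256:..." above.
	fmt.Println(ssh.FingerprintSHA256(pub))
}
```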
Nov 7 23:57:29.578935 sudo[1723]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Nov 7 23:57:29.579264 sudo[1723]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 7 23:57:29.599678 sudo[1723]: pam_unix(sudo:session): session closed for user root Nov 7 23:57:29.601657 sshd[1722]: Connection closed by 10.0.0.1 port 49942 Nov 7 23:57:29.603410 sshd-session[1719]: pam_unix(sshd:session): session closed for user core Nov 7 23:57:29.622881 systemd[1]: sshd@4-10.0.0.69:22-10.0.0.1:49942.service: Deactivated successfully. Nov 7 23:57:29.625038 systemd[1]: session-5.scope: Deactivated successfully. Nov 7 23:57:29.626020 systemd-logind[1536]: Session 5 logged out. Waiting for processes to exit. Nov 7 23:57:29.629985 systemd[1]: Started sshd@5-10.0.0.69:22-10.0.0.1:49946.service - OpenSSH per-connection server daemon (10.0.0.1:49946). Nov 7 23:57:29.630663 systemd-logind[1536]: Removed session 5. Nov 7 23:57:29.703528 sshd[1729]: Accepted publickey for core from 10.0.0.1 port 49946 ssh2: RSA SHA256:FAVExuDlYq3gF2W1zNPEB/OEHrl6bpWJ51XPtNkFj+Y Nov 7 23:57:29.704964 sshd-session[1729]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 7 23:57:29.710218 systemd-logind[1536]: New session 6 of user core. Nov 7 23:57:29.724458 systemd[1]: Started session-6.scope - Session 6 of User core. Nov 7 23:57:29.777267 sudo[1734]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Nov 7 23:57:29.777529 sudo[1734]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 7 23:57:29.782678 sudo[1734]: pam_unix(sudo:session): session closed for user root Nov 7 23:57:29.788818 sudo[1733]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Nov 7 23:57:29.789389 sudo[1733]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 7 23:57:29.802896 systemd[1]: Starting audit-rules.service - Load Audit Rules... Nov 7 23:57:29.856753 augenrules[1756]: No rules Nov 7 23:57:29.858316 systemd[1]: audit-rules.service: Deactivated successfully. Nov 7 23:57:29.858538 systemd[1]: Finished audit-rules.service - Load Audit Rules. Nov 7 23:57:29.860368 sudo[1733]: pam_unix(sudo:session): session closed for user root Nov 7 23:57:29.862369 sshd[1732]: Connection closed by 10.0.0.1 port 49946 Nov 7 23:57:29.862745 sshd-session[1729]: pam_unix(sshd:session): session closed for user core Nov 7 23:57:29.874670 systemd[1]: sshd@5-10.0.0.69:22-10.0.0.1:49946.service: Deactivated successfully. Nov 7 23:57:29.876834 systemd[1]: session-6.scope: Deactivated successfully. Nov 7 23:57:29.877691 systemd-logind[1536]: Session 6 logged out. Waiting for processes to exit. Nov 7 23:57:29.879827 systemd[1]: Started sshd@6-10.0.0.69:22-10.0.0.1:49952.service - OpenSSH per-connection server daemon (10.0.0.1:49952). Nov 7 23:57:29.880685 systemd-logind[1536]: Removed session 6. Nov 7 23:57:29.952965 sshd[1765]: Accepted publickey for core from 10.0.0.1 port 49952 ssh2: RSA SHA256:FAVExuDlYq3gF2W1zNPEB/OEHrl6bpWJ51XPtNkFj+Y Nov 7 23:57:29.954641 sshd-session[1765]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 7 23:57:29.960438 systemd-logind[1536]: New session 7 of user core. Nov 7 23:57:29.970362 systemd[1]: Started session-7.scope - Session 7 of User core. 
Nov 7 23:57:30.027979 sudo[1769]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Nov 7 23:57:30.028312 sudo[1769]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 7 23:57:30.336972 systemd[1]: Starting docker.service - Docker Application Container Engine... Nov 7 23:57:30.356472 (dockerd)[1789]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Nov 7 23:57:30.588292 dockerd[1789]: time="2025-11-07T23:57:30.586384104Z" level=info msg="Starting up" Nov 7 23:57:30.588981 dockerd[1789]: time="2025-11-07T23:57:30.588945104Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Nov 7 23:57:30.601921 dockerd[1789]: time="2025-11-07T23:57:30.601869624Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Nov 7 23:57:30.622974 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport4003365688-merged.mount: Deactivated successfully. Nov 7 23:57:30.766888 dockerd[1789]: time="2025-11-07T23:57:30.766833904Z" level=info msg="Loading containers: start." Nov 7 23:57:30.779179 kernel: Initializing XFRM netlink socket Nov 7 23:57:31.065135 systemd-networkd[1474]: docker0: Link UP Nov 7 23:57:31.069053 dockerd[1789]: time="2025-11-07T23:57:31.068945744Z" level=info msg="Loading containers: done." Nov 7 23:57:31.086316 dockerd[1789]: time="2025-11-07T23:57:31.086263184Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Nov 7 23:57:31.086460 dockerd[1789]: time="2025-11-07T23:57:31.086358744Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Nov 7 23:57:31.086460 dockerd[1789]: time="2025-11-07T23:57:31.086446744Z" level=info msg="Initializing buildkit" Nov 7 23:57:31.113601 dockerd[1789]: time="2025-11-07T23:57:31.113561464Z" level=info msg="Completed buildkit initialization" Nov 7 23:57:31.118942 dockerd[1789]: time="2025-11-07T23:57:31.118885984Z" level=info msg="Daemon has completed initialization" Nov 7 23:57:31.119561 dockerd[1789]: time="2025-11-07T23:57:31.118950504Z" level=info msg="API listen on /run/docker.sock" Nov 7 23:57:31.119221 systemd[1]: Started docker.service - Docker Application Container Engine. Nov 7 23:57:31.617362 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1729928151-merged.mount: Deactivated successfully. Nov 7 23:57:31.966194 containerd[1559]: time="2025-11-07T23:57:31.966053544Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\"" Nov 7 23:57:32.587466 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2647966336.mount: Deactivated successfully. 
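Once dockerd reports "API listen on /run/docker.sock", the daemon can be reached with the official Go SDK. A minimal sketch, assuming github.com/docker/docker/client; FromEnv falls back to the default unix socket when DOCKER_HOST is unset:

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/docker/docker/client"
)

func main() {
	// Version negotiation avoids client/daemon API-version mismatches.
	cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
	if err != nil {
		log.Fatal(err)
	}
	defer cli.Close()

	ping, err := cli.Ping(context.Background())
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("docker API version:", ping.APIVersion)
}
```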
Nov 7 23:57:33.763640 containerd[1559]: time="2025-11-07T23:57:33.763565304Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 7 23:57:33.764187 containerd[1559]: time="2025-11-07T23:57:33.764155384Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.5: active requests=0, bytes read=27390230" Nov 7 23:57:33.765301 containerd[1559]: time="2025-11-07T23:57:33.765263864Z" level=info msg="ImageCreate event name:\"sha256:6a7fd297b49102b08dc3d8d4fd7f1538bcf21d3131eae8bf62ba26ce3283237f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 7 23:57:33.767855 containerd[1559]: time="2025-11-07T23:57:33.767796904Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 7 23:57:33.769070 containerd[1559]: time="2025-11-07T23:57:33.768881544Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.5\" with image id \"sha256:6a7fd297b49102b08dc3d8d4fd7f1538bcf21d3131eae8bf62ba26ce3283237f\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\", size \"27386827\" in 1.80276556s" Nov 7 23:57:33.769070 containerd[1559]: time="2025-11-07T23:57:33.768919304Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\" returns image reference \"sha256:6a7fd297b49102b08dc3d8d4fd7f1538bcf21d3131eae8bf62ba26ce3283237f\"" Nov 7 23:57:33.770233 containerd[1559]: time="2025-11-07T23:57:33.770207664Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\"" Nov 7 23:57:35.831194 containerd[1559]: time="2025-11-07T23:57:35.831105664Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 7 23:57:35.831760 containerd[1559]: time="2025-11-07T23:57:35.831724824Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.5: active requests=0, bytes read=23547919" Nov 7 23:57:35.832782 containerd[1559]: time="2025-11-07T23:57:35.832746344Z" level=info msg="ImageCreate event name:\"sha256:2dd4c25a937008b7b8a6cdca70d816403b5078b51550926721b7a7762139cd23\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 7 23:57:35.835928 containerd[1559]: time="2025-11-07T23:57:35.835886704Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 7 23:57:35.837427 containerd[1559]: time="2025-11-07T23:57:35.837385384Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.5\" with image id \"sha256:2dd4c25a937008b7b8a6cdca70d816403b5078b51550926721b7a7762139cd23\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\", size \"25135832\" in 2.06714332s" Nov 7 23:57:35.837463 containerd[1559]: time="2025-11-07T23:57:35.837425424Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\" returns image reference \"sha256:2dd4c25a937008b7b8a6cdca70d816403b5078b51550926721b7a7762139cd23\"" Nov 7 23:57:35.838011 containerd[1559]: 
time="2025-11-07T23:57:35.837811144Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\"" Nov 7 23:57:36.821039 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Nov 7 23:57:36.822453 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 7 23:57:37.003750 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 7 23:57:37.008407 (kubelet)[2083]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 7 23:57:37.152564 containerd[1559]: time="2025-11-07T23:57:37.152257744Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 7 23:57:37.154361 containerd[1559]: time="2025-11-07T23:57:37.154046144Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.5: active requests=0, bytes read=18295979" Nov 7 23:57:37.155720 containerd[1559]: time="2025-11-07T23:57:37.155684304Z" level=info msg="ImageCreate event name:\"sha256:5e600beaed8620718e0650dd2721266869ce1d737488c004a869333273e6ec15\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 7 23:57:37.160582 containerd[1559]: time="2025-11-07T23:57:37.160542864Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 7 23:57:37.162118 containerd[1559]: time="2025-11-07T23:57:37.161808824Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.5\" with image id \"sha256:5e600beaed8620718e0650dd2721266869ce1d737488c004a869333273e6ec15\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\", size \"19883910\" in 1.3239668s" Nov 7 23:57:37.162118 containerd[1559]: time="2025-11-07T23:57:37.161849744Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\" returns image reference \"sha256:5e600beaed8620718e0650dd2721266869ce1d737488c004a869333273e6ec15\"" Nov 7 23:57:37.162498 containerd[1559]: time="2025-11-07T23:57:37.162431504Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\"" Nov 7 23:57:37.163044 kubelet[2083]: E1107 23:57:37.162995 2083 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 7 23:57:37.166116 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 7 23:57:37.166263 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 7 23:57:37.166784 systemd[1]: kubelet.service: Consumed 156ms CPU time, 106.9M memory peak. Nov 7 23:57:38.188482 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount675002268.mount: Deactivated successfully. 
Nov 7 23:57:38.627168 containerd[1559]: time="2025-11-07T23:57:38.626706864Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 7 23:57:38.628182 containerd[1559]: time="2025-11-07T23:57:38.628133784Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.5: active requests=0, bytes read=28240108" Nov 7 23:57:38.629805 containerd[1559]: time="2025-11-07T23:57:38.629730584Z" level=info msg="ImageCreate event name:\"sha256:021a8d45ab0c346664e47d95595ff5180ce90a22a681ea27904c65ae90788e70\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 7 23:57:38.633367 containerd[1559]: time="2025-11-07T23:57:38.633332904Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 7 23:57:38.633999 containerd[1559]: time="2025-11-07T23:57:38.633882544Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.5\" with image id \"sha256:021a8d45ab0c346664e47d95595ff5180ce90a22a681ea27904c65ae90788e70\", repo tag \"registry.k8s.io/kube-proxy:v1.33.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\", size \"28239125\" in 1.47137708s" Nov 7 23:57:38.633999 containerd[1559]: time="2025-11-07T23:57:38.633916304Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\" returns image reference \"sha256:021a8d45ab0c346664e47d95595ff5180ce90a22a681ea27904c65ae90788e70\"" Nov 7 23:57:38.635282 containerd[1559]: time="2025-11-07T23:57:38.635254824Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Nov 7 23:57:39.146999 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3735894313.mount: Deactivated successfully. 
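To verify what these pulls left in the image store, the same client can enumerate it; ListImages accepts optional filter strings, omitted in this sketch:

```go
package main

import (
	"context"
	"fmt"
	"log"

	containerd "github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	// Should list the refs pulled above, e.g. registry.k8s.io/kube-proxy:v1.33.5.
	imgs, err := client.ListImages(ctx)
	if err != nil {
		log.Fatal(err)
	}
	for _, img := range imgs {
		fmt.Println(img.Name())
	}
}
```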
Nov 7 23:57:40.030180 containerd[1559]: time="2025-11-07T23:57:40.030100984Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 7 23:57:40.030927 containerd[1559]: time="2025-11-07T23:57:40.030868504Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=19152119" Nov 7 23:57:40.031907 containerd[1559]: time="2025-11-07T23:57:40.031877744Z" level=info msg="ImageCreate event name:\"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 7 23:57:40.035612 containerd[1559]: time="2025-11-07T23:57:40.035578624Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 7 23:57:40.037698 containerd[1559]: time="2025-11-07T23:57:40.037667944Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"19148915\" in 1.40237776s" Nov 7 23:57:40.037805 containerd[1559]: time="2025-11-07T23:57:40.037790744Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\"" Nov 7 23:57:40.038417 containerd[1559]: time="2025-11-07T23:57:40.038382624Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Nov 7 23:57:40.480425 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2475481012.mount: Deactivated successfully. 
Nov 7 23:57:40.491864 containerd[1559]: time="2025-11-07T23:57:40.491788064Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 7 23:57:40.494213 containerd[1559]: time="2025-11-07T23:57:40.494171064Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705" Nov 7 23:57:40.494545 containerd[1559]: time="2025-11-07T23:57:40.494500504Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 7 23:57:40.496863 containerd[1559]: time="2025-11-07T23:57:40.496819904Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 7 23:57:40.498530 containerd[1559]: time="2025-11-07T23:57:40.498460944Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 460.05128ms" Nov 7 23:57:40.498530 containerd[1559]: time="2025-11-07T23:57:40.498493984Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Nov 7 23:57:40.499036 containerd[1559]: time="2025-11-07T23:57:40.499006984Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Nov 7 23:57:40.973476 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2765788626.mount: Deactivated successfully. 
Nov 7 23:57:43.130981 containerd[1559]: time="2025-11-07T23:57:43.130909384Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 7 23:57:43.139222 containerd[1559]: time="2025-11-07T23:57:43.139167304Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=69465859" Nov 7 23:57:43.140650 containerd[1559]: time="2025-11-07T23:57:43.140593704Z" level=info msg="ImageCreate event name:\"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 7 23:57:43.144180 containerd[1559]: time="2025-11-07T23:57:43.144117144Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 7 23:57:43.144831 containerd[1559]: time="2025-11-07T23:57:43.144781024Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"70026017\" in 2.64574752s" Nov 7 23:57:43.144831 containerd[1559]: time="2025-11-07T23:57:43.144812824Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\"" Nov 7 23:57:47.416678 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Nov 7 23:57:47.418796 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 7 23:57:47.611211 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 7 23:57:47.627451 (kubelet)[2244]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 7 23:57:47.671256 kubelet[2244]: E1107 23:57:47.671105 2244 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 7 23:57:47.673896 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 7 23:57:47.674046 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 7 23:57:47.674649 systemd[1]: kubelet.service: Consumed 142ms CPU time, 109.3M memory peak. Nov 7 23:57:49.120510 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 7 23:57:49.120790 systemd[1]: kubelet.service: Consumed 142ms CPU time, 109.3M memory peak. Nov 7 23:57:49.123181 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 7 23:57:49.148066 systemd[1]: Reload requested from client PID 2261 ('systemctl') (unit session-7.scope)... Nov 7 23:57:49.148083 systemd[1]: Reloading... Nov 7 23:57:49.225165 zram_generator::config[2308]: No configuration found. Nov 7 23:57:49.484164 systemd[1]: Reloading finished in 335 ms. Nov 7 23:57:49.557812 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Nov 7 23:57:49.557907 systemd[1]: kubelet.service: Failed with result 'signal'. 
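kubelet.service is now visibly in a restart loop (restart counter 1, then 2), and the Reload that follows comes from the systemctl invocation in session-7.scope. The unit state systemd tracks here can be read over D-Bus; a sketch assuming github.com/coreos/go-systemd/v22/dbus:

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/coreos/go-systemd/v22/dbus"
)

func main() {
	ctx := context.Background()

	conn, err := dbus.NewWithContext(ctx) // connects to systemd's D-Bus API
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	props, err := conn.GetUnitPropertiesContext(ctx, "kubelet.service")
	if err != nil {
		log.Fatal(err)
	}
	// While the loop above is running this reads e.g. "activating auto-restart".
	fmt.Println(props["ActiveState"], props["SubState"])
}
```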
Nov 7 23:57:49.558232 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 7 23:57:49.558282 systemd[1]: kubelet.service: Consumed 92ms CPU time, 95.1M memory peak. Nov 7 23:57:49.560290 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 7 23:57:49.687219 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 7 23:57:49.695170 (kubelet)[2350]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 7 23:57:49.738090 kubelet[2350]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 7 23:57:49.738090 kubelet[2350]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 7 23:57:49.738090 kubelet[2350]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 7 23:57:49.738090 kubelet[2350]: I1107 23:57:49.738068 2350 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 7 23:57:50.658133 kubelet[2350]: I1107 23:57:50.656595 2350 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Nov 7 23:57:50.658133 kubelet[2350]: I1107 23:57:50.656626 2350 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 7 23:57:50.658133 kubelet[2350]: I1107 23:57:50.656838 2350 server.go:956] "Client rotation is on, will bootstrap in background" Nov 7 23:57:50.689380 kubelet[2350]: E1107 23:57:50.689323 2350 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.69:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.69:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Nov 7 23:57:50.690864 kubelet[2350]: I1107 23:57:50.690821 2350 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 7 23:57:50.699695 kubelet[2350]: I1107 23:57:50.699667 2350 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Nov 7 23:57:50.702333 kubelet[2350]: I1107 23:57:50.702300 2350 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Nov 7 23:57:50.703588 kubelet[2350]: I1107 23:57:50.703504 2350 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 7 23:57:50.703804 kubelet[2350]: I1107 23:57:50.703583 2350 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 7 23:57:50.703903 kubelet[2350]: I1107 23:57:50.703875 2350 topology_manager.go:138] "Creating topology manager with none policy" Nov 7 23:57:50.703903 kubelet[2350]: I1107 23:57:50.703886 2350 container_manager_linux.go:303] "Creating device plugin manager" Nov 7 23:57:50.704203 kubelet[2350]: I1107 23:57:50.704129 2350 state_mem.go:36] "Initialized new in-memory state store" Nov 7 23:57:50.707134 kubelet[2350]: I1107 23:57:50.707098 2350 kubelet.go:480] "Attempting to sync node with API server" Nov 7 23:57:50.707134 kubelet[2350]: I1107 23:57:50.707130 2350 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 7 23:57:50.707218 kubelet[2350]: I1107 23:57:50.707212 2350 kubelet.go:386] "Adding apiserver pod source" Nov 7 23:57:50.710841 kubelet[2350]: I1107 23:57:50.708531 2350 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 7 23:57:50.710841 kubelet[2350]: I1107 23:57:50.709852 2350 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Nov 7 23:57:50.710841 kubelet[2350]: I1107 23:57:50.710757 2350 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Nov 7 23:57:50.710841 kubelet[2350]: E1107 23:57:50.710792 2350 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.69:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.69:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Nov 7 
23:57:50.711012 kubelet[2350]: W1107 23:57:50.710938 2350 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Nov 7 23:57:50.711116 kubelet[2350]: E1107 23:57:50.711088 2350 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.69:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.69:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Nov 7 23:57:50.713653 kubelet[2350]: I1107 23:57:50.713626 2350 watchdog_linux.go:99] "Systemd watchdog is not enabled" Nov 7 23:57:50.713703 kubelet[2350]: I1107 23:57:50.713672 2350 server.go:1289] "Started kubelet" Nov 7 23:57:50.714658 kubelet[2350]: I1107 23:57:50.714568 2350 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Nov 7 23:57:50.716935 kubelet[2350]: I1107 23:57:50.716762 2350 server.go:317] "Adding debug handlers to kubelet server" Nov 7 23:57:50.718945 kubelet[2350]: I1107 23:57:50.718917 2350 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 7 23:57:50.719288 kubelet[2350]: I1107 23:57:50.719225 2350 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 7 23:57:50.719986 kubelet[2350]: I1107 23:57:50.719842 2350 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 7 23:57:50.725113 kubelet[2350]: E1107 23:57:50.716487 2350 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.69:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.69:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1875dedd2d3ebb88 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-11-07 23:57:50.713641864 +0000 UTC m=+1.010937001,LastTimestamp:2025-11-07 23:57:50.713641864 +0000 UTC m=+1.010937001,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Nov 7 23:57:50.725113 kubelet[2350]: I1107 23:57:50.720815 2350 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 7 23:57:50.725113 kubelet[2350]: E1107 23:57:50.722519 2350 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 7 23:57:50.725113 kubelet[2350]: I1107 23:57:50.722587 2350 volume_manager.go:297] "Starting Kubelet Volume Manager" Nov 7 23:57:50.725113 kubelet[2350]: I1107 23:57:50.723444 2350 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Nov 7 23:57:50.725113 kubelet[2350]: I1107 23:57:50.723513 2350 reconciler.go:26] "Reconciler: start to sync state" Nov 7 23:57:50.725113 kubelet[2350]: E1107 23:57:50.723592 2350 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.69:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.69:6443: connect: connection refused" interval="200ms" Nov 7 23:57:50.725113 kubelet[2350]: I1107 23:57:50.723959 2350 
factory.go:223] Registration of the systemd container factory successfully Nov 7 23:57:50.725418 kubelet[2350]: E1107 23:57:50.724005 2350 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.69:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.69:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Nov 7 23:57:50.725418 kubelet[2350]: I1107 23:57:50.724064 2350 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 7 23:57:50.727265 kubelet[2350]: E1107 23:57:50.727221 2350 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 7 23:57:50.727265 kubelet[2350]: I1107 23:57:50.727240 2350 factory.go:223] Registration of the containerd container factory successfully Nov 7 23:57:50.739705 kubelet[2350]: I1107 23:57:50.739644 2350 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 7 23:57:50.739705 kubelet[2350]: I1107 23:57:50.739669 2350 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 7 23:57:50.739705 kubelet[2350]: I1107 23:57:50.739688 2350 state_mem.go:36] "Initialized new in-memory state store" Nov 7 23:57:50.750128 kubelet[2350]: I1107 23:57:50.749934 2350 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Nov 7 23:57:50.751234 kubelet[2350]: I1107 23:57:50.751210 2350 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Nov 7 23:57:50.751234 kubelet[2350]: I1107 23:57:50.751237 2350 status_manager.go:230] "Starting to sync pod status with apiserver" Nov 7 23:57:50.751446 kubelet[2350]: I1107 23:57:50.751261 2350 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
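Every "Failed to watch ... connection refused" in this stretch has one root cause: nothing is listening on 10.0.0.69:6443 yet, because kube-apiserver itself runs as a static pod that this kubelet is about to create. A trivial Go probe for the same condition:

```go
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Endpoint taken from the reflector errors above.
	conn, err := net.DialTimeout("tcp", "10.0.0.69:6443", 2*time.Second)
	if err != nil {
		// Prints the same "connect: connection refused" seen in the log.
		fmt.Println("apiserver not reachable:", err)
		return
	}
	conn.Close()
	fmt.Println("apiserver port is open")
}
```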
Nov 7 23:57:50.751446 kubelet[2350]: I1107 23:57:50.751268 2350 kubelet.go:2436] "Starting kubelet main sync loop" Nov 7 23:57:50.751446 kubelet[2350]: E1107 23:57:50.751311 2350 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 7 23:57:50.753230 kubelet[2350]: E1107 23:57:50.753186 2350 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.69:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.69:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Nov 7 23:57:50.822769 kubelet[2350]: E1107 23:57:50.822699 2350 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 7 23:57:50.851999 kubelet[2350]: E1107 23:57:50.851948 2350 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Nov 7 23:57:50.877260 kubelet[2350]: I1107 23:57:50.877215 2350 policy_none.go:49] "None policy: Start" Nov 7 23:57:50.877260 kubelet[2350]: I1107 23:57:50.877247 2350 memory_manager.go:186] "Starting memorymanager" policy="None" Nov 7 23:57:50.877260 kubelet[2350]: I1107 23:57:50.877259 2350 state_mem.go:35] "Initializing new in-memory state store" Nov 7 23:57:50.923380 kubelet[2350]: E1107 23:57:50.923265 2350 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 7 23:57:50.924882 kubelet[2350]: E1107 23:57:50.924833 2350 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.69:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.69:6443: connect: connection refused" interval="400ms" Nov 7 23:57:50.930881 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Nov 7 23:57:50.941591 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Nov 7 23:57:50.944484 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Nov 7 23:57:50.958041 kubelet[2350]: E1107 23:57:50.958011 2350 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Nov 7 23:57:50.958466 kubelet[2350]: I1107 23:57:50.958437 2350 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 7 23:57:50.958515 kubelet[2350]: I1107 23:57:50.958454 2350 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 7 23:57:50.959086 kubelet[2350]: I1107 23:57:50.958877 2350 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 7 23:57:50.960769 kubelet[2350]: E1107 23:57:50.960733 2350 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Nov 7 23:57:50.960825 kubelet[2350]: E1107 23:57:50.960780 2350 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Nov 7 23:57:51.060194 kubelet[2350]: I1107 23:57:51.060163 2350 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 7 23:57:51.061269 kubelet[2350]: E1107 23:57:51.061242 2350 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.69:6443/api/v1/nodes\": dial tcp 10.0.0.69:6443: connect: connection refused" node="localhost" Nov 7 23:57:51.063402 systemd[1]: Created slice kubepods-burstable-pod20c890a246d840d308022312da9174cb.slice - libcontainer container kubepods-burstable-pod20c890a246d840d308022312da9174cb.slice. Nov 7 23:57:51.081440 kubelet[2350]: E1107 23:57:51.081412 2350 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 7 23:57:51.084481 systemd[1]: Created slice kubepods-burstable-podd13d96f639b65e57f439b4396b605564.slice - libcontainer container kubepods-burstable-podd13d96f639b65e57f439b4396b605564.slice. Nov 7 23:57:51.086569 kubelet[2350]: E1107 23:57:51.086391 2350 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 7 23:57:51.088836 systemd[1]: Created slice kubepods-burstable-pod0e74ed79fe85e061b1c9591574419c52.slice - libcontainer container kubepods-burstable-pod0e74ed79fe85e061b1c9591574419c52.slice. Nov 7 23:57:51.090499 kubelet[2350]: E1107 23:57:51.090343 2350 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 7 23:57:51.124770 kubelet[2350]: I1107 23:57:51.124737 2350 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0e74ed79fe85e061b1c9591574419c52-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"0e74ed79fe85e061b1c9591574419c52\") " pod="kube-system/kube-apiserver-localhost" Nov 7 23:57:51.124770 kubelet[2350]: I1107 23:57:51.124774 2350 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Nov 7 23:57:51.124871 kubelet[2350]: I1107 23:57:51.124793 2350 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Nov 7 23:57:51.124871 kubelet[2350]: I1107 23:57:51.124809 2350 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Nov 7 23:57:51.124871 kubelet[2350]: 
I1107 23:57:51.124823 2350 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Nov 7 23:57:51.124871 kubelet[2350]: I1107 23:57:51.124837 2350 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Nov 7 23:57:51.124871 kubelet[2350]: I1107 23:57:51.124850 2350 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d13d96f639b65e57f439b4396b605564-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"d13d96f639b65e57f439b4396b605564\") " pod="kube-system/kube-scheduler-localhost" Nov 7 23:57:51.124991 kubelet[2350]: I1107 23:57:51.124862 2350 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0e74ed79fe85e061b1c9591574419c52-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"0e74ed79fe85e061b1c9591574419c52\") " pod="kube-system/kube-apiserver-localhost" Nov 7 23:57:51.124991 kubelet[2350]: I1107 23:57:51.124876 2350 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0e74ed79fe85e061b1c9591574419c52-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"0e74ed79fe85e061b1c9591574419c52\") " pod="kube-system/kube-apiserver-localhost" Nov 7 23:57:51.262938 kubelet[2350]: I1107 23:57:51.262836 2350 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 7 23:57:51.263307 kubelet[2350]: E1107 23:57:51.263184 2350 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.69:6443/api/v1/nodes\": dial tcp 10.0.0.69:6443: connect: connection refused" node="localhost" Nov 7 23:57:51.325937 kubelet[2350]: E1107 23:57:51.325887 2350 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.69:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.69:6443: connect: connection refused" interval="800ms" Nov 7 23:57:51.382297 kubelet[2350]: E1107 23:57:51.382257 2350 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 7 23:57:51.383643 containerd[1559]: time="2025-11-07T23:57:51.382984264Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:20c890a246d840d308022312da9174cb,Namespace:kube-system,Attempt:0,}" Nov 7 23:57:51.387383 kubelet[2350]: E1107 23:57:51.387348 2350 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 7 23:57:51.387766 containerd[1559]: time="2025-11-07T23:57:51.387734424Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:d13d96f639b65e57f439b4396b605564,Namespace:kube-system,Attempt:0,}" Nov 7 23:57:51.392069 kubelet[2350]: E1107 23:57:51.391336 2350 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 7 23:57:51.392222 containerd[1559]: time="2025-11-07T23:57:51.391842064Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:0e74ed79fe85e061b1c9591574419c52,Namespace:kube-system,Attempt:0,}" Nov 7 23:57:51.404900 containerd[1559]: time="2025-11-07T23:57:51.404836104Z" level=info msg="connecting to shim 86159d34cd7d4119745c1176ba0ed1b9324e7f5fbb0427a180733140b2aa9bbf" address="unix:///run/containerd/s/1b3a72421f8c16132ab5e7f54f18c511a931a9b9e700e5e7816e43e90d61dce0" namespace=k8s.io protocol=ttrpc version=3 Nov 7 23:57:51.423485 containerd[1559]: time="2025-11-07T23:57:51.423309784Z" level=info msg="connecting to shim 20fdc304c7d3d55c9510b83dbbfcbe7a1a6add43518496723b5ccd103858f347" address="unix:///run/containerd/s/bafe42ff88d630e8d232abb82f446f212007484e5392b6878622d2b11fe73f94" namespace=k8s.io protocol=ttrpc version=3 Nov 7 23:57:51.423647 containerd[1559]: time="2025-11-07T23:57:51.423330704Z" level=info msg="connecting to shim 5106a4b62c1ca6e2252bd930751653c9c91b22b6ef17ee2eb1d2e24c4e170645" address="unix:///run/containerd/s/669b32a3a022e854b132d359f6244b812871b7d115ed9f6594ac2b3e6891868c" namespace=k8s.io protocol=ttrpc version=3 Nov 7 23:57:51.449440 systemd[1]: Started cri-containerd-86159d34cd7d4119745c1176ba0ed1b9324e7f5fbb0427a180733140b2aa9bbf.scope - libcontainer container 86159d34cd7d4119745c1176ba0ed1b9324e7f5fbb0427a180733140b2aa9bbf. Nov 7 23:57:51.453556 systemd[1]: Started cri-containerd-20fdc304c7d3d55c9510b83dbbfcbe7a1a6add43518496723b5ccd103858f347.scope - libcontainer container 20fdc304c7d3d55c9510b83dbbfcbe7a1a6add43518496723b5ccd103858f347. Nov 7 23:57:51.454702 systemd[1]: Started cri-containerd-5106a4b62c1ca6e2252bd930751653c9c91b22b6ef17ee2eb1d2e24c4e170645.scope - libcontainer container 5106a4b62c1ca6e2252bd930751653c9c91b22b6ef17ee2eb1d2e24c4e170645. 
Nov 7 23:57:51.509860 containerd[1559]: time="2025-11-07T23:57:51.509783264Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:0e74ed79fe85e061b1c9591574419c52,Namespace:kube-system,Attempt:0,} returns sandbox id \"5106a4b62c1ca6e2252bd930751653c9c91b22b6ef17ee2eb1d2e24c4e170645\""
Nov 7 23:57:51.510988 kubelet[2350]: E1107 23:57:51.510962 2350 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 7 23:57:51.513334 containerd[1559]: time="2025-11-07T23:57:51.512979824Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:20c890a246d840d308022312da9174cb,Namespace:kube-system,Attempt:0,} returns sandbox id \"86159d34cd7d4119745c1176ba0ed1b9324e7f5fbb0427a180733140b2aa9bbf\""
Nov 7 23:57:51.513905 kubelet[2350]: E1107 23:57:51.513633 2350 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 7 23:57:51.513961 containerd[1559]: time="2025-11-07T23:57:51.513922704Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:d13d96f639b65e57f439b4396b605564,Namespace:kube-system,Attempt:0,} returns sandbox id \"20fdc304c7d3d55c9510b83dbbfcbe7a1a6add43518496723b5ccd103858f347\""
Nov 7 23:57:51.514843 kubelet[2350]: E1107 23:57:51.514821 2350 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 7 23:57:51.515408 containerd[1559]: time="2025-11-07T23:57:51.515350824Z" level=info msg="CreateContainer within sandbox \"5106a4b62c1ca6e2252bd930751653c9c91b22b6ef17ee2eb1d2e24c4e170645\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Nov 7 23:57:51.517277 containerd[1559]: time="2025-11-07T23:57:51.517167664Z" level=info msg="CreateContainer within sandbox \"86159d34cd7d4119745c1176ba0ed1b9324e7f5fbb0427a180733140b2aa9bbf\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Nov 7 23:57:51.523272 containerd[1559]: time="2025-11-07T23:57:51.523235184Z" level=info msg="Container 9c92223f6ecb4f222ad015eb3532d98b12d97603f7b1e45969ce20de66dfedf9: CDI devices from CRI Config.CDIDevices: []"
Nov 7 23:57:51.531733 containerd[1559]: time="2025-11-07T23:57:51.531693344Z" level=info msg="CreateContainer within sandbox \"20fdc304c7d3d55c9510b83dbbfcbe7a1a6add43518496723b5ccd103858f347\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Nov 7 23:57:51.550435 containerd[1559]: time="2025-11-07T23:57:51.550388264Z" level=info msg="Container 4e818aa4a29603f4ffd5a3a0b987fb1f039b9e5aa4c10a19401a98ffa3d868b4: CDI devices from CRI Config.CDIDevices: []"
Nov 7 23:57:51.558054 containerd[1559]: time="2025-11-07T23:57:51.557992264Z" level=info msg="CreateContainer within sandbox \"5106a4b62c1ca6e2252bd930751653c9c91b22b6ef17ee2eb1d2e24c4e170645\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"9c92223f6ecb4f222ad015eb3532d98b12d97603f7b1e45969ce20de66dfedf9\""
Nov 7 23:57:51.558675 containerd[1559]: time="2025-11-07T23:57:51.558634144Z" level=info msg="CreateContainer within sandbox \"86159d34cd7d4119745c1176ba0ed1b9324e7f5fbb0427a180733140b2aa9bbf\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"4e818aa4a29603f4ffd5a3a0b987fb1f039b9e5aa4c10a19401a98ffa3d868b4\""
Nov 7 23:57:51.559026 containerd[1559]: time="2025-11-07T23:57:51.558998464Z" level=info msg="StartContainer for \"4e818aa4a29603f4ffd5a3a0b987fb1f039b9e5aa4c10a19401a98ffa3d868b4\""
Nov 7 23:57:51.559114 containerd[1559]: time="2025-11-07T23:57:51.558999584Z" level=info msg="StartContainer for \"9c92223f6ecb4f222ad015eb3532d98b12d97603f7b1e45969ce20de66dfedf9\""
Nov 7 23:57:51.560355 containerd[1559]: time="2025-11-07T23:57:51.560327184Z" level=info msg="connecting to shim 4e818aa4a29603f4ffd5a3a0b987fb1f039b9e5aa4c10a19401a98ffa3d868b4" address="unix:///run/containerd/s/1b3a72421f8c16132ab5e7f54f18c511a931a9b9e700e5e7816e43e90d61dce0" protocol=ttrpc version=3
Nov 7 23:57:51.560473 containerd[1559]: time="2025-11-07T23:57:51.560337184Z" level=info msg="connecting to shim 9c92223f6ecb4f222ad015eb3532d98b12d97603f7b1e45969ce20de66dfedf9" address="unix:///run/containerd/s/669b32a3a022e854b132d359f6244b812871b7d115ed9f6594ac2b3e6891868c" protocol=ttrpc version=3
Nov 7 23:57:51.562433 containerd[1559]: time="2025-11-07T23:57:51.562393184Z" level=info msg="Container 219a8d9116c39f4848b3d68ec51451fc02b7e2d0b7a2c504422bd0ca1cd0a626: CDI devices from CRI Config.CDIDevices: []"
Nov 7 23:57:51.572647 containerd[1559]: time="2025-11-07T23:57:51.572605024Z" level=info msg="CreateContainer within sandbox \"20fdc304c7d3d55c9510b83dbbfcbe7a1a6add43518496723b5ccd103858f347\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"219a8d9116c39f4848b3d68ec51451fc02b7e2d0b7a2c504422bd0ca1cd0a626\""
Nov 7 23:57:51.574309 containerd[1559]: time="2025-11-07T23:57:51.574277304Z" level=info msg="StartContainer for \"219a8d9116c39f4848b3d68ec51451fc02b7e2d0b7a2c504422bd0ca1cd0a626\""
Nov 7 23:57:51.576386 containerd[1559]: time="2025-11-07T23:57:51.576352784Z" level=info msg="connecting to shim 219a8d9116c39f4848b3d68ec51451fc02b7e2d0b7a2c504422bd0ca1cd0a626" address="unix:///run/containerd/s/bafe42ff88d630e8d232abb82f446f212007484e5392b6878622d2b11fe73f94" protocol=ttrpc version=3
Nov 7 23:57:51.584431 systemd[1]: Started cri-containerd-9c92223f6ecb4f222ad015eb3532d98b12d97603f7b1e45969ce20de66dfedf9.scope - libcontainer container 9c92223f6ecb4f222ad015eb3532d98b12d97603f7b1e45969ce20de66dfedf9.
Nov 7 23:57:51.587753 systemd[1]: Started cri-containerd-4e818aa4a29603f4ffd5a3a0b987fb1f039b9e5aa4c10a19401a98ffa3d868b4.scope - libcontainer container 4e818aa4a29603f4ffd5a3a0b987fb1f039b9e5aa4c10a19401a98ffa3d868b4.
Nov 7 23:57:51.604363 systemd[1]: Started cri-containerd-219a8d9116c39f4848b3d68ec51451fc02b7e2d0b7a2c504422bd0ca1cd0a626.scope - libcontainer container 219a8d9116c39f4848b3d68ec51451fc02b7e2d0b7a2c504422bd0ca1cd0a626.
Nov 7 23:57:51.644462 containerd[1559]: time="2025-11-07T23:57:51.644417984Z" level=info msg="StartContainer for \"4e818aa4a29603f4ffd5a3a0b987fb1f039b9e5aa4c10a19401a98ffa3d868b4\" returns successfully"
Nov 7 23:57:51.644778 containerd[1559]: time="2025-11-07T23:57:51.644743824Z" level=info msg="StartContainer for \"9c92223f6ecb4f222ad015eb3532d98b12d97603f7b1e45969ce20de66dfedf9\" returns successfully"
Nov 7 23:57:51.658066 containerd[1559]: time="2025-11-07T23:57:51.657899984Z" level=info msg="StartContainer for \"219a8d9116c39f4848b3d68ec51451fc02b7e2d0b7a2c504422bd0ca1cd0a626\" returns successfully"
Nov 7 23:57:51.667033 kubelet[2350]: I1107 23:57:51.666916 2350 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Nov 7 23:57:51.667510 kubelet[2350]: E1107 23:57:51.667373 2350 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.69:6443/api/v1/nodes\": dial tcp 10.0.0.69:6443: connect: connection refused" node="localhost"
Nov 7 23:57:51.710308 kubelet[2350]: E1107 23:57:51.710260 2350 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.69:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.69:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Nov 7 23:57:51.758870 kubelet[2350]: E1107 23:57:51.758835 2350 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Nov 7 23:57:51.760216 kubelet[2350]: E1107 23:57:51.759649 2350 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 7 23:57:51.761382 kubelet[2350]: E1107 23:57:51.761345 2350 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Nov 7 23:57:51.761491 kubelet[2350]: E1107 23:57:51.761471 2350 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 7 23:57:51.765548 kubelet[2350]: E1107 23:57:51.765478 2350 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Nov 7 23:57:51.765607 kubelet[2350]: E1107 23:57:51.765592 2350 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 7 23:57:52.470379 kubelet[2350]: I1107 23:57:52.470344 2350 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Nov 7 23:57:52.765826 kubelet[2350]: E1107 23:57:52.765728 2350 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Nov 7 23:57:52.766120 kubelet[2350]: E1107 23:57:52.765872 2350 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 7 23:57:52.767797 kubelet[2350]: E1107 23:57:52.767767 2350 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Nov 7 23:57:52.767919 kubelet[2350]: E1107 23:57:52.767899 2350 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 7 23:57:53.769165 kubelet[2350]: E1107 23:57:53.769121 2350 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Nov 7 23:57:53.769528 kubelet[2350]: E1107 23:57:53.769315 2350 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 7 23:57:53.782750 kubelet[2350]: E1107 23:57:53.782699 2350 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Nov 7 23:57:53.782871 kubelet[2350]: E1107 23:57:53.782846 2350 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 7 23:57:53.887414 kubelet[2350]: E1107 23:57:53.887315 2350 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost"
Nov 7 23:57:54.080588 kubelet[2350]: I1107 23:57:54.080454 2350 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Nov 7 23:57:54.124248 kubelet[2350]: I1107 23:57:54.124197 2350 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Nov 7 23:57:54.132114 kubelet[2350]: E1107 23:57:54.132070 2350 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost"
Nov 7 23:57:54.132114 kubelet[2350]: I1107 23:57:54.132105 2350 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Nov 7 23:57:54.135096 kubelet[2350]: E1107 23:57:54.134930 2350 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost"
Nov 7 23:57:54.135096 kubelet[2350]: I1107 23:57:54.135042 2350 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Nov 7 23:57:54.137358 kubelet[2350]: E1107 23:57:54.137289 2350 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost"
Nov 7 23:57:54.710623 kubelet[2350]: I1107 23:57:54.710547 2350 apiserver.go:52] "Watching apiserver"
Nov 7 23:57:54.723829 kubelet[2350]: I1107 23:57:54.723749 2350 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Nov 7 23:57:56.147158 systemd[1]: Reload requested from client PID 2642 ('systemctl') (unit session-7.scope)...
Nov 7 23:57:56.147181 systemd[1]: Reloading...
Nov 7 23:57:56.228420 zram_generator::config[2689]: No configuration found.
Nov 7 23:57:56.440442 systemd[1]: Reloading finished in 292 ms.
Nov 7 23:57:56.468057 kubelet[2350]: I1107 23:57:56.467982 2350 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Nov 7 23:57:56.468185 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 7 23:57:56.483077 systemd[1]: kubelet.service: Deactivated successfully.
Nov 7 23:57:56.483339 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 7 23:57:56.483407 systemd[1]: kubelet.service: Consumed 1.392s CPU time, 127.3M memory peak.
Nov 7 23:57:56.485409 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 7 23:57:56.642546 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 7 23:57:56.657508 (kubelet)[2728]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Nov 7 23:57:56.817051 kubelet[2728]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Nov 7 23:57:56.817051 kubelet[2728]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Nov 7 23:57:56.817051 kubelet[2728]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Nov 7 23:57:56.817051 kubelet[2728]: I1107 23:57:56.816646 2728 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Nov 7 23:57:56.830733 kubelet[2728]: I1107 23:57:56.830665 2728 server.go:530] "Kubelet version" kubeletVersion="v1.33.0"
Nov 7 23:57:56.830733 kubelet[2728]: I1107 23:57:56.830706 2728 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Nov 7 23:57:56.830963 kubelet[2728]: I1107 23:57:56.830945 2728 server.go:956] "Client rotation is on, will bootstrap in background"
Nov 7 23:57:56.832787 kubelet[2728]: I1107 23:57:56.832762 2728 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem"
Nov 7 23:57:56.835316 kubelet[2728]: I1107 23:57:56.835269 2728 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Nov 7 23:57:56.840126 kubelet[2728]: I1107 23:57:56.839065 2728 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Nov 7 23:57:56.842585 kubelet[2728]: I1107 23:57:56.842544 2728 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Nov 7 23:57:56.842778 kubelet[2728]: I1107 23:57:56.842750 2728 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Nov 7 23:57:56.843036 kubelet[2728]: I1107 23:57:56.842780 2728 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Nov 7 23:57:56.843114 kubelet[2728]: I1107 23:57:56.843040 2728 topology_manager.go:138] "Creating topology manager with none policy"
Nov 7 23:57:56.843114 kubelet[2728]: I1107 23:57:56.843052 2728 container_manager_linux.go:303] "Creating device plugin manager"
Nov 7 23:57:56.843114 kubelet[2728]: I1107 23:57:56.843096 2728 state_mem.go:36] "Initialized new in-memory state store"
Nov 7 23:57:56.843275 kubelet[2728]: I1107 23:57:56.843261 2728 kubelet.go:480] "Attempting to sync node with API server"
Nov 7 23:57:56.843304 kubelet[2728]: I1107 23:57:56.843280 2728 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Nov 7 23:57:56.843328 kubelet[2728]: I1107 23:57:56.843305 2728 kubelet.go:386] "Adding apiserver pod source"
Nov 7 23:57:56.843328 kubelet[2728]: I1107 23:57:56.843318 2728 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Nov 7 23:57:56.845481 kubelet[2728]: I1107 23:57:56.845460 2728 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1"
Nov 7 23:57:56.846120 kubelet[2728]: I1107 23:57:56.846083 2728 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Nov 7 23:57:56.848277 kubelet[2728]: I1107 23:57:56.848255 2728 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Nov 7 23:57:56.848403 kubelet[2728]: I1107 23:57:56.848391 2728 server.go:1289] "Started kubelet"
Nov 7 23:57:56.848594 kubelet[2728]: I1107 23:57:56.848526 2728 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Nov 7 23:57:56.848751 kubelet[2728]: I1107 23:57:56.848707 2728 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Nov 7 23:57:56.849068 kubelet[2728]: I1107 23:57:56.849046 2728 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Nov 7 23:57:56.854233 kubelet[2728]: I1107 23:57:56.851645 2728 server.go:317] "Adding debug handlers to kubelet server"
Nov 7 23:57:56.854233 kubelet[2728]: I1107 23:57:56.851815 2728 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Nov 7 23:57:56.854233 kubelet[2728]: I1107 23:57:56.852332 2728 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Nov 7 23:57:56.854233 kubelet[2728]: E1107 23:57:56.853240 2728 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Nov 7 23:57:56.854233 kubelet[2728]: I1107 23:57:56.853268 2728 volume_manager.go:297] "Starting Kubelet Volume Manager"
Nov 7 23:57:56.854233 kubelet[2728]: I1107 23:57:56.853497 2728 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Nov 7 23:57:56.854233 kubelet[2728]: I1107 23:57:56.853630 2728 reconciler.go:26] "Reconciler: start to sync state"
Nov 7 23:57:56.855193 kubelet[2728]: I1107 23:57:56.854955 2728 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Nov 7 23:57:56.862425 kubelet[2728]: I1107 23:57:56.862386 2728 factory.go:223] Registration of the containerd container factory successfully
Nov 7 23:57:56.862425 kubelet[2728]: I1107 23:57:56.862413 2728 factory.go:223] Registration of the systemd container factory successfully
Nov 7 23:57:56.869445 kubelet[2728]: E1107 23:57:56.869403 2728 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Nov 7 23:57:56.875907 kubelet[2728]: I1107 23:57:56.875849 2728 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Nov 7 23:57:56.876949 kubelet[2728]: I1107 23:57:56.876912 2728 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Nov 7 23:57:56.876949 kubelet[2728]: I1107 23:57:56.876942 2728 status_manager.go:230] "Starting to sync pod status with apiserver"
Nov 7 23:57:56.877029 kubelet[2728]: I1107 23:57:56.876963 2728 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Nov 7 23:57:56.877029 kubelet[2728]: I1107 23:57:56.876973 2728 kubelet.go:2436] "Starting kubelet main sync loop"
Nov 7 23:57:56.877075 kubelet[2728]: E1107 23:57:56.877015 2728 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Nov 7 23:57:56.924967 kubelet[2728]: I1107 23:57:56.924932 2728 cpu_manager.go:221] "Starting CPU manager" policy="none"
Nov 7 23:57:56.926078 kubelet[2728]: I1107 23:57:56.925211 2728 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Nov 7 23:57:56.926078 kubelet[2728]: I1107 23:57:56.925240 2728 state_mem.go:36] "Initialized new in-memory state store"
Nov 7 23:57:56.926078 kubelet[2728]: I1107 23:57:56.925377 2728 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Nov 7 23:57:56.926078 kubelet[2728]: I1107 23:57:56.925388 2728 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Nov 7 23:57:56.926078 kubelet[2728]: I1107 23:57:56.925405 2728 policy_none.go:49] "None policy: Start"
Nov 7 23:57:56.926078 kubelet[2728]: I1107 23:57:56.925415 2728 memory_manager.go:186] "Starting memorymanager" policy="None"
Nov 7 23:57:56.926078 kubelet[2728]: I1107 23:57:56.925423 2728 state_mem.go:35] "Initializing new in-memory state store"
Nov 7 23:57:56.926078 kubelet[2728]: I1107 23:57:56.925506 2728 state_mem.go:75] "Updated machine memory state"
Nov 7 23:57:56.930182 kubelet[2728]: E1107 23:57:56.930107 2728 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Nov 7 23:57:56.930323 kubelet[2728]: I1107 23:57:56.930301 2728 eviction_manager.go:189] "Eviction manager: starting control loop"
Nov 7 23:57:56.930362 kubelet[2728]: I1107 23:57:56.930328 2728 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Nov 7 23:57:56.930914 kubelet[2728]: I1107 23:57:56.930878 2728 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Nov 7 23:57:56.935627 kubelet[2728]: E1107 23:57:56.935597 2728 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Nov 7 23:57:56.977967 kubelet[2728]: I1107 23:57:56.977930 2728 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Nov 7 23:57:56.977967 kubelet[2728]: I1107 23:57:56.977947 2728 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Nov 7 23:57:56.978178 kubelet[2728]: I1107 23:57:56.977945 2728 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Nov 7 23:57:57.035656 kubelet[2728]: I1107 23:57:57.035618 2728 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Nov 7 23:57:57.044285 kubelet[2728]: I1107 23:57:57.044248 2728 kubelet_node_status.go:124] "Node was previously registered" node="localhost"
Nov 7 23:57:57.044399 kubelet[2728]: I1107 23:57:57.044352 2728 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Nov 7 23:57:57.054164 kubelet[2728]: I1107 23:57:57.054024 2728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost"
Nov 7 23:57:57.054164 kubelet[2728]: I1107 23:57:57.054069 2728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost"
Nov 7 23:57:57.054164 kubelet[2728]: I1107 23:57:57.054095 2728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost"
Nov 7 23:57:57.054164 kubelet[2728]: I1107 23:57:57.054117 2728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost"
Nov 7 23:57:57.054164 kubelet[2728]: I1107 23:57:57.054135 2728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost"
Nov 7 23:57:57.054372 kubelet[2728]: I1107 23:57:57.054168 2728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0e74ed79fe85e061b1c9591574419c52-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"0e74ed79fe85e061b1c9591574419c52\") " pod="kube-system/kube-apiserver-localhost"
Nov 7 23:57:57.054372 kubelet[2728]: I1107 23:57:57.054184 2728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0e74ed79fe85e061b1c9591574419c52-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"0e74ed79fe85e061b1c9591574419c52\") " pod="kube-system/kube-apiserver-localhost"
Nov 7 23:57:57.054372 kubelet[2728]: I1107 23:57:57.054205 2728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d13d96f639b65e57f439b4396b605564-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"d13d96f639b65e57f439b4396b605564\") " pod="kube-system/kube-scheduler-localhost"
Nov 7 23:57:57.054372 kubelet[2728]: I1107 23:57:57.054222 2728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0e74ed79fe85e061b1c9591574419c52-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"0e74ed79fe85e061b1c9591574419c52\") " pod="kube-system/kube-apiserver-localhost"
Nov 7 23:57:57.286357 kubelet[2728]: E1107 23:57:57.286245 2728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 7 23:57:57.286717 kubelet[2728]: E1107 23:57:57.286563 2728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 7 23:57:57.286717 kubelet[2728]: E1107 23:57:57.286621 2728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 7 23:57:57.845079 kubelet[2728]: I1107 23:57:57.845022 2728 apiserver.go:52] "Watching apiserver"
Nov 7 23:57:57.854181 kubelet[2728]: I1107 23:57:57.854132 2728 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Nov 7 23:57:57.905050 kubelet[2728]: I1107 23:57:57.904171 2728 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Nov 7 23:57:57.905050 kubelet[2728]: I1107 23:57:57.904241 2728 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Nov 7 23:57:57.905050 kubelet[2728]: I1107 23:57:57.904314 2728 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Nov 7 23:57:57.911568 kubelet[2728]: E1107 23:57:57.911522 2728 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost"
Nov 7 23:57:57.911792 kubelet[2728]: E1107 23:57:57.911767 2728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 7 23:57:57.912384 kubelet[2728]: E1107 23:57:57.912351 2728 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost"
Nov 7 23:57:57.913183 kubelet[2728]: E1107 23:57:57.912525 2728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 7 23:57:57.913183 kubelet[2728]: E1107 23:57:57.912558 2728 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Nov 7 23:57:57.913183 kubelet[2728]: E1107 23:57:57.912709 2728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 7 23:57:57.930188 kubelet[2728]: I1107 23:57:57.929660 2728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.929629967 podStartE2EDuration="1.929629967s" podCreationTimestamp="2025-11-07 23:57:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-07 23:57:57.929591009 +0000 UTC m=+1.267893368" watchObservedRunningTime="2025-11-07 23:57:57.929629967 +0000 UTC m=+1.267932326"
Nov 7 23:57:57.949752 kubelet[2728]: I1107 23:57:57.949589 2728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.949572946 podStartE2EDuration="1.949572946s" podCreationTimestamp="2025-11-07 23:57:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-07 23:57:57.939268008 +0000 UTC m=+1.277570367" watchObservedRunningTime="2025-11-07 23:57:57.949572946 +0000 UTC m=+1.287875305"
Nov 7 23:57:57.950579 kubelet[2728]: I1107 23:57:57.950479 2728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.950467677 podStartE2EDuration="1.950467677s" podCreationTimestamp="2025-11-07 23:57:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-07 23:57:57.950278323 +0000 UTC m=+1.288580642" watchObservedRunningTime="2025-11-07 23:57:57.950467677 +0000 UTC m=+1.288770036"
Nov 7 23:57:58.907026 kubelet[2728]: E1107 23:57:58.906811 2728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 7 23:57:58.907026 kubelet[2728]: E1107 23:57:58.906907 2728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 7 23:57:58.907628 kubelet[2728]: E1107 23:57:58.907295 2728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 7 23:57:59.908458 kubelet[2728]: E1107 23:57:59.908415 2728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 7 23:58:01.099303 kubelet[2728]: E1107 23:58:01.099181 2728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 7 23:58:01.912227 kubelet[2728]: E1107 23:58:01.911578 2728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 7 23:58:02.809928 kubelet[2728]: I1107 23:58:02.809827 2728 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Nov 7 23:58:02.812112 kubelet[2728]: I1107 23:58:02.810339 2728 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Nov 7 23:58:02.812256 containerd[1559]: time="2025-11-07T23:58:02.810095921Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Nov 7 23:58:02.915330 kubelet[2728]: E1107 23:58:02.915129 2728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 7 23:58:03.689272 systemd[1]: Created slice kubepods-besteffort-poda3d6b4b7_6b2c_40ea_a09e_141b85ce0316.slice - libcontainer container kubepods-besteffort-poda3d6b4b7_6b2c_40ea_a09e_141b85ce0316.slice.
Nov 7 23:58:03.698884 kubelet[2728]: I1107 23:58:03.698843 2728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k8ffw\" (UniqueName: \"kubernetes.io/projected/a3d6b4b7-6b2c-40ea-a09e-141b85ce0316-kube-api-access-k8ffw\") pod \"kube-proxy-7qwx4\" (UID: \"a3d6b4b7-6b2c-40ea-a09e-141b85ce0316\") " pod="kube-system/kube-proxy-7qwx4"
Nov 7 23:58:03.698884 kubelet[2728]: I1107 23:58:03.698891 2728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/a3d6b4b7-6b2c-40ea-a09e-141b85ce0316-kube-proxy\") pod \"kube-proxy-7qwx4\" (UID: \"a3d6b4b7-6b2c-40ea-a09e-141b85ce0316\") " pod="kube-system/kube-proxy-7qwx4"
Nov 7 23:58:03.699036 kubelet[2728]: I1107 23:58:03.698914 2728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a3d6b4b7-6b2c-40ea-a09e-141b85ce0316-xtables-lock\") pod \"kube-proxy-7qwx4\" (UID: \"a3d6b4b7-6b2c-40ea-a09e-141b85ce0316\") " pod="kube-system/kube-proxy-7qwx4"
Nov 7 23:58:03.699036 kubelet[2728]: I1107 23:58:03.698928 2728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a3d6b4b7-6b2c-40ea-a09e-141b85ce0316-lib-modules\") pod \"kube-proxy-7qwx4\" (UID: \"a3d6b4b7-6b2c-40ea-a09e-141b85ce0316\") " pod="kube-system/kube-proxy-7qwx4"
Nov 7 23:58:03.959460 systemd[1]: Created slice kubepods-besteffort-pod42edc2e5_1ecc_4caa_8836_7cec50b71d55.slice - libcontainer container kubepods-besteffort-pod42edc2e5_1ecc_4caa_8836_7cec50b71d55.slice.
Nov 7 23:58:04.000338 kubelet[2728]: I1107 23:58:04.000274 2728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-thhnw\" (UniqueName: \"kubernetes.io/projected/42edc2e5-1ecc-4caa-8836-7cec50b71d55-kube-api-access-thhnw\") pod \"tigera-operator-7dcd859c48-9tkhk\" (UID: \"42edc2e5-1ecc-4caa-8836-7cec50b71d55\") " pod="tigera-operator/tigera-operator-7dcd859c48-9tkhk"
Nov 7 23:58:04.000338 kubelet[2728]: I1107 23:58:04.000334 2728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/42edc2e5-1ecc-4caa-8836-7cec50b71d55-var-lib-calico\") pod \"tigera-operator-7dcd859c48-9tkhk\" (UID: \"42edc2e5-1ecc-4caa-8836-7cec50b71d55\") " pod="tigera-operator/tigera-operator-7dcd859c48-9tkhk"
Nov 7 23:58:04.004471 kubelet[2728]: E1107 23:58:04.004395 2728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 7 23:58:04.005169 containerd[1559]: time="2025-11-07T23:58:04.004992028Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-7qwx4,Uid:a3d6b4b7-6b2c-40ea-a09e-141b85ce0316,Namespace:kube-system,Attempt:0,}"
Nov 7 23:58:04.020790 containerd[1559]: time="2025-11-07T23:58:04.020748816Z" level=info msg="connecting to shim 0ce87c27bf8a2cf086865196a0f7679debaf71dffcfe3c5c29571844f93bfa8e" address="unix:///run/containerd/s/40cedbe350cf28971435e1ec3908e9d4c4e140c18cf4b75fb967811d629c52f3" namespace=k8s.io protocol=ttrpc version=3
Nov 7 23:58:04.047331 systemd[1]: Started cri-containerd-0ce87c27bf8a2cf086865196a0f7679debaf71dffcfe3c5c29571844f93bfa8e.scope - libcontainer container 0ce87c27bf8a2cf086865196a0f7679debaf71dffcfe3c5c29571844f93bfa8e.
Nov 7 23:58:04.070808 containerd[1559]: time="2025-11-07T23:58:04.070768480Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-7qwx4,Uid:a3d6b4b7-6b2c-40ea-a09e-141b85ce0316,Namespace:kube-system,Attempt:0,} returns sandbox id \"0ce87c27bf8a2cf086865196a0f7679debaf71dffcfe3c5c29571844f93bfa8e\""
Nov 7 23:58:04.071928 kubelet[2728]: E1107 23:58:04.071722 2728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 7 23:58:04.077766 containerd[1559]: time="2025-11-07T23:58:04.077727214Z" level=info msg="CreateContainer within sandbox \"0ce87c27bf8a2cf086865196a0f7679debaf71dffcfe3c5c29571844f93bfa8e\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Nov 7 23:58:04.088794 containerd[1559]: time="2025-11-07T23:58:04.087668644Z" level=info msg="Container 0aaed4429aeb9af9dcc3bf284918bb6b1a7cdd1feb7277c62e5ebc45fd2dcb4c: CDI devices from CRI Config.CDIDevices: []"
Nov 7 23:58:04.096516 containerd[1559]: time="2025-11-07T23:58:04.096453618Z" level=info msg="CreateContainer within sandbox \"0ce87c27bf8a2cf086865196a0f7679debaf71dffcfe3c5c29571844f93bfa8e\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"0aaed4429aeb9af9dcc3bf284918bb6b1a7cdd1feb7277c62e5ebc45fd2dcb4c\""
Nov 7 23:58:04.097027 containerd[1559]: time="2025-11-07T23:58:04.096998567Z" level=info msg="StartContainer for \"0aaed4429aeb9af9dcc3bf284918bb6b1a7cdd1feb7277c62e5ebc45fd2dcb4c\""
Nov 7 23:58:04.098670 containerd[1559]: time="2025-11-07T23:58:04.098631852Z" level=info msg="connecting to shim 0aaed4429aeb9af9dcc3bf284918bb6b1a7cdd1feb7277c62e5ebc45fd2dcb4c" address="unix:///run/containerd/s/40cedbe350cf28971435e1ec3908e9d4c4e140c18cf4b75fb967811d629c52f3" protocol=ttrpc version=3
Nov 7 23:58:04.128359 systemd[1]: Started cri-containerd-0aaed4429aeb9af9dcc3bf284918bb6b1a7cdd1feb7277c62e5ebc45fd2dcb4c.scope - libcontainer container 0aaed4429aeb9af9dcc3bf284918bb6b1a7cdd1feb7277c62e5ebc45fd2dcb4c.
Nov 7 23:58:04.207404 containerd[1559]: time="2025-11-07T23:58:04.207326639Z" level=info msg="StartContainer for \"0aaed4429aeb9af9dcc3bf284918bb6b1a7cdd1feb7277c62e5ebc45fd2dcb4c\" returns successfully"
Nov 7 23:58:04.263718 containerd[1559]: time="2025-11-07T23:58:04.263623731Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-9tkhk,Uid:42edc2e5-1ecc-4caa-8836-7cec50b71d55,Namespace:tigera-operator,Attempt:0,}"
Nov 7 23:58:04.284401 containerd[1559]: time="2025-11-07T23:58:04.284348174Z" level=info msg="connecting to shim 5e7ff6684a067d684f9fa0329d1d959eb98de4452a1ec852dd8c8aba8f007128" address="unix:///run/containerd/s/ac876bdb3d134ff5876ba7b3418abd673ba68d0a452c364038d008729202a244" namespace=k8s.io protocol=ttrpc version=3
Nov 7 23:58:04.311346 systemd[1]: Started cri-containerd-5e7ff6684a067d684f9fa0329d1d959eb98de4452a1ec852dd8c8aba8f007128.scope - libcontainer container 5e7ff6684a067d684f9fa0329d1d959eb98de4452a1ec852dd8c8aba8f007128.
Nov 7 23:58:04.353714 containerd[1559]: time="2025-11-07T23:58:04.353656271Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-9tkhk,Uid:42edc2e5-1ecc-4caa-8836-7cec50b71d55,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"5e7ff6684a067d684f9fa0329d1d959eb98de4452a1ec852dd8c8aba8f007128\""
Nov 7 23:58:04.356323 containerd[1559]: time="2025-11-07T23:58:04.356258736Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\""
Nov 7 23:58:04.922811 kubelet[2728]: E1107 23:58:04.922775 2728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 7 23:58:04.934161 kubelet[2728]: I1107 23:58:04.933468 2728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-7qwx4" podStartSLOduration=1.933455317 podStartE2EDuration="1.933455317s" podCreationTimestamp="2025-11-07 23:58:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-07 23:58:04.93332596 +0000 UTC m=+8.271628319" watchObservedRunningTime="2025-11-07 23:58:04.933455317 +0000 UTC m=+8.271757676"
Nov 7 23:58:05.513030 kubelet[2728]: E1107 23:58:05.512999 2728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 7 23:58:05.710007 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount463966748.mount: Deactivated successfully.
Nov 7 23:58:05.926197 kubelet[2728]: E1107 23:58:05.925646 2728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 7 23:58:06.783673 containerd[1559]: time="2025-11-07T23:58:06.783611597Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 7 23:58:06.784267 containerd[1559]: time="2025-11-07T23:58:06.784236985Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=22152004"
Nov 7 23:58:06.785212 containerd[1559]: time="2025-11-07T23:58:06.785182568Z" level=info msg="ImageCreate event name:\"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 7 23:58:06.787210 containerd[1559]: time="2025-11-07T23:58:06.787163771Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 7 23:58:06.788157 containerd[1559]: time="2025-11-07T23:58:06.787705801Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"22147999\" in 2.431412025s"
Nov 7 23:58:06.788157 containerd[1559]: time="2025-11-07T23:58:06.787742320Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\""
Nov 7 23:58:06.791868 containerd[1559]: time="2025-11-07T23:58:06.791836404Z" level=info msg="CreateContainer within sandbox \"5e7ff6684a067d684f9fa0329d1d959eb98de4452a1ec852dd8c8aba8f007128\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Nov 7 23:58:06.800246 containerd[1559]: time="2025-11-07T23:58:06.800198689Z" level=info msg="Container 38f0c45a6d14beaf8e258d9d91c9db98b9d5daa5ce819a9f72412f2ef414b516: CDI devices from CRI Config.CDIDevices: []"
Nov 7 23:58:06.805256 containerd[1559]: time="2025-11-07T23:58:06.805213756Z" level=info msg="CreateContainer within sandbox \"5e7ff6684a067d684f9fa0329d1d959eb98de4452a1ec852dd8c8aba8f007128\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"38f0c45a6d14beaf8e258d9d91c9db98b9d5daa5ce819a9f72412f2ef414b516\""
Nov 7 23:58:06.805956 containerd[1559]: time="2025-11-07T23:58:06.805933503Z" level=info msg="StartContainer for \"38f0c45a6d14beaf8e258d9d91c9db98b9d5daa5ce819a9f72412f2ef414b516\""
Nov 7 23:58:06.806934 containerd[1559]: time="2025-11-07T23:58:06.806906805Z" level=info msg="connecting to shim 38f0c45a6d14beaf8e258d9d91c9db98b9d5daa5ce819a9f72412f2ef414b516" address="unix:///run/containerd/s/ac876bdb3d134ff5876ba7b3418abd673ba68d0a452c364038d008729202a244" protocol=ttrpc version=3
Nov 7 23:58:06.821321 kubelet[2728]: E1107 23:58:06.820874 2728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 7 23:58:06.858329 systemd[1]: Started cri-containerd-38f0c45a6d14beaf8e258d9d91c9db98b9d5daa5ce819a9f72412f2ef414b516.scope - libcontainer container 38f0c45a6d14beaf8e258d9d91c9db98b9d5daa5ce819a9f72412f2ef414b516.
Nov 7 23:58:06.892832 containerd[1559]: time="2025-11-07T23:58:06.892795772Z" level=info msg="StartContainer for \"38f0c45a6d14beaf8e258d9d91c9db98b9d5daa5ce819a9f72412f2ef414b516\" returns successfully"
Nov 7 23:58:06.957511 kubelet[2728]: E1107 23:58:06.957479 2728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 7 23:58:06.957663 kubelet[2728]: E1107 23:58:06.957536 2728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 7 23:58:09.254477 systemd[1]: cri-containerd-38f0c45a6d14beaf8e258d9d91c9db98b9d5daa5ce819a9f72412f2ef414b516.scope: Deactivated successfully.
Nov 7 23:58:09.301019 containerd[1559]: time="2025-11-07T23:58:09.300957775Z" level=info msg="received container exit event container_id:\"38f0c45a6d14beaf8e258d9d91c9db98b9d5daa5ce819a9f72412f2ef414b516\" id:\"38f0c45a6d14beaf8e258d9d91c9db98b9d5daa5ce819a9f72412f2ef414b516\" pid:3065 exit_status:1 exited_at:{seconds:1762559889 nanos:286733593}"
Nov 7 23:58:09.377238 update_engine[1539]: I20251107 23:58:09.377174 1539 update_attempter.cc:509] Updating boot flags...
Nov 7 23:58:09.416012 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-38f0c45a6d14beaf8e258d9d91c9db98b9d5daa5ce819a9f72412f2ef414b516-rootfs.mount: Deactivated successfully.
Nov 7 23:58:09.975888 kubelet[2728]: I1107 23:58:09.975819 2728 scope.go:117] "RemoveContainer" containerID="38f0c45a6d14beaf8e258d9d91c9db98b9d5daa5ce819a9f72412f2ef414b516"
Nov 7 23:58:09.984408 containerd[1559]: time="2025-11-07T23:58:09.983315548Z" level=info msg="CreateContainer within sandbox \"5e7ff6684a067d684f9fa0329d1d959eb98de4452a1ec852dd8c8aba8f007128\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}"
Nov 7 23:58:10.036889 containerd[1559]: time="2025-11-07T23:58:10.036289252Z" level=info msg="Container c324f5f04ee5a4ccaffcf60c725bee309cd4906b170e18209aa950d97df593a5: CDI devices from CRI Config.CDIDevices: []"
Nov 7 23:58:10.045704 containerd[1559]: time="2025-11-07T23:58:10.045648838Z" level=info msg="CreateContainer within sandbox \"5e7ff6684a067d684f9fa0329d1d959eb98de4452a1ec852dd8c8aba8f007128\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"c324f5f04ee5a4ccaffcf60c725bee309cd4906b170e18209aa950d97df593a5\""
Nov 7 23:58:10.046184 containerd[1559]: time="2025-11-07T23:58:10.046161751Z" level=info msg="StartContainer for \"c324f5f04ee5a4ccaffcf60c725bee309cd4906b170e18209aa950d97df593a5\""
Nov 7 23:58:10.047531 containerd[1559]: time="2025-11-07T23:58:10.047480172Z" level=info msg="connecting to shim c324f5f04ee5a4ccaffcf60c725bee309cd4906b170e18209aa950d97df593a5" address="unix:///run/containerd/s/ac876bdb3d134ff5876ba7b3418abd673ba68d0a452c364038d008729202a244" protocol=ttrpc version=3
Nov 7 23:58:10.075345 systemd[1]: Started cri-containerd-c324f5f04ee5a4ccaffcf60c725bee309cd4906b170e18209aa950d97df593a5.scope - libcontainer container c324f5f04ee5a4ccaffcf60c725bee309cd4906b170e18209aa950d97df593a5.
Nov 7 23:58:10.113310 containerd[1559]: time="2025-11-07T23:58:10.113261190Z" level=info msg="StartContainer for \"c324f5f04ee5a4ccaffcf60c725bee309cd4906b170e18209aa950d97df593a5\" returns successfully"
Nov 7 23:58:10.981093 kubelet[2728]: I1107 23:58:10.981025 2728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-9tkhk" podStartSLOduration=5.547824928 podStartE2EDuration="7.980974999s" podCreationTimestamp="2025-11-07 23:58:03 +0000 UTC" firstStartedPulling="2025-11-07 23:58:04.35562279 +0000 UTC m=+7.693925149" lastFinishedPulling="2025-11-07 23:58:06.788772861 +0000 UTC m=+10.127075220" observedRunningTime="2025-11-07 23:58:06.97970628 +0000 UTC m=+10.318008639" watchObservedRunningTime="2025-11-07 23:58:10.980974999 +0000 UTC m=+14.319277358"
Nov 7 23:58:12.443851 sudo[1769]: pam_unix(sudo:session): session closed for user root
Nov 7 23:58:12.446021 sshd[1768]: Connection closed by 10.0.0.1 port 49952
Nov 7 23:58:12.446730 sshd-session[1765]: pam_unix(sshd:session): session closed for user core
Nov 7 23:58:12.452262 systemd[1]: sshd@6-10.0.0.69:22-10.0.0.1:49952.service: Deactivated successfully.
Nov 7 23:58:12.455016 systemd[1]: session-7.scope: Deactivated successfully.
Nov 7 23:58:12.455234 systemd[1]: session-7.scope: Consumed 7.985s CPU time, 213.9M memory peak.
Nov 7 23:58:12.456232 systemd-logind[1536]: Session 7 logged out. Waiting for processes to exit.
Nov 7 23:58:12.457442 systemd-logind[1536]: Removed session 7.
Nov 7 23:58:21.821867 systemd[1]: Created slice kubepods-besteffort-poda2810366_71fe_4027_bd77_9f315917e6a6.slice - libcontainer container kubepods-besteffort-poda2810366_71fe_4027_bd77_9f315917e6a6.slice.
Nov 7 23:58:21.883891 systemd[1]: Created slice kubepods-besteffort-pod899ced73_c935_4f2a_ae4a_a3aaf8623def.slice - libcontainer container kubepods-besteffort-pod899ced73_c935_4f2a_ae4a_a3aaf8623def.slice.
Nov 7 23:58:21.920912 kubelet[2728]: I1107 23:58:21.920844 2728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/899ced73-c935-4f2a-ae4a-a3aaf8623def-xtables-lock\") pod \"calico-node-vqj29\" (UID: \"899ced73-c935-4f2a-ae4a-a3aaf8623def\") " pod="calico-system/calico-node-vqj29"
Nov 7 23:58:21.920912 kubelet[2728]: I1107 23:58:21.920898 2728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5ljxg\" (UniqueName: \"kubernetes.io/projected/899ced73-c935-4f2a-ae4a-a3aaf8623def-kube-api-access-5ljxg\") pod \"calico-node-vqj29\" (UID: \"899ced73-c935-4f2a-ae4a-a3aaf8623def\") " pod="calico-system/calico-node-vqj29"
Nov 7 23:58:21.920912 kubelet[2728]: I1107 23:58:21.920923 2728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mn78t\" (UniqueName: \"kubernetes.io/projected/a2810366-71fe-4027-bd77-9f315917e6a6-kube-api-access-mn78t\") pod \"calico-typha-6bdd75c7c8-k9fm6\" (UID: \"a2810366-71fe-4027-bd77-9f315917e6a6\") " pod="calico-system/calico-typha-6bdd75c7c8-k9fm6"
Nov 7 23:58:21.921372 kubelet[2728]: I1107 23:58:21.920940 2728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/899ced73-c935-4f2a-ae4a-a3aaf8623def-flexvol-driver-host\") pod \"calico-node-vqj29\" (UID: \"899ced73-c935-4f2a-ae4a-a3aaf8623def\") " pod="calico-system/calico-node-vqj29"
Nov 7 23:58:21.921372 kubelet[2728]: I1107 23:58:21.920956 2728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/899ced73-c935-4f2a-ae4a-a3aaf8623def-node-certs\") pod \"calico-node-vqj29\" (UID: \"899ced73-c935-4f2a-ae4a-a3aaf8623def\") " pod="calico-system/calico-node-vqj29"
Nov 7 23:58:21.921372 kubelet[2728]: I1107 23:58:21.920971 2728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/899ced73-c935-4f2a-ae4a-a3aaf8623def-lib-modules\") pod \"calico-node-vqj29\" (UID: \"899ced73-c935-4f2a-ae4a-a3aaf8623def\") " pod="calico-system/calico-node-vqj29"
Nov 7 23:58:21.921372 kubelet[2728]: I1107 23:58:21.920986 2728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/899ced73-c935-4f2a-ae4a-a3aaf8623def-cni-log-dir\") pod \"calico-node-vqj29\" (UID: \"899ced73-c935-4f2a-ae4a-a3aaf8623def\") " pod="calico-system/calico-node-vqj29"
Nov 7 23:58:21.921372 kubelet[2728]: I1107 23:58:21.921004 2728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/899ced73-c935-4f2a-ae4a-a3aaf8623def-cni-net-dir\") pod \"calico-node-vqj29\" (UID: \"899ced73-c935-4f2a-ae4a-a3aaf8623def\") " pod="calico-system/calico-node-vqj29"
Nov 7 23:58:21.921477 kubelet[2728]: I1107 23:58:21.921019 2728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/899ced73-c935-4f2a-ae4a-a3aaf8623def-var-run-calico\") pod \"calico-node-vqj29\" (UID: \"899ced73-c935-4f2a-ae4a-a3aaf8623def\") " pod="calico-system/calico-node-vqj29"
Nov 7 23:58:21.921477 kubelet[2728]: I1107 23:58:21.921034 2728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a2810366-71fe-4027-bd77-9f315917e6a6-tigera-ca-bundle\") pod \"calico-typha-6bdd75c7c8-k9fm6\" (UID: \"a2810366-71fe-4027-bd77-9f315917e6a6\") " pod="calico-system/calico-typha-6bdd75c7c8-k9fm6"
Nov 7 23:58:21.921477 kubelet[2728]: I1107 23:58:21.921048 2728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/899ced73-c935-4f2a-ae4a-a3aaf8623def-tigera-ca-bundle\") pod \"calico-node-vqj29\" (UID: \"899ced73-c935-4f2a-ae4a-a3aaf8623def\") " pod="calico-system/calico-node-vqj29"
Nov 7 23:58:21.921477 kubelet[2728]: I1107 23:58:21.921062 2728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/899ced73-c935-4f2a-ae4a-a3aaf8623def-policysync\") pod \"calico-node-vqj29\" (UID: \"899ced73-c935-4f2a-ae4a-a3aaf8623def\") " pod="calico-system/calico-node-vqj29"
Nov 7 23:58:21.921477 kubelet[2728]: I1107 23:58:21.921075 2728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/899ced73-c935-4f2a-ae4a-a3aaf8623def-var-lib-calico\") pod \"calico-node-vqj29\" (UID: \"899ced73-c935-4f2a-ae4a-a3aaf8623def\") " pod="calico-system/calico-node-vqj29"
Nov 7 23:58:21.921598 kubelet[2728]: I1107 23:58:21.921091 2728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/a2810366-71fe-4027-bd77-9f315917e6a6-typha-certs\") pod \"calico-typha-6bdd75c7c8-k9fm6\" (UID: \"a2810366-71fe-4027-bd77-9f315917e6a6\") " pod="calico-system/calico-typha-6bdd75c7c8-k9fm6"
Nov 7 23:58:21.921598 kubelet[2728]: I1107 23:58:21.921105 2728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/899ced73-c935-4f2a-ae4a-a3aaf8623def-cni-bin-dir\") pod \"calico-node-vqj29\" (UID: \"899ced73-c935-4f2a-ae4a-a3aaf8623def\") " pod="calico-system/calico-node-vqj29"
Nov 7 23:58:21.990844 kubelet[2728]: E1107 23:58:21.990765 2728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zwgj8" podUID="59a419f6-34bd-4030-8aca-5c108260b7ed"
Nov 7 23:58:22.022812 kubelet[2728]: I1107 23:58:22.021500 2728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/59a419f6-34bd-4030-8aca-5c108260b7ed-kubelet-dir\") pod \"csi-node-driver-zwgj8\" (UID: \"59a419f6-34bd-4030-8aca-5c108260b7ed\") " pod="calico-system/csi-node-driver-zwgj8"
Nov 7 23:58:22.022812 kubelet[2728]: I1107 23:58:22.021555 2728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sc8qr\" (UniqueName: \"kubernetes.io/projected/59a419f6-34bd-4030-8aca-5c108260b7ed-kube-api-access-sc8qr\") pod \"csi-node-driver-zwgj8\" (UID: \"59a419f6-34bd-4030-8aca-5c108260b7ed\") " pod="calico-system/csi-node-driver-zwgj8"
Nov 7 23:58:22.022812 kubelet[2728]: I1107 23:58:22.021607 2728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/59a419f6-34bd-4030-8aca-5c108260b7ed-registration-dir\") pod \"csi-node-driver-zwgj8\" (UID: \"59a419f6-34bd-4030-8aca-5c108260b7ed\") " pod="calico-system/csi-node-driver-zwgj8"
Nov 7 23:58:22.022812 kubelet[2728]: I1107 23:58:22.021699 2728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/59a419f6-34bd-4030-8aca-5c108260b7ed-socket-dir\") pod \"csi-node-driver-zwgj8\" (UID: \"59a419f6-34bd-4030-8aca-5c108260b7ed\") " pod="calico-system/csi-node-driver-zwgj8"
Nov 7 23:58:22.022812 kubelet[2728]: I1107 23:58:22.021812 2728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/59a419f6-34bd-4030-8aca-5c108260b7ed-varrun\") pod \"csi-node-driver-zwgj8\" (UID: \"59a419f6-34bd-4030-8aca-5c108260b7ed\") " pod="calico-system/csi-node-driver-zwgj8"
Nov 7 23:58:22.028913 kubelet[2728]: E1107 23:58:22.028881 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 7 23:58:22.028913 kubelet[2728]: W1107 23:58:22.028906 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 7 23:58:22.030243 kubelet[2728]: E1107 23:58:22.030210 2728 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 7 23:58:22.037181 kubelet[2728]: E1107 23:58:22.037125 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 7 23:58:22.037181 kubelet[2728]: W1107 23:58:22.037170 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 7 23:58:22.037447 kubelet[2728]: E1107 23:58:22.037205 2728 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 7 23:58:22.046767 kubelet[2728]: E1107 23:58:22.046706 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 7 23:58:22.046767 kubelet[2728]: W1107 23:58:22.046731 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 7 23:58:22.047234 kubelet[2728]: E1107 23:58:22.047167 2728 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 7 23:58:22.058130 kubelet[2728]: E1107 23:58:22.058041 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 7 23:58:22.058130 kubelet[2728]: W1107 23:58:22.058064 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 7 23:58:22.058130 kubelet[2728]: E1107 23:58:22.058085 2728 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 7 23:58:22.124153 kubelet[2728]: E1107 23:58:22.123994 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 7 23:58:22.124153 kubelet[2728]: W1107 23:58:22.124020 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 7 23:58:22.124153 kubelet[2728]: E1107 23:58:22.124041 2728 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 7 23:58:22.124459 kubelet[2728]: E1107 23:58:22.124301 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 7 23:58:22.124459 kubelet[2728]: W1107 23:58:22.124312 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 7 23:58:22.124459 kubelet[2728]: E1107 23:58:22.124324 2728 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 7 23:58:22.126704 kubelet[2728]: E1107 23:58:22.126657 2728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 7 23:58:22.127384 containerd[1559]: time="2025-11-07T23:58:22.127348045Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6bdd75c7c8-k9fm6,Uid:a2810366-71fe-4027-bd77-9f315917e6a6,Namespace:calico-system,Attempt:0,}"
Nov 7 23:58:22.130753 kubelet[2728]: E1107 23:58:22.130722 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 7 23:58:22.130753 kubelet[2728]: W1107 23:58:22.130751 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 7 23:58:22.130948 kubelet[2728]: E1107 23:58:22.130777 2728 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 7 23:58:22.131233 kubelet[2728]: E1107 23:58:22.131173 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 7 23:58:22.131233 kubelet[2728]: W1107 23:58:22.131192 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 7 23:58:22.131233 kubelet[2728]: E1107 23:58:22.131204 2728 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 7 23:58:22.131415 kubelet[2728]: E1107 23:58:22.131368 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 7 23:58:22.131415 kubelet[2728]: W1107 23:58:22.131382 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 7 23:58:22.131415 kubelet[2728]: E1107 23:58:22.131392 2728 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 7 23:58:22.131677 kubelet[2728]: E1107 23:58:22.131657 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 7 23:58:22.131677 kubelet[2728]: W1107 23:58:22.131671 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 7 23:58:22.131749 kubelet[2728]: E1107 23:58:22.131681 2728 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 7 23:58:22.131844 kubelet[2728]: E1107 23:58:22.131833 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 7 23:58:22.131844 kubelet[2728]: W1107 23:58:22.131844 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 7 23:58:22.131892 kubelet[2728]: E1107 23:58:22.131853 2728 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 7 23:58:22.132022 kubelet[2728]: E1107 23:58:22.132012 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 7 23:58:22.132022 kubelet[2728]: W1107 23:58:22.132022 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 7 23:58:22.132070 kubelet[2728]: E1107 23:58:22.132030 2728 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 7 23:58:22.132251 kubelet[2728]: E1107 23:58:22.132237 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 7 23:58:22.132251 kubelet[2728]: W1107 23:58:22.132250 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 7 23:58:22.132314 kubelet[2728]: E1107 23:58:22.132258 2728 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 7 23:58:22.133918 kubelet[2728]: E1107 23:58:22.133838 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 7 23:58:22.133918 kubelet[2728]: W1107 23:58:22.133856 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 7 23:58:22.133918 kubelet[2728]: E1107 23:58:22.133871 2728 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 7 23:58:22.134238 kubelet[2728]: E1107 23:58:22.134220 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 7 23:58:22.134282 kubelet[2728]: W1107 23:58:22.134238 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 7 23:58:22.134282 kubelet[2728]: E1107 23:58:22.134252 2728 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 7 23:58:22.134487 kubelet[2728]: E1107 23:58:22.134472 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 7 23:58:22.134487 kubelet[2728]: W1107 23:58:22.134487 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 7 23:58:22.134649 kubelet[2728]: E1107 23:58:22.134497 2728 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 7 23:58:22.134764 kubelet[2728]: E1107 23:58:22.134739 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 7 23:58:22.134764 kubelet[2728]: W1107 23:58:22.134754 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 7 23:58:22.134818 kubelet[2728]: E1107 23:58:22.134766 2728 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 7 23:58:22.135080 kubelet[2728]: E1107 23:58:22.135064 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 7 23:58:22.135080 kubelet[2728]: W1107 23:58:22.135078 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 7 23:58:22.135158 kubelet[2728]: E1107 23:58:22.135090 2728 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 7 23:58:22.135355 kubelet[2728]: E1107 23:58:22.135330 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 7 23:58:22.135355 kubelet[2728]: W1107 23:58:22.135343 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 7 23:58:22.135355 kubelet[2728]: E1107 23:58:22.135354 2728 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 7 23:58:22.135546 kubelet[2728]: E1107 23:58:22.135534 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 7 23:58:22.135546 kubelet[2728]: W1107 23:58:22.135545 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 7 23:58:22.135630 kubelet[2728]: E1107 23:58:22.135554 2728 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 7 23:58:22.135742 kubelet[2728]: E1107 23:58:22.135730 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 7 23:58:22.135742 kubelet[2728]: W1107 23:58:22.135741 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 7 23:58:22.135742 kubelet[2728]: E1107 23:58:22.135750 2728 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 7 23:58:22.136218 kubelet[2728]: E1107 23:58:22.136201 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 7 23:58:22.136218 kubelet[2728]: W1107 23:58:22.136219 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 7 23:58:22.136290 kubelet[2728]: E1107 23:58:22.136231 2728 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 7 23:58:22.136513 kubelet[2728]: E1107 23:58:22.136498 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 7 23:58:22.136513 kubelet[2728]: W1107 23:58:22.136512 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 7 23:58:22.136582 kubelet[2728]: E1107 23:58:22.136523 2728 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 7 23:58:22.136777 kubelet[2728]: E1107 23:58:22.136764 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 7 23:58:22.136777 kubelet[2728]: W1107 23:58:22.136777 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 7 23:58:22.136882 kubelet[2728]: E1107 23:58:22.136790 2728 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 7 23:58:22.136969 kubelet[2728]: E1107 23:58:22.136955 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 7 23:58:22.136969 kubelet[2728]: W1107 23:58:22.136968 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 7 23:58:22.137044 kubelet[2728]: E1107 23:58:22.136977 2728 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 7 23:58:22.137118 kubelet[2728]: E1107 23:58:22.137105 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 7 23:58:22.137118 kubelet[2728]: W1107 23:58:22.137116 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 7 23:58:22.137206 kubelet[2728]: E1107 23:58:22.137124 2728 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 7 23:58:22.137298 kubelet[2728]: E1107 23:58:22.137286 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 7 23:58:22.137298 kubelet[2728]: W1107 23:58:22.137298 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 7 23:58:22.137355 kubelet[2728]: E1107 23:58:22.137308 2728 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 7 23:58:22.137541 kubelet[2728]: E1107 23:58:22.137529 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 7 23:58:22.137541 kubelet[2728]: W1107 23:58:22.137541 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 7 23:58:22.137616 kubelet[2728]: E1107 23:58:22.137550 2728 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 7 23:58:22.137750 kubelet[2728]: E1107 23:58:22.137735 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 7 23:58:22.137750 kubelet[2728]: W1107 23:58:22.137749 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 7 23:58:22.137801 kubelet[2728]: E1107 23:58:22.137759 2728 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 7 23:58:22.149240 kubelet[2728]: E1107 23:58:22.148976 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 7 23:58:22.149240 kubelet[2728]: W1107 23:58:22.149001 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 7 23:58:22.149240 kubelet[2728]: E1107 23:58:22.149021 2728 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 7 23:58:22.163379 containerd[1559]: time="2025-11-07T23:58:22.163322567Z" level=info msg="connecting to shim 304f185f977f249186ea2719f3db66712a52b82ca18f0c155b1515b71a32c5a4" address="unix:///run/containerd/s/104a4420c8d8da5c22e7b6621afdebffcdc1fa4b0ca05c6b94d8c0339cb93e6c" namespace=k8s.io protocol=ttrpc version=3
Nov 7 23:58:22.185341 systemd[1]: Started cri-containerd-304f185f977f249186ea2719f3db66712a52b82ca18f0c155b1515b71a32c5a4.scope - libcontainer container 304f185f977f249186ea2719f3db66712a52b82ca18f0c155b1515b71a32c5a4.
Nov 7 23:58:22.186472 kubelet[2728]: E1107 23:58:22.186445 2728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 7 23:58:22.186991 containerd[1559]: time="2025-11-07T23:58:22.186953891Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-vqj29,Uid:899ced73-c935-4f2a-ae4a-a3aaf8623def,Namespace:calico-system,Attempt:0,}"
Nov 7 23:58:22.212372 containerd[1559]: time="2025-11-07T23:58:22.211657528Z" level=info msg="connecting to shim 91f74e3dd0c845d46ccbb3cd7838c31dcea4d47ba820248a0a8a10d5f11be27a" address="unix:///run/containerd/s/dbfe7450123b9c09eeefe8ab4eab202538c3c6d6bffa585dd4eaffe15cd5b2aa" namespace=k8s.io protocol=ttrpc version=3
Nov 7 23:58:22.238374 systemd[1]: Started cri-containerd-91f74e3dd0c845d46ccbb3cd7838c31dcea4d47ba820248a0a8a10d5f11be27a.scope - libcontainer container 91f74e3dd0c845d46ccbb3cd7838c31dcea4d47ba820248a0a8a10d5f11be27a.
Nov 7 23:58:22.246227 containerd[1559]: time="2025-11-07T23:58:22.246169020Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6bdd75c7c8-k9fm6,Uid:a2810366-71fe-4027-bd77-9f315917e6a6,Namespace:calico-system,Attempt:0,} returns sandbox id \"304f185f977f249186ea2719f3db66712a52b82ca18f0c155b1515b71a32c5a4\""
Nov 7 23:58:22.247313 kubelet[2728]: E1107 23:58:22.247280 2728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 7 23:58:22.248742 containerd[1559]: time="2025-11-07T23:58:22.248683124Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\""
Nov 7 23:58:22.265170 containerd[1559]: time="2025-11-07T23:58:22.265109015Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-vqj29,Uid:899ced73-c935-4f2a-ae4a-a3aaf8623def,Namespace:calico-system,Attempt:0,} returns sandbox id \"91f74e3dd0c845d46ccbb3cd7838c31dcea4d47ba820248a0a8a10d5f11be27a\""
Nov 7 23:58:22.266070 kubelet[2728]: E1107 23:58:22.266046 2728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 7 23:58:23.315410 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1984439835.mount: Deactivated successfully.
Nov 7 23:58:23.797222 containerd[1559]: time="2025-11-07T23:58:23.797132426Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=33090687"
Nov 7 23:58:23.800656 containerd[1559]: time="2025-11-07T23:58:23.800602684Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"33090541\" in 1.551874241s"
Nov 7 23:58:23.800656 containerd[1559]: time="2025-11-07T23:58:23.800648644Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\""
Nov 7 23:58:23.801334 containerd[1559]: time="2025-11-07T23:58:23.801286600Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 7 23:58:23.802170 containerd[1559]: time="2025-11-07T23:58:23.801562198Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\""
Nov 7 23:58:23.802170 containerd[1559]: time="2025-11-07T23:58:23.801974236Z" level=info msg="ImageCreate event name:\"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 7 23:58:23.802665 containerd[1559]: time="2025-11-07T23:58:23.802628992Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 7 23:58:23.822624 containerd[1559]: time="2025-11-07T23:58:23.822555188Z" level=info msg="CreateContainer within sandbox \"304f185f977f249186ea2719f3db66712a52b82ca18f0c155b1515b71a32c5a4\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Nov 7 23:58:23.829255 containerd[1559]: time="2025-11-07T23:58:23.829203627Z" level=info msg="Container 739bee4e8646523d8743410d0242da54d68855611e1cddf68c1b2508447d8453: CDI devices from CRI Config.CDIDevices: []"
Nov 7 23:58:23.835650 containerd[1559]: time="2025-11-07T23:58:23.835591388Z" level=info msg="CreateContainer within sandbox \"304f185f977f249186ea2719f3db66712a52b82ca18f0c155b1515b71a32c5a4\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"739bee4e8646523d8743410d0242da54d68855611e1cddf68c1b2508447d8453\""
Nov 7 23:58:23.836325 containerd[1559]: time="2025-11-07T23:58:23.836072425Z" level=info msg="StartContainer for \"739bee4e8646523d8743410d0242da54d68855611e1cddf68c1b2508447d8453\""
Nov 7 23:58:23.837237 containerd[1559]: time="2025-11-07T23:58:23.837186258Z" level=info msg="connecting to shim 739bee4e8646523d8743410d0242da54d68855611e1cddf68c1b2508447d8453" address="unix:///run/containerd/s/104a4420c8d8da5c22e7b6621afdebffcdc1fa4b0ca05c6b94d8c0339cb93e6c" protocol=ttrpc version=3
Nov 7 23:58:23.860365 systemd[1]: Started cri-containerd-739bee4e8646523d8743410d0242da54d68855611e1cddf68c1b2508447d8453.scope - libcontainer container 739bee4e8646523d8743410d0242da54d68855611e1cddf68c1b2508447d8453.
Nov 7 23:58:23.877851 kubelet[2728]: E1107 23:58:23.877441 2728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zwgj8" podUID="59a419f6-34bd-4030-8aca-5c108260b7ed"
Nov 7 23:58:23.903467 containerd[1559]: time="2025-11-07T23:58:23.903422768Z" level=info msg="StartContainer for \"739bee4e8646523d8743410d0242da54d68855611e1cddf68c1b2508447d8453\" returns successfully"
Nov 7 23:58:24.009034 kubelet[2728]: E1107 23:58:24.008617 2728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 7 23:58:24.019287 kubelet[2728]: E1107 23:58:24.019239 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 7 23:58:24.019287 kubelet[2728]: W1107 23:58:24.019273 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 7 23:58:24.019287 kubelet[2728]: E1107 23:58:24.019297 2728 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 7 23:58:24.020419 kubelet[2728]: E1107 23:58:24.020380 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 7 23:58:24.020419 kubelet[2728]: W1107 23:58:24.020402 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 7 23:58:24.020419 kubelet[2728]: E1107 23:58:24.020419 2728 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 7 23:58:24.022753 kubelet[2728]: E1107 23:58:24.022692 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 7 23:58:24.022753 kubelet[2728]: W1107 23:58:24.022719 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 7 23:58:24.024234 kubelet[2728]: E1107 23:58:24.024184 2728 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 7 23:58:24.024597 kubelet[2728]: E1107 23:58:24.024555 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 7 23:58:24.024597 kubelet[2728]: W1107 23:58:24.024583 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 7 23:58:24.024597 kubelet[2728]: E1107 23:58:24.024599 2728 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 7 23:58:24.024840 kubelet[2728]: E1107 23:58:24.024809 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 7 23:58:24.024840 kubelet[2728]: W1107 23:58:24.024826 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 7 23:58:24.024840 kubelet[2728]: E1107 23:58:24.024840 2728 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 7 23:58:24.025832 kubelet[2728]: E1107 23:58:24.025806 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 7 23:58:24.025832 kubelet[2728]: W1107 23:58:24.025826 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 7 23:58:24.026119 kubelet[2728]: E1107 23:58:24.025841 2728 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 7 23:58:24.026207 kubelet[2728]: E1107 23:58:24.026182 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 7 23:58:24.026207 kubelet[2728]: W1107 23:58:24.026203 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 7 23:58:24.026262 kubelet[2728]: E1107 23:58:24.026216 2728 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 7 23:58:24.026669 kubelet[2728]: E1107 23:58:24.026644 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 7 23:58:24.026669 kubelet[2728]: W1107 23:58:24.026664 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 7 23:58:24.026743 kubelet[2728]: E1107 23:58:24.026678 2728 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 7 23:58:24.027111 kubelet[2728]: E1107 23:58:24.027088 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 7 23:58:24.027845 kubelet[2728]: W1107 23:58:24.027820 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 7 23:58:24.027902 kubelet[2728]: E1107 23:58:24.027849 2728 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 7 23:58:24.028114 kubelet[2728]: E1107 23:58:24.028097 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 7 23:58:24.028114 kubelet[2728]: W1107 23:58:24.028112 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 7 23:58:24.028193 kubelet[2728]: E1107 23:58:24.028124 2728 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 7 23:58:24.030839 kubelet[2728]: E1107 23:58:24.030802 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 7 23:58:24.030839 kubelet[2728]: W1107 23:58:24.030829 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 7 23:58:24.030839 kubelet[2728]: E1107 23:58:24.030846 2728 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 7 23:58:24.033415 kubelet[2728]: E1107 23:58:24.033295 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 7 23:58:24.033415 kubelet[2728]: W1107 23:58:24.033327 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 7 23:58:24.033415 kubelet[2728]: E1107 23:58:24.033358 2728 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 7 23:58:24.033794 kubelet[2728]: E1107 23:58:24.033712 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 7 23:58:24.033794 kubelet[2728]: W1107 23:58:24.033729 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 7 23:58:24.033794 kubelet[2728]: E1107 23:58:24.033741 2728 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 7 23:58:24.034924 kubelet[2728]: E1107 23:58:24.034880 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 7 23:58:24.034924 kubelet[2728]: W1107 23:58:24.034899 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 7 23:58:24.034924 kubelet[2728]: E1107 23:58:24.034916 2728 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 7 23:58:24.036557 kubelet[2728]: E1107 23:58:24.035307 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 7 23:58:24.036557 kubelet[2728]: W1107 23:58:24.035327 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 7 23:58:24.036557 kubelet[2728]: E1107 23:58:24.035340 2728 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 7 23:58:24.046298 kubelet[2728]: E1107 23:58:24.046220 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 7 23:58:24.046298 kubelet[2728]: W1107 23:58:24.046248 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 7 23:58:24.046298 kubelet[2728]: E1107 23:58:24.046267 2728 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 7 23:58:24.048077 kubelet[2728]: E1107 23:58:24.047939 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 7 23:58:24.048077 kubelet[2728]: W1107 23:58:24.047977 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 7 23:58:24.054460 kubelet[2728]: E1107 23:58:24.047998 2728 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 7 23:58:24.054931 kubelet[2728]: E1107 23:58:24.054820 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 7 23:58:24.054931 kubelet[2728]: W1107 23:58:24.054854 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 7 23:58:24.054931 kubelet[2728]: E1107 23:58:24.054870 2728 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 7 23:58:24.055225 kubelet[2728]: E1107 23:58:24.055202 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 7 23:58:24.055225 kubelet[2728]: W1107 23:58:24.055222 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 7 23:58:24.055291 kubelet[2728]: E1107 23:58:24.055233 2728 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 7 23:58:24.055518 kubelet[2728]: E1107 23:58:24.055480 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 7 23:58:24.055518 kubelet[2728]: W1107 23:58:24.055495 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 7 23:58:24.055518 kubelet[2728]: E1107 23:58:24.055505 2728 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 7 23:58:24.056678 kubelet[2728]: E1107 23:58:24.055687 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 7 23:58:24.056678 kubelet[2728]: W1107 23:58:24.055698 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 7 23:58:24.056678 kubelet[2728]: E1107 23:58:24.055710 2728 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 7 23:58:24.056678 kubelet[2728]: E1107 23:58:24.055863 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 7 23:58:24.056678 kubelet[2728]: W1107 23:58:24.055872 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 7 23:58:24.056678 kubelet[2728]: E1107 23:58:24.055880 2728 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 7 23:58:24.056678 kubelet[2728]: E1107 23:58:24.056042 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 7 23:58:24.056678 kubelet[2728]: W1107 23:58:24.056050 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 7 23:58:24.056678 kubelet[2728]: E1107 23:58:24.056058 2728 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 7 23:58:24.056678 kubelet[2728]: E1107 23:58:24.056405 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 7 23:58:24.056902 kubelet[2728]: W1107 23:58:24.056415 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 7 23:58:24.056902 kubelet[2728]: E1107 23:58:24.056424 2728 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 7 23:58:24.056902 kubelet[2728]: E1107 23:58:24.056584 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 7 23:58:24.056902 kubelet[2728]: W1107 23:58:24.056593 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 7 23:58:24.056902 kubelet[2728]: E1107 23:58:24.056605 2728 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 7 23:58:24.056902 kubelet[2728]: E1107 23:58:24.056722 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 7 23:58:24.056902 kubelet[2728]: W1107 23:58:24.056729 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 7 23:58:24.056902 kubelet[2728]: E1107 23:58:24.056737 2728 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 7 23:58:24.056902 kubelet[2728]: E1107 23:58:24.056868 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 7 23:58:24.056902 kubelet[2728]: W1107 23:58:24.056876 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 7 23:58:24.057078 kubelet[2728]: E1107 23:58:24.056884 2728 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 7 23:58:24.057078 kubelet[2728]: E1107 23:58:24.057018 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 7 23:58:24.057078 kubelet[2728]: W1107 23:58:24.057026 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 7 23:58:24.057078 kubelet[2728]: E1107 23:58:24.057034 2728 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 7 23:58:24.057949 kubelet[2728]: E1107 23:58:24.057825 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 7 23:58:24.057949 kubelet[2728]: W1107 23:58:24.057858 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 7 23:58:24.057949 kubelet[2728]: E1107 23:58:24.057873 2728 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 7 23:58:24.059311 kubelet[2728]: E1107 23:58:24.059294 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 7 23:58:24.059814 kubelet[2728]: W1107 23:58:24.059433 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 7 23:58:24.059814 kubelet[2728]: E1107 23:58:24.059453 2728 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 7 23:58:24.060347 kubelet[2728]: E1107 23:58:24.060325 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 7 23:58:24.061167 kubelet[2728]: W1107 23:58:24.060423 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 7 23:58:24.061167 kubelet[2728]: E1107 23:58:24.060445 2728 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 7 23:58:24.062092 kubelet[2728]: E1107 23:58:24.061596 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 7 23:58:24.062239 kubelet[2728]: W1107 23:58:24.062217 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 7 23:58:24.062306 kubelet[2728]: E1107 23:58:24.062294 2728 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 7 23:58:24.063254 kubelet[2728]: E1107 23:58:24.063228 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 7 23:58:24.063360 kubelet[2728]: W1107 23:58:24.063344 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 7 23:58:24.063421 kubelet[2728]: E1107 23:58:24.063409 2728 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 7 23:58:24.817985 containerd[1559]: time="2025-11-07T23:58:24.817922182Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 7 23:58:24.818601 containerd[1559]: time="2025-11-07T23:58:24.818569019Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4266741"
Nov 7 23:58:24.819417 containerd[1559]: time="2025-11-07T23:58:24.819387014Z" level=info msg="ImageCreate event name:\"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 7 23:58:24.821802 containerd[1559]: time="2025-11-07T23:58:24.821760320Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 7 23:58:24.822451 containerd[1559]: time="2025-11-07T23:58:24.822423156Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5636392\" in 1.020820878s"
Nov 7 23:58:24.822485 containerd[1559]: time="2025-11-07T23:58:24.822460956Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\""
Nov 7 23:58:24.828100 containerd[1559]: time="2025-11-07T23:58:24.828049004Z" level=info msg="CreateContainer within sandbox \"91f74e3dd0c845d46ccbb3cd7838c31dcea4d47ba820248a0a8a10d5f11be27a\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Nov 7 23:58:24.836376 containerd[1559]: time="2025-11-07T23:58:24.835254162Z" level=info msg="Container e1b6c210da79f3f660b3557e4fb0ebde24e4f97cdbf3d089a5538bd8061c1673: CDI devices from CRI Config.CDIDevices: []"
Nov 7 23:58:24.842506 containerd[1559]: time="2025-11-07T23:58:24.842464440Z" level=info msg="CreateContainer within sandbox \"91f74e3dd0c845d46ccbb3cd7838c31dcea4d47ba820248a0a8a10d5f11be27a\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"e1b6c210da79f3f660b3557e4fb0ebde24e4f97cdbf3d089a5538bd8061c1673\""
Nov 7 23:58:24.843401 containerd[1559]: time="2025-11-07T23:58:24.843371755Z" level=info msg="StartContainer for \"e1b6c210da79f3f660b3557e4fb0ebde24e4f97cdbf3d089a5538bd8061c1673\""
Nov 7 23:58:24.845125 containerd[1559]: time="2025-11-07T23:58:24.845092505Z" level=info msg="connecting to shim e1b6c210da79f3f660b3557e4fb0ebde24e4f97cdbf3d089a5538bd8061c1673" address="unix:///run/containerd/s/dbfe7450123b9c09eeefe8ab4eab202538c3c6d6bffa585dd4eaffe15cd5b2aa" protocol=ttrpc version=3
Nov 7 23:58:24.874342 systemd[1]: Started cri-containerd-e1b6c210da79f3f660b3557e4fb0ebde24e4f97cdbf3d089a5538bd8061c1673.scope - libcontainer container e1b6c210da79f3f660b3557e4fb0ebde24e4f97cdbf3d089a5538bd8061c1673.
Nov 7 23:58:24.943552 containerd[1559]: time="2025-11-07T23:58:24.943511373Z" level=info msg="StartContainer for \"e1b6c210da79f3f660b3557e4fb0ebde24e4f97cdbf3d089a5538bd8061c1673\" returns successfully" Nov 7 23:58:24.957677 systemd[1]: cri-containerd-e1b6c210da79f3f660b3557e4fb0ebde24e4f97cdbf3d089a5538bd8061c1673.scope: Deactivated successfully. Nov 7 23:58:24.961270 containerd[1559]: time="2025-11-07T23:58:24.961193271Z" level=info msg="received container exit event container_id:\"e1b6c210da79f3f660b3557e4fb0ebde24e4f97cdbf3d089a5538bd8061c1673\" id:\"e1b6c210da79f3f660b3557e4fb0ebde24e4f97cdbf3d089a5538bd8061c1673\" pid:3452 exited_at:{seconds:1762559904 nanos:960931312}" Nov 7 23:58:24.992426 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e1b6c210da79f3f660b3557e4fb0ebde24e4f97cdbf3d089a5538bd8061c1673-rootfs.mount: Deactivated successfully. Nov 7 23:58:25.011344 kubelet[2728]: I1107 23:58:25.011313 2728 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 7 23:58:25.013006 kubelet[2728]: E1107 23:58:25.011719 2728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 7 23:58:25.013006 kubelet[2728]: E1107 23:58:25.011762 2728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 7 23:58:25.093962 kubelet[2728]: I1107 23:58:25.093896 2728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-6bdd75c7c8-k9fm6" podStartSLOduration=2.5405370229999997 podStartE2EDuration="4.093878094s" podCreationTimestamp="2025-11-07 23:58:21 +0000 UTC" firstStartedPulling="2025-11-07 23:58:22.248043768 +0000 UTC m=+25.586346127" lastFinishedPulling="2025-11-07 23:58:23.801384839 +0000 UTC m=+27.139687198" observedRunningTime="2025-11-07 23:58:24.026022738 +0000 UTC m=+27.364325097" watchObservedRunningTime="2025-11-07 23:58:25.093878094 +0000 UTC m=+28.432180453" Nov 7 23:58:25.877687 kubelet[2728]: E1107 23:58:25.877633 2728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zwgj8" podUID="59a419f6-34bd-4030-8aca-5c108260b7ed" Nov 7 23:58:26.015831 kubelet[2728]: E1107 23:58:26.015798 2728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 7 23:58:26.024530 containerd[1559]: time="2025-11-07T23:58:26.024457759Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Nov 7 23:58:27.878642 kubelet[2728]: E1107 23:58:27.878330 2728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zwgj8" podUID="59a419f6-34bd-4030-8aca-5c108260b7ed" Nov 7 23:58:28.486898 containerd[1559]: time="2025-11-07T23:58:28.486847497Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 7 23:58:28.489196 containerd[1559]: time="2025-11-07T23:58:28.489162647Z" 
level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=65925816" Nov 7 23:58:28.510292 containerd[1559]: time="2025-11-07T23:58:28.510231352Z" level=info msg="ImageCreate event name:\"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 7 23:58:28.529881 containerd[1559]: time="2025-11-07T23:58:28.529826104Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 7 23:58:28.531084 containerd[1559]: time="2025-11-07T23:58:28.531055579Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"67295507\" in 2.50654274s" Nov 7 23:58:28.531162 containerd[1559]: time="2025-11-07T23:58:28.531091179Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\"" Nov 7 23:58:28.557664 containerd[1559]: time="2025-11-07T23:58:28.557619780Z" level=info msg="CreateContainer within sandbox \"91f74e3dd0c845d46ccbb3cd7838c31dcea4d47ba820248a0a8a10d5f11be27a\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Nov 7 23:58:28.636400 containerd[1559]: time="2025-11-07T23:58:28.636347067Z" level=info msg="Container 4b1c3bc585e8bffa27a8b9b50db91a9ea9b30f3f630d36a193fd87e77ea5246e: CDI devices from CRI Config.CDIDevices: []" Nov 7 23:58:28.671842 containerd[1559]: time="2025-11-07T23:58:28.671705028Z" level=info msg="CreateContainer within sandbox \"91f74e3dd0c845d46ccbb3cd7838c31dcea4d47ba820248a0a8a10d5f11be27a\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"4b1c3bc585e8bffa27a8b9b50db91a9ea9b30f3f630d36a193fd87e77ea5246e\"" Nov 7 23:58:28.673076 containerd[1559]: time="2025-11-07T23:58:28.672386585Z" level=info msg="StartContainer for \"4b1c3bc585e8bffa27a8b9b50db91a9ea9b30f3f630d36a193fd87e77ea5246e\"" Nov 7 23:58:28.675255 containerd[1559]: time="2025-11-07T23:58:28.675201493Z" level=info msg="connecting to shim 4b1c3bc585e8bffa27a8b9b50db91a9ea9b30f3f630d36a193fd87e77ea5246e" address="unix:///run/containerd/s/dbfe7450123b9c09eeefe8ab4eab202538c3c6d6bffa585dd4eaffe15cd5b2aa" protocol=ttrpc version=3 Nov 7 23:58:28.697370 systemd[1]: Started cri-containerd-4b1c3bc585e8bffa27a8b9b50db91a9ea9b30f3f630d36a193fd87e77ea5246e.scope - libcontainer container 4b1c3bc585e8bffa27a8b9b50db91a9ea9b30f3f630d36a193fd87e77ea5246e. Nov 7 23:58:28.790384 containerd[1559]: time="2025-11-07T23:58:28.790266617Z" level=info msg="StartContainer for \"4b1c3bc585e8bffa27a8b9b50db91a9ea9b30f3f630d36a193fd87e77ea5246e\" returns successfully" Nov 7 23:58:29.027509 kubelet[2728]: E1107 23:58:29.027368 2728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 7 23:58:29.441083 systemd[1]: cri-containerd-4b1c3bc585e8bffa27a8b9b50db91a9ea9b30f3f630d36a193fd87e77ea5246e.scope: Deactivated successfully. 
Nov 7 23:58:29.442242 systemd[1]: cri-containerd-4b1c3bc585e8bffa27a8b9b50db91a9ea9b30f3f630d36a193fd87e77ea5246e.scope: Consumed 518ms CPU time, 170.7M memory peak, 2.6M read from disk, 165.9M written to disk. Nov 7 23:58:29.443769 containerd[1559]: time="2025-11-07T23:58:29.443688331Z" level=info msg="received container exit event container_id:\"4b1c3bc585e8bffa27a8b9b50db91a9ea9b30f3f630d36a193fd87e77ea5246e\" id:\"4b1c3bc585e8bffa27a8b9b50db91a9ea9b30f3f630d36a193fd87e77ea5246e\" pid:3510 exited_at:{seconds:1762559909 nanos:443433292}" Nov 7 23:58:29.467565 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4b1c3bc585e8bffa27a8b9b50db91a9ea9b30f3f630d36a193fd87e77ea5246e-rootfs.mount: Deactivated successfully. Nov 7 23:58:29.519617 kubelet[2728]: I1107 23:58:29.519579 2728 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Nov 7 23:58:29.717256 kubelet[2728]: I1107 23:58:29.716884 2728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4cq2p\" (UniqueName: \"kubernetes.io/projected/8db3bad8-681a-46cd-9b22-8dd5f3763a3c-kube-api-access-4cq2p\") pod \"whisker-5bcc8df89b-zvwlv\" (UID: \"8db3bad8-681a-46cd-9b22-8dd5f3763a3c\") " pod="calico-system/whisker-5bcc8df89b-zvwlv" Nov 7 23:58:29.717256 kubelet[2728]: I1107 23:58:29.717042 2728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/8db3bad8-681a-46cd-9b22-8dd5f3763a3c-whisker-backend-key-pair\") pod \"whisker-5bcc8df89b-zvwlv\" (UID: \"8db3bad8-681a-46cd-9b22-8dd5f3763a3c\") " pod="calico-system/whisker-5bcc8df89b-zvwlv" Nov 7 23:58:29.717256 kubelet[2728]: I1107 23:58:29.717070 2728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/976ce995-eb57-4c78-bcd8-fe6b36d7dd8e-calico-apiserver-certs\") pod \"calico-apiserver-676df99ff5-lml9g\" (UID: \"976ce995-eb57-4c78-bcd8-fe6b36d7dd8e\") " pod="calico-apiserver/calico-apiserver-676df99ff5-lml9g" Nov 7 23:58:29.717256 kubelet[2728]: I1107 23:58:29.717092 2728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8db3bad8-681a-46cd-9b22-8dd5f3763a3c-whisker-ca-bundle\") pod \"whisker-5bcc8df89b-zvwlv\" (UID: \"8db3bad8-681a-46cd-9b22-8dd5f3763a3c\") " pod="calico-system/whisker-5bcc8df89b-zvwlv" Nov 7 23:58:29.722601 systemd[1]: Created slice kubepods-besteffort-pod8db3bad8_681a_46cd_9b22_8dd5f3763a3c.slice - libcontainer container kubepods-besteffort-pod8db3bad8_681a_46cd_9b22_8dd5f3763a3c.slice. Nov 7 23:58:29.737404 systemd[1]: Created slice kubepods-besteffort-pod976ce995_eb57_4c78_bcd8_fe6b36d7dd8e.slice - libcontainer container kubepods-besteffort-pod976ce995_eb57_4c78_bcd8_fe6b36d7dd8e.slice. Nov 7 23:58:29.750015 systemd[1]: Created slice kubepods-besteffort-podda1ff15e_acf5_415a_98c1_50e005ef7778.slice - libcontainer container kubepods-besteffort-podda1ff15e_acf5_415a_98c1_50e005ef7778.slice. Nov 7 23:58:29.763122 systemd[1]: Created slice kubepods-burstable-pod0f5caead_ec81_47ca_97c7_f88bc4e0d10c.slice - libcontainer container kubepods-burstable-pod0f5caead_ec81_47ca_97c7_f88bc4e0d10c.slice. 
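The "Created slice" lines show how the kubelet's systemd cgroup driver derives pod cgroup names: the pod's QoS class (kubepods-besteffort-..., kubepods-burstable-...; guaranteed pods sit directly under kubepods) plus the pod UID with its dashes rewritten to underscores, because "-" is systemd's slice hierarchy separator. A sketch of the mapping, checked against the UIDs in the volume lines above:

def pod_slice_name(pod_uid: str, qos: str) -> str:
    # Dashes in the UID would collide with systemd's '-' hierarchy
    # separator, so the kubelet rewrites them to underscores.
    uid = pod_uid.replace("-", "_")
    prefix = "kubepods" if qos == "guaranteed" else f"kubepods-{qos}"
    return f"{prefix}-pod{uid}.slice"

assert (pod_slice_name("8db3bad8-681a-46cd-9b22-8dd5f3763a3c", "besteffort")
        == "kubepods-besteffort-pod8db3bad8_681a_46cd_9b22_8dd5f3763a3c.slice")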
Nov 7 23:58:29.777413 systemd[1]: Created slice kubepods-burstable-pod454b412a_04e6_4e0b_a20f_e2ceec9ccb01.slice - libcontainer container kubepods-burstable-pod454b412a_04e6_4e0b_a20f_e2ceec9ccb01.slice. Nov 7 23:58:29.786185 systemd[1]: Created slice kubepods-besteffort-pod1892480e_9728_4d8f_8844_1e28e6326f1c.slice - libcontainer container kubepods-besteffort-pod1892480e_9728_4d8f_8844_1e28e6326f1c.slice. Nov 7 23:58:29.798933 systemd[1]: Created slice kubepods-besteffort-poda4f3eb8b_111d_48cd_8798_ab0004f10d75.slice - libcontainer container kubepods-besteffort-poda4f3eb8b_111d_48cd_8798_ab0004f10d75.slice. Nov 7 23:58:29.818318 kubelet[2728]: I1107 23:58:29.818271 2728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s6wkt\" (UniqueName: \"kubernetes.io/projected/976ce995-eb57-4c78-bcd8-fe6b36d7dd8e-kube-api-access-s6wkt\") pod \"calico-apiserver-676df99ff5-lml9g\" (UID: \"976ce995-eb57-4c78-bcd8-fe6b36d7dd8e\") " pod="calico-apiserver/calico-apiserver-676df99ff5-lml9g" Nov 7 23:58:29.818476 kubelet[2728]: I1107 23:58:29.818411 2728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-js879\" (UniqueName: \"kubernetes.io/projected/da1ff15e-acf5-415a-98c1-50e005ef7778-kube-api-access-js879\") pod \"goldmane-666569f655-c265d\" (UID: \"da1ff15e-acf5-415a-98c1-50e005ef7778\") " pod="calico-system/goldmane-666569f655-c265d" Nov 7 23:58:29.818476 kubelet[2728]: I1107 23:58:29.818435 2728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a4f3eb8b-111d-48cd-8798-ab0004f10d75-tigera-ca-bundle\") pod \"calico-kube-controllers-7c8b6c5fd5-s4pn6\" (UID: \"a4f3eb8b-111d-48cd-8798-ab0004f10d75\") " pod="calico-system/calico-kube-controllers-7c8b6c5fd5-s4pn6" Nov 7 23:58:29.818476 kubelet[2728]: I1107 23:58:29.818451 2728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-54ssd\" (UniqueName: \"kubernetes.io/projected/454b412a-04e6-4e0b-a20f-e2ceec9ccb01-kube-api-access-54ssd\") pod \"coredns-674b8bbfcf-bpzjc\" (UID: \"454b412a-04e6-4e0b-a20f-e2ceec9ccb01\") " pod="kube-system/coredns-674b8bbfcf-bpzjc" Nov 7 23:58:29.818585 kubelet[2728]: I1107 23:58:29.818501 2728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/da1ff15e-acf5-415a-98c1-50e005ef7778-goldmane-key-pair\") pod \"goldmane-666569f655-c265d\" (UID: \"da1ff15e-acf5-415a-98c1-50e005ef7778\") " pod="calico-system/goldmane-666569f655-c265d" Nov 7 23:58:29.818585 kubelet[2728]: I1107 23:58:29.818523 2728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/1892480e-9728-4d8f-8844-1e28e6326f1c-calico-apiserver-certs\") pod \"calico-apiserver-676df99ff5-4k992\" (UID: \"1892480e-9728-4d8f-8844-1e28e6326f1c\") " pod="calico-apiserver/calico-apiserver-676df99ff5-4k992" Nov 7 23:58:29.818585 kubelet[2728]: I1107 23:58:29.818548 2728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0f5caead-ec81-47ca-97c7-f88bc4e0d10c-config-volume\") pod \"coredns-674b8bbfcf-hhp66\" (UID: \"0f5caead-ec81-47ca-97c7-f88bc4e0d10c\") " pod="kube-system/coredns-674b8bbfcf-hhp66" Nov 
7 23:58:29.818655 kubelet[2728]: I1107 23:58:29.818592 2728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/454b412a-04e6-4e0b-a20f-e2ceec9ccb01-config-volume\") pod \"coredns-674b8bbfcf-bpzjc\" (UID: \"454b412a-04e6-4e0b-a20f-e2ceec9ccb01\") " pod="kube-system/coredns-674b8bbfcf-bpzjc" Nov 7 23:58:29.818655 kubelet[2728]: I1107 23:58:29.818612 2728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9mvz4\" (UniqueName: \"kubernetes.io/projected/1892480e-9728-4d8f-8844-1e28e6326f1c-kube-api-access-9mvz4\") pod \"calico-apiserver-676df99ff5-4k992\" (UID: \"1892480e-9728-4d8f-8844-1e28e6326f1c\") " pod="calico-apiserver/calico-apiserver-676df99ff5-4k992" Nov 7 23:58:29.818655 kubelet[2728]: I1107 23:58:29.818626 2728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5ncl7\" (UniqueName: \"kubernetes.io/projected/0f5caead-ec81-47ca-97c7-f88bc4e0d10c-kube-api-access-5ncl7\") pod \"coredns-674b8bbfcf-hhp66\" (UID: \"0f5caead-ec81-47ca-97c7-f88bc4e0d10c\") " pod="kube-system/coredns-674b8bbfcf-hhp66" Nov 7 23:58:29.818655 kubelet[2728]: I1107 23:58:29.818647 2728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/da1ff15e-acf5-415a-98c1-50e005ef7778-config\") pod \"goldmane-666569f655-c265d\" (UID: \"da1ff15e-acf5-415a-98c1-50e005ef7778\") " pod="calico-system/goldmane-666569f655-c265d" Nov 7 23:58:29.818743 kubelet[2728]: I1107 23:58:29.818664 2728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/da1ff15e-acf5-415a-98c1-50e005ef7778-goldmane-ca-bundle\") pod \"goldmane-666569f655-c265d\" (UID: \"da1ff15e-acf5-415a-98c1-50e005ef7778\") " pod="calico-system/goldmane-666569f655-c265d" Nov 7 23:58:29.818743 kubelet[2728]: I1107 23:58:29.818681 2728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rbdf6\" (UniqueName: \"kubernetes.io/projected/a4f3eb8b-111d-48cd-8798-ab0004f10d75-kube-api-access-rbdf6\") pod \"calico-kube-controllers-7c8b6c5fd5-s4pn6\" (UID: \"a4f3eb8b-111d-48cd-8798-ab0004f10d75\") " pod="calico-system/calico-kube-controllers-7c8b6c5fd5-s4pn6" Nov 7 23:58:29.888567 systemd[1]: Created slice kubepods-besteffort-pod59a419f6_34bd_4030_8aca_5c108260b7ed.slice - libcontainer container kubepods-besteffort-pod59a419f6_34bd_4030_8aca_5c108260b7ed.slice. 
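The recurring dns.go:153 "Nameserver limits exceeded" errors are the kubelet noticing that this node's /etc/resolv.conf lists more nameservers than the classic glibc resolver limit of three; it applies only the first three (1.1.1.1, 1.0.0.1, 8.8.8.8 here) and reports the rest as omitted. A sketch of that trim, assuming a standard resolv.conf layout (the fourth server below is a hypothetical example):

MAX_NAMESERVERS = 3  # glibc MAXNS; the kubelet enforces the same cap

def applied_nameservers(resolv_conf: str) -> list[str]:
    # Collect "nameserver <addr>" entries and keep only the first three,
    # mirroring the "applied nameserver line" in the log message.
    servers = [parts[1] for line in resolv_conf.splitlines()
               if (parts := line.split())
               and parts[0] == "nameserver" and len(parts) > 1]
    return servers[:MAX_NAMESERVERS]

conf = ("nameserver 1.1.1.1\nnameserver 1.0.0.1\n"
        "nameserver 8.8.8.8\nnameserver 9.9.9.9\n")
print(applied_nameservers(conf))  # ['1.1.1.1', '1.0.0.1', '8.8.8.8']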
Nov 7 23:58:29.891406 containerd[1559]: time="2025-11-07T23:58:29.891337090Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zwgj8,Uid:59a419f6-34bd-4030-8aca-5c108260b7ed,Namespace:calico-system,Attempt:0,}" Nov 7 23:58:30.009984 containerd[1559]: time="2025-11-07T23:58:30.009674834Z" level=error msg="Failed to destroy network for sandbox \"22fda1cc12a3e5a0ed5ae4a8cda8fdfa55138535d46e62cf2dd655c757d8be4a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 7 23:58:30.011331 containerd[1559]: time="2025-11-07T23:58:30.011181228Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zwgj8,Uid:59a419f6-34bd-4030-8aca-5c108260b7ed,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"22fda1cc12a3e5a0ed5ae4a8cda8fdfa55138535d46e62cf2dd655c757d8be4a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 7 23:58:30.014796 kubelet[2728]: E1107 23:58:30.014704 2728 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"22fda1cc12a3e5a0ed5ae4a8cda8fdfa55138535d46e62cf2dd655c757d8be4a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 7 23:58:30.015015 kubelet[2728]: E1107 23:58:30.014840 2728 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"22fda1cc12a3e5a0ed5ae4a8cda8fdfa55138535d46e62cf2dd655c757d8be4a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-zwgj8" Nov 7 23:58:30.015015 kubelet[2728]: E1107 23:58:30.014866 2728 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"22fda1cc12a3e5a0ed5ae4a8cda8fdfa55138535d46e62cf2dd655c757d8be4a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-zwgj8" Nov 7 23:58:30.015015 kubelet[2728]: E1107 23:58:30.014939 2728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-zwgj8_calico-system(59a419f6-34bd-4030-8aca-5c108260b7ed)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-zwgj8_calico-system(59a419f6-34bd-4030-8aca-5c108260b7ed)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"22fda1cc12a3e5a0ed5ae4a8cda8fdfa55138535d46e62cf2dd655c757d8be4a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-zwgj8" podUID="59a419f6-34bd-4030-8aca-5c108260b7ed" Nov 7 23:58:30.035235 kubelet[2728]: E1107 23:58:30.035176 2728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were 
exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 7 23:58:30.036941 containerd[1559]: time="2025-11-07T23:58:30.036780008Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Nov 7 23:58:30.040573 containerd[1559]: time="2025-11-07T23:58:30.040516153Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5bcc8df89b-zvwlv,Uid:8db3bad8-681a-46cd-9b22-8dd5f3763a3c,Namespace:calico-system,Attempt:0,}" Nov 7 23:58:30.042727 containerd[1559]: time="2025-11-07T23:58:30.042669104Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-676df99ff5-lml9g,Uid:976ce995-eb57-4c78-bcd8-fe6b36d7dd8e,Namespace:calico-apiserver,Attempt:0,}" Nov 7 23:58:30.057217 containerd[1559]: time="2025-11-07T23:58:30.057167367Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-c265d,Uid:da1ff15e-acf5-415a-98c1-50e005ef7778,Namespace:calico-system,Attempt:0,}" Nov 7 23:58:30.069584 kubelet[2728]: E1107 23:58:30.069538 2728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 7 23:58:30.070441 containerd[1559]: time="2025-11-07T23:58:30.070168116Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-hhp66,Uid:0f5caead-ec81-47ca-97c7-f88bc4e0d10c,Namespace:kube-system,Attempt:0,}" Nov 7 23:58:30.084812 kubelet[2728]: E1107 23:58:30.084760 2728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 7 23:58:30.086726 containerd[1559]: time="2025-11-07T23:58:30.086670811Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-bpzjc,Uid:454b412a-04e6-4e0b-a20f-e2ceec9ccb01,Namespace:kube-system,Attempt:0,}" Nov 7 23:58:30.095796 containerd[1559]: time="2025-11-07T23:58:30.095741575Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-676df99ff5-4k992,Uid:1892480e-9728-4d8f-8844-1e28e6326f1c,Namespace:calico-apiserver,Attempt:0,}" Nov 7 23:58:30.105185 containerd[1559]: time="2025-11-07T23:58:30.104381221Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7c8b6c5fd5-s4pn6,Uid:a4f3eb8b-111d-48cd-8798-ab0004f10d75,Namespace:calico-system,Attempt:0,}" Nov 7 23:58:30.147864 containerd[1559]: time="2025-11-07T23:58:30.147466491Z" level=error msg="Failed to destroy network for sandbox \"1d6411440d82024db91b04539389a6a11e59356fc52d15b42a2294dac8b04261\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 7 23:58:30.157755 containerd[1559]: time="2025-11-07T23:58:30.157649891Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-676df99ff5-lml9g,Uid:976ce995-eb57-4c78-bcd8-fe6b36d7dd8e,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"1d6411440d82024db91b04539389a6a11e59356fc52d15b42a2294dac8b04261\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 7 23:58:30.158037 kubelet[2728]: E1107 23:58:30.157955 2728 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: 
code = Unknown desc = failed to setup network for sandbox \"1d6411440d82024db91b04539389a6a11e59356fc52d15b42a2294dac8b04261\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 7 23:58:30.158089 kubelet[2728]: E1107 23:58:30.158044 2728 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1d6411440d82024db91b04539389a6a11e59356fc52d15b42a2294dac8b04261\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-676df99ff5-lml9g" Nov 7 23:58:30.158089 kubelet[2728]: E1107 23:58:30.158073 2728 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1d6411440d82024db91b04539389a6a11e59356fc52d15b42a2294dac8b04261\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-676df99ff5-lml9g" Nov 7 23:58:30.158200 kubelet[2728]: E1107 23:58:30.158153 2728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-676df99ff5-lml9g_calico-apiserver(976ce995-eb57-4c78-bcd8-fe6b36d7dd8e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-676df99ff5-lml9g_calico-apiserver(976ce995-eb57-4c78-bcd8-fe6b36d7dd8e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1d6411440d82024db91b04539389a6a11e59356fc52d15b42a2294dac8b04261\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-676df99ff5-lml9g" podUID="976ce995-eb57-4c78-bcd8-fe6b36d7dd8e" Nov 7 23:58:30.161717 containerd[1559]: time="2025-11-07T23:58:30.161660236Z" level=error msg="Failed to destroy network for sandbox \"ccfb5b6a2aa6d177ec79c9572d82637fbd37b8c995355b42ecbe6ea92b68e705\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 7 23:58:30.166102 containerd[1559]: time="2025-11-07T23:58:30.166038778Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-c265d,Uid:da1ff15e-acf5-415a-98c1-50e005ef7778,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ccfb5b6a2aa6d177ec79c9572d82637fbd37b8c995355b42ecbe6ea92b68e705\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 7 23:58:30.166755 kubelet[2728]: E1107 23:58:30.166710 2728 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ccfb5b6a2aa6d177ec79c9572d82637fbd37b8c995355b42ecbe6ea92b68e705\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Nov 7 23:58:30.166887 kubelet[2728]: E1107 23:58:30.166799 2728 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ccfb5b6a2aa6d177ec79c9572d82637fbd37b8c995355b42ecbe6ea92b68e705\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-c265d" Nov 7 23:58:30.166887 kubelet[2728]: E1107 23:58:30.166827 2728 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ccfb5b6a2aa6d177ec79c9572d82637fbd37b8c995355b42ecbe6ea92b68e705\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-c265d" Nov 7 23:58:30.166976 kubelet[2728]: E1107 23:58:30.166885 2728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-c265d_calico-system(da1ff15e-acf5-415a-98c1-50e005ef7778)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-c265d_calico-system(da1ff15e-acf5-415a-98c1-50e005ef7778)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ccfb5b6a2aa6d177ec79c9572d82637fbd37b8c995355b42ecbe6ea92b68e705\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-c265d" podUID="da1ff15e-acf5-415a-98c1-50e005ef7778" Nov 7 23:58:30.167445 containerd[1559]: time="2025-11-07T23:58:30.167277373Z" level=error msg="Failed to destroy network for sandbox \"10fc9cfeb386bc90641ce4209b3070e12118e78915186e35e2a46c816c7659af\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 7 23:58:30.174238 containerd[1559]: time="2025-11-07T23:58:30.174176266Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5bcc8df89b-zvwlv,Uid:8db3bad8-681a-46cd-9b22-8dd5f3763a3c,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"10fc9cfeb386bc90641ce4209b3070e12118e78915186e35e2a46c816c7659af\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 7 23:58:30.174481 kubelet[2728]: E1107 23:58:30.174435 2728 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"10fc9cfeb386bc90641ce4209b3070e12118e78915186e35e2a46c816c7659af\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 7 23:58:30.174898 kubelet[2728]: E1107 23:58:30.174505 2728 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"10fc9cfeb386bc90641ce4209b3070e12118e78915186e35e2a46c816c7659af\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5bcc8df89b-zvwlv" Nov 7 23:58:30.174898 kubelet[2728]: E1107 23:58:30.174543 2728 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"10fc9cfeb386bc90641ce4209b3070e12118e78915186e35e2a46c816c7659af\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5bcc8df89b-zvwlv" Nov 7 23:58:30.174898 kubelet[2728]: E1107 23:58:30.174628 2728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-5bcc8df89b-zvwlv_calico-system(8db3bad8-681a-46cd-9b22-8dd5f3763a3c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-5bcc8df89b-zvwlv_calico-system(8db3bad8-681a-46cd-9b22-8dd5f3763a3c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"10fc9cfeb386bc90641ce4209b3070e12118e78915186e35e2a46c816c7659af\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-5bcc8df89b-zvwlv" podUID="8db3bad8-681a-46cd-9b22-8dd5f3763a3c" Nov 7 23:58:30.192222 containerd[1559]: time="2025-11-07T23:58:30.192133315Z" level=error msg="Failed to destroy network for sandbox \"09b9d82b44ef9c4d1441b65605032528d1567b098e85984a9486dc236be10cca\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 7 23:58:30.194248 containerd[1559]: time="2025-11-07T23:58:30.194174667Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-bpzjc,Uid:454b412a-04e6-4e0b-a20f-e2ceec9ccb01,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"09b9d82b44ef9c4d1441b65605032528d1567b098e85984a9486dc236be10cca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 7 23:58:30.194578 kubelet[2728]: E1107 23:58:30.194489 2728 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"09b9d82b44ef9c4d1441b65605032528d1567b098e85984a9486dc236be10cca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 7 23:58:30.194659 kubelet[2728]: E1107 23:58:30.194585 2728 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"09b9d82b44ef9c4d1441b65605032528d1567b098e85984a9486dc236be10cca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-bpzjc" Nov 7 23:58:30.194659 kubelet[2728]: E1107 23:58:30.194608 2728 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup 
network for sandbox \"09b9d82b44ef9c4d1441b65605032528d1567b098e85984a9486dc236be10cca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-bpzjc" Nov 7 23:58:30.194731 kubelet[2728]: E1107 23:58:30.194669 2728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-bpzjc_kube-system(454b412a-04e6-4e0b-a20f-e2ceec9ccb01)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-bpzjc_kube-system(454b412a-04e6-4e0b-a20f-e2ceec9ccb01)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"09b9d82b44ef9c4d1441b65605032528d1567b098e85984a9486dc236be10cca\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-bpzjc" podUID="454b412a-04e6-4e0b-a20f-e2ceec9ccb01" Nov 7 23:58:30.197945 containerd[1559]: time="2025-11-07T23:58:30.197847533Z" level=error msg="Failed to destroy network for sandbox \"c5eaebfae33a66927df72c97684bdbbd427dc6bbb6e3eddeb81fa65e6c4f6859\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 7 23:58:30.201294 containerd[1559]: time="2025-11-07T23:58:30.201024120Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-hhp66,Uid:0f5caead-ec81-47ca-97c7-f88bc4e0d10c,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c5eaebfae33a66927df72c97684bdbbd427dc6bbb6e3eddeb81fa65e6c4f6859\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 7 23:58:30.203004 kubelet[2728]: E1107 23:58:30.201657 2728 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c5eaebfae33a66927df72c97684bdbbd427dc6bbb6e3eddeb81fa65e6c4f6859\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 7 23:58:30.203004 kubelet[2728]: E1107 23:58:30.201745 2728 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c5eaebfae33a66927df72c97684bdbbd427dc6bbb6e3eddeb81fa65e6c4f6859\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-hhp66" Nov 7 23:58:30.203004 kubelet[2728]: E1107 23:58:30.201785 2728 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c5eaebfae33a66927df72c97684bdbbd427dc6bbb6e3eddeb81fa65e6c4f6859\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-hhp66" Nov 7 23:58:30.203209 kubelet[2728]: E1107 23:58:30.201845 
2728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-hhp66_kube-system(0f5caead-ec81-47ca-97c7-f88bc4e0d10c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-hhp66_kube-system(0f5caead-ec81-47ca-97c7-f88bc4e0d10c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c5eaebfae33a66927df72c97684bdbbd427dc6bbb6e3eddeb81fa65e6c4f6859\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-hhp66" podUID="0f5caead-ec81-47ca-97c7-f88bc4e0d10c" Nov 7 23:58:30.212130 containerd[1559]: time="2025-11-07T23:58:30.212057797Z" level=error msg="Failed to destroy network for sandbox \"2a6ed744b65f50e0deb2df55759bfef034249441bf665a6af414d5dd9c7aed07\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 7 23:58:30.213955 containerd[1559]: time="2025-11-07T23:58:30.213900230Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-676df99ff5-4k992,Uid:1892480e-9728-4d8f-8844-1e28e6326f1c,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"2a6ed744b65f50e0deb2df55759bfef034249441bf665a6af414d5dd9c7aed07\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 7 23:58:30.214212 kubelet[2728]: E1107 23:58:30.214167 2728 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2a6ed744b65f50e0deb2df55759bfef034249441bf665a6af414d5dd9c7aed07\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 7 23:58:30.214284 kubelet[2728]: E1107 23:58:30.214239 2728 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2a6ed744b65f50e0deb2df55759bfef034249441bf665a6af414d5dd9c7aed07\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-676df99ff5-4k992" Nov 7 23:58:30.214284 kubelet[2728]: E1107 23:58:30.214271 2728 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2a6ed744b65f50e0deb2df55759bfef034249441bf665a6af414d5dd9c7aed07\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-676df99ff5-4k992" Nov 7 23:58:30.214351 kubelet[2728]: E1107 23:58:30.214330 2728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-676df99ff5-4k992_calico-apiserver(1892480e-9728-4d8f-8844-1e28e6326f1c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"calico-apiserver-676df99ff5-4k992_calico-apiserver(1892480e-9728-4d8f-8844-1e28e6326f1c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2a6ed744b65f50e0deb2df55759bfef034249441bf665a6af414d5dd9c7aed07\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-676df99ff5-4k992" podUID="1892480e-9728-4d8f-8844-1e28e6326f1c" Nov 7 23:58:30.217141 containerd[1559]: time="2025-11-07T23:58:30.217092057Z" level=error msg="Failed to destroy network for sandbox \"adb262f87a140d7859672169d50e7f664b0953c0aa9127deb9b41e0d0d23c8e1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 7 23:58:30.218431 containerd[1559]: time="2025-11-07T23:58:30.218386212Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7c8b6c5fd5-s4pn6,Uid:a4f3eb8b-111d-48cd-8798-ab0004f10d75,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"adb262f87a140d7859672169d50e7f664b0953c0aa9127deb9b41e0d0d23c8e1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 7 23:58:30.218703 kubelet[2728]: E1107 23:58:30.218656 2728 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"adb262f87a140d7859672169d50e7f664b0953c0aa9127deb9b41e0d0d23c8e1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 7 23:58:30.218753 kubelet[2728]: E1107 23:58:30.218729 2728 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"adb262f87a140d7859672169d50e7f664b0953c0aa9127deb9b41e0d0d23c8e1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7c8b6c5fd5-s4pn6" Nov 7 23:58:30.218782 kubelet[2728]: E1107 23:58:30.218751 2728 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"adb262f87a140d7859672169d50e7f664b0953c0aa9127deb9b41e0d0d23c8e1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7c8b6c5fd5-s4pn6" Nov 7 23:58:30.218835 kubelet[2728]: E1107 23:58:30.218804 2728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7c8b6c5fd5-s4pn6_calico-system(a4f3eb8b-111d-48cd-8798-ab0004f10d75)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7c8b6c5fd5-s4pn6_calico-system(a4f3eb8b-111d-48cd-8798-ab0004f10d75)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"adb262f87a140d7859672169d50e7f664b0953c0aa9127deb9b41e0d0d23c8e1\\\": plugin type=\\\"calico\\\" 
failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7c8b6c5fd5-s4pn6" podUID="a4f3eb8b-111d-48cd-8798-ab0004f10d75" Nov 7 23:58:30.831160 systemd[1]: run-netns-cni\x2d9d4bf206\x2de7be\x2d9851\x2d9cc4\x2dc87d324b20f9.mount: Deactivated successfully. Nov 7 23:58:34.121303 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1484418931.mount: Deactivated successfully. Nov 7 23:58:34.561629 containerd[1559]: time="2025-11-07T23:58:34.561491338Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 7 23:58:34.568161 containerd[1559]: time="2025-11-07T23:58:34.568055798Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=150934562" Nov 7 23:58:34.569352 containerd[1559]: time="2025-11-07T23:58:34.569302874Z" level=info msg="ImageCreate event name:\"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 7 23:58:34.573862 containerd[1559]: time="2025-11-07T23:58:34.573795220Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 7 23:58:34.584847 containerd[1559]: time="2025-11-07T23:58:34.584788987Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"150934424\" in 4.54795794s" Nov 7 23:58:34.584847 containerd[1559]: time="2025-11-07T23:58:34.584838827Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\"" Nov 7 23:58:34.601184 containerd[1559]: time="2025-11-07T23:58:34.600890218Z" level=info msg="CreateContainer within sandbox \"91f74e3dd0c845d46ccbb3cd7838c31dcea4d47ba820248a0a8a10d5f11be27a\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Nov 7 23:58:34.613423 containerd[1559]: time="2025-11-07T23:58:34.613365660Z" level=info msg="Container 7a24f02fb5b4e5d9929d1ccf275a757dfc76d7594625388251aa9ae8fbea4fc9: CDI devices from CRI Config.CDIDevices: []" Nov 7 23:58:34.623552 containerd[1559]: time="2025-11-07T23:58:34.623493349Z" level=info msg="CreateContainer within sandbox \"91f74e3dd0c845d46ccbb3cd7838c31dcea4d47ba820248a0a8a10d5f11be27a\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"7a24f02fb5b4e5d9929d1ccf275a757dfc76d7594625388251aa9ae8fbea4fc9\"" Nov 7 23:58:34.624523 containerd[1559]: time="2025-11-07T23:58:34.624482946Z" level=info msg="StartContainer for \"7a24f02fb5b4e5d9929d1ccf275a757dfc76d7594625388251aa9ae8fbea4fc9\"" Nov 7 23:58:34.626414 containerd[1559]: time="2025-11-07T23:58:34.626370220Z" level=info msg="connecting to shim 7a24f02fb5b4e5d9929d1ccf275a757dfc76d7594625388251aa9ae8fbea4fc9" address="unix:///run/containerd/s/dbfe7450123b9c09eeefe8ab4eab202538c3c6d6bffa585dd4eaffe15cd5b2aa" protocol=ttrpc version=3 Nov 7 23:58:34.647344 systemd[1]: Started 
cri-containerd-7a24f02fb5b4e5d9929d1ccf275a757dfc76d7594625388251aa9ae8fbea4fc9.scope - libcontainer container 7a24f02fb5b4e5d9929d1ccf275a757dfc76d7594625388251aa9ae8fbea4fc9. Nov 7 23:58:34.727266 containerd[1559]: time="2025-11-07T23:58:34.727227033Z" level=info msg="StartContainer for \"7a24f02fb5b4e5d9929d1ccf275a757dfc76d7594625388251aa9ae8fbea4fc9\" returns successfully" Nov 7 23:58:34.864727 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Nov 7 23:58:34.864848 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Nov 7 23:58:35.052488 kubelet[2728]: I1107 23:58:35.052434 2728 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4cq2p\" (UniqueName: \"kubernetes.io/projected/8db3bad8-681a-46cd-9b22-8dd5f3763a3c-kube-api-access-4cq2p\") pod \"8db3bad8-681a-46cd-9b22-8dd5f3763a3c\" (UID: \"8db3bad8-681a-46cd-9b22-8dd5f3763a3c\") " Nov 7 23:58:35.052907 kubelet[2728]: I1107 23:58:35.052555 2728 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8db3bad8-681a-46cd-9b22-8dd5f3763a3c-whisker-ca-bundle\") pod \"8db3bad8-681a-46cd-9b22-8dd5f3763a3c\" (UID: \"8db3bad8-681a-46cd-9b22-8dd5f3763a3c\") " Nov 7 23:58:35.052907 kubelet[2728]: I1107 23:58:35.052578 2728 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/8db3bad8-681a-46cd-9b22-8dd5f3763a3c-whisker-backend-key-pair\") pod \"8db3bad8-681a-46cd-9b22-8dd5f3763a3c\" (UID: \"8db3bad8-681a-46cd-9b22-8dd5f3763a3c\") " Nov 7 23:58:35.057824 kubelet[2728]: E1107 23:58:35.057400 2728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 7 23:58:35.068285 kubelet[2728]: I1107 23:58:35.068242 2728 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8db3bad8-681a-46cd-9b22-8dd5f3763a3c-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "8db3bad8-681a-46cd-9b22-8dd5f3763a3c" (UID: "8db3bad8-681a-46cd-9b22-8dd5f3763a3c"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Nov 7 23:58:35.069087 kubelet[2728]: I1107 23:58:35.069039 2728 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8db3bad8-681a-46cd-9b22-8dd5f3763a3c-kube-api-access-4cq2p" (OuterVolumeSpecName: "kube-api-access-4cq2p") pod "8db3bad8-681a-46cd-9b22-8dd5f3763a3c" (UID: "8db3bad8-681a-46cd-9b22-8dd5f3763a3c"). InnerVolumeSpecName "kube-api-access-4cq2p". PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 7 23:58:35.075784 kubelet[2728]: I1107 23:58:35.075733 2728 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8db3bad8-681a-46cd-9b22-8dd5f3763a3c-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "8db3bad8-681a-46cd-9b22-8dd5f3763a3c" (UID: "8db3bad8-681a-46cd-9b22-8dd5f3763a3c"). InnerVolumeSpecName "whisker-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Nov 7 23:58:35.092565 kubelet[2728]: I1107 23:58:35.092171 2728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-vqj29" podStartSLOduration=1.7729750420000001 podStartE2EDuration="14.09212854s" podCreationTimestamp="2025-11-07 23:58:21 +0000 UTC" firstStartedPulling="2025-11-07 23:58:22.266530846 +0000 UTC m=+25.604833205" lastFinishedPulling="2025-11-07 23:58:34.585684344 +0000 UTC m=+37.923986703" observedRunningTime="2025-11-07 23:58:35.090665024 +0000 UTC m=+38.428967383" watchObservedRunningTime="2025-11-07 23:58:35.09212854 +0000 UTC m=+38.430430899" Nov 7 23:58:35.123972 systemd[1]: var-lib-kubelet-pods-8db3bad8\x2d681a\x2d46cd\x2d9b22\x2d8dd5f3763a3c-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d4cq2p.mount: Deactivated successfully. Nov 7 23:58:35.124095 systemd[1]: var-lib-kubelet-pods-8db3bad8\x2d681a\x2d46cd\x2d9b22\x2d8dd5f3763a3c-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Nov 7 23:58:35.153912 kubelet[2728]: I1107 23:58:35.153870 2728 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4cq2p\" (UniqueName: \"kubernetes.io/projected/8db3bad8-681a-46cd-9b22-8dd5f3763a3c-kube-api-access-4cq2p\") on node \"localhost\" DevicePath \"\"" Nov 7 23:58:35.153912 kubelet[2728]: I1107 23:58:35.153908 2728 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8db3bad8-681a-46cd-9b22-8dd5f3763a3c-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Nov 7 23:58:35.153912 kubelet[2728]: I1107 23:58:35.153917 2728 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/8db3bad8-681a-46cd-9b22-8dd5f3763a3c-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Nov 7 23:58:35.363405 systemd[1]: Removed slice kubepods-besteffort-pod8db3bad8_681a_46cd_9b22_8dd5f3763a3c.slice - libcontainer container kubepods-besteffort-pod8db3bad8_681a_46cd_9b22_8dd5f3763a3c.slice. Nov 7 23:58:35.426017 systemd[1]: Created slice kubepods-besteffort-pod1590c298_7505_47ec_a11a_671c25a2d127.slice - libcontainer container kubepods-besteffort-pod1590c298_7505_47ec_a11a_671c25a2d127.slice. 
Nov 7 23:58:35.555958 kubelet[2728]: I1107 23:58:35.555897 2728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j4m6q\" (UniqueName: \"kubernetes.io/projected/1590c298-7505-47ec-a11a-671c25a2d127-kube-api-access-j4m6q\") pod \"whisker-85d955cb58-r6ltp\" (UID: \"1590c298-7505-47ec-a11a-671c25a2d127\") " pod="calico-system/whisker-85d955cb58-r6ltp" Nov 7 23:58:35.555958 kubelet[2728]: I1107 23:58:35.555954 2728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/1590c298-7505-47ec-a11a-671c25a2d127-whisker-backend-key-pair\") pod \"whisker-85d955cb58-r6ltp\" (UID: \"1590c298-7505-47ec-a11a-671c25a2d127\") " pod="calico-system/whisker-85d955cb58-r6ltp" Nov 7 23:58:35.556120 kubelet[2728]: I1107 23:58:35.555976 2728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1590c298-7505-47ec-a11a-671c25a2d127-whisker-ca-bundle\") pod \"whisker-85d955cb58-r6ltp\" (UID: \"1590c298-7505-47ec-a11a-671c25a2d127\") " pod="calico-system/whisker-85d955cb58-r6ltp" Nov 7 23:58:35.729534 containerd[1559]: time="2025-11-07T23:58:35.729416601Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-85d955cb58-r6ltp,Uid:1590c298-7505-47ec-a11a-671c25a2d127,Namespace:calico-system,Attempt:0,}" Nov 7 23:58:35.915681 systemd-networkd[1474]: cali64acf580d90: Link UP Nov 7 23:58:35.915895 systemd-networkd[1474]: cali64acf580d90: Gained carrier Nov 7 23:58:35.934628 containerd[1559]: 2025-11-07 23:58:35.763 [INFO][3914] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 7 23:58:35.934628 containerd[1559]: 2025-11-07 23:58:35.796 [INFO][3914] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--85d955cb58--r6ltp-eth0 whisker-85d955cb58- calico-system 1590c298-7505-47ec-a11a-671c25a2d127 923 0 2025-11-07 23:58:35 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:85d955cb58 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-85d955cb58-r6ltp eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali64acf580d90 [] [] }} ContainerID="8267ed3105000e41b53b1c9558f0bd92da0daba3e3284db216d75590cc583a21" Namespace="calico-system" Pod="whisker-85d955cb58-r6ltp" WorkloadEndpoint="localhost-k8s-whisker--85d955cb58--r6ltp-" Nov 7 23:58:35.934628 containerd[1559]: 2025-11-07 23:58:35.796 [INFO][3914] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8267ed3105000e41b53b1c9558f0bd92da0daba3e3284db216d75590cc583a21" Namespace="calico-system" Pod="whisker-85d955cb58-r6ltp" WorkloadEndpoint="localhost-k8s-whisker--85d955cb58--r6ltp-eth0" Nov 7 23:58:35.934628 containerd[1559]: 2025-11-07 23:58:35.863 [INFO][3928] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8267ed3105000e41b53b1c9558f0bd92da0daba3e3284db216d75590cc583a21" HandleID="k8s-pod-network.8267ed3105000e41b53b1c9558f0bd92da0daba3e3284db216d75590cc583a21" Workload="localhost-k8s-whisker--85d955cb58--r6ltp-eth0" Nov 7 23:58:35.935044 containerd[1559]: 2025-11-07 23:58:35.863 [INFO][3928] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="8267ed3105000e41b53b1c9558f0bd92da0daba3e3284db216d75590cc583a21" 
HandleID="k8s-pod-network.8267ed3105000e41b53b1c9558f0bd92da0daba3e3284db216d75590cc583a21" Workload="localhost-k8s-whisker--85d955cb58--r6ltp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004c860), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-85d955cb58-r6ltp", "timestamp":"2025-11-07 23:58:35.863626738 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 7 23:58:35.935044 containerd[1559]: 2025-11-07 23:58:35.863 [INFO][3928] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 7 23:58:35.935044 containerd[1559]: 2025-11-07 23:58:35.863 [INFO][3928] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 7 23:58:35.935044 containerd[1559]: 2025-11-07 23:58:35.864 [INFO][3928] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 7 23:58:35.935044 containerd[1559]: 2025-11-07 23:58:35.874 [INFO][3928] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8267ed3105000e41b53b1c9558f0bd92da0daba3e3284db216d75590cc583a21" host="localhost" Nov 7 23:58:35.935044 containerd[1559]: 2025-11-07 23:58:35.881 [INFO][3928] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 7 23:58:35.935044 containerd[1559]: 2025-11-07 23:58:35.885 [INFO][3928] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 7 23:58:35.935044 containerd[1559]: 2025-11-07 23:58:35.887 [INFO][3928] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 7 23:58:35.935044 containerd[1559]: 2025-11-07 23:58:35.890 [INFO][3928] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 7 23:58:35.935044 containerd[1559]: 2025-11-07 23:58:35.890 [INFO][3928] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.8267ed3105000e41b53b1c9558f0bd92da0daba3e3284db216d75590cc583a21" host="localhost" Nov 7 23:58:35.935253 containerd[1559]: 2025-11-07 23:58:35.892 [INFO][3928] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.8267ed3105000e41b53b1c9558f0bd92da0daba3e3284db216d75590cc583a21 Nov 7 23:58:35.935253 containerd[1559]: 2025-11-07 23:58:35.898 [INFO][3928] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.8267ed3105000e41b53b1c9558f0bd92da0daba3e3284db216d75590cc583a21" host="localhost" Nov 7 23:58:35.935253 containerd[1559]: 2025-11-07 23:58:35.903 [INFO][3928] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.8267ed3105000e41b53b1c9558f0bd92da0daba3e3284db216d75590cc583a21" host="localhost" Nov 7 23:58:35.935253 containerd[1559]: 2025-11-07 23:58:35.903 [INFO][3928] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.8267ed3105000e41b53b1c9558f0bd92da0daba3e3284db216d75590cc583a21" host="localhost" Nov 7 23:58:35.935253 containerd[1559]: 2025-11-07 23:58:35.904 [INFO][3928] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 7 23:58:35.935253 containerd[1559]: 2025-11-07 23:58:35.904 [INFO][3928] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="8267ed3105000e41b53b1c9558f0bd92da0daba3e3284db216d75590cc583a21" HandleID="k8s-pod-network.8267ed3105000e41b53b1c9558f0bd92da0daba3e3284db216d75590cc583a21" Workload="localhost-k8s-whisker--85d955cb58--r6ltp-eth0" Nov 7 23:58:35.935357 containerd[1559]: 2025-11-07 23:58:35.906 [INFO][3914] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8267ed3105000e41b53b1c9558f0bd92da0daba3e3284db216d75590cc583a21" Namespace="calico-system" Pod="whisker-85d955cb58-r6ltp" WorkloadEndpoint="localhost-k8s-whisker--85d955cb58--r6ltp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--85d955cb58--r6ltp-eth0", GenerateName:"whisker-85d955cb58-", Namespace:"calico-system", SelfLink:"", UID:"1590c298-7505-47ec-a11a-671c25a2d127", ResourceVersion:"923", Generation:0, CreationTimestamp:time.Date(2025, time.November, 7, 23, 58, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"85d955cb58", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-85d955cb58-r6ltp", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali64acf580d90", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 7 23:58:35.935357 containerd[1559]: 2025-11-07 23:58:35.906 [INFO][3914] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="8267ed3105000e41b53b1c9558f0bd92da0daba3e3284db216d75590cc583a21" Namespace="calico-system" Pod="whisker-85d955cb58-r6ltp" WorkloadEndpoint="localhost-k8s-whisker--85d955cb58--r6ltp-eth0" Nov 7 23:58:35.935426 containerd[1559]: 2025-11-07 23:58:35.906 [INFO][3914] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali64acf580d90 ContainerID="8267ed3105000e41b53b1c9558f0bd92da0daba3e3284db216d75590cc583a21" Namespace="calico-system" Pod="whisker-85d955cb58-r6ltp" WorkloadEndpoint="localhost-k8s-whisker--85d955cb58--r6ltp-eth0" Nov 7 23:58:35.935426 containerd[1559]: 2025-11-07 23:58:35.916 [INFO][3914] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8267ed3105000e41b53b1c9558f0bd92da0daba3e3284db216d75590cc583a21" Namespace="calico-system" Pod="whisker-85d955cb58-r6ltp" WorkloadEndpoint="localhost-k8s-whisker--85d955cb58--r6ltp-eth0" Nov 7 23:58:35.935466 containerd[1559]: 2025-11-07 23:58:35.916 [INFO][3914] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8267ed3105000e41b53b1c9558f0bd92da0daba3e3284db216d75590cc583a21" Namespace="calico-system" Pod="whisker-85d955cb58-r6ltp" WorkloadEndpoint="localhost-k8s-whisker--85d955cb58--r6ltp-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--85d955cb58--r6ltp-eth0", GenerateName:"whisker-85d955cb58-", Namespace:"calico-system", SelfLink:"", UID:"1590c298-7505-47ec-a11a-671c25a2d127", ResourceVersion:"923", Generation:0, CreationTimestamp:time.Date(2025, time.November, 7, 23, 58, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"85d955cb58", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8267ed3105000e41b53b1c9558f0bd92da0daba3e3284db216d75590cc583a21", Pod:"whisker-85d955cb58-r6ltp", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali64acf580d90", MAC:"4a:3f:60:9e:d4:5b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 7 23:58:35.935520 containerd[1559]: 2025-11-07 23:58:35.932 [INFO][3914] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8267ed3105000e41b53b1c9558f0bd92da0daba3e3284db216d75590cc583a21" Namespace="calico-system" Pod="whisker-85d955cb58-r6ltp" WorkloadEndpoint="localhost-k8s-whisker--85d955cb58--r6ltp-eth0" Nov 7 23:58:35.988482 containerd[1559]: time="2025-11-07T23:58:35.986776067Z" level=info msg="connecting to shim 8267ed3105000e41b53b1c9558f0bd92da0daba3e3284db216d75590cc583a21" address="unix:///run/containerd/s/0afd400913956c50517d9566862dff26701d4b23877d4b4c928e4b431cadca03" namespace=k8s.io protocol=ttrpc version=3 Nov 7 23:58:36.012376 systemd[1]: Started cri-containerd-8267ed3105000e41b53b1c9558f0bd92da0daba3e3284db216d75590cc583a21.scope - libcontainer container 8267ed3105000e41b53b1c9558f0bd92da0daba3e3284db216d75590cc583a21. 
Nov 7 23:58:36.024155 systemd-resolved[1269]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 7 23:58:36.046105 containerd[1559]: time="2025-11-07T23:58:36.046062626Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-85d955cb58-r6ltp,Uid:1590c298-7505-47ec-a11a-671c25a2d127,Namespace:calico-system,Attempt:0,} returns sandbox id \"8267ed3105000e41b53b1c9558f0bd92da0daba3e3284db216d75590cc583a21\"" Nov 7 23:58:36.049316 containerd[1559]: time="2025-11-07T23:58:36.049280577Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 7 23:58:36.059311 kubelet[2728]: E1107 23:58:36.059256 2728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 7 23:58:36.269577 containerd[1559]: time="2025-11-07T23:58:36.269432508Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 7 23:58:36.271866 containerd[1559]: time="2025-11-07T23:58:36.271800222Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 7 23:58:36.272234 containerd[1559]: time="2025-11-07T23:58:36.271908702Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 7 23:58:36.278182 kubelet[2728]: E1107 23:58:36.277381 2728 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 7 23:58:36.279075 kubelet[2728]: E1107 23:58:36.279001 2728 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 7 23:58:36.282453 kubelet[2728]: E1107 23:58:36.282385 2728 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:dc1b59121ea94ac4b69e69083cbd3f64,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-j4m6q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-85d955cb58-r6ltp_calico-system(1590c298-7505-47ec-a11a-671c25a2d127): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 7 23:58:36.286620 containerd[1559]: time="2025-11-07T23:58:36.286538622Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 7 23:58:36.531247 containerd[1559]: time="2025-11-07T23:58:36.530940009Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 7 23:58:36.574265 containerd[1559]: time="2025-11-07T23:58:36.574175893Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 7 23:58:36.574407 containerd[1559]: time="2025-11-07T23:58:36.574200933Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 7 23:58:36.574782 kubelet[2728]: E1107 23:58:36.574710 2728 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 7 23:58:36.574782 kubelet[2728]: E1107 23:58:36.574768 2728 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 7 23:58:36.574936 kubelet[2728]: E1107 23:58:36.574887 2728 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-j4m6q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-85d955cb58-r6ltp_calico-system(1590c298-7505-47ec-a11a-671c25a2d127): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 7 23:58:36.576953 kubelet[2728]: E1107 23:58:36.576898 2728 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-85d955cb58-r6ltp" podUID="1590c298-7505-47ec-a11a-671c25a2d127" Nov 7 23:58:36.876270 systemd[1]: Started sshd@7-10.0.0.69:22-10.0.0.1:50274.service - OpenSSH per-connection server daemon (10.0.0.1:50274). 
Nov 7 23:58:36.880172 kubelet[2728]: I1107 23:58:36.880092 2728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8db3bad8-681a-46cd-9b22-8dd5f3763a3c" path="/var/lib/kubelet/pods/8db3bad8-681a-46cd-9b22-8dd5f3763a3c/volumes" Nov 7 23:58:36.946110 sshd[4111]: Accepted publickey for core from 10.0.0.1 port 50274 ssh2: RSA SHA256:FAVExuDlYq3gF2W1zNPEB/OEHrl6bpWJ51XPtNkFj+Y Nov 7 23:58:36.947813 sshd-session[4111]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 7 23:58:36.952395 systemd-logind[1536]: New session 8 of user core. Nov 7 23:58:36.967396 systemd[1]: Started session-8.scope - Session 8 of User core. Nov 7 23:58:37.062336 kubelet[2728]: E1107 23:58:37.061335 2728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 7 23:58:37.064158 kubelet[2728]: E1107 23:58:37.063065 2728 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-85d955cb58-r6ltp" podUID="1590c298-7505-47ec-a11a-671c25a2d127" Nov 7 23:58:37.179844 sshd[4114]: Connection closed by 10.0.0.1 port 50274 Nov 7 23:58:37.180114 sshd-session[4111]: pam_unix(sshd:session): session closed for user core Nov 7 23:58:37.184386 systemd-logind[1536]: Session 8 logged out. Waiting for processes to exit. Nov 7 23:58:37.184667 systemd[1]: sshd@7-10.0.0.69:22-10.0.0.1:50274.service: Deactivated successfully. Nov 7 23:58:37.187722 systemd[1]: session-8.scope: Deactivated successfully. Nov 7 23:58:37.189677 systemd-logind[1536]: Removed session 8. 
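Once the first pull attempt fails with ErrImagePull, the 23:58:37 record already shows the pod in ImagePullBackOff: the kubelet re-queues the pull with exponential backoff, commonly documented as a 10-second base doubling up to a 5-minute ceiling. A sketch of that retry shape, assuming those default parameters:

package main

import (
    "fmt"
    "time"
)

// backoffDelays mirrors the commonly documented kubelet image-pull backoff
// (10s base, 5m cap) -- parameters assumed, not read from this log.
func backoffDelays(base, max time.Duration, attempts int) []time.Duration {
    delays := make([]time.Duration, 0, attempts)
    d := base
    for i := 0; i < attempts; i++ {
        delays = append(delays, d)
        d *= 2
        if d > max {
            d = max
        }
    }
    return delays
}

func main() {
    fmt.Println(backoffDelays(10*time.Second, 5*time.Minute, 7))
    // [10s 20s 40s 1m20s 2m40s 5m0s 5m0s]
}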
Nov 7 23:58:37.199266 systemd-networkd[1474]: cali64acf580d90: Gained IPv6LL Nov 7 23:58:40.879072 containerd[1559]: time="2025-11-07T23:58:40.879024993Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zwgj8,Uid:59a419f6-34bd-4030-8aca-5c108260b7ed,Namespace:calico-system,Attempt:0,}" Nov 7 23:58:40.995986 systemd-networkd[1474]: calied3bade001a: Link UP Nov 7 23:58:40.998251 systemd-networkd[1474]: calied3bade001a: Gained carrier Nov 7 23:58:41.015298 containerd[1559]: 2025-11-07 23:58:40.907 [INFO][4255] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 7 23:58:41.015298 containerd[1559]: 2025-11-07 23:58:40.922 [INFO][4255] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--zwgj8-eth0 csi-node-driver- calico-system 59a419f6-34bd-4030-8aca-5c108260b7ed 744 0 2025-11-07 23:58:21 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-zwgj8 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calied3bade001a [] [] }} ContainerID="1745aa92de4d9fab41badd6c622a2bec09811d78fcb181e2ceaaadcef0996cc5" Namespace="calico-system" Pod="csi-node-driver-zwgj8" WorkloadEndpoint="localhost-k8s-csi--node--driver--zwgj8-" Nov 7 23:58:41.015298 containerd[1559]: 2025-11-07 23:58:40.922 [INFO][4255] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1745aa92de4d9fab41badd6c622a2bec09811d78fcb181e2ceaaadcef0996cc5" Namespace="calico-system" Pod="csi-node-driver-zwgj8" WorkloadEndpoint="localhost-k8s-csi--node--driver--zwgj8-eth0" Nov 7 23:58:41.015298 containerd[1559]: 2025-11-07 23:58:40.949 [INFO][4269] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1745aa92de4d9fab41badd6c622a2bec09811d78fcb181e2ceaaadcef0996cc5" HandleID="k8s-pod-network.1745aa92de4d9fab41badd6c622a2bec09811d78fcb181e2ceaaadcef0996cc5" Workload="localhost-k8s-csi--node--driver--zwgj8-eth0" Nov 7 23:58:41.015582 containerd[1559]: 2025-11-07 23:58:40.949 [INFO][4269] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="1745aa92de4d9fab41badd6c622a2bec09811d78fcb181e2ceaaadcef0996cc5" HandleID="k8s-pod-network.1745aa92de4d9fab41badd6c622a2bec09811d78fcb181e2ceaaadcef0996cc5" Workload="localhost-k8s-csi--node--driver--zwgj8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000137720), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-zwgj8", "timestamp":"2025-11-07 23:58:40.949023288 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 7 23:58:41.015582 containerd[1559]: 2025-11-07 23:58:40.949 [INFO][4269] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 7 23:58:41.015582 containerd[1559]: 2025-11-07 23:58:40.949 [INFO][4269] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 7 23:58:41.015582 containerd[1559]: 2025-11-07 23:58:40.949 [INFO][4269] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 7 23:58:41.015582 containerd[1559]: 2025-11-07 23:58:40.959 [INFO][4269] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1745aa92de4d9fab41badd6c622a2bec09811d78fcb181e2ceaaadcef0996cc5" host="localhost" Nov 7 23:58:41.015582 containerd[1559]: 2025-11-07 23:58:40.965 [INFO][4269] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 7 23:58:41.015582 containerd[1559]: 2025-11-07 23:58:40.970 [INFO][4269] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 7 23:58:41.015582 containerd[1559]: 2025-11-07 23:58:40.972 [INFO][4269] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 7 23:58:41.015582 containerd[1559]: 2025-11-07 23:58:40.976 [INFO][4269] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 7 23:58:41.015582 containerd[1559]: 2025-11-07 23:58:40.976 [INFO][4269] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.1745aa92de4d9fab41badd6c622a2bec09811d78fcb181e2ceaaadcef0996cc5" host="localhost" Nov 7 23:58:41.015828 containerd[1559]: 2025-11-07 23:58:40.977 [INFO][4269] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.1745aa92de4d9fab41badd6c622a2bec09811d78fcb181e2ceaaadcef0996cc5 Nov 7 23:58:41.015828 containerd[1559]: 2025-11-07 23:58:40.982 [INFO][4269] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.1745aa92de4d9fab41badd6c622a2bec09811d78fcb181e2ceaaadcef0996cc5" host="localhost" Nov 7 23:58:41.015828 containerd[1559]: 2025-11-07 23:58:40.988 [INFO][4269] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.1745aa92de4d9fab41badd6c622a2bec09811d78fcb181e2ceaaadcef0996cc5" host="localhost" Nov 7 23:58:41.015828 containerd[1559]: 2025-11-07 23:58:40.988 [INFO][4269] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.1745aa92de4d9fab41badd6c622a2bec09811d78fcb181e2ceaaadcef0996cc5" host="localhost" Nov 7 23:58:41.015828 containerd[1559]: 2025-11-07 23:58:40.988 [INFO][4269] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
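Note that both allocations so far are keyed by a handle of the form k8s-pod-network.<containerID>: Calico stores the handle alongside the claimed IP so a later CNI DEL can release the address knowing only the handle. A toy illustration of that handle-indexed bookkeeping (not Calico's implementation):

package main

import (
    "fmt"
    "net/netip"
)

// handleAllocator hands out sequential addresses from a block and remembers
// which handle owns which address, so release needs only the handle.
type handleAllocator struct {
    next    netip.Addr
    byOwner map[string]netip.Addr
}

func newHandleAllocator(first netip.Addr) *handleAllocator {
    return &handleAllocator{next: first, byOwner: map[string]netip.Addr{}}
}

func (a *handleAllocator) assign(handle string) netip.Addr {
    ip := a.next
    a.byOwner[handle] = ip
    a.next = a.next.Next()
    return ip
}

func (a *handleAllocator) release(handle string) {
    delete(a.byOwner, handle) // a real allocator also returns the IP to the pool
}

func main() {
    alloc := newHandleAllocator(netip.MustParseAddr("192.168.88.129"))
    fmt.Println(alloc.assign("k8s-pod-network.8267ed31")) // 192.168.88.129 (whisker; handle truncated)
    fmt.Println(alloc.assign("k8s-pod-network.1745aa92")) // 192.168.88.130 (csi-node-driver)
}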
Nov 7 23:58:41.015828 containerd[1559]: 2025-11-07 23:58:40.988 [INFO][4269] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="1745aa92de4d9fab41badd6c622a2bec09811d78fcb181e2ceaaadcef0996cc5" HandleID="k8s-pod-network.1745aa92de4d9fab41badd6c622a2bec09811d78fcb181e2ceaaadcef0996cc5" Workload="localhost-k8s-csi--node--driver--zwgj8-eth0" Nov 7 23:58:41.015975 containerd[1559]: 2025-11-07 23:58:40.993 [INFO][4255] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1745aa92de4d9fab41badd6c622a2bec09811d78fcb181e2ceaaadcef0996cc5" Namespace="calico-system" Pod="csi-node-driver-zwgj8" WorkloadEndpoint="localhost-k8s-csi--node--driver--zwgj8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--zwgj8-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"59a419f6-34bd-4030-8aca-5c108260b7ed", ResourceVersion:"744", Generation:0, CreationTimestamp:time.Date(2025, time.November, 7, 23, 58, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-zwgj8", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calied3bade001a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 7 23:58:41.016045 containerd[1559]: 2025-11-07 23:58:40.993 [INFO][4255] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="1745aa92de4d9fab41badd6c622a2bec09811d78fcb181e2ceaaadcef0996cc5" Namespace="calico-system" Pod="csi-node-driver-zwgj8" WorkloadEndpoint="localhost-k8s-csi--node--driver--zwgj8-eth0" Nov 7 23:58:41.016045 containerd[1559]: 2025-11-07 23:58:40.993 [INFO][4255] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calied3bade001a ContainerID="1745aa92de4d9fab41badd6c622a2bec09811d78fcb181e2ceaaadcef0996cc5" Namespace="calico-system" Pod="csi-node-driver-zwgj8" WorkloadEndpoint="localhost-k8s-csi--node--driver--zwgj8-eth0" Nov 7 23:58:41.016045 containerd[1559]: 2025-11-07 23:58:40.997 [INFO][4255] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1745aa92de4d9fab41badd6c622a2bec09811d78fcb181e2ceaaadcef0996cc5" Namespace="calico-system" Pod="csi-node-driver-zwgj8" WorkloadEndpoint="localhost-k8s-csi--node--driver--zwgj8-eth0" Nov 7 23:58:41.016121 containerd[1559]: 2025-11-07 23:58:41.000 [INFO][4255] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1745aa92de4d9fab41badd6c622a2bec09811d78fcb181e2ceaaadcef0996cc5" Namespace="calico-system" Pod="csi-node-driver-zwgj8" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--zwgj8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--zwgj8-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"59a419f6-34bd-4030-8aca-5c108260b7ed", ResourceVersion:"744", Generation:0, CreationTimestamp:time.Date(2025, time.November, 7, 23, 58, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1745aa92de4d9fab41badd6c622a2bec09811d78fcb181e2ceaaadcef0996cc5", Pod:"csi-node-driver-zwgj8", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calied3bade001a", MAC:"7e:98:fd:a6:ec:8b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 7 23:58:41.016191 containerd[1559]: 2025-11-07 23:58:41.011 [INFO][4255] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1745aa92de4d9fab41badd6c622a2bec09811d78fcb181e2ceaaadcef0996cc5" Namespace="calico-system" Pod="csi-node-driver-zwgj8" WorkloadEndpoint="localhost-k8s-csi--node--driver--zwgj8-eth0" Nov 7 23:58:41.035332 containerd[1559]: time="2025-11-07T23:58:41.035281114Z" level=info msg="connecting to shim 1745aa92de4d9fab41badd6c622a2bec09811d78fcb181e2ceaaadcef0996cc5" address="unix:///run/containerd/s/434a1c01b0058f760bbe50c5cfde7fdb46094fa733fec6e30761fcba8ba6fc53" namespace=k8s.io protocol=ttrpc version=3 Nov 7 23:58:41.062361 systemd[1]: Started cri-containerd-1745aa92de4d9fab41badd6c622a2bec09811d78fcb181e2ceaaadcef0996cc5.scope - libcontainer container 1745aa92de4d9fab41badd6c622a2bec09811d78fcb181e2ceaaadcef0996cc5. 
Nov 7 23:58:41.073710 systemd-resolved[1269]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 7 23:58:41.086267 containerd[1559]: time="2025-11-07T23:58:41.086226255Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zwgj8,Uid:59a419f6-34bd-4030-8aca-5c108260b7ed,Namespace:calico-system,Attempt:0,} returns sandbox id \"1745aa92de4d9fab41badd6c622a2bec09811d78fcb181e2ceaaadcef0996cc5\"" Nov 7 23:58:41.087919 containerd[1559]: time="2025-11-07T23:58:41.087885252Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 7 23:58:41.286054 containerd[1559]: time="2025-11-07T23:58:41.285744709Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 7 23:58:41.288315 containerd[1559]: time="2025-11-07T23:58:41.288264784Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 7 23:58:41.288636 containerd[1559]: time="2025-11-07T23:58:41.288363144Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 7 23:58:41.288681 kubelet[2728]: E1107 23:58:41.288530 2728 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 7 23:58:41.288681 kubelet[2728]: E1107 23:58:41.288591 2728 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 7 23:58:41.289377 kubelet[2728]: E1107 23:58:41.288715 2728 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sc8qr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-zwgj8_calico-system(59a419f6-34bd-4030-8aca-5c108260b7ed): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 7 23:58:41.291440 containerd[1559]: time="2025-11-07T23:58:41.291400698Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 7 23:58:41.507373 containerd[1559]: time="2025-11-07T23:58:41.507237800Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 7 23:58:41.584318 containerd[1559]: time="2025-11-07T23:58:41.584245170Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 7 23:58:41.587540 containerd[1559]: time="2025-11-07T23:58:41.584594570Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 7 23:58:41.587874 kubelet[2728]: E1107 23:58:41.587829 2728 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 7 23:58:41.588376 kubelet[2728]: E1107 23:58:41.587964 2728 
kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 7 23:58:41.588376 kubelet[2728]: E1107 23:58:41.588096 2728 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sc8qr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-zwgj8_calico-system(59a419f6-34bd-4030-8aca-5c108260b7ed): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 7 23:58:41.590170 kubelet[2728]: E1107 23:58:41.589525 2728 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not 
found\"]" pod="calico-system/csi-node-driver-zwgj8" podUID="59a419f6-34bd-4030-8aca-5c108260b7ed" Nov 7 23:58:42.076184 kubelet[2728]: E1107 23:58:42.076067 2728 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-zwgj8" podUID="59a419f6-34bd-4030-8aca-5c108260b7ed" Nov 7 23:58:42.195642 systemd[1]: Started sshd@8-10.0.0.69:22-10.0.0.1:41020.service - OpenSSH per-connection server daemon (10.0.0.1:41020). Nov 7 23:58:42.251988 sshd[4357]: Accepted publickey for core from 10.0.0.1 port 41020 ssh2: RSA SHA256:FAVExuDlYq3gF2W1zNPEB/OEHrl6bpWJ51XPtNkFj+Y Nov 7 23:58:42.252849 sshd-session[4357]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 7 23:58:42.257200 systemd-logind[1536]: New session 9 of user core. Nov 7 23:58:42.268355 systemd[1]: Started session-9.scope - Session 9 of User core. Nov 7 23:58:42.397880 sshd[4361]: Connection closed by 10.0.0.1 port 41020 Nov 7 23:58:42.398316 sshd-session[4357]: pam_unix(sshd:session): session closed for user core Nov 7 23:58:42.402270 systemd[1]: sshd@8-10.0.0.69:22-10.0.0.1:41020.service: Deactivated successfully. Nov 7 23:58:42.404823 systemd[1]: session-9.scope: Deactivated successfully. Nov 7 23:58:42.405542 systemd-logind[1536]: Session 9 logged out. Waiting for processes to exit. Nov 7 23:58:42.406779 systemd-logind[1536]: Removed session 9. 
Nov 7 23:58:42.832226 systemd-networkd[1474]: calied3bade001a: Gained IPv6LL Nov 7 23:58:43.077911 kubelet[2728]: E1107 23:58:43.077840 2728 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-zwgj8" podUID="59a419f6-34bd-4030-8aca-5c108260b7ed" Nov 7 23:58:43.878201 kubelet[2728]: E1107 23:58:43.877992 2728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 7 23:58:43.878650 containerd[1559]: time="2025-11-07T23:58:43.878598132Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-bpzjc,Uid:454b412a-04e6-4e0b-a20f-e2ceec9ccb01,Namespace:kube-system,Attempt:0,}" Nov 7 23:58:43.878938 containerd[1559]: time="2025-11-07T23:58:43.878668652Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-c265d,Uid:da1ff15e-acf5-415a-98c1-50e005ef7778,Namespace:calico-system,Attempt:0,}" Nov 7 23:58:43.878938 containerd[1559]: time="2025-11-07T23:58:43.878886252Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-676df99ff5-4k992,Uid:1892480e-9728-4d8f-8844-1e28e6326f1c,Namespace:calico-apiserver,Attempt:0,}" Nov 7 23:58:43.879592 containerd[1559]: time="2025-11-07T23:58:43.879049692Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7c8b6c5fd5-s4pn6,Uid:a4f3eb8b-111d-48cd-8798-ab0004f10d75,Namespace:calico-system,Attempt:0,}" Nov 7 23:58:44.168705 systemd-networkd[1474]: cali67f256eabdc: Link UP Nov 7 23:58:44.168882 systemd-networkd[1474]: cali67f256eabdc: Gained carrier Nov 7 23:58:44.235953 containerd[1559]: 2025-11-07 23:58:43.996 [INFO][4449] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 7 23:58:44.235953 containerd[1559]: 2025-11-07 23:58:44.014 [INFO][4449] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--7c8b6c5fd5--s4pn6-eth0 calico-kube-controllers-7c8b6c5fd5- calico-system a4f3eb8b-111d-48cd-8798-ab0004f10d75 859 0 2025-11-07 23:58:22 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7c8b6c5fd5 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-7c8b6c5fd5-s4pn6 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali67f256eabdc [] [] }} ContainerID="e8e9848313459e9142ac18baf36ef450dd6601d1dd965342591febc059c72e94" 
Namespace="calico-system" Pod="calico-kube-controllers-7c8b6c5fd5-s4pn6" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7c8b6c5fd5--s4pn6-" Nov 7 23:58:44.235953 containerd[1559]: 2025-11-07 23:58:44.014 [INFO][4449] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e8e9848313459e9142ac18baf36ef450dd6601d1dd965342591febc059c72e94" Namespace="calico-system" Pod="calico-kube-controllers-7c8b6c5fd5-s4pn6" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7c8b6c5fd5--s4pn6-eth0" Nov 7 23:58:44.235953 containerd[1559]: 2025-11-07 23:58:44.056 [INFO][4480] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e8e9848313459e9142ac18baf36ef450dd6601d1dd965342591febc059c72e94" HandleID="k8s-pod-network.e8e9848313459e9142ac18baf36ef450dd6601d1dd965342591febc059c72e94" Workload="localhost-k8s-calico--kube--controllers--7c8b6c5fd5--s4pn6-eth0" Nov 7 23:58:44.236304 containerd[1559]: 2025-11-07 23:58:44.056 [INFO][4480] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="e8e9848313459e9142ac18baf36ef450dd6601d1dd965342591febc059c72e94" HandleID="k8s-pod-network.e8e9848313459e9142ac18baf36ef450dd6601d1dd965342591febc059c72e94" Workload="localhost-k8s-calico--kube--controllers--7c8b6c5fd5--s4pn6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002c31a0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-7c8b6c5fd5-s4pn6", "timestamp":"2025-11-07 23:58:44.056236356 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 7 23:58:44.236304 containerd[1559]: 2025-11-07 23:58:44.056 [INFO][4480] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 7 23:58:44.236304 containerd[1559]: 2025-11-07 23:58:44.056 [INFO][4480] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 7 23:58:44.236304 containerd[1559]: 2025-11-07 23:58:44.056 [INFO][4480] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 7 23:58:44.236304 containerd[1559]: 2025-11-07 23:58:44.072 [INFO][4480] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e8e9848313459e9142ac18baf36ef450dd6601d1dd965342591febc059c72e94" host="localhost" Nov 7 23:58:44.236304 containerd[1559]: 2025-11-07 23:58:44.077 [INFO][4480] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 7 23:58:44.236304 containerd[1559]: 2025-11-07 23:58:44.083 [INFO][4480] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 7 23:58:44.236304 containerd[1559]: 2025-11-07 23:58:44.086 [INFO][4480] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 7 23:58:44.236304 containerd[1559]: 2025-11-07 23:58:44.088 [INFO][4480] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 7 23:58:44.236304 containerd[1559]: 2025-11-07 23:58:44.088 [INFO][4480] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.e8e9848313459e9142ac18baf36ef450dd6601d1dd965342591febc059c72e94" host="localhost" Nov 7 23:58:44.236524 containerd[1559]: 2025-11-07 23:58:44.090 [INFO][4480] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.e8e9848313459e9142ac18baf36ef450dd6601d1dd965342591febc059c72e94 Nov 7 23:58:44.236524 containerd[1559]: 2025-11-07 23:58:44.102 [INFO][4480] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.e8e9848313459e9142ac18baf36ef450dd6601d1dd965342591febc059c72e94" host="localhost" Nov 7 23:58:44.236524 containerd[1559]: 2025-11-07 23:58:44.162 [INFO][4480] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.e8e9848313459e9142ac18baf36ef450dd6601d1dd965342591febc059c72e94" host="localhost" Nov 7 23:58:44.236524 containerd[1559]: 2025-11-07 23:58:44.163 [INFO][4480] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.e8e9848313459e9142ac18baf36ef450dd6601d1dd965342591febc059c72e94" host="localhost" Nov 7 23:58:44.236524 containerd[1559]: 2025-11-07 23:58:44.163 [INFO][4480] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
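The four sandboxes requested within a millisecond of each other at 23:58:43.878 show why the host-wide IPAM lock exists: the kube-controllers request ([4480]) holds it from 44.056 to 44.163, while the coredns request ([4481], below) logs "About to acquire" at 44.058 but only acquires at 44.163, the instant the lock is released. A plain mutex reproduces the same serialization:

package main

import (
    "fmt"
    "sync"
)

// A mutex-guarded counter stands in for the host-wide IPAM lock: concurrent
// CNI ADDs each take the lock, claim the next address, and release it.
func main() {
    var (
        mu   sync.Mutex
        next = 131 // next free host byte in 192.168.88.128/26 after .129/.130 above
        wg   sync.WaitGroup
    )
    for _, pod := range []string{"kube-controllers", "coredns", "goldmane", "apiserver"} {
        wg.Add(1)
        go func(pod string) {
            defer wg.Done()
            mu.Lock() // "Acquired host-wide IPAM lock."
            ip := fmt.Sprintf("192.168.88.%d", next)
            next++
            mu.Unlock() // "Released host-wide IPAM lock."
            fmt.Println(pod, ip)
        }(pod)
    }
    wg.Wait()
    // Goroutine scheduling makes the pod-to-IP pairing arbitrary here; the
    // point is that the lock guarantees no duplicate assignments.
}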
Nov 7 23:58:44.236524 containerd[1559]: 2025-11-07 23:58:44.163 [INFO][4480] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="e8e9848313459e9142ac18baf36ef450dd6601d1dd965342591febc059c72e94" HandleID="k8s-pod-network.e8e9848313459e9142ac18baf36ef450dd6601d1dd965342591febc059c72e94" Workload="localhost-k8s-calico--kube--controllers--7c8b6c5fd5--s4pn6-eth0" Nov 7 23:58:44.236712 containerd[1559]: 2025-11-07 23:58:44.167 [INFO][4449] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e8e9848313459e9142ac18baf36ef450dd6601d1dd965342591febc059c72e94" Namespace="calico-system" Pod="calico-kube-controllers-7c8b6c5fd5-s4pn6" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7c8b6c5fd5--s4pn6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7c8b6c5fd5--s4pn6-eth0", GenerateName:"calico-kube-controllers-7c8b6c5fd5-", Namespace:"calico-system", SelfLink:"", UID:"a4f3eb8b-111d-48cd-8798-ab0004f10d75", ResourceVersion:"859", Generation:0, CreationTimestamp:time.Date(2025, time.November, 7, 23, 58, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7c8b6c5fd5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-7c8b6c5fd5-s4pn6", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali67f256eabdc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 7 23:58:44.236769 containerd[1559]: 2025-11-07 23:58:44.167 [INFO][4449] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="e8e9848313459e9142ac18baf36ef450dd6601d1dd965342591febc059c72e94" Namespace="calico-system" Pod="calico-kube-controllers-7c8b6c5fd5-s4pn6" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7c8b6c5fd5--s4pn6-eth0" Nov 7 23:58:44.236769 containerd[1559]: 2025-11-07 23:58:44.167 [INFO][4449] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali67f256eabdc ContainerID="e8e9848313459e9142ac18baf36ef450dd6601d1dd965342591febc059c72e94" Namespace="calico-system" Pod="calico-kube-controllers-7c8b6c5fd5-s4pn6" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7c8b6c5fd5--s4pn6-eth0" Nov 7 23:58:44.236769 containerd[1559]: 2025-11-07 23:58:44.168 [INFO][4449] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e8e9848313459e9142ac18baf36ef450dd6601d1dd965342591febc059c72e94" Namespace="calico-system" Pod="calico-kube-controllers-7c8b6c5fd5-s4pn6" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7c8b6c5fd5--s4pn6-eth0" Nov 7 23:58:44.236833 containerd[1559]: 2025-11-07 23:58:44.170 [INFO][4449] cni-plugin/k8s.go 446: Added Mac, 
interface name, and active container ID to endpoint ContainerID="e8e9848313459e9142ac18baf36ef450dd6601d1dd965342591febc059c72e94" Namespace="calico-system" Pod="calico-kube-controllers-7c8b6c5fd5-s4pn6" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7c8b6c5fd5--s4pn6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7c8b6c5fd5--s4pn6-eth0", GenerateName:"calico-kube-controllers-7c8b6c5fd5-", Namespace:"calico-system", SelfLink:"", UID:"a4f3eb8b-111d-48cd-8798-ab0004f10d75", ResourceVersion:"859", Generation:0, CreationTimestamp:time.Date(2025, time.November, 7, 23, 58, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7c8b6c5fd5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e8e9848313459e9142ac18baf36ef450dd6601d1dd965342591febc059c72e94", Pod:"calico-kube-controllers-7c8b6c5fd5-s4pn6", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali67f256eabdc", MAC:"a6:0a:2c:02:e2:c8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 7 23:58:44.236877 containerd[1559]: 2025-11-07 23:58:44.234 [INFO][4449] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e8e9848313459e9142ac18baf36ef450dd6601d1dd965342591febc059c72e94" Namespace="calico-system" Pod="calico-kube-controllers-7c8b6c5fd5-s4pn6" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7c8b6c5fd5--s4pn6-eth0" Nov 7 23:58:44.329754 systemd-networkd[1474]: cali023cee95153: Link UP Nov 7 23:58:44.330252 systemd-networkd[1474]: cali023cee95153: Gained carrier Nov 7 23:58:44.351028 containerd[1559]: 2025-11-07 23:58:43.982 [INFO][4422] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 7 23:58:44.351028 containerd[1559]: 2025-11-07 23:58:44.009 [INFO][4422] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--bpzjc-eth0 coredns-674b8bbfcf- kube-system 454b412a-04e6-4e0b-a20f-e2ceec9ccb01 856 0 2025-11-07 23:58:03 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-bpzjc eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali023cee95153 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="8b4af99c7659418d2b28b25dd72e71e462eabdc1c131f69d4a58b50797541f07" Namespace="kube-system" Pod="coredns-674b8bbfcf-bpzjc" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--bpzjc-" Nov 7 23:58:44.351028 containerd[1559]: 2025-11-07 23:58:44.009 [INFO][4422] cni-plugin/k8s.go 74: Extracted 
identifiers for CmdAddK8s ContainerID="8b4af99c7659418d2b28b25dd72e71e462eabdc1c131f69d4a58b50797541f07" Namespace="kube-system" Pod="coredns-674b8bbfcf-bpzjc" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--bpzjc-eth0" Nov 7 23:58:44.351028 containerd[1559]: 2025-11-07 23:58:44.057 [INFO][4481] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8b4af99c7659418d2b28b25dd72e71e462eabdc1c131f69d4a58b50797541f07" HandleID="k8s-pod-network.8b4af99c7659418d2b28b25dd72e71e462eabdc1c131f69d4a58b50797541f07" Workload="localhost-k8s-coredns--674b8bbfcf--bpzjc-eth0" Nov 7 23:58:44.351593 containerd[1559]: 2025-11-07 23:58:44.058 [INFO][4481] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="8b4af99c7659418d2b28b25dd72e71e462eabdc1c131f69d4a58b50797541f07" HandleID="k8s-pod-network.8b4af99c7659418d2b28b25dd72e71e462eabdc1c131f69d4a58b50797541f07" Workload="localhost-k8s-coredns--674b8bbfcf--bpzjc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004ca80), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-bpzjc", "timestamp":"2025-11-07 23:58:44.057938393 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 7 23:58:44.351593 containerd[1559]: 2025-11-07 23:58:44.058 [INFO][4481] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 7 23:58:44.351593 containerd[1559]: 2025-11-07 23:58:44.163 [INFO][4481] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 7 23:58:44.351593 containerd[1559]: 2025-11-07 23:58:44.163 [INFO][4481] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 7 23:58:44.351593 containerd[1559]: 2025-11-07 23:58:44.234 [INFO][4481] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8b4af99c7659418d2b28b25dd72e71e462eabdc1c131f69d4a58b50797541f07" host="localhost" Nov 7 23:58:44.351593 containerd[1559]: 2025-11-07 23:58:44.244 [INFO][4481] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 7 23:58:44.351593 containerd[1559]: 2025-11-07 23:58:44.249 [INFO][4481] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 7 23:58:44.351593 containerd[1559]: 2025-11-07 23:58:44.251 [INFO][4481] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 7 23:58:44.351593 containerd[1559]: 2025-11-07 23:58:44.253 [INFO][4481] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 7 23:58:44.351593 containerd[1559]: 2025-11-07 23:58:44.253 [INFO][4481] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.8b4af99c7659418d2b28b25dd72e71e462eabdc1c131f69d4a58b50797541f07" host="localhost" Nov 7 23:58:44.351896 containerd[1559]: 2025-11-07 23:58:44.255 [INFO][4481] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.8b4af99c7659418d2b28b25dd72e71e462eabdc1c131f69d4a58b50797541f07 Nov 7 23:58:44.351896 containerd[1559]: 2025-11-07 23:58:44.297 [INFO][4481] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.8b4af99c7659418d2b28b25dd72e71e462eabdc1c131f69d4a58b50797541f07" host="localhost" Nov 7 23:58:44.351896 containerd[1559]: 2025-11-07 23:58:44.322 [INFO][4481] ipam/ipam.go 1262: 
Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.8b4af99c7659418d2b28b25dd72e71e462eabdc1c131f69d4a58b50797541f07" host="localhost" Nov 7 23:58:44.351896 containerd[1559]: 2025-11-07 23:58:44.322 [INFO][4481] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.8b4af99c7659418d2b28b25dd72e71e462eabdc1c131f69d4a58b50797541f07" host="localhost" Nov 7 23:58:44.351896 containerd[1559]: 2025-11-07 23:58:44.322 [INFO][4481] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 7 23:58:44.351896 containerd[1559]: 2025-11-07 23:58:44.322 [INFO][4481] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="8b4af99c7659418d2b28b25dd72e71e462eabdc1c131f69d4a58b50797541f07" HandleID="k8s-pod-network.8b4af99c7659418d2b28b25dd72e71e462eabdc1c131f69d4a58b50797541f07" Workload="localhost-k8s-coredns--674b8bbfcf--bpzjc-eth0" Nov 7 23:58:44.352053 containerd[1559]: 2025-11-07 23:58:44.325 [INFO][4422] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8b4af99c7659418d2b28b25dd72e71e462eabdc1c131f69d4a58b50797541f07" Namespace="kube-system" Pod="coredns-674b8bbfcf-bpzjc" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--bpzjc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--bpzjc-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"454b412a-04e6-4e0b-a20f-e2ceec9ccb01", ResourceVersion:"856", Generation:0, CreationTimestamp:time.Date(2025, time.November, 7, 23, 58, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-bpzjc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali023cee95153", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 7 23:58:44.352128 containerd[1559]: 2025-11-07 23:58:44.325 [INFO][4422] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="8b4af99c7659418d2b28b25dd72e71e462eabdc1c131f69d4a58b50797541f07" Namespace="kube-system" Pod="coredns-674b8bbfcf-bpzjc" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--bpzjc-eth0" Nov 7 23:58:44.352128 containerd[1559]: 2025-11-07 23:58:44.325 [INFO][4422] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali023cee95153 
ContainerID="8b4af99c7659418d2b28b25dd72e71e462eabdc1c131f69d4a58b50797541f07" Namespace="kube-system" Pod="coredns-674b8bbfcf-bpzjc" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--bpzjc-eth0" Nov 7 23:58:44.352128 containerd[1559]: 2025-11-07 23:58:44.330 [INFO][4422] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8b4af99c7659418d2b28b25dd72e71e462eabdc1c131f69d4a58b50797541f07" Namespace="kube-system" Pod="coredns-674b8bbfcf-bpzjc" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--bpzjc-eth0" Nov 7 23:58:44.353156 containerd[1559]: 2025-11-07 23:58:44.331 [INFO][4422] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8b4af99c7659418d2b28b25dd72e71e462eabdc1c131f69d4a58b50797541f07" Namespace="kube-system" Pod="coredns-674b8bbfcf-bpzjc" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--bpzjc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--bpzjc-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"454b412a-04e6-4e0b-a20f-e2ceec9ccb01", ResourceVersion:"856", Generation:0, CreationTimestamp:time.Date(2025, time.November, 7, 23, 58, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8b4af99c7659418d2b28b25dd72e71e462eabdc1c131f69d4a58b50797541f07", Pod:"coredns-674b8bbfcf-bpzjc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali023cee95153", MAC:"fa:99:52:14:e1:52", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 7 23:58:44.353156 containerd[1559]: 2025-11-07 23:58:44.348 [INFO][4422] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8b4af99c7659418d2b28b25dd72e71e462eabdc1c131f69d4a58b50797541f07" Namespace="kube-system" Pod="coredns-674b8bbfcf-bpzjc" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--bpzjc-eth0" Nov 7 23:58:44.388314 containerd[1559]: time="2025-11-07T23:58:44.388269106Z" level=info msg="connecting to shim e8e9848313459e9142ac18baf36ef450dd6601d1dd965342591febc059c72e94" address="unix:///run/containerd/s/fbef602faf28641063db73db9c80cf3605917feca400e7f709443c1fcfd94991" namespace=k8s.io protocol=ttrpc version=3 Nov 7 23:58:44.395493 systemd-networkd[1474]: calic2aed1999c6: Link UP Nov 7 23:58:44.396447 systemd-networkd[1474]: calic2aed1999c6: Gained carrier Nov 7 
23:58:44.419639 containerd[1559]: 2025-11-07 23:58:43.999 [INFO][4434] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 7 23:58:44.419639 containerd[1559]: 2025-11-07 23:58:44.027 [INFO][4434] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--676df99ff5--4k992-eth0 calico-apiserver-676df99ff5- calico-apiserver 1892480e-9728-4d8f-8844-1e28e6326f1c 858 0 2025-11-07 23:58:14 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:676df99ff5 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-676df99ff5-4k992 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calic2aed1999c6 [] [] }} ContainerID="b7032ee0c4e8f4bc916d7a0ecc070fe1ec796113b641dc0642506b2eacd91573" Namespace="calico-apiserver" Pod="calico-apiserver-676df99ff5-4k992" WorkloadEndpoint="localhost-k8s-calico--apiserver--676df99ff5--4k992-" Nov 7 23:58:44.419639 containerd[1559]: 2025-11-07 23:58:44.027 [INFO][4434] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b7032ee0c4e8f4bc916d7a0ecc070fe1ec796113b641dc0642506b2eacd91573" Namespace="calico-apiserver" Pod="calico-apiserver-676df99ff5-4k992" WorkloadEndpoint="localhost-k8s-calico--apiserver--676df99ff5--4k992-eth0" Nov 7 23:58:44.419639 containerd[1559]: 2025-11-07 23:58:44.070 [INFO][4492] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b7032ee0c4e8f4bc916d7a0ecc070fe1ec796113b641dc0642506b2eacd91573" HandleID="k8s-pod-network.b7032ee0c4e8f4bc916d7a0ecc070fe1ec796113b641dc0642506b2eacd91573" Workload="localhost-k8s-calico--apiserver--676df99ff5--4k992-eth0" Nov 7 23:58:44.419639 containerd[1559]: 2025-11-07 23:58:44.070 [INFO][4492] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="b7032ee0c4e8f4bc916d7a0ecc070fe1ec796113b641dc0642506b2eacd91573" HandleID="k8s-pod-network.b7032ee0c4e8f4bc916d7a0ecc070fe1ec796113b641dc0642506b2eacd91573" Workload="localhost-k8s-calico--apiserver--676df99ff5--4k992-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000516990), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-676df99ff5-4k992", "timestamp":"2025-11-07 23:58:44.070664573 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 7 23:58:44.419639 containerd[1559]: 2025-11-07 23:58:44.071 [INFO][4492] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 7 23:58:44.419639 containerd[1559]: 2025-11-07 23:58:44.322 [INFO][4492] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 7 23:58:44.419639 containerd[1559]: 2025-11-07 23:58:44.323 [INFO][4492] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 7 23:58:44.419639 containerd[1559]: 2025-11-07 23:58:44.348 [INFO][4492] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b7032ee0c4e8f4bc916d7a0ecc070fe1ec796113b641dc0642506b2eacd91573" host="localhost" Nov 7 23:58:44.419639 containerd[1559]: 2025-11-07 23:58:44.354 [INFO][4492] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 7 23:58:44.419639 containerd[1559]: 2025-11-07 23:58:44.359 [INFO][4492] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 7 23:58:44.419639 containerd[1559]: 2025-11-07 23:58:44.361 [INFO][4492] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 7 23:58:44.419639 containerd[1559]: 2025-11-07 23:58:44.364 [INFO][4492] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 7 23:58:44.419639 containerd[1559]: 2025-11-07 23:58:44.364 [INFO][4492] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.b7032ee0c4e8f4bc916d7a0ecc070fe1ec796113b641dc0642506b2eacd91573" host="localhost" Nov 7 23:58:44.419639 containerd[1559]: 2025-11-07 23:58:44.366 [INFO][4492] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.b7032ee0c4e8f4bc916d7a0ecc070fe1ec796113b641dc0642506b2eacd91573 Nov 7 23:58:44.419639 containerd[1559]: 2025-11-07 23:58:44.376 [INFO][4492] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.b7032ee0c4e8f4bc916d7a0ecc070fe1ec796113b641dc0642506b2eacd91573" host="localhost" Nov 7 23:58:44.419639 containerd[1559]: 2025-11-07 23:58:44.388 [INFO][4492] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.b7032ee0c4e8f4bc916d7a0ecc070fe1ec796113b641dc0642506b2eacd91573" host="localhost" Nov 7 23:58:44.419639 containerd[1559]: 2025-11-07 23:58:44.388 [INFO][4492] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.b7032ee0c4e8f4bc916d7a0ecc070fe1ec796113b641dc0642506b2eacd91573" host="localhost" Nov 7 23:58:44.419639 containerd[1559]: 2025-11-07 23:58:44.388 [INFO][4492] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 7 23:58:44.419639 containerd[1559]: 2025-11-07 23:58:44.388 [INFO][4492] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="b7032ee0c4e8f4bc916d7a0ecc070fe1ec796113b641dc0642506b2eacd91573" HandleID="k8s-pod-network.b7032ee0c4e8f4bc916d7a0ecc070fe1ec796113b641dc0642506b2eacd91573" Workload="localhost-k8s-calico--apiserver--676df99ff5--4k992-eth0" Nov 7 23:58:44.420169 containerd[1559]: 2025-11-07 23:58:44.392 [INFO][4434] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b7032ee0c4e8f4bc916d7a0ecc070fe1ec796113b641dc0642506b2eacd91573" Namespace="calico-apiserver" Pod="calico-apiserver-676df99ff5-4k992" WorkloadEndpoint="localhost-k8s-calico--apiserver--676df99ff5--4k992-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--676df99ff5--4k992-eth0", GenerateName:"calico-apiserver-676df99ff5-", Namespace:"calico-apiserver", SelfLink:"", UID:"1892480e-9728-4d8f-8844-1e28e6326f1c", ResourceVersion:"858", Generation:0, CreationTimestamp:time.Date(2025, time.November, 7, 23, 58, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"676df99ff5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-676df99ff5-4k992", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic2aed1999c6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 7 23:58:44.420169 containerd[1559]: 2025-11-07 23:58:44.392 [INFO][4434] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="b7032ee0c4e8f4bc916d7a0ecc070fe1ec796113b641dc0642506b2eacd91573" Namespace="calico-apiserver" Pod="calico-apiserver-676df99ff5-4k992" WorkloadEndpoint="localhost-k8s-calico--apiserver--676df99ff5--4k992-eth0" Nov 7 23:58:44.420169 containerd[1559]: 2025-11-07 23:58:44.392 [INFO][4434] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic2aed1999c6 ContainerID="b7032ee0c4e8f4bc916d7a0ecc070fe1ec796113b641dc0642506b2eacd91573" Namespace="calico-apiserver" Pod="calico-apiserver-676df99ff5-4k992" WorkloadEndpoint="localhost-k8s-calico--apiserver--676df99ff5--4k992-eth0" Nov 7 23:58:44.420169 containerd[1559]: 2025-11-07 23:58:44.399 [INFO][4434] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b7032ee0c4e8f4bc916d7a0ecc070fe1ec796113b641dc0642506b2eacd91573" Namespace="calico-apiserver" Pod="calico-apiserver-676df99ff5-4k992" WorkloadEndpoint="localhost-k8s-calico--apiserver--676df99ff5--4k992-eth0" Nov 7 23:58:44.420169 containerd[1559]: 2025-11-07 23:58:44.399 [INFO][4434] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="b7032ee0c4e8f4bc916d7a0ecc070fe1ec796113b641dc0642506b2eacd91573" Namespace="calico-apiserver" Pod="calico-apiserver-676df99ff5-4k992" WorkloadEndpoint="localhost-k8s-calico--apiserver--676df99ff5--4k992-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--676df99ff5--4k992-eth0", GenerateName:"calico-apiserver-676df99ff5-", Namespace:"calico-apiserver", SelfLink:"", UID:"1892480e-9728-4d8f-8844-1e28e6326f1c", ResourceVersion:"858", Generation:0, CreationTimestamp:time.Date(2025, time.November, 7, 23, 58, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"676df99ff5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b7032ee0c4e8f4bc916d7a0ecc070fe1ec796113b641dc0642506b2eacd91573", Pod:"calico-apiserver-676df99ff5-4k992", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic2aed1999c6", MAC:"06:f3:ea:32:65:14", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 7 23:58:44.420169 containerd[1559]: 2025-11-07 23:58:44.417 [INFO][4434] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b7032ee0c4e8f4bc916d7a0ecc070fe1ec796113b641dc0642506b2eacd91573" Namespace="calico-apiserver" Pod="calico-apiserver-676df99ff5-4k992" WorkloadEndpoint="localhost-k8s-calico--apiserver--676df99ff5--4k992-eth0" Nov 7 23:58:44.421418 systemd[1]: Started cri-containerd-e8e9848313459e9142ac18baf36ef450dd6601d1dd965342591febc059c72e94.scope - libcontainer container e8e9848313459e9142ac18baf36ef450dd6601d1dd965342591febc059c72e94. 
Nov 7 23:58:44.436804 systemd-resolved[1269]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 7 23:58:44.449385 containerd[1559]: time="2025-11-07T23:58:44.448837889Z" level=info msg="connecting to shim 8b4af99c7659418d2b28b25dd72e71e462eabdc1c131f69d4a58b50797541f07" address="unix:///run/containerd/s/469479b389a845f3d86f09eda54d83ff5744708df39cbcfdaec72d4c7e0196c1" namespace=k8s.io protocol=ttrpc version=3 Nov 7 23:58:44.471277 containerd[1559]: time="2025-11-07T23:58:44.471208213Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7c8b6c5fd5-s4pn6,Uid:a4f3eb8b-111d-48cd-8798-ab0004f10d75,Namespace:calico-system,Attempt:0,} returns sandbox id \"e8e9848313459e9142ac18baf36ef450dd6601d1dd965342591febc059c72e94\"" Nov 7 23:58:44.473611 containerd[1559]: time="2025-11-07T23:58:44.473577729Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 7 23:58:44.491422 systemd[1]: Started cri-containerd-8b4af99c7659418d2b28b25dd72e71e462eabdc1c131f69d4a58b50797541f07.scope - libcontainer container 8b4af99c7659418d2b28b25dd72e71e462eabdc1c131f69d4a58b50797541f07. Nov 7 23:58:44.491593 containerd[1559]: time="2025-11-07T23:58:44.491495661Z" level=info msg="connecting to shim b7032ee0c4e8f4bc916d7a0ecc070fe1ec796113b641dc0642506b2eacd91573" address="unix:///run/containerd/s/da9bbc2b043cd0f402ac12739e4ee7538460e56d7c14c75dd03ee015484a333c" namespace=k8s.io protocol=ttrpc version=3 Nov 7 23:58:44.502470 systemd-networkd[1474]: cali6b4dd1d155d: Link UP Nov 7 23:58:44.502697 systemd-networkd[1474]: cali6b4dd1d155d: Gained carrier Nov 7 23:58:44.513634 systemd-resolved[1269]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 7 23:58:44.524546 systemd[1]: Started cri-containerd-b7032ee0c4e8f4bc916d7a0ecc070fe1ec796113b641dc0642506b2eacd91573.scope - libcontainer container b7032ee0c4e8f4bc916d7a0ecc070fe1ec796113b641dc0642506b2eacd91573. 
Nov 7 23:58:44.532738 containerd[1559]: 2025-11-07 23:58:44.012 [INFO][4447] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 7 23:58:44.532738 containerd[1559]: 2025-11-07 23:58:44.033 [INFO][4447] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--666569f655--c265d-eth0 goldmane-666569f655- calico-system da1ff15e-acf5-415a-98c1-50e005ef7778 855 0 2025-11-07 23:58:18 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-666569f655-c265d eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali6b4dd1d155d [] [] }} ContainerID="82794725b5d4ccbe2c6263e197fc41fabc3f308a0c9bb245b1064397868d3668" Namespace="calico-system" Pod="goldmane-666569f655-c265d" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--c265d-" Nov 7 23:58:44.532738 containerd[1559]: 2025-11-07 23:58:44.033 [INFO][4447] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="82794725b5d4ccbe2c6263e197fc41fabc3f308a0c9bb245b1064397868d3668" Namespace="calico-system" Pod="goldmane-666569f655-c265d" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--c265d-eth0" Nov 7 23:58:44.532738 containerd[1559]: 2025-11-07 23:58:44.081 [INFO][4499] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="82794725b5d4ccbe2c6263e197fc41fabc3f308a0c9bb245b1064397868d3668" HandleID="k8s-pod-network.82794725b5d4ccbe2c6263e197fc41fabc3f308a0c9bb245b1064397868d3668" Workload="localhost-k8s-goldmane--666569f655--c265d-eth0" Nov 7 23:58:44.532738 containerd[1559]: 2025-11-07 23:58:44.082 [INFO][4499] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="82794725b5d4ccbe2c6263e197fc41fabc3f308a0c9bb245b1064397868d3668" HandleID="k8s-pod-network.82794725b5d4ccbe2c6263e197fc41fabc3f308a0c9bb245b1064397868d3668" Workload="localhost-k8s-goldmane--666569f655--c265d-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003b0050), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-666569f655-c265d", "timestamp":"2025-11-07 23:58:44.081941475 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 7 23:58:44.532738 containerd[1559]: 2025-11-07 23:58:44.082 [INFO][4499] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 7 23:58:44.532738 containerd[1559]: 2025-11-07 23:58:44.388 [INFO][4499] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 7 23:58:44.532738 containerd[1559]: 2025-11-07 23:58:44.388 [INFO][4499] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 7 23:58:44.532738 containerd[1559]: 2025-11-07 23:58:44.450 [INFO][4499] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.82794725b5d4ccbe2c6263e197fc41fabc3f308a0c9bb245b1064397868d3668" host="localhost" Nov 7 23:58:44.532738 containerd[1559]: 2025-11-07 23:58:44.461 [INFO][4499] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 7 23:58:44.532738 containerd[1559]: 2025-11-07 23:58:44.468 [INFO][4499] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 7 23:58:44.532738 containerd[1559]: 2025-11-07 23:58:44.472 [INFO][4499] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 7 23:58:44.532738 containerd[1559]: 2025-11-07 23:58:44.475 [INFO][4499] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 7 23:58:44.532738 containerd[1559]: 2025-11-07 23:58:44.475 [INFO][4499] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.82794725b5d4ccbe2c6263e197fc41fabc3f308a0c9bb245b1064397868d3668" host="localhost" Nov 7 23:58:44.532738 containerd[1559]: 2025-11-07 23:58:44.479 [INFO][4499] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.82794725b5d4ccbe2c6263e197fc41fabc3f308a0c9bb245b1064397868d3668 Nov 7 23:58:44.532738 containerd[1559]: 2025-11-07 23:58:44.486 [INFO][4499] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.82794725b5d4ccbe2c6263e197fc41fabc3f308a0c9bb245b1064397868d3668" host="localhost" Nov 7 23:58:44.532738 containerd[1559]: 2025-11-07 23:58:44.496 [INFO][4499] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.82794725b5d4ccbe2c6263e197fc41fabc3f308a0c9bb245b1064397868d3668" host="localhost" Nov 7 23:58:44.532738 containerd[1559]: 2025-11-07 23:58:44.496 [INFO][4499] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.82794725b5d4ccbe2c6263e197fc41fabc3f308a0c9bb245b1064397868d3668" host="localhost" Nov 7 23:58:44.532738 containerd[1559]: 2025-11-07 23:58:44.496 [INFO][4499] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 7 23:58:44.532738 containerd[1559]: 2025-11-07 23:58:44.496 [INFO][4499] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="82794725b5d4ccbe2c6263e197fc41fabc3f308a0c9bb245b1064397868d3668" HandleID="k8s-pod-network.82794725b5d4ccbe2c6263e197fc41fabc3f308a0c9bb245b1064397868d3668" Workload="localhost-k8s-goldmane--666569f655--c265d-eth0" Nov 7 23:58:44.533419 containerd[1559]: 2025-11-07 23:58:44.500 [INFO][4447] cni-plugin/k8s.go 418: Populated endpoint ContainerID="82794725b5d4ccbe2c6263e197fc41fabc3f308a0c9bb245b1064397868d3668" Namespace="calico-system" Pod="goldmane-666569f655-c265d" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--c265d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--c265d-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"da1ff15e-acf5-415a-98c1-50e005ef7778", ResourceVersion:"855", Generation:0, CreationTimestamp:time.Date(2025, time.November, 7, 23, 58, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-666569f655-c265d", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali6b4dd1d155d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 7 23:58:44.533419 containerd[1559]: 2025-11-07 23:58:44.500 [INFO][4447] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="82794725b5d4ccbe2c6263e197fc41fabc3f308a0c9bb245b1064397868d3668" Namespace="calico-system" Pod="goldmane-666569f655-c265d" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--c265d-eth0" Nov 7 23:58:44.533419 containerd[1559]: 2025-11-07 23:58:44.500 [INFO][4447] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6b4dd1d155d ContainerID="82794725b5d4ccbe2c6263e197fc41fabc3f308a0c9bb245b1064397868d3668" Namespace="calico-system" Pod="goldmane-666569f655-c265d" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--c265d-eth0" Nov 7 23:58:44.533419 containerd[1559]: 2025-11-07 23:58:44.502 [INFO][4447] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="82794725b5d4ccbe2c6263e197fc41fabc3f308a0c9bb245b1064397868d3668" Namespace="calico-system" Pod="goldmane-666569f655-c265d" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--c265d-eth0" Nov 7 23:58:44.533419 containerd[1559]: 2025-11-07 23:58:44.503 [INFO][4447] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="82794725b5d4ccbe2c6263e197fc41fabc3f308a0c9bb245b1064397868d3668" Namespace="calico-system" Pod="goldmane-666569f655-c265d" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--c265d-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--c265d-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"da1ff15e-acf5-415a-98c1-50e005ef7778", ResourceVersion:"855", Generation:0, CreationTimestamp:time.Date(2025, time.November, 7, 23, 58, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"82794725b5d4ccbe2c6263e197fc41fabc3f308a0c9bb245b1064397868d3668", Pod:"goldmane-666569f655-c265d", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali6b4dd1d155d", MAC:"aa:8c:66:2c:c9:7e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 7 23:58:44.533419 containerd[1559]: 2025-11-07 23:58:44.530 [INFO][4447] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="82794725b5d4ccbe2c6263e197fc41fabc3f308a0c9bb245b1064397868d3668" Namespace="calico-system" Pod="goldmane-666569f655-c265d" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--c265d-eth0" Nov 7 23:58:44.545122 systemd-resolved[1269]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 7 23:58:44.559714 containerd[1559]: time="2025-11-07T23:58:44.559664352Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-bpzjc,Uid:454b412a-04e6-4e0b-a20f-e2ceec9ccb01,Namespace:kube-system,Attempt:0,} returns sandbox id \"8b4af99c7659418d2b28b25dd72e71e462eabdc1c131f69d4a58b50797541f07\"" Nov 7 23:58:44.561190 kubelet[2728]: E1107 23:58:44.561120 2728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 7 23:58:44.579759 containerd[1559]: time="2025-11-07T23:58:44.579722040Z" level=info msg="CreateContainer within sandbox \"8b4af99c7659418d2b28b25dd72e71e462eabdc1c131f69d4a58b50797541f07\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 7 23:58:44.598973 containerd[1559]: time="2025-11-07T23:58:44.598905889Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-676df99ff5-4k992,Uid:1892480e-9728-4d8f-8844-1e28e6326f1c,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"b7032ee0c4e8f4bc916d7a0ecc070fe1ec796113b641dc0642506b2eacd91573\"" Nov 7 23:58:44.688829 containerd[1559]: time="2025-11-07T23:58:44.688682706Z" level=info msg="connecting to shim 82794725b5d4ccbe2c6263e197fc41fabc3f308a0c9bb245b1064397868d3668" address="unix:///run/containerd/s/6d079d6d5caf6170e05ad85d52e5d52e87da08e5dafbdf3b715d1043b22c6092" namespace=k8s.io protocol=ttrpc version=3 Nov 7 23:58:44.701225 containerd[1559]: time="2025-11-07T23:58:44.701181406Z" level=info msg="fetch failed after 
status: 404 Not Found" host=ghcr.io Nov 7 23:58:44.702632 containerd[1559]: time="2025-11-07T23:58:44.702599084Z" level=info msg="Container bf80d71cbfab44667cc73deefbfa3a69bf29013dd402e4572a41b80af548fb4a: CDI devices from CRI Config.CDIDevices: []" Nov 7 23:58:44.708461 containerd[1559]: time="2025-11-07T23:58:44.708398595Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 7 23:58:44.708840 containerd[1559]: time="2025-11-07T23:58:44.708515034Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 7 23:58:44.708880 kubelet[2728]: E1107 23:58:44.708703 2728 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 7 23:58:44.708880 kubelet[2728]: E1107 23:58:44.708753 2728 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 7 23:58:44.709131 kubelet[2728]: E1107 23:58:44.709014 2728 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rbdf6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-7c8b6c5fd5-s4pn6_calico-system(a4f3eb8b-111d-48cd-8798-ab0004f10d75): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 7 23:58:44.709938 containerd[1559]: time="2025-11-07T23:58:44.709648953Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 7 23:58:44.710230 kubelet[2728]: E1107 23:58:44.710172 2728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7c8b6c5fd5-s4pn6" podUID="a4f3eb8b-111d-48cd-8798-ab0004f10d75" Nov 7 23:58:44.726321 containerd[1559]: time="2025-11-07T23:58:44.726272206Z" level=info msg="CreateContainer within sandbox \"8b4af99c7659418d2b28b25dd72e71e462eabdc1c131f69d4a58b50797541f07\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"bf80d71cbfab44667cc73deefbfa3a69bf29013dd402e4572a41b80af548fb4a\"" Nov 7 23:58:44.727367 containerd[1559]: time="2025-11-07T23:58:44.727338724Z" level=info msg="StartContainer for \"bf80d71cbfab44667cc73deefbfa3a69bf29013dd402e4572a41b80af548fb4a\"" Nov 7 23:58:44.728367 systemd[1]: Started cri-containerd-82794725b5d4ccbe2c6263e197fc41fabc3f308a0c9bb245b1064397868d3668.scope - libcontainer container 82794725b5d4ccbe2c6263e197fc41fabc3f308a0c9bb245b1064397868d3668. Nov 7 23:58:44.729498 containerd[1559]: time="2025-11-07T23:58:44.729452961Z" level=info msg="connecting to shim bf80d71cbfab44667cc73deefbfa3a69bf29013dd402e4572a41b80af548fb4a" address="unix:///run/containerd/s/469479b389a845f3d86f09eda54d83ff5744708df39cbcfdaec72d4c7e0196c1" protocol=ttrpc version=3 Nov 7 23:58:44.746495 systemd-resolved[1269]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 7 23:58:44.759369 systemd[1]: Started cri-containerd-bf80d71cbfab44667cc73deefbfa3a69bf29013dd402e4572a41b80af548fb4a.scope - libcontainer container bf80d71cbfab44667cc73deefbfa3a69bf29013dd402e4572a41b80af548fb4a. 
Nov 7 23:58:44.782712 containerd[1559]: time="2025-11-07T23:58:44.782667396Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-c265d,Uid:da1ff15e-acf5-415a-98c1-50e005ef7778,Namespace:calico-system,Attempt:0,} returns sandbox id \"82794725b5d4ccbe2c6263e197fc41fabc3f308a0c9bb245b1064397868d3668\"" Nov 7 23:58:44.808435 containerd[1559]: time="2025-11-07T23:58:44.808375995Z" level=info msg="StartContainer for \"bf80d71cbfab44667cc73deefbfa3a69bf29013dd402e4572a41b80af548fb4a\" returns successfully" Nov 7 23:58:44.878640 containerd[1559]: time="2025-11-07T23:58:44.878308883Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-676df99ff5-lml9g,Uid:976ce995-eb57-4c78-bcd8-fe6b36d7dd8e,Namespace:calico-apiserver,Attempt:0,}" Nov 7 23:58:44.920161 containerd[1559]: time="2025-11-07T23:58:44.920100017Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 7 23:58:44.923295 containerd[1559]: time="2025-11-07T23:58:44.923197812Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 7 23:58:44.923448 containerd[1559]: time="2025-11-07T23:58:44.923335371Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 7 23:58:44.924425 kubelet[2728]: E1107 23:58:44.924373 2728 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 7 23:58:44.925124 kubelet[2728]: E1107 23:58:44.924435 2728 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 7 23:58:44.925124 kubelet[2728]: E1107 23:58:44.924804 2728 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9mvz4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-676df99ff5-4k992_calico-apiserver(1892480e-9728-4d8f-8844-1e28e6326f1c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 7 23:58:44.925276 containerd[1559]: time="2025-11-07T23:58:44.924988329Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 7 23:58:44.926524 kubelet[2728]: E1107 23:58:44.926440 2728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-676df99ff5-4k992" podUID="1892480e-9728-4d8f-8844-1e28e6326f1c" Nov 7 23:58:45.063951 systemd-networkd[1474]: calidc85772703c: Link UP Nov 7 23:58:45.064907 systemd-networkd[1474]: calidc85772703c: Gained carrier Nov 7 23:58:45.082097 kubelet[2728]: E1107 23:58:45.081731 2728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 7 23:58:45.085509 containerd[1559]: 2025-11-07 23:58:44.920 [INFO][4757] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 7 23:58:45.085509 containerd[1559]: 2025-11-07 23:58:44.943 [INFO][4757] cni-plugin/plugin.go 340: 
Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--676df99ff5--lml9g-eth0 calico-apiserver-676df99ff5- calico-apiserver 976ce995-eb57-4c78-bcd8-fe6b36d7dd8e 853 0 2025-11-07 23:58:14 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:676df99ff5 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-676df99ff5-lml9g eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calidc85772703c [] [] }} ContainerID="1f332bf8df373f52ceb7023619b199009173d1b08f4afd36e9895facad001352" Namespace="calico-apiserver" Pod="calico-apiserver-676df99ff5-lml9g" WorkloadEndpoint="localhost-k8s-calico--apiserver--676df99ff5--lml9g-" Nov 7 23:58:45.085509 containerd[1559]: 2025-11-07 23:58:44.943 [INFO][4757] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1f332bf8df373f52ceb7023619b199009173d1b08f4afd36e9895facad001352" Namespace="calico-apiserver" Pod="calico-apiserver-676df99ff5-lml9g" WorkloadEndpoint="localhost-k8s-calico--apiserver--676df99ff5--lml9g-eth0" Nov 7 23:58:45.085509 containerd[1559]: 2025-11-07 23:58:44.991 [INFO][4774] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1f332bf8df373f52ceb7023619b199009173d1b08f4afd36e9895facad001352" HandleID="k8s-pod-network.1f332bf8df373f52ceb7023619b199009173d1b08f4afd36e9895facad001352" Workload="localhost-k8s-calico--apiserver--676df99ff5--lml9g-eth0" Nov 7 23:58:45.085509 containerd[1559]: 2025-11-07 23:58:44.991 [INFO][4774] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="1f332bf8df373f52ceb7023619b199009173d1b08f4afd36e9895facad001352" HandleID="k8s-pod-network.1f332bf8df373f52ceb7023619b199009173d1b08f4afd36e9895facad001352" Workload="localhost-k8s-calico--apiserver--676df99ff5--lml9g-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004cea0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-676df99ff5-lml9g", "timestamp":"2025-11-07 23:58:44.991550583 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 7 23:58:45.085509 containerd[1559]: 2025-11-07 23:58:44.991 [INFO][4774] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 7 23:58:45.085509 containerd[1559]: 2025-11-07 23:58:44.991 [INFO][4774] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 7 23:58:45.085509 containerd[1559]: 2025-11-07 23:58:44.991 [INFO][4774] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 7 23:58:45.085509 containerd[1559]: 2025-11-07 23:58:45.009 [INFO][4774] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1f332bf8df373f52ceb7023619b199009173d1b08f4afd36e9895facad001352" host="localhost" Nov 7 23:58:45.085509 containerd[1559]: 2025-11-07 23:58:45.016 [INFO][4774] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 7 23:58:45.085509 containerd[1559]: 2025-11-07 23:58:45.023 [INFO][4774] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 7 23:58:45.085509 containerd[1559]: 2025-11-07 23:58:45.026 [INFO][4774] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 7 23:58:45.085509 containerd[1559]: 2025-11-07 23:58:45.036 [INFO][4774] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 7 23:58:45.085509 containerd[1559]: 2025-11-07 23:58:45.036 [INFO][4774] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.1f332bf8df373f52ceb7023619b199009173d1b08f4afd36e9895facad001352" host="localhost" Nov 7 23:58:45.085509 containerd[1559]: 2025-11-07 23:58:45.038 [INFO][4774] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.1f332bf8df373f52ceb7023619b199009173d1b08f4afd36e9895facad001352 Nov 7 23:58:45.085509 containerd[1559]: 2025-11-07 23:58:45.046 [INFO][4774] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.1f332bf8df373f52ceb7023619b199009173d1b08f4afd36e9895facad001352" host="localhost" Nov 7 23:58:45.085509 containerd[1559]: 2025-11-07 23:58:45.054 [INFO][4774] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.1f332bf8df373f52ceb7023619b199009173d1b08f4afd36e9895facad001352" host="localhost" Nov 7 23:58:45.085509 containerd[1559]: 2025-11-07 23:58:45.054 [INFO][4774] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.1f332bf8df373f52ceb7023619b199009173d1b08f4afd36e9895facad001352" host="localhost" Nov 7 23:58:45.085509 containerd[1559]: 2025-11-07 23:58:45.055 [INFO][4774] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 7 23:58:45.085509 containerd[1559]: 2025-11-07 23:58:45.055 [INFO][4774] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="1f332bf8df373f52ceb7023619b199009173d1b08f4afd36e9895facad001352" HandleID="k8s-pod-network.1f332bf8df373f52ceb7023619b199009173d1b08f4afd36e9895facad001352" Workload="localhost-k8s-calico--apiserver--676df99ff5--lml9g-eth0" Nov 7 23:58:45.086895 containerd[1559]: 2025-11-07 23:58:45.060 [INFO][4757] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1f332bf8df373f52ceb7023619b199009173d1b08f4afd36e9895facad001352" Namespace="calico-apiserver" Pod="calico-apiserver-676df99ff5-lml9g" WorkloadEndpoint="localhost-k8s-calico--apiserver--676df99ff5--lml9g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--676df99ff5--lml9g-eth0", GenerateName:"calico-apiserver-676df99ff5-", Namespace:"calico-apiserver", SelfLink:"", UID:"976ce995-eb57-4c78-bcd8-fe6b36d7dd8e", ResourceVersion:"853", Generation:0, CreationTimestamp:time.Date(2025, time.November, 7, 23, 58, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"676df99ff5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-676df99ff5-lml9g", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calidc85772703c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 7 23:58:45.086895 containerd[1559]: 2025-11-07 23:58:45.060 [INFO][4757] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="1f332bf8df373f52ceb7023619b199009173d1b08f4afd36e9895facad001352" Namespace="calico-apiserver" Pod="calico-apiserver-676df99ff5-lml9g" WorkloadEndpoint="localhost-k8s-calico--apiserver--676df99ff5--lml9g-eth0" Nov 7 23:58:45.086895 containerd[1559]: 2025-11-07 23:58:45.060 [INFO][4757] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calidc85772703c ContainerID="1f332bf8df373f52ceb7023619b199009173d1b08f4afd36e9895facad001352" Namespace="calico-apiserver" Pod="calico-apiserver-676df99ff5-lml9g" WorkloadEndpoint="localhost-k8s-calico--apiserver--676df99ff5--lml9g-eth0" Nov 7 23:58:45.086895 containerd[1559]: 2025-11-07 23:58:45.065 [INFO][4757] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1f332bf8df373f52ceb7023619b199009173d1b08f4afd36e9895facad001352" Namespace="calico-apiserver" Pod="calico-apiserver-676df99ff5-lml9g" WorkloadEndpoint="localhost-k8s-calico--apiserver--676df99ff5--lml9g-eth0" Nov 7 23:58:45.086895 containerd[1559]: 2025-11-07 23:58:45.065 [INFO][4757] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="1f332bf8df373f52ceb7023619b199009173d1b08f4afd36e9895facad001352" Namespace="calico-apiserver" Pod="calico-apiserver-676df99ff5-lml9g" WorkloadEndpoint="localhost-k8s-calico--apiserver--676df99ff5--lml9g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--676df99ff5--lml9g-eth0", GenerateName:"calico-apiserver-676df99ff5-", Namespace:"calico-apiserver", SelfLink:"", UID:"976ce995-eb57-4c78-bcd8-fe6b36d7dd8e", ResourceVersion:"853", Generation:0, CreationTimestamp:time.Date(2025, time.November, 7, 23, 58, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"676df99ff5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1f332bf8df373f52ceb7023619b199009173d1b08f4afd36e9895facad001352", Pod:"calico-apiserver-676df99ff5-lml9g", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calidc85772703c", MAC:"16:db:fa:ed:95:1b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 7 23:58:45.086895 containerd[1559]: 2025-11-07 23:58:45.080 [INFO][4757] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1f332bf8df373f52ceb7023619b199009173d1b08f4afd36e9895facad001352" Namespace="calico-apiserver" Pod="calico-apiserver-676df99ff5-lml9g" WorkloadEndpoint="localhost-k8s-calico--apiserver--676df99ff5--lml9g-eth0" Nov 7 23:58:45.090366 kubelet[2728]: E1107 23:58:45.090309 2728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7c8b6c5fd5-s4pn6" podUID="a4f3eb8b-111d-48cd-8798-ab0004f10d75" Nov 7 23:58:45.098377 kubelet[2728]: E1107 23:58:45.098315 2728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-676df99ff5-4k992" podUID="1892480e-9728-4d8f-8844-1e28e6326f1c" Nov 7 23:58:45.109038 kubelet[2728]: I1107 23:58:45.108321 2728 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-bpzjc" podStartSLOduration=42.108303487 podStartE2EDuration="42.108303487s" podCreationTimestamp="2025-11-07 23:58:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-07 23:58:45.10592845 +0000 UTC m=+48.444230809" watchObservedRunningTime="2025-11-07 23:58:45.108303487 +0000 UTC m=+48.446605806" Nov 7 23:58:45.136581 containerd[1559]: time="2025-11-07T23:58:45.136532085Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 7 23:58:45.138064 containerd[1559]: time="2025-11-07T23:58:45.137986002Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 7 23:58:45.138182 containerd[1559]: time="2025-11-07T23:58:45.138096842Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 7 23:58:45.138332 kubelet[2728]: E1107 23:58:45.138294 2728 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 7 23:58:45.138414 kubelet[2728]: E1107 23:58:45.138346 2728 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 7 23:58:45.138537 kubelet[2728]: E1107 23:58:45.138480 2728 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-js879,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-c265d_calico-system(da1ff15e-acf5-415a-98c1-50e005ef7778): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 7 23:58:45.144522 kubelet[2728]: E1107 23:58:45.140225 2728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-c265d" podUID="da1ff15e-acf5-415a-98c1-50e005ef7778" Nov 7 23:58:45.145089 containerd[1559]: 
time="2025-11-07T23:58:45.145054512Z" level=info msg="connecting to shim 1f332bf8df373f52ceb7023619b199009173d1b08f4afd36e9895facad001352" address="unix:///run/containerd/s/8a3b3bb56f926ed414bc66f4e013acb2441d07b7a84f4acd3d80b9e9c8fee2e1" namespace=k8s.io protocol=ttrpc version=3 Nov 7 23:58:45.181353 systemd[1]: Started cri-containerd-1f332bf8df373f52ceb7023619b199009173d1b08f4afd36e9895facad001352.scope - libcontainer container 1f332bf8df373f52ceb7023619b199009173d1b08f4afd36e9895facad001352. Nov 7 23:58:45.199690 systemd-resolved[1269]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 7 23:58:45.228532 containerd[1559]: time="2025-11-07T23:58:45.228436747Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-676df99ff5-lml9g,Uid:976ce995-eb57-4c78-bcd8-fe6b36d7dd8e,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"1f332bf8df373f52ceb7023619b199009173d1b08f4afd36e9895facad001352\"" Nov 7 23:58:45.231088 containerd[1559]: time="2025-11-07T23:58:45.231040103Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 7 23:58:45.446637 containerd[1559]: time="2025-11-07T23:58:45.446583861Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 7 23:58:45.467757 containerd[1559]: time="2025-11-07T23:58:45.467674949Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 7 23:58:45.468003 containerd[1559]: time="2025-11-07T23:58:45.467792829Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 7 23:58:45.468203 kubelet[2728]: E1107 23:58:45.468122 2728 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 7 23:58:45.468385 kubelet[2728]: E1107 23:58:45.468271 2728 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 7 23:58:45.468725 kubelet[2728]: E1107 23:58:45.468681 2728 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s6wkt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-676df99ff5-lml9g_calico-apiserver(976ce995-eb57-4c78-bcd8-fe6b36d7dd8e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 7 23:58:45.470051 kubelet[2728]: E1107 23:58:45.469996 2728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-676df99ff5-lml9g" podUID="976ce995-eb57-4c78-bcd8-fe6b36d7dd8e" Nov 7 23:58:45.519349 systemd-networkd[1474]: cali67f256eabdc: Gained IPv6LL Nov 7 23:58:45.711352 systemd-networkd[1474]: cali6b4dd1d155d: Gained IPv6LL Nov 7 23:58:45.838289 systemd-networkd[1474]: cali023cee95153: Gained IPv6LL Nov 7 23:58:45.878872 kubelet[2728]: E1107 23:58:45.878507 2728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 7 23:58:45.880113 containerd[1559]: time="2025-11-07T23:58:45.879379373Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-hhp66,Uid:0f5caead-ec81-47ca-97c7-f88bc4e0d10c,Namespace:kube-system,Attempt:0,}" Nov 7 23:58:45.987869 systemd-networkd[1474]: cali087d700c4eb: Link UP Nov 7 
23:58:45.988533 systemd-networkd[1474]: cali087d700c4eb: Gained carrier Nov 7 23:58:46.003258 containerd[1559]: 2025-11-07 23:58:45.902 [INFO][4858] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 7 23:58:46.003258 containerd[1559]: 2025-11-07 23:58:45.919 [INFO][4858] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--hhp66-eth0 coredns-674b8bbfcf- kube-system 0f5caead-ec81-47ca-97c7-f88bc4e0d10c 857 0 2025-11-07 23:58:03 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-hhp66 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali087d700c4eb [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="0851da0cbff675019a5ec41e50ce2ed5c2b936ea966031904cb8687272cbeb74" Namespace="kube-system" Pod="coredns-674b8bbfcf-hhp66" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--hhp66-" Nov 7 23:58:46.003258 containerd[1559]: 2025-11-07 23:58:45.919 [INFO][4858] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0851da0cbff675019a5ec41e50ce2ed5c2b936ea966031904cb8687272cbeb74" Namespace="kube-system" Pod="coredns-674b8bbfcf-hhp66" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--hhp66-eth0" Nov 7 23:58:46.003258 containerd[1559]: 2025-11-07 23:58:45.944 [INFO][4872] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0851da0cbff675019a5ec41e50ce2ed5c2b936ea966031904cb8687272cbeb74" HandleID="k8s-pod-network.0851da0cbff675019a5ec41e50ce2ed5c2b936ea966031904cb8687272cbeb74" Workload="localhost-k8s-coredns--674b8bbfcf--hhp66-eth0" Nov 7 23:58:46.003258 containerd[1559]: 2025-11-07 23:58:45.944 [INFO][4872] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="0851da0cbff675019a5ec41e50ce2ed5c2b936ea966031904cb8687272cbeb74" HandleID="k8s-pod-network.0851da0cbff675019a5ec41e50ce2ed5c2b936ea966031904cb8687272cbeb74" Workload="localhost-k8s-coredns--674b8bbfcf--hhp66-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004cdf0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-hhp66", "timestamp":"2025-11-07 23:58:45.944813315 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 7 23:58:46.003258 containerd[1559]: 2025-11-07 23:58:45.945 [INFO][4872] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 7 23:58:46.003258 containerd[1559]: 2025-11-07 23:58:45.945 [INFO][4872] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 7 23:58:46.003258 containerd[1559]: 2025-11-07 23:58:45.945 [INFO][4872] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 7 23:58:46.003258 containerd[1559]: 2025-11-07 23:58:45.955 [INFO][4872] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0851da0cbff675019a5ec41e50ce2ed5c2b936ea966031904cb8687272cbeb74" host="localhost" Nov 7 23:58:46.003258 containerd[1559]: 2025-11-07 23:58:45.960 [INFO][4872] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 7 23:58:46.003258 containerd[1559]: 2025-11-07 23:58:45.965 [INFO][4872] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 7 23:58:46.003258 containerd[1559]: 2025-11-07 23:58:45.967 [INFO][4872] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 7 23:58:46.003258 containerd[1559]: 2025-11-07 23:58:45.969 [INFO][4872] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 7 23:58:46.003258 containerd[1559]: 2025-11-07 23:58:45.969 [INFO][4872] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.0851da0cbff675019a5ec41e50ce2ed5c2b936ea966031904cb8687272cbeb74" host="localhost" Nov 7 23:58:46.003258 containerd[1559]: 2025-11-07 23:58:45.971 [INFO][4872] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.0851da0cbff675019a5ec41e50ce2ed5c2b936ea966031904cb8687272cbeb74 Nov 7 23:58:46.003258 containerd[1559]: 2025-11-07 23:58:45.976 [INFO][4872] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.0851da0cbff675019a5ec41e50ce2ed5c2b936ea966031904cb8687272cbeb74" host="localhost" Nov 7 23:58:46.003258 containerd[1559]: 2025-11-07 23:58:45.982 [INFO][4872] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.0851da0cbff675019a5ec41e50ce2ed5c2b936ea966031904cb8687272cbeb74" host="localhost" Nov 7 23:58:46.003258 containerd[1559]: 2025-11-07 23:58:45.983 [INFO][4872] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.0851da0cbff675019a5ec41e50ce2ed5c2b936ea966031904cb8687272cbeb74" host="localhost" Nov 7 23:58:46.003258 containerd[1559]: 2025-11-07 23:58:45.983 [INFO][4872] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 7 23:58:46.003258 containerd[1559]: 2025-11-07 23:58:45.983 [INFO][4872] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="0851da0cbff675019a5ec41e50ce2ed5c2b936ea966031904cb8687272cbeb74" HandleID="k8s-pod-network.0851da0cbff675019a5ec41e50ce2ed5c2b936ea966031904cb8687272cbeb74" Workload="localhost-k8s-coredns--674b8bbfcf--hhp66-eth0" Nov 7 23:58:46.003862 containerd[1559]: 2025-11-07 23:58:45.986 [INFO][4858] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0851da0cbff675019a5ec41e50ce2ed5c2b936ea966031904cb8687272cbeb74" Namespace="kube-system" Pod="coredns-674b8bbfcf-hhp66" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--hhp66-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--hhp66-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"0f5caead-ec81-47ca-97c7-f88bc4e0d10c", ResourceVersion:"857", Generation:0, CreationTimestamp:time.Date(2025, time.November, 7, 23, 58, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-hhp66", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali087d700c4eb", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 7 23:58:46.003862 containerd[1559]: 2025-11-07 23:58:45.986 [INFO][4858] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="0851da0cbff675019a5ec41e50ce2ed5c2b936ea966031904cb8687272cbeb74" Namespace="kube-system" Pod="coredns-674b8bbfcf-hhp66" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--hhp66-eth0" Nov 7 23:58:46.003862 containerd[1559]: 2025-11-07 23:58:45.986 [INFO][4858] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali087d700c4eb ContainerID="0851da0cbff675019a5ec41e50ce2ed5c2b936ea966031904cb8687272cbeb74" Namespace="kube-system" Pod="coredns-674b8bbfcf-hhp66" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--hhp66-eth0" Nov 7 23:58:46.003862 containerd[1559]: 2025-11-07 23:58:45.989 [INFO][4858] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0851da0cbff675019a5ec41e50ce2ed5c2b936ea966031904cb8687272cbeb74" Namespace="kube-system" Pod="coredns-674b8bbfcf-hhp66" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--hhp66-eth0" Nov 7 23:58:46.003862 
containerd[1559]: 2025-11-07 23:58:45.990 [INFO][4858] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0851da0cbff675019a5ec41e50ce2ed5c2b936ea966031904cb8687272cbeb74" Namespace="kube-system" Pod="coredns-674b8bbfcf-hhp66" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--hhp66-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--hhp66-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"0f5caead-ec81-47ca-97c7-f88bc4e0d10c", ResourceVersion:"857", Generation:0, CreationTimestamp:time.Date(2025, time.November, 7, 23, 58, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0851da0cbff675019a5ec41e50ce2ed5c2b936ea966031904cb8687272cbeb74", Pod:"coredns-674b8bbfcf-hhp66", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali087d700c4eb", MAC:"72:27:55:9e:a1:e4", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 7 23:58:46.003862 containerd[1559]: 2025-11-07 23:58:46.000 [INFO][4858] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0851da0cbff675019a5ec41e50ce2ed5c2b936ea966031904cb8687272cbeb74" Namespace="kube-system" Pod="coredns-674b8bbfcf-hhp66" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--hhp66-eth0" Nov 7 23:58:46.032071 containerd[1559]: time="2025-11-07T23:58:46.031549748Z" level=info msg="connecting to shim 0851da0cbff675019a5ec41e50ce2ed5c2b936ea966031904cb8687272cbeb74" address="unix:///run/containerd/s/629382e9e4222dfb4898043e2c77780c24432768c6614ab6120a6bbd53f379aa" namespace=k8s.io protocol=ttrpc version=3 Nov 7 23:58:46.058356 systemd[1]: Started cri-containerd-0851da0cbff675019a5ec41e50ce2ed5c2b936ea966031904cb8687272cbeb74.scope - libcontainer container 0851da0cbff675019a5ec41e50ce2ed5c2b936ea966031904cb8687272cbeb74. 
Nov 7 23:58:46.071423 systemd-resolved[1269]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 7 23:58:46.100013 containerd[1559]: time="2025-11-07T23:58:46.099896012Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-hhp66,Uid:0f5caead-ec81-47ca-97c7-f88bc4e0d10c,Namespace:kube-system,Attempt:0,} returns sandbox id \"0851da0cbff675019a5ec41e50ce2ed5c2b936ea966031904cb8687272cbeb74\"" Nov 7 23:58:46.100878 kubelet[2728]: E1107 23:58:46.100818 2728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 7 23:58:46.101868 kubelet[2728]: E1107 23:58:46.101844 2728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 7 23:58:46.102043 kubelet[2728]: E1107 23:58:46.102007 2728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-676df99ff5-lml9g" podUID="976ce995-eb57-4c78-bcd8-fe6b36d7dd8e" Nov 7 23:58:46.102122 kubelet[2728]: E1107 23:58:46.102015 2728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-c265d" podUID="da1ff15e-acf5-415a-98c1-50e005ef7778" Nov 7 23:58:46.103014 kubelet[2728]: E1107 23:58:46.102974 2728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7c8b6c5fd5-s4pn6" podUID="a4f3eb8b-111d-48cd-8798-ab0004f10d75" Nov 7 23:58:46.103225 kubelet[2728]: E1107 23:58:46.103175 2728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-676df99ff5-4k992" podUID="1892480e-9728-4d8f-8844-1e28e6326f1c" Nov 7 23:58:46.107022 containerd[1559]: 
time="2025-11-07T23:58:46.106973962Z" level=info msg="CreateContainer within sandbox \"0851da0cbff675019a5ec41e50ce2ed5c2b936ea966031904cb8687272cbeb74\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 7 23:58:46.124378 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1343592823.mount: Deactivated successfully. Nov 7 23:58:46.131206 containerd[1559]: time="2025-11-07T23:58:46.131107928Z" level=info msg="Container 98e3eaa7480c4e0232024e48aa4ebedae5988ef3e97b1b34ad550f3842f1ba06: CDI devices from CRI Config.CDIDevices: []" Nov 7 23:58:46.137966 containerd[1559]: time="2025-11-07T23:58:46.137918919Z" level=info msg="CreateContainer within sandbox \"0851da0cbff675019a5ec41e50ce2ed5c2b936ea966031904cb8687272cbeb74\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"98e3eaa7480c4e0232024e48aa4ebedae5988ef3e97b1b34ad550f3842f1ba06\"" Nov 7 23:58:46.138786 containerd[1559]: time="2025-11-07T23:58:46.138497118Z" level=info msg="StartContainer for \"98e3eaa7480c4e0232024e48aa4ebedae5988ef3e97b1b34ad550f3842f1ba06\"" Nov 7 23:58:46.140796 containerd[1559]: time="2025-11-07T23:58:46.140760275Z" level=info msg="connecting to shim 98e3eaa7480c4e0232024e48aa4ebedae5988ef3e97b1b34ad550f3842f1ba06" address="unix:///run/containerd/s/629382e9e4222dfb4898043e2c77780c24432768c6614ab6120a6bbd53f379aa" protocol=ttrpc version=3 Nov 7 23:58:46.159277 systemd-networkd[1474]: calidc85772703c: Gained IPv6LL Nov 7 23:58:46.172364 systemd[1]: Started cri-containerd-98e3eaa7480c4e0232024e48aa4ebedae5988ef3e97b1b34ad550f3842f1ba06.scope - libcontainer container 98e3eaa7480c4e0232024e48aa4ebedae5988ef3e97b1b34ad550f3842f1ba06. Nov 7 23:58:46.205505 containerd[1559]: time="2025-11-07T23:58:46.205451584Z" level=info msg="StartContainer for \"98e3eaa7480c4e0232024e48aa4ebedae5988ef3e97b1b34ad550f3842f1ba06\" returns successfully" Nov 7 23:58:46.414335 systemd-networkd[1474]: calic2aed1999c6: Gained IPv6LL Nov 7 23:58:46.659324 kubelet[2728]: I1107 23:58:46.659246 2728 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 7 23:58:46.659829 kubelet[2728]: E1107 23:58:46.659811 2728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 7 23:58:47.080706 systemd-networkd[1474]: vxlan.calico: Link UP Nov 7 23:58:47.080715 systemd-networkd[1474]: vxlan.calico: Gained carrier Nov 7 23:58:47.116944 kubelet[2728]: E1107 23:58:47.115744 2728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 7 23:58:47.116944 kubelet[2728]: E1107 23:58:47.116764 2728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-676df99ff5-lml9g" podUID="976ce995-eb57-4c78-bcd8-fe6b36d7dd8e" Nov 7 23:58:47.116944 kubelet[2728]: E1107 23:58:47.116796 2728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 
1.1.1.1 1.0.0.1 8.8.8.8" Nov 7 23:58:47.117321 kubelet[2728]: E1107 23:58:47.117094 2728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 7 23:58:47.171174 kubelet[2728]: I1107 23:58:47.170559 2728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-hhp66" podStartSLOduration=44.170538285 podStartE2EDuration="44.170538285s" podCreationTimestamp="2025-11-07 23:58:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-07 23:58:47.135722611 +0000 UTC m=+50.474024970" watchObservedRunningTime="2025-11-07 23:58:47.170538285 +0000 UTC m=+50.508840644" Nov 7 23:58:47.414390 systemd[1]: Started sshd@9-10.0.0.69:22-10.0.0.1:41034.service - OpenSSH per-connection server daemon (10.0.0.1:41034). Nov 7 23:58:47.490417 sshd[5132]: Accepted publickey for core from 10.0.0.1 port 41034 ssh2: RSA SHA256:FAVExuDlYq3gF2W1zNPEB/OEHrl6bpWJ51XPtNkFj+Y Nov 7 23:58:47.492292 sshd-session[5132]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 7 23:58:47.497218 systemd-logind[1536]: New session 10 of user core. Nov 7 23:58:47.508350 systemd[1]: Started session-10.scope - Session 10 of User core. Nov 7 23:58:47.682085 sshd[5136]: Connection closed by 10.0.0.1 port 41034 Nov 7 23:58:47.682358 sshd-session[5132]: pam_unix(sshd:session): session closed for user core Nov 7 23:58:47.693303 systemd[1]: sshd@9-10.0.0.69:22-10.0.0.1:41034.service: Deactivated successfully. Nov 7 23:58:47.695256 systemd[1]: session-10.scope: Deactivated successfully. Nov 7 23:58:47.696011 systemd-logind[1536]: Session 10 logged out. Waiting for processes to exit. Nov 7 23:58:47.699658 systemd[1]: Started sshd@10-10.0.0.69:22-10.0.0.1:41044.service - OpenSSH per-connection server daemon (10.0.0.1:41044). Nov 7 23:58:47.701925 systemd-logind[1536]: Removed session 10. Nov 7 23:58:47.758625 sshd[5152]: Accepted publickey for core from 10.0.0.1 port 41044 ssh2: RSA SHA256:FAVExuDlYq3gF2W1zNPEB/OEHrl6bpWJ51XPtNkFj+Y Nov 7 23:58:47.760677 sshd-session[5152]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 7 23:58:47.764988 systemd-logind[1536]: New session 11 of user core. Nov 7 23:58:47.775357 systemd[1]: Started session-11.scope - Session 11 of User core. Nov 7 23:58:47.960885 sshd[5155]: Connection closed by 10.0.0.1 port 41044 Nov 7 23:58:47.962409 sshd-session[5152]: pam_unix(sshd:session): session closed for user core Nov 7 23:58:47.975323 systemd[1]: sshd@10-10.0.0.69:22-10.0.0.1:41044.service: Deactivated successfully. Nov 7 23:58:47.978223 systemd[1]: session-11.scope: Deactivated successfully. Nov 7 23:58:47.979103 systemd-logind[1536]: Session 11 logged out. Waiting for processes to exit. Nov 7 23:58:47.982337 systemd[1]: Started sshd@11-10.0.0.69:22-10.0.0.1:41050.service - OpenSSH per-connection server daemon (10.0.0.1:41050). Nov 7 23:58:47.983831 systemd-logind[1536]: Removed session 11. Nov 7 23:58:48.014285 systemd-networkd[1474]: cali087d700c4eb: Gained IPv6LL Nov 7 23:58:48.043636 sshd[5170]: Accepted publickey for core from 10.0.0.1 port 41050 ssh2: RSA SHA256:FAVExuDlYq3gF2W1zNPEB/OEHrl6bpWJ51XPtNkFj+Y Nov 7 23:58:48.045083 sshd-session[5170]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 7 23:58:48.048995 systemd-logind[1536]: New session 12 of user core. 
Nov 7 23:58:48.055319 systemd[1]: Started session-12.scope - Session 12 of User core. Nov 7 23:58:48.113735 kubelet[2728]: E1107 23:58:48.113706 2728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 7 23:58:48.239840 sshd[5173]: Connection closed by 10.0.0.1 port 41050 Nov 7 23:58:48.240878 sshd-session[5170]: pam_unix(sshd:session): session closed for user core Nov 7 23:58:48.246667 systemd[1]: sshd@11-10.0.0.69:22-10.0.0.1:41050.service: Deactivated successfully. Nov 7 23:58:48.251209 systemd[1]: session-12.scope: Deactivated successfully. Nov 7 23:58:48.252924 systemd-logind[1536]: Session 12 logged out. Waiting for processes to exit. Nov 7 23:58:48.255817 systemd-logind[1536]: Removed session 12. Nov 7 23:58:48.590278 systemd-networkd[1474]: vxlan.calico: Gained IPv6LL Nov 7 23:58:49.118694 kubelet[2728]: E1107 23:58:49.118652 2728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 7 23:58:50.883108 containerd[1559]: time="2025-11-07T23:58:50.882274968Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 7 23:58:51.082650 containerd[1559]: time="2025-11-07T23:58:51.082483316Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 7 23:58:51.094461 containerd[1559]: time="2025-11-07T23:58:51.094353264Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 7 23:58:51.094461 containerd[1559]: time="2025-11-07T23:58:51.094432944Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 7 23:58:51.094643 kubelet[2728]: E1107 23:58:51.094591 2728 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 7 23:58:51.094927 kubelet[2728]: E1107 23:58:51.094641 2728 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 7 23:58:51.094927 kubelet[2728]: E1107 23:58:51.094760 2728 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:dc1b59121ea94ac4b69e69083cbd3f64,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-j4m6q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-85d955cb58-r6ltp_calico-system(1590c298-7505-47ec-a11a-671c25a2d127): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 7 23:58:51.100077 containerd[1559]: time="2025-11-07T23:58:51.100038739Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 7 23:58:51.331477 containerd[1559]: time="2025-11-07T23:58:51.331428343Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 7 23:58:51.332870 containerd[1559]: time="2025-11-07T23:58:51.332818422Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 7 23:58:51.332953 containerd[1559]: time="2025-11-07T23:58:51.332855582Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 7 23:58:51.333148 kubelet[2728]: E1107 23:58:51.333062 2728 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 7 23:58:51.333274 kubelet[2728]: E1107 23:58:51.333195 2728 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 7 23:58:51.333609 kubelet[2728]: E1107 23:58:51.333541 2728 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-j4m6q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-85d955cb58-r6ltp_calico-system(1590c298-7505-47ec-a11a-671c25a2d127): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 7 23:58:51.335398 kubelet[2728]: E1107 23:58:51.335284 2728 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-85d955cb58-r6ltp" podUID="1590c298-7505-47ec-a11a-671c25a2d127" Nov 7 23:58:53.253950 systemd[1]: Started sshd@12-10.0.0.69:22-10.0.0.1:40226.service - OpenSSH per-connection server daemon (10.0.0.1:40226). 
Nov 7 23:58:53.325329 sshd[5203]: Accepted publickey for core from 10.0.0.1 port 40226 ssh2: RSA SHA256:FAVExuDlYq3gF2W1zNPEB/OEHrl6bpWJ51XPtNkFj+Y Nov 7 23:58:53.327009 sshd-session[5203]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 7 23:58:53.332224 systemd-logind[1536]: New session 13 of user core. Nov 7 23:58:53.341367 systemd[1]: Started session-13.scope - Session 13 of User core. Nov 7 23:58:53.517253 sshd[5206]: Connection closed by 10.0.0.1 port 40226 Nov 7 23:58:53.517299 sshd-session[5203]: pam_unix(sshd:session): session closed for user core Nov 7 23:58:53.521433 systemd[1]: sshd@12-10.0.0.69:22-10.0.0.1:40226.service: Deactivated successfully. Nov 7 23:58:53.523947 systemd[1]: session-13.scope: Deactivated successfully. Nov 7 23:58:53.525021 systemd-logind[1536]: Session 13 logged out. Waiting for processes to exit. Nov 7 23:58:53.526526 systemd-logind[1536]: Removed session 13. Nov 7 23:58:56.885312 containerd[1559]: time="2025-11-07T23:58:56.885008425Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 7 23:58:57.112045 containerd[1559]: time="2025-11-07T23:58:57.111975303Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 7 23:58:57.113054 containerd[1559]: time="2025-11-07T23:58:57.112990982Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 7 23:58:57.113107 containerd[1559]: time="2025-11-07T23:58:57.113080062Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 7 23:58:57.115100 kubelet[2728]: E1107 23:58:57.113299 2728 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 7 23:58:57.115100 kubelet[2728]: E1107 23:58:57.113349 2728 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 7 23:58:57.115100 kubelet[2728]: E1107 23:58:57.113481 2728 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sc8qr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-zwgj8_calico-system(59a419f6-34bd-4030-8aca-5c108260b7ed): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 7 23:58:57.122513 containerd[1559]: time="2025-11-07T23:58:57.122124656Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 7 23:58:57.329387 containerd[1559]: time="2025-11-07T23:58:57.329205513Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 7 23:58:57.332673 containerd[1559]: time="2025-11-07T23:58:57.332623071Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 7 23:58:57.332885 containerd[1559]: time="2025-11-07T23:58:57.332710071Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 7 23:58:57.333359 kubelet[2728]: E1107 23:58:57.333078 2728 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 7 23:58:57.333359 kubelet[2728]: E1107 23:58:57.333128 2728 
kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 7 23:58:57.333359 kubelet[2728]: E1107 23:58:57.333276 2728 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sc8qr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-zwgj8_calico-system(59a419f6-34bd-4030-8aca-5c108260b7ed): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 7 23:58:57.336252 kubelet[2728]: E1107 23:58:57.335354 2728 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not 
found\"]" pod="calico-system/csi-node-driver-zwgj8" podUID="59a419f6-34bd-4030-8aca-5c108260b7ed" Nov 7 23:58:58.537579 systemd[1]: Started sshd@13-10.0.0.69:22-10.0.0.1:40230.service - OpenSSH per-connection server daemon (10.0.0.1:40230). Nov 7 23:58:58.602476 sshd[5230]: Accepted publickey for core from 10.0.0.1 port 40230 ssh2: RSA SHA256:FAVExuDlYq3gF2W1zNPEB/OEHrl6bpWJ51XPtNkFj+Y Nov 7 23:58:58.604119 sshd-session[5230]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 7 23:58:58.611248 systemd-logind[1536]: New session 14 of user core. Nov 7 23:58:58.618343 systemd[1]: Started session-14.scope - Session 14 of User core. Nov 7 23:58:58.764214 sshd[5233]: Connection closed by 10.0.0.1 port 40230 Nov 7 23:58:58.764058 sshd-session[5230]: pam_unix(sshd:session): session closed for user core Nov 7 23:58:58.768114 systemd[1]: sshd@13-10.0.0.69:22-10.0.0.1:40230.service: Deactivated successfully. Nov 7 23:58:58.770692 systemd[1]: session-14.scope: Deactivated successfully. Nov 7 23:58:58.771859 systemd-logind[1536]: Session 14 logged out. Waiting for processes to exit. Nov 7 23:58:58.773959 systemd-logind[1536]: Removed session 14. Nov 7 23:58:58.879540 containerd[1559]: time="2025-11-07T23:58:58.879396201Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 7 23:58:59.078480 containerd[1559]: time="2025-11-07T23:58:59.078421516Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 7 23:58:59.079455 containerd[1559]: time="2025-11-07T23:58:59.079345555Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 7 23:58:59.079455 containerd[1559]: time="2025-11-07T23:58:59.079429635Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 7 23:58:59.079719 kubelet[2728]: E1107 23:58:59.079667 2728 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 7 23:58:59.080027 kubelet[2728]: E1107 23:58:59.079720 2728 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 7 23:58:59.080063 containerd[1559]: time="2025-11-07T23:58:59.080030595Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 7 23:58:59.080088 kubelet[2728]: E1107 23:58:59.080045 2728 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s6wkt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-676df99ff5-lml9g_calico-apiserver(976ce995-eb57-4c78-bcd8-fe6b36d7dd8e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 7 23:58:59.081500 kubelet[2728]: E1107 23:58:59.081435 2728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-676df99ff5-lml9g" podUID="976ce995-eb57-4c78-bcd8-fe6b36d7dd8e" Nov 7 23:58:59.277224 containerd[1559]: time="2025-11-07T23:58:59.277083835Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 7 23:58:59.279332 containerd[1559]: time="2025-11-07T23:58:59.279271074Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 7 23:58:59.279332 containerd[1559]: time="2025-11-07T23:58:59.279316194Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 7 23:58:59.279596 kubelet[2728]: E1107 
23:58:59.279532 2728 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 7 23:58:59.279596 kubelet[2728]: E1107 23:58:59.279596 2728 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 7 23:58:59.279842 kubelet[2728]: E1107 23:58:59.279736 2728 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-js879,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
goldmane-666569f655-c265d_calico-system(da1ff15e-acf5-415a-98c1-50e005ef7778): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 7 23:58:59.281259 kubelet[2728]: E1107 23:58:59.281201 2728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-c265d" podUID="da1ff15e-acf5-415a-98c1-50e005ef7778" Nov 7 23:58:59.879032 containerd[1559]: time="2025-11-07T23:58:59.878987830Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 7 23:59:00.104329 containerd[1559]: time="2025-11-07T23:59:00.104283782Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 7 23:59:00.106572 containerd[1559]: time="2025-11-07T23:59:00.106524981Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 7 23:59:00.106622 containerd[1559]: time="2025-11-07T23:59:00.106579661Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 7 23:59:00.106780 kubelet[2728]: E1107 23:59:00.106741 2728 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 7 23:59:00.107000 kubelet[2728]: E1107 23:59:00.106795 2728 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 7 23:59:00.107000 kubelet[2728]: E1107 23:59:00.106951 2728 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rbdf6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-7c8b6c5fd5-s4pn6_calico-system(a4f3eb8b-111d-48cd-8798-ab0004f10d75): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 7 23:59:00.108242 kubelet[2728]: E1107 23:59:00.108179 2728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7c8b6c5fd5-s4pn6" podUID="a4f3eb8b-111d-48cd-8798-ab0004f10d75" Nov 7 23:59:00.889328 containerd[1559]: time="2025-11-07T23:59:00.889293240Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 7 23:59:01.092717 containerd[1559]: time="2025-11-07T23:59:01.092568101Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 7 23:59:01.093566 containerd[1559]: time="2025-11-07T23:59:01.093507702Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 7 23:59:01.093637 containerd[1559]: time="2025-11-07T23:59:01.093572702Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 7 23:59:01.093791 kubelet[2728]: E1107 23:59:01.093737 2728 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 7 23:59:01.093868 kubelet[2728]: E1107 23:59:01.093790 2728 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 7 23:59:01.093986 kubelet[2728]: E1107 23:59:01.093933 2728 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9mvz4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-676df99ff5-4k992_calico-apiserver(1892480e-9728-4d8f-8844-1e28e6326f1c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 7 23:59:01.095176 kubelet[2728]: E1107 23:59:01.095114 2728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-676df99ff5-4k992" podUID="1892480e-9728-4d8f-8844-1e28e6326f1c" Nov 7 23:59:03.776472 systemd[1]: Started sshd@14-10.0.0.69:22-10.0.0.1:48320.service - OpenSSH per-connection server daemon (10.0.0.1:48320). Nov 7 23:59:03.851621 sshd[5247]: Accepted publickey for core from 10.0.0.1 port 48320 ssh2: RSA SHA256:FAVExuDlYq3gF2W1zNPEB/OEHrl6bpWJ51XPtNkFj+Y Nov 7 23:59:03.853874 sshd-session[5247]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 7 23:59:03.858450 systemd-logind[1536]: New session 15 of user core. Nov 7 23:59:03.868357 systemd[1]: Started session-15.scope - Session 15 of User core. Nov 7 23:59:03.878737 kubelet[2728]: E1107 23:59:03.878691 2728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 7 23:59:03.990173 sshd[5250]: Connection closed by 10.0.0.1 port 48320 Nov 7 23:59:03.990786 sshd-session[5247]: pam_unix(sshd:session): session closed for user core Nov 7 23:59:03.995401 systemd[1]: sshd@14-10.0.0.69:22-10.0.0.1:48320.service: Deactivated successfully. Nov 7 23:59:03.998732 systemd[1]: session-15.scope: Deactivated successfully. Nov 7 23:59:03.999568 systemd-logind[1536]: Session 15 logged out. Waiting for processes to exit. Nov 7 23:59:04.000737 systemd-logind[1536]: Removed session 15. 
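The recurring dns.go:153 warning above ("Nameserver limits exceeded") fires because glibc-style resolvers honor at most three nameserver entries in resolv.conf, so kubelet truncates the node's list and logs the line it actually applied (here 1.1.1.1 1.0.0.1 8.8.8.8). A minimal Go sketch of that truncation, assuming the three-entry cap; the constant and helper names are illustrative, not kubelet's own code:

```go
package main

import (
	"fmt"
	"strings"
)

// maxNameservers mirrors the classic glibc limit of three resolvers per
// resolv.conf; kubelet warns and truncates when a node exceeds it.
const maxNameservers = 3

// applyNameserverLimit keeps the first maxNameservers entries and
// returns whatever was dropped so the caller can log a warning.
func applyNameserverLimit(resolvConf string) (applied, omitted []string) {
	var servers []string
	for _, line := range strings.Split(resolvConf, "\n") {
		fields := strings.Fields(line)
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if len(servers) <= maxNameservers {
		return servers, nil
	}
	return servers[:maxNameservers], servers[maxNameservers:]
}

func main() {
	conf := "nameserver 1.1.1.1\nnameserver 1.0.0.1\nnameserver 8.8.8.8\nnameserver 9.9.9.9\n"
	applied, omitted := applyNameserverLimit(conf)
	if len(omitted) > 0 {
		fmt.Printf("Nameserver limits exceeded, omitted: %v; applied nameserver line: %s\n",
			omitted, strings.Join(applied, " "))
	}
}
```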
Nov 7 23:59:04.879424 kubelet[2728]: E1107 23:59:04.879237 2728 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-85d955cb58-r6ltp" podUID="1590c298-7505-47ec-a11a-671c25a2d127" Nov 7 23:59:07.143327 kubelet[2728]: E1107 23:59:07.143297 2728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 7 23:59:09.009786 systemd[1]: Started sshd@15-10.0.0.69:22-10.0.0.1:48334.service - OpenSSH per-connection server daemon (10.0.0.1:48334). Nov 7 23:59:09.090731 sshd[5302]: Accepted publickey for core from 10.0.0.1 port 48334 ssh2: RSA SHA256:FAVExuDlYq3gF2W1zNPEB/OEHrl6bpWJ51XPtNkFj+Y Nov 7 23:59:09.093378 sshd-session[5302]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 7 23:59:09.100262 systemd-logind[1536]: New session 16 of user core. Nov 7 23:59:09.109357 systemd[1]: Started session-16.scope - Session 16 of User core. Nov 7 23:59:09.242104 sshd[5305]: Connection closed by 10.0.0.1 port 48334 Nov 7 23:59:09.242552 sshd-session[5302]: pam_unix(sshd:session): session closed for user core Nov 7 23:59:09.252010 systemd[1]: sshd@15-10.0.0.69:22-10.0.0.1:48334.service: Deactivated successfully. Nov 7 23:59:09.253941 systemd[1]: session-16.scope: Deactivated successfully. Nov 7 23:59:09.255949 systemd-logind[1536]: Session 16 logged out. Waiting for processes to exit. Nov 7 23:59:09.258917 systemd[1]: Started sshd@16-10.0.0.69:22-10.0.0.1:45748.service - OpenSSH per-connection server daemon (10.0.0.1:45748). Nov 7 23:59:09.260400 systemd-logind[1536]: Removed session 16. Nov 7 23:59:09.319739 sshd[5319]: Accepted publickey for core from 10.0.0.1 port 45748 ssh2: RSA SHA256:FAVExuDlYq3gF2W1zNPEB/OEHrl6bpWJ51XPtNkFj+Y Nov 7 23:59:09.321994 sshd-session[5319]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 7 23:59:09.327608 systemd-logind[1536]: New session 17 of user core. Nov 7 23:59:09.337381 systemd[1]: Started session-17.scope - Session 17 of User core. Nov 7 23:59:09.607194 sshd[5322]: Connection closed by 10.0.0.1 port 45748 Nov 7 23:59:09.607710 sshd-session[5319]: pam_unix(sshd:session): session closed for user core Nov 7 23:59:09.625198 systemd[1]: sshd@16-10.0.0.69:22-10.0.0.1:45748.service: Deactivated successfully. Nov 7 23:59:09.628015 systemd[1]: session-17.scope: Deactivated successfully. Nov 7 23:59:09.629031 systemd-logind[1536]: Session 17 logged out. Waiting for processes to exit. Nov 7 23:59:09.632647 systemd[1]: Started sshd@17-10.0.0.69:22-10.0.0.1:45764.service - OpenSSH per-connection server daemon (10.0.0.1:45764). 
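By this point the failures have shifted from ErrImagePull to ImagePullBackOff: kubelet is no longer re-pulling on every pod sync but waiting out an exponential backoff between attempts, which is why the same errors now recur minutes apart. The sketch below assumes the commonly cited image-pull defaults of a 10s base doubling to a 5m cap; both values are assumptions, and the helper is a sketch rather than kubelet's implementation:

```go
package main

import (
	"fmt"
	"time"
)

// Assumed backoff parameters for illustration; kubelet's image-pull
// backoff is often described as a 10s base doubling to a 5m cap, but
// treat these as placeholders, not a reading of the source.
const (
	backoffBase = 10 * time.Second
	backoffCap  = 5 * time.Minute
)

// nextBackoff returns the wait before retry n (0-indexed), doubling
// from the base until the cap is reached.
func nextBackoff(retry int) time.Duration {
	d := backoffBase
	for i := 0; i < retry; i++ {
		d *= 2
		if d >= backoffCap {
			return backoffCap
		}
	}
	return d
}

func main() {
	for i := 0; i < 7; i++ {
		fmt.Printf("retry %d: wait %s before next pull\n", i, nextBackoff(i))
	}
}
```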
Nov 7 23:59:09.633408 systemd-logind[1536]: Removed session 17. Nov 7 23:59:09.707633 sshd[5334]: Accepted publickey for core from 10.0.0.1 port 45764 ssh2: RSA SHA256:FAVExuDlYq3gF2W1zNPEB/OEHrl6bpWJ51XPtNkFj+Y Nov 7 23:59:09.709065 sshd-session[5334]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 7 23:59:09.715798 systemd-logind[1536]: New session 18 of user core. Nov 7 23:59:09.728394 systemd[1]: Started session-18.scope - Session 18 of User core. Nov 7 23:59:09.878332 kubelet[2728]: E1107 23:59:09.878194 2728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 7 23:59:09.880910 kubelet[2728]: E1107 23:59:09.879935 2728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-676df99ff5-lml9g" podUID="976ce995-eb57-4c78-bcd8-fe6b36d7dd8e" Nov 7 23:59:09.880910 kubelet[2728]: E1107 23:59:09.880352 2728 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-zwgj8" podUID="59a419f6-34bd-4030-8aca-5c108260b7ed" Nov 7 23:59:10.327472 sshd[5337]: Connection closed by 10.0.0.1 port 45764 Nov 7 23:59:10.330317 sshd-session[5334]: pam_unix(sshd:session): session closed for user core Nov 7 23:59:10.338089 systemd[1]: sshd@17-10.0.0.69:22-10.0.0.1:45764.service: Deactivated successfully. Nov 7 23:59:10.341106 systemd[1]: session-18.scope: Deactivated successfully. Nov 7 23:59:10.342471 systemd-logind[1536]: Session 18 logged out. Waiting for processes to exit. Nov 7 23:59:10.348531 systemd[1]: Started sshd@18-10.0.0.69:22-10.0.0.1:45768.service - OpenSSH per-connection server daemon (10.0.0.1:45768). Nov 7 23:59:10.349892 systemd-logind[1536]: Removed session 18. Nov 7 23:59:10.410672 sshd[5357]: Accepted publickey for core from 10.0.0.1 port 45768 ssh2: RSA SHA256:FAVExuDlYq3gF2W1zNPEB/OEHrl6bpWJ51XPtNkFj+Y Nov 7 23:59:10.412039 sshd-session[5357]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 7 23:59:10.416412 systemd-logind[1536]: New session 19 of user core. Nov 7 23:59:10.426337 systemd[1]: Started session-19.scope - Session 19 of User core. 
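When several containers in one pod fail, as with calico-csi and csi-node-driver-registrar in the csi-node-driver-zwgj8 entries above, the pod worker folds the individual StartContainer failures into the single bracketed "Error syncing pod, skipping" message. A small sketch of that aggregation shape; the types here are stand-ins, not kubelet's:

```go
package main

import (
	"errors"
	"fmt"
)

// startFailure pairs a container name with the error that stopped it;
// a stand-in for kubelet's per-container sync results.
type startFailure struct {
	container string
	err       error
}

// joinPodErrors folds per-container failures into one error, the shape
// behind the bracketed "Error syncing pod, skipping" lines above.
func joinPodErrors(failures []startFailure) error {
	var errs []error
	for _, f := range failures {
		errs = append(errs, fmt.Errorf("failed to %q for %q with ErrImagePull: %w",
			"StartContainer", f.container, f.err))
	}
	return errors.Join(errs...)
}

func main() {
	err := joinPodErrors([]startFailure{
		{"calico-csi", errors.New("ghcr.io/flatcar/calico/csi:v3.30.4: not found")},
		{"csi-node-driver-registrar", errors.New("ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found")},
	})
	fmt.Println("Error syncing pod, skipping:", err)
}
```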
Nov 7 23:59:10.790633 sshd[5360]: Connection closed by 10.0.0.1 port 45768 Nov 7 23:59:10.790915 sshd-session[5357]: pam_unix(sshd:session): session closed for user core Nov 7 23:59:10.805980 systemd[1]: sshd@18-10.0.0.69:22-10.0.0.1:45768.service: Deactivated successfully. Nov 7 23:59:10.808478 systemd[1]: session-19.scope: Deactivated successfully. Nov 7 23:59:10.810549 systemd-logind[1536]: Session 19 logged out. Waiting for processes to exit. Nov 7 23:59:10.813584 systemd[1]: Started sshd@19-10.0.0.69:22-10.0.0.1:45780.service - OpenSSH per-connection server daemon (10.0.0.1:45780). Nov 7 23:59:10.816670 systemd-logind[1536]: Removed session 19. Nov 7 23:59:10.873399 sshd[5372]: Accepted publickey for core from 10.0.0.1 port 45780 ssh2: RSA SHA256:FAVExuDlYq3gF2W1zNPEB/OEHrl6bpWJ51XPtNkFj+Y Nov 7 23:59:10.874781 sshd-session[5372]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 7 23:59:10.880269 systemd-logind[1536]: New session 20 of user core. Nov 7 23:59:10.887371 systemd[1]: Started session-20.scope - Session 20 of User core. Nov 7 23:59:11.057347 sshd[5375]: Connection closed by 10.0.0.1 port 45780 Nov 7 23:59:11.057445 sshd-session[5372]: pam_unix(sshd:session): session closed for user core Nov 7 23:59:11.062844 systemd[1]: sshd@19-10.0.0.69:22-10.0.0.1:45780.service: Deactivated successfully. Nov 7 23:59:11.064901 systemd[1]: session-20.scope: Deactivated successfully. Nov 7 23:59:11.066209 systemd-logind[1536]: Session 20 logged out. Waiting for processes to exit. Nov 7 23:59:11.068663 systemd-logind[1536]: Removed session 20. Nov 7 23:59:11.878948 kubelet[2728]: E1107 23:59:11.878433 2728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7c8b6c5fd5-s4pn6" podUID="a4f3eb8b-111d-48cd-8798-ab0004f10d75" Nov 7 23:59:11.878948 kubelet[2728]: E1107 23:59:11.878808 2728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-676df99ff5-4k992" podUID="1892480e-9728-4d8f-8844-1e28e6326f1c" Nov 7 23:59:13.877865 kubelet[2728]: E1107 23:59:13.877821 2728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 7 23:59:13.878911 kubelet[2728]: E1107 23:59:13.878587 2728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to 
resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-c265d" podUID="da1ff15e-acf5-415a-98c1-50e005ef7778" Nov 7 23:59:15.880162 containerd[1559]: time="2025-11-07T23:59:15.880092207Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 7 23:59:16.071535 systemd[1]: Started sshd@20-10.0.0.69:22-10.0.0.1:45784.service - OpenSSH per-connection server daemon (10.0.0.1:45784). Nov 7 23:59:16.097557 containerd[1559]: time="2025-11-07T23:59:16.097516023Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 7 23:59:16.101911 containerd[1559]: time="2025-11-07T23:59:16.101848866Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 7 23:59:16.101911 containerd[1559]: time="2025-11-07T23:59:16.101903826Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 7 23:59:16.102198 kubelet[2728]: E1107 23:59:16.102160 2728 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 7 23:59:16.102478 kubelet[2728]: E1107 23:59:16.102212 2728 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 7 23:59:16.102478 kubelet[2728]: E1107 23:59:16.102333 2728 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:dc1b59121ea94ac4b69e69083cbd3f64,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-j4m6q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-85d955cb58-r6ltp_calico-system(1590c298-7505-47ec-a11a-671c25a2d127): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 7 23:59:16.106989 containerd[1559]: time="2025-11-07T23:59:16.105841229Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 7 23:59:16.154157 sshd[5393]: Accepted publickey for core from 10.0.0.1 port 45784 ssh2: RSA SHA256:FAVExuDlYq3gF2W1zNPEB/OEHrl6bpWJ51XPtNkFj+Y Nov 7 23:59:16.156356 sshd-session[5393]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 7 23:59:16.162785 systemd-logind[1536]: New session 21 of user core. Nov 7 23:59:16.171373 systemd[1]: Started session-21.scope - Session 21 of User core. Nov 7 23:59:16.292051 sshd[5396]: Connection closed by 10.0.0.1 port 45784 Nov 7 23:59:16.292442 sshd-session[5393]: pam_unix(sshd:session): session closed for user core Nov 7 23:59:16.296700 systemd[1]: sshd@20-10.0.0.69:22-10.0.0.1:45784.service: Deactivated successfully. Nov 7 23:59:16.299827 systemd[1]: session-21.scope: Deactivated successfully. Nov 7 23:59:16.300621 systemd-logind[1536]: Session 21 logged out. Waiting for processes to exit. Nov 7 23:59:16.301836 systemd-logind[1536]: Removed session 21. 
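Every pull in this log follows the same arc: containerd logs the fetch failing with "404 Not Found" against ghcr.io, then surfaces the failure as a gRPC NotFound from PullImage. The sketch below issues the equivalent manifest probe by hand using the OCI distribution-spec URL layout; note that real registries, ghcr.io included, typically require a bearer token even for anonymous pulls, a handshake this sketch omits, so the exact request here may see 401 rather than 404:

```go
package main

import (
	"fmt"
	"net/http"
)

// checkManifest approximates the resolve step behind the
// "fetch failed after status: 404 Not Found" lines: a request against
// the registry's manifest endpoint whose 404 becomes "not found".
// Sketch only; containerd's resolver also handles auth and redirects.
func checkManifest(registry, repo, tag string) error {
	url := fmt.Sprintf("https://%s/v2/%s/manifests/%s", registry, repo, tag)
	resp, err := http.Head(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode == http.StatusNotFound {
		return fmt.Errorf("%s/%s:%s: not found", registry, repo, tag)
	}
	return fmt.Errorf("manifest probe for %s returned %s", url, resp.Status)
}

func main() {
	err := checkManifest("ghcr.io", "flatcar/calico/whisker", "v3.30.4")
	fmt.Println("resolve result:", err)
}
```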
Nov 7 23:59:16.320546 containerd[1559]: time="2025-11-07T23:59:16.320370080Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 7 23:59:16.323425 containerd[1559]: time="2025-11-07T23:59:16.323302082Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 7 23:59:16.323425 containerd[1559]: time="2025-11-07T23:59:16.323341082Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 7 23:59:16.323582 kubelet[2728]: E1107 23:59:16.323547 2728 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 7 23:59:16.323621 kubelet[2728]: E1107 23:59:16.323601 2728 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 7 23:59:16.324074 kubelet[2728]: E1107 23:59:16.324025 2728 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-j4m6q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:ni
l,} start failed in pod whisker-85d955cb58-r6ltp_calico-system(1590c298-7505-47ec-a11a-671c25a2d127): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 7 23:59:16.325293 kubelet[2728]: E1107 23:59:16.325227 2728 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-85d955cb58-r6ltp" podUID="1590c298-7505-47ec-a11a-671c25a2d127" Nov 7 23:59:21.308197 systemd[1]: Started sshd@21-10.0.0.69:22-10.0.0.1:32796.service - OpenSSH per-connection server daemon (10.0.0.1:32796). Nov 7 23:59:21.376702 sshd[5410]: Accepted publickey for core from 10.0.0.1 port 32796 ssh2: RSA SHA256:FAVExuDlYq3gF2W1zNPEB/OEHrl6bpWJ51XPtNkFj+Y Nov 7 23:59:21.378226 sshd-session[5410]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 7 23:59:21.385374 systemd-logind[1536]: New session 22 of user core. Nov 7 23:59:21.402392 systemd[1]: Started session-22.scope - Session 22 of User core. Nov 7 23:59:21.535121 sshd[5413]: Connection closed by 10.0.0.1 port 32796 Nov 7 23:59:21.535736 sshd-session[5410]: pam_unix(sshd:session): session closed for user core Nov 7 23:59:21.539897 systemd[1]: sshd@21-10.0.0.69:22-10.0.0.1:32796.service: Deactivated successfully. Nov 7 23:59:21.541948 systemd[1]: session-22.scope: Deactivated successfully. Nov 7 23:59:21.543768 systemd-logind[1536]: Session 22 logged out. Waiting for processes to exit. Nov 7 23:59:21.544889 systemd-logind[1536]: Removed session 22. 
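The chain from containerd back up to the pod worker is visible in each cluster of entries: the CRI PullImage call returns NotFound, log.go and kuberuntime_image.go record it, and kuberuntime wraps the error as ErrImagePull for the sync loop. A compressed sketch of that wrapping, with simplified stand-ins for the CRI types rather than the real k8s.io/cri-api interfaces:

```go
package main

import (
	"context"
	"errors"
	"fmt"
)

// errNotFound stands in for the gRPC NotFound status containerd
// returns when reference resolution fails.
var errNotFound = errors.New("rpc error: code = NotFound desc = failed to resolve reference")

// imageService is a simplified stand-in for the CRI image service.
type imageService interface {
	PullImage(ctx context.Context, ref string) (string, error)
}

// fakeCRI always fails resolution, mimicking the pulls in this log.
type fakeCRI struct{}

func (fakeCRI) PullImage(_ context.Context, ref string) (string, error) {
	return "", fmt.Errorf("failed to pull and unpack image %q: %w", ref, errNotFound)
}

// ensureImage mirrors the shape of kubelet's flow: a pull failure is
// wrapped as ErrImagePull so the pod worker can back it off.
func ensureImage(ctx context.Context, svc imageService, ref string) error {
	if _, err := svc.PullImage(ctx, ref); err != nil {
		return fmt.Errorf("ErrImagePull: %w", err)
	}
	return nil
}

func main() {
	err := ensureImage(context.Background(), fakeCRI{}, "ghcr.io/flatcar/calico/csi:v3.30.4")
	fmt.Println(err)
}
```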
Nov 7 23:59:21.880485 containerd[1559]: time="2025-11-07T23:59:21.880427734Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 7 23:59:22.104998 containerd[1559]: time="2025-11-07T23:59:22.104925209Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 7 23:59:22.105919 containerd[1559]: time="2025-11-07T23:59:22.105880369Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 7 23:59:22.106023 containerd[1559]: time="2025-11-07T23:59:22.105958769Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 7 23:59:22.106200 kubelet[2728]: E1107 23:59:22.106117 2728 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 7 23:59:22.106200 kubelet[2728]: E1107 23:59:22.106194 2728 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 7 23:59:22.106530 kubelet[2728]: E1107 23:59:22.106311 2728 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sc8qr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} 
start failed in pod csi-node-driver-zwgj8_calico-system(59a419f6-34bd-4030-8aca-5c108260b7ed): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 7 23:59:22.109312 containerd[1559]: time="2025-11-07T23:59:22.109237132Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 7 23:59:22.323575 containerd[1559]: time="2025-11-07T23:59:22.323193877Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 7 23:59:22.324660 containerd[1559]: time="2025-11-07T23:59:22.324618918Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 7 23:59:22.324720 containerd[1559]: time="2025-11-07T23:59:22.324640598Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 7 23:59:22.324867 kubelet[2728]: E1107 23:59:22.324827 2728 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 7 23:59:22.324929 kubelet[2728]: E1107 23:59:22.324882 2728 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 7 23:59:22.325047 kubelet[2728]: E1107 23:59:22.325008 2728 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sc8qr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-zwgj8_calico-system(59a419f6-34bd-4030-8aca-5c108260b7ed): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 7 23:59:22.326517 kubelet[2728]: E1107 23:59:22.326450 2728 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-zwgj8" podUID="59a419f6-34bd-4030-8aca-5c108260b7ed" Nov 7 23:59:22.880849 containerd[1559]: time="2025-11-07T23:59:22.880807656Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 7 23:59:23.095990 containerd[1559]: time="2025-11-07T23:59:23.095935721Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 7 23:59:23.096943 containerd[1559]: time="2025-11-07T23:59:23.096910202Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 7 23:59:23.097005 containerd[1559]: time="2025-11-07T23:59:23.096986202Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 7 23:59:23.097176 kubelet[2728]: E1107 23:59:23.097122 2728 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 7 23:59:23.097267 kubelet[2728]: E1107 23:59:23.097190 2728 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 7 23:59:23.097381 kubelet[2728]: E1107 23:59:23.097339 2728 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s6wkt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-676df99ff5-lml9g_calico-apiserver(976ce995-eb57-4c78-bcd8-fe6b36d7dd8e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to 
resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 7 23:59:23.098831 kubelet[2728]: E1107 23:59:23.098800 2728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-676df99ff5-lml9g" podUID="976ce995-eb57-4c78-bcd8-fe6b36d7dd8e" Nov 7 23:59:23.878681 containerd[1559]: time="2025-11-07T23:59:23.878609720Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 7 23:59:24.098859 containerd[1559]: time="2025-11-07T23:59:24.098805185Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 7 23:59:24.099888 containerd[1559]: time="2025-11-07T23:59:24.099844785Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 7 23:59:24.099960 containerd[1559]: time="2025-11-07T23:59:24.099936865Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 7 23:59:24.100178 kubelet[2728]: E1107 23:59:24.100118 2728 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 7 23:59:24.100859 kubelet[2728]: E1107 23:59:24.100189 2728 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 7 23:59:24.100859 kubelet[2728]: E1107 23:59:24.100315 2728 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9mvz4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-676df99ff5-4k992_calico-apiserver(1892480e-9728-4d8f-8844-1e28e6326f1c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 7 23:59:24.101637 kubelet[2728]: E1107 23:59:24.101599 2728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-676df99ff5-4k992" podUID="1892480e-9728-4d8f-8844-1e28e6326f1c" Nov 7 23:59:25.878951 containerd[1559]: time="2025-11-07T23:59:25.878716641Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 7 23:59:26.081844 containerd[1559]: time="2025-11-07T23:59:26.081416568Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 7 23:59:26.082733 containerd[1559]: time="2025-11-07T23:59:26.082679168Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 7 23:59:26.082787 containerd[1559]: time="2025-11-07T23:59:26.082770049Z" 
level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 7 23:59:26.083012 kubelet[2728]: E1107 23:59:26.082966 2728 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 7 23:59:26.083323 kubelet[2728]: E1107 23:59:26.083025 2728 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 7 23:59:26.083323 kubelet[2728]: E1107 23:59:26.083175 2728 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-js879,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-c265d_calico-system(da1ff15e-acf5-415a-98c1-50e005ef7778): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 7 23:59:26.084635 kubelet[2728]: E1107 23:59:26.084578 2728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-c265d" podUID="da1ff15e-acf5-415a-98c1-50e005ef7778" Nov 7 23:59:26.554035 systemd[1]: Started sshd@22-10.0.0.69:22-10.0.0.1:32812.service - OpenSSH per-connection server daemon (10.0.0.1:32812). Nov 7 23:59:26.623163 sshd[5428]: Accepted publickey for core from 10.0.0.1 port 32812 ssh2: RSA SHA256:FAVExuDlYq3gF2W1zNPEB/OEHrl6bpWJ51XPtNkFj+Y Nov 7 23:59:26.626080 sshd-session[5428]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 7 23:59:26.637606 systemd-logind[1536]: New session 23 of user core. Nov 7 23:59:26.642446 systemd[1]: Started session-23.scope - Session 23 of User core. Nov 7 23:59:26.809355 sshd[5431]: Connection closed by 10.0.0.1 port 32812 Nov 7 23:59:26.809623 sshd-session[5428]: pam_unix(sshd:session): session closed for user core Nov 7 23:59:26.813434 systemd[1]: sshd@22-10.0.0.69:22-10.0.0.1:32812.service: Deactivated successfully. Nov 7 23:59:26.815479 systemd[1]: session-23.scope: Deactivated successfully. Nov 7 23:59:26.817240 systemd-logind[1536]: Session 23 logged out. Waiting for processes to exit. Nov 7 23:59:26.818649 systemd-logind[1536]: Removed session 23. 
Nov 7 23:59:26.879809 containerd[1559]: time="2025-11-07T23:59:26.879769819Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\""
Nov 7 23:59:27.094239 containerd[1559]: time="2025-11-07T23:59:27.094194749Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Nov 7 23:59:27.095150 containerd[1559]: time="2025-11-07T23:59:27.095108310Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found"
Nov 7 23:59:27.095228 containerd[1559]: time="2025-11-07T23:59:27.095170350Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85"
Nov 7 23:59:27.095429 kubelet[2728]: E1107 23:59:27.095381 2728 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Nov 7 23:59:27.095837 kubelet[2728]: E1107 23:59:27.095446 2728 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Nov 7 23:59:27.095837 kubelet[2728]: E1107 23:59:27.095620 2728 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rbdf6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-7c8b6c5fd5-s4pn6_calico-system(a4f3eb8b-111d-48cd-8798-ab0004f10d75): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError"
Nov 7 23:59:27.097182 kubelet[2728]: E1107 23:59:27.097128 2728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7c8b6c5fd5-s4pn6" podUID="a4f3eb8b-111d-48cd-8798-ab0004f10d75"