Nov 4 12:19:45.333306 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Nov 4 12:19:45.333330 kernel: Linux version 6.12.54-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT Tue Nov 4 10:59:33 -00 2025
Nov 4 12:19:45.333338 kernel: KASLR enabled
Nov 4 12:19:45.333345 kernel: efi: EFI v2.7 by EDK II
Nov 4 12:19:45.333351 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdb832018 ACPI 2.0=0xdbfd0018 RNG=0xdbfd0a18 MEMRESERVE=0xdb838218
Nov 4 12:19:45.333357 kernel: random: crng init done
Nov 4 12:19:45.333364 kernel: secureboot: Secure boot disabled
Nov 4 12:19:45.333370 kernel: ACPI: Early table checksum verification disabled
Nov 4 12:19:45.333378 kernel: ACPI: RSDP 0x00000000DBFD0018 000024 (v02 BOCHS )
Nov 4 12:19:45.333384 kernel: ACPI: XSDT 0x00000000DBFD0F18 000064 (v01 BOCHS BXPC 00000001 01000013)
Nov 4 12:19:45.333390 kernel: ACPI: FACP 0x00000000DBFD0B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Nov 4 12:19:45.333396 kernel: ACPI: DSDT 0x00000000DBF0E018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Nov 4 12:19:45.333402 kernel: ACPI: APIC 0x00000000DBFD0C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Nov 4 12:19:45.333409 kernel: ACPI: PPTT 0x00000000DBFD0098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Nov 4 12:19:45.333417 kernel: ACPI: GTDT 0x00000000DBFD0818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Nov 4 12:19:45.333424 kernel: ACPI: MCFG 0x00000000DBFD0A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 4 12:19:45.333430 kernel: ACPI: SPCR 0x00000000DBFD0918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Nov 4 12:19:45.333437 kernel: ACPI: DBG2 0x00000000DBFD0998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Nov 4 12:19:45.333443 kernel: ACPI: IORT 0x00000000DBFD0198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Nov 4 12:19:45.333449 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Nov 4 12:19:45.333456 kernel: ACPI: Use ACPI SPCR as default console: No
Nov 4 12:19:45.333462 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Nov 4 12:19:45.333470 kernel: NODE_DATA(0) allocated [mem 0xdc965a00-0xdc96cfff]
Nov 4 12:19:45.333477 kernel: Zone ranges:
Nov 4 12:19:45.333483 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Nov 4 12:19:45.333490 kernel: DMA32 empty
Nov 4 12:19:45.333496 kernel: Normal empty
Nov 4 12:19:45.333502 kernel: Device empty
Nov 4 12:19:45.333508 kernel: Movable zone start for each node
Nov 4 12:19:45.333515 kernel: Early memory node ranges
Nov 4 12:19:45.333521 kernel: node 0: [mem 0x0000000040000000-0x00000000db81ffff]
Nov 4 12:19:45.333528 kernel: node 0: [mem 0x00000000db820000-0x00000000db82ffff]
Nov 4 12:19:45.333534 kernel: node 0: [mem 0x00000000db830000-0x00000000dc09ffff]
Nov 4 12:19:45.333541 kernel: node 0: [mem 0x00000000dc0a0000-0x00000000dc2dffff]
Nov 4 12:19:45.333548 kernel: node 0: [mem 0x00000000dc2e0000-0x00000000dc36ffff]
Nov 4 12:19:45.333555 kernel: node 0: [mem 0x00000000dc370000-0x00000000dc45ffff]
Nov 4 12:19:45.333561 kernel: node 0: [mem 0x00000000dc460000-0x00000000dc52ffff]
Nov 4 12:19:45.333568 kernel: node 0: [mem 0x00000000dc530000-0x00000000dc5cffff]
Nov 4 12:19:45.333574 kernel: node 0: [mem 0x00000000dc5d0000-0x00000000dce1ffff]
Nov 4 12:19:45.333581 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Nov 4 12:19:45.333591 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Nov 4 12:19:45.333598 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Nov 4 12:19:45.333611 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Nov 4 12:19:45.333619 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Nov 4 12:19:45.333626 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Nov 4 12:19:45.333633 kernel: cma: Reserved 16 MiB at 0x00000000d8000000 on node -1
Nov 4 12:19:45.333640 kernel: psci: probing for conduit method from ACPI.
Nov 4 12:19:45.333647 kernel: psci: PSCIv1.1 detected in firmware.
Nov 4 12:19:45.333655 kernel: psci: Using standard PSCI v0.2 function IDs
Nov 4 12:19:45.333662 kernel: psci: Trusted OS migration not required
Nov 4 12:19:45.333669 kernel: psci: SMC Calling Convention v1.1
Nov 4 12:19:45.333676 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Nov 4 12:19:45.333683 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168
Nov 4 12:19:45.333690 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096
Nov 4 12:19:45.333698 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Nov 4 12:19:45.333713 kernel: Detected PIPT I-cache on CPU0
Nov 4 12:19:45.333720 kernel: CPU features: detected: GIC system register CPU interface
Nov 4 12:19:45.333727 kernel: CPU features: detected: Spectre-v4
Nov 4 12:19:45.333734 kernel: CPU features: detected: Spectre-BHB
Nov 4 12:19:45.333744 kernel: CPU features: kernel page table isolation forced ON by KASLR
Nov 4 12:19:45.333751 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Nov 4 12:19:45.333758 kernel: CPU features: detected: ARM erratum 1418040
Nov 4 12:19:45.333765 kernel: CPU features: detected: SSBS not fully self-synchronizing
Nov 4 12:19:45.333772 kernel: alternatives: applying boot alternatives
Nov 4 12:19:45.333780 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=03857d169a2df39cb9cf428f5c3ec4e76f72bbd8ea41fdc44c442b7e7c3fbee3
Nov 4 12:19:45.333787 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Nov 4 12:19:45.333795 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Nov 4 12:19:45.333802 kernel: Fallback order for Node 0: 0
Nov 4 12:19:45.333809 kernel: Built 1 zonelists, mobility grouping on. Total pages: 643072
Nov 4 12:19:45.333817 kernel: Policy zone: DMA
Nov 4 12:19:45.333824 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Nov 4 12:19:45.333831 kernel: software IO TLB: SWIOTLB bounce buffer size adjusted to 2MB
Nov 4 12:19:45.333838 kernel: software IO TLB: area num 4.
Nov 4 12:19:45.333845 kernel: software IO TLB: SWIOTLB bounce buffer size roundup to 4MB
Nov 4 12:19:45.333852 kernel: software IO TLB: mapped [mem 0x00000000d7c00000-0x00000000d8000000] (4MB)
Nov 4 12:19:45.333859 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Nov 4 12:19:45.333865 kernel: rcu: Preemptible hierarchical RCU implementation.
Nov 4 12:19:45.333873 kernel: rcu: RCU event tracing is enabled.
Nov 4 12:19:45.333880 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Nov 4 12:19:45.333887 kernel: Trampoline variant of Tasks RCU enabled.
Nov 4 12:19:45.333896 kernel: Tracing variant of Tasks RCU enabled.
Nov 4 12:19:45.333903 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Nov 4 12:19:45.333909 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Nov 4 12:19:45.333916 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Nov 4 12:19:45.333923 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Nov 4 12:19:45.333930 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Nov 4 12:19:45.333937 kernel: GICv3: 256 SPIs implemented
Nov 4 12:19:45.333944 kernel: GICv3: 0 Extended SPIs implemented
Nov 4 12:19:45.333951 kernel: Root IRQ handler: gic_handle_irq
Nov 4 12:19:45.333957 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Nov 4 12:19:45.333964 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0
Nov 4 12:19:45.333972 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Nov 4 12:19:45.333979 kernel: ITS [mem 0x08080000-0x0809ffff]
Nov 4 12:19:45.333987 kernel: ITS@0x0000000008080000: allocated 8192 Devices @40110000 (indirect, esz 8, psz 64K, shr 1)
Nov 4 12:19:45.333994 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @40120000 (flat, esz 8, psz 64K, shr 1)
Nov 4 12:19:45.334001 kernel: GICv3: using LPI property table @0x0000000040130000
Nov 4 12:19:45.334008 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040140000
Nov 4 12:19:45.334015 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Nov 4 12:19:45.334022 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Nov 4 12:19:45.334028 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Nov 4 12:19:45.334035 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Nov 4 12:19:45.334043 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Nov 4 12:19:45.334051 kernel: arm-pv: using stolen time PV
Nov 4 12:19:45.334058 kernel: Console: colour dummy device 80x25
Nov 4 12:19:45.334065 kernel: ACPI: Core revision 20240827
Nov 4 12:19:45.334073 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Nov 4 12:19:45.334090 kernel: pid_max: default: 32768 minimum: 301
Nov 4 12:19:45.334099 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Nov 4 12:19:45.334106 kernel: landlock: Up and running.
Nov 4 12:19:45.334113 kernel: SELinux: Initializing.
Nov 4 12:19:45.334122 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Nov 4 12:19:45.334129 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Nov 4 12:19:45.334136 kernel: rcu: Hierarchical SRCU implementation.
Nov 4 12:19:45.334144 kernel: rcu: Max phase no-delay instances is 400.
Nov 4 12:19:45.334151 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Nov 4 12:19:45.334158 kernel: Remapping and enabling EFI services.
Nov 4 12:19:45.334165 kernel: smp: Bringing up secondary CPUs ...
Nov 4 12:19:45.334174 kernel: Detected PIPT I-cache on CPU1
Nov 4 12:19:45.334185 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Nov 4 12:19:45.334194 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040150000
Nov 4 12:19:45.334202 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Nov 4 12:19:45.334209 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Nov 4 12:19:45.334216 kernel: Detected PIPT I-cache on CPU2
Nov 4 12:19:45.334224 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Nov 4 12:19:45.334233 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040160000
Nov 4 12:19:45.334241 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Nov 4 12:19:45.334248 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Nov 4 12:19:45.334256 kernel: Detected PIPT I-cache on CPU3
Nov 4 12:19:45.334280 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Nov 4 12:19:45.334287 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040170000
Nov 4 12:19:45.334295 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Nov 4 12:19:45.334303 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Nov 4 12:19:45.334311 kernel: smp: Brought up 1 node, 4 CPUs
Nov 4 12:19:45.334319 kernel: SMP: Total of 4 processors activated.
Nov 4 12:19:45.334326 kernel: CPU: All CPU(s) started at EL1
Nov 4 12:19:45.334334 kernel: CPU features: detected: 32-bit EL0 Support
Nov 4 12:19:45.334341 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Nov 4 12:19:45.334349 kernel: CPU features: detected: Common not Private translations
Nov 4 12:19:45.334358 kernel: CPU features: detected: CRC32 instructions
Nov 4 12:19:45.334365 kernel: CPU features: detected: Enhanced Virtualization Traps
Nov 4 12:19:45.334373 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Nov 4 12:19:45.334380 kernel: CPU features: detected: LSE atomic instructions
Nov 4 12:19:45.334388 kernel: CPU features: detected: Privileged Access Never
Nov 4 12:19:45.334395 kernel: CPU features: detected: RAS Extension Support
Nov 4 12:19:45.334403 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Nov 4 12:19:45.334410 kernel: alternatives: applying system-wide alternatives
Nov 4 12:19:45.334419 kernel: CPU features: detected: Hardware dirty bit management on CPU0-3
Nov 4 12:19:45.334427 kernel: Memory: 2450400K/2572288K available (11136K kernel code, 2456K rwdata, 9084K rodata, 12992K init, 1038K bss, 99552K reserved, 16384K cma-reserved)
Nov 4 12:19:45.334435 kernel: devtmpfs: initialized
Nov 4 12:19:45.334443 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Nov 4 12:19:45.334450 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Nov 4 12:19:45.334458 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Nov 4 12:19:45.334465 kernel: 0 pages in range for non-PLT usage
Nov 4 12:19:45.334474 kernel: 515056 pages in range for PLT usage
Nov 4 12:19:45.334481 kernel: pinctrl core: initialized pinctrl subsystem
Nov 4 12:19:45.334489 kernel: SMBIOS 3.0.0 present.
Nov 4 12:19:45.334496 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022
Nov 4 12:19:45.334503 kernel: DMI: Memory slots populated: 1/1
Nov 4 12:19:45.334511 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Nov 4 12:19:45.334518 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Nov 4 12:19:45.334527 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Nov 4 12:19:45.334536 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Nov 4 12:19:45.334543 kernel: audit: initializing netlink subsys (disabled)
Nov 4 12:19:45.334551 kernel: audit: type=2000 audit(0.019:1): state=initialized audit_enabled=0 res=1
Nov 4 12:19:45.334558 kernel: thermal_sys: Registered thermal governor 'step_wise'
Nov 4 12:19:45.334566 kernel: cpuidle: using governor menu
Nov 4 12:19:45.334574 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Nov 4 12:19:45.334583 kernel: ASID allocator initialised with 32768 entries
Nov 4 12:19:45.334592 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Nov 4 12:19:45.334602 kernel: Serial: AMBA PL011 UART driver
Nov 4 12:19:45.334610 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Nov 4 12:19:45.334618 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Nov 4 12:19:45.334625 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Nov 4 12:19:45.334633 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Nov 4 12:19:45.334640 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Nov 4 12:19:45.334649 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Nov 4 12:19:45.334657 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Nov 4 12:19:45.334664 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Nov 4 12:19:45.334672 kernel: ACPI: Added _OSI(Module Device)
Nov 4 12:19:45.334679 kernel: ACPI: Added _OSI(Processor Device)
Nov 4 12:19:45.334687 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Nov 4 12:19:45.334694 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Nov 4 12:19:45.334708 kernel: ACPI: Interpreter enabled
Nov 4 12:19:45.334715 kernel: ACPI: Using GIC for interrupt routing
Nov 4 12:19:45.334723 kernel: ACPI: MCFG table detected, 1 entries
Nov 4 12:19:45.334730 kernel: ACPI: CPU0 has been hot-added
Nov 4 12:19:45.334738 kernel: ACPI: CPU1 has been hot-added
Nov 4 12:19:45.334746 kernel: ACPI: CPU2 has been hot-added
Nov 4 12:19:45.334753 kernel: ACPI: CPU3 has been hot-added
Nov 4 12:19:45.334761 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Nov 4 12:19:45.334771 kernel: printk: legacy console [ttyAMA0] enabled
Nov 4 12:19:45.334779 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Nov 4 12:19:45.334932 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Nov 4 12:19:45.335020 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Nov 4 12:19:45.335116 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Nov 4 12:19:45.335207 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Nov 4 12:19:45.335286 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Nov 4 12:19:45.335296 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Nov 4 12:19:45.335304 kernel: PCI host bridge to bus 0000:00
Nov 4 12:19:45.335387 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Nov 4 12:19:45.335459 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Nov 4 12:19:45.335534 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Nov 4 12:19:45.335604 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Nov 4 12:19:45.335705 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 conventional PCI endpoint
Nov 4 12:19:45.335801 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Nov 4 12:19:45.335889 kernel: pci 0000:00:01.0: BAR 0 [io 0x0000-0x001f]
Nov 4 12:19:45.335969 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]
Nov 4 12:19:45.336050 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]
Nov 4 12:19:45.336152 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]: assigned
Nov 4 12:19:45.336237 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]: assigned
Nov 4 12:19:45.336330 kernel: pci 0000:00:01.0: BAR 0 [io 0x1000-0x101f]: assigned
Nov 4 12:19:45.336418 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Nov 4 12:19:45.336491 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Nov 4 12:19:45.336566 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Nov 4 12:19:45.336576 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Nov 4 12:19:45.336584 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Nov 4 12:19:45.336591 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Nov 4 12:19:45.336599 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Nov 4 12:19:45.336606 kernel: iommu: Default domain type: Translated Nov 4 12:19:45.336616 kernel: iommu: DMA domain TLB invalidation policy: strict mode Nov 4 12:19:45.336624 kernel: efivars: Registered efivars operations Nov 4 12:19:45.336631 kernel: vgaarb: loaded Nov 4 12:19:45.336638 kernel: clocksource: Switched to clocksource arch_sys_counter Nov 4 12:19:45.336646 kernel: VFS: Disk quotas dquot_6.6.0 Nov 4 12:19:45.336653 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Nov 4 12:19:45.336661 kernel: pnp: PnP ACPI init Nov 4 12:19:45.336759 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved Nov 4 12:19:45.336772 kernel: pnp: PnP ACPI: found 1 devices Nov 4 12:19:45.336780 kernel: NET: Registered PF_INET protocol family Nov 4 12:19:45.336788 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Nov 4 12:19:45.336796 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Nov 4 12:19:45.336804 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Nov 4 12:19:45.336812 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Nov 4 12:19:45.336821 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Nov 4 12:19:45.336828 kernel: TCP: Hash tables configured (established 32768 bind 32768) Nov 4 12:19:45.336836 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Nov 4 12:19:45.336844 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Nov 4 12:19:45.336852 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Nov 4 
12:19:45.336859 kernel: PCI: CLS 0 bytes, default 64 Nov 4 12:19:45.336867 kernel: kvm [1]: HYP mode not available Nov 4 12:19:45.336876 kernel: Initialise system trusted keyrings Nov 4 12:19:45.336884 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Nov 4 12:19:45.336891 kernel: Key type asymmetric registered Nov 4 12:19:45.336899 kernel: Asymmetric key parser 'x509' registered Nov 4 12:19:45.336906 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Nov 4 12:19:45.336914 kernel: io scheduler mq-deadline registered Nov 4 12:19:45.336921 kernel: io scheduler kyber registered Nov 4 12:19:45.336932 kernel: io scheduler bfq registered Nov 4 12:19:45.336940 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Nov 4 12:19:45.336953 kernel: ACPI: button: Power Button [PWRB] Nov 4 12:19:45.336962 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Nov 4 12:19:45.337044 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007) Nov 4 12:19:45.337054 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Nov 4 12:19:45.337062 kernel: thunder_xcv, ver 1.0 Nov 4 12:19:45.337071 kernel: thunder_bgx, ver 1.0 Nov 4 12:19:45.337078 kernel: nicpf, ver 1.0 Nov 4 12:19:45.337095 kernel: nicvf, ver 1.0 Nov 4 12:19:45.337190 kernel: rtc-efi rtc-efi.0: registered as rtc0 Nov 4 12:19:45.337267 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-11-04T12:19:44 UTC (1762258784) Nov 4 12:19:45.337277 kernel: hid: raw HID events driver (C) Jiri Kosina Nov 4 12:19:45.337285 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available Nov 4 12:19:45.337296 kernel: watchdog: NMI not fully supported Nov 4 12:19:45.337304 kernel: watchdog: Hard watchdog permanently disabled Nov 4 12:19:45.337311 kernel: NET: Registered PF_INET6 protocol family Nov 4 12:19:45.337318 kernel: Segment Routing with IPv6 Nov 4 12:19:45.337326 kernel: In-situ OAM (IOAM) with IPv6 Nov 4 
12:19:45.337333 kernel: NET: Registered PF_PACKET protocol family Nov 4 12:19:45.337341 kernel: Key type dns_resolver registered Nov 4 12:19:45.337349 kernel: registered taskstats version 1 Nov 4 12:19:45.337357 kernel: Loading compiled-in X.509 certificates Nov 4 12:19:45.337365 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.54-flatcar: 663f57c0d83c90dfacd5aa64fd10e0e7f59b6b15' Nov 4 12:19:45.337372 kernel: Demotion targets for Node 0: null Nov 4 12:19:45.337380 kernel: Key type .fscrypt registered Nov 4 12:19:45.337387 kernel: Key type fscrypt-provisioning registered Nov 4 12:19:45.337395 kernel: ima: No TPM chip found, activating TPM-bypass! Nov 4 12:19:45.337403 kernel: ima: Allocated hash algorithm: sha1 Nov 4 12:19:45.337411 kernel: ima: No architecture policies found Nov 4 12:19:45.337419 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Nov 4 12:19:45.337426 kernel: clk: Disabling unused clocks Nov 4 12:19:45.337440 kernel: PM: genpd: Disabling unused power domains Nov 4 12:19:45.337448 kernel: Freeing unused kernel memory: 12992K Nov 4 12:19:45.337455 kernel: Run /init as init process Nov 4 12:19:45.337465 kernel: with arguments: Nov 4 12:19:45.337472 kernel: /init Nov 4 12:19:45.337480 kernel: with environment: Nov 4 12:19:45.337487 kernel: HOME=/ Nov 4 12:19:45.337495 kernel: TERM=linux Nov 4 12:19:45.337594 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues Nov 4 12:19:45.337674 kernel: virtio_blk virtio1: [vda] 27000832 512-byte logical blocks (13.8 GB/12.9 GiB) Nov 4 12:19:45.337686 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Nov 4 12:19:45.337694 kernel: GPT:16515071 != 27000831 Nov 4 12:19:45.337708 kernel: GPT:Alternate GPT header not at the end of the disk. Nov 4 12:19:45.337715 kernel: GPT:16515071 != 27000831 Nov 4 12:19:45.337723 kernel: GPT: Use GNU Parted to correct GPT errors. 
Nov 4 12:19:45.337730 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Nov 4 12:19:45.337740 kernel: Invalid ELF header magic: != \u007fELF
Nov 4 12:19:45.337748 kernel: Invalid ELF header magic: != \u007fELF
Nov 4 12:19:45.337756 kernel: SCSI subsystem initialized
Nov 4 12:19:45.337764 kernel: Invalid ELF header magic: != \u007fELF
Nov 4 12:19:45.337771 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Nov 4 12:19:45.337778 kernel: device-mapper: uevent: version 1.0.3
Nov 4 12:19:45.337786 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Nov 4 12:19:45.337795 kernel: device-mapper: verity: sha256 using shash "sha256-ce"
Nov 4 12:19:45.337803 kernel: Invalid ELF header magic: != \u007fELF
Nov 4 12:19:45.337810 kernel: Invalid ELF header magic: != \u007fELF
Nov 4 12:19:45.337817 kernel: Invalid ELF header magic: != \u007fELF
Nov 4 12:19:45.337824 kernel: raid6: neonx8 gen() 15127 MB/s
Nov 4 12:19:45.337832 kernel: raid6: neonx4 gen() 14854 MB/s
Nov 4 12:19:45.337839 kernel: raid6: neonx2 gen() 12874 MB/s
Nov 4 12:19:45.337847 kernel: raid6: neonx1 gen() 10051 MB/s
Nov 4 12:19:45.337855 kernel: raid6: int64x8 gen() 6889 MB/s
Nov 4 12:19:45.337863 kernel: raid6: int64x4 gen() 7349 MB/s
Nov 4 12:19:45.337870 kernel: raid6: int64x2 gen() 6106 MB/s
Nov 4 12:19:45.337878 kernel: raid6: int64x1 gen() 5018 MB/s
Nov 4 12:19:45.337885 kernel: raid6: using algorithm neonx8 gen() 15127 MB/s
Nov 4 12:19:45.337893 kernel: raid6: .... xor() 11815 MB/s, rmw enabled
Nov 4 12:19:45.337901 kernel: raid6: using neon recovery algorithm
Nov 4 12:19:45.337910 kernel: Invalid ELF header magic: != \u007fELF
Nov 4 12:19:45.337917 kernel: Invalid ELF header magic: != \u007fELF
Nov 4 12:19:45.337924 kernel: Invalid ELF header magic: != \u007fELF
Nov 4 12:19:45.337932 kernel: Invalid ELF header magic: != \u007fELF
Nov 4 12:19:45.337939 kernel: xor: measuring software checksum speed
Nov 4 12:19:45.337947 kernel: 8regs : 21636 MB/sec
Nov 4 12:19:45.337954 kernel: 32regs : 21699 MB/sec
Nov 4 12:19:45.337962 kernel: arm64_neon : 25972 MB/sec
Nov 4 12:19:45.337970 kernel: xor: using function: arm64_neon (25972 MB/sec)
Nov 4 12:19:45.337979 kernel: Invalid ELF header magic: != \u007fELF
Nov 4 12:19:45.337986 kernel: Btrfs loaded, zoned=no, fsverity=no
Nov 4 12:19:45.337994 kernel: BTRFS: device fsid a0f53245-1da9-4f46-990c-2f6a958947c8 devid 1 transid 38 /dev/mapper/usr (253:0) scanned by mount (205)
Nov 4 12:19:45.338002 kernel: BTRFS info (device dm-0): first mount of filesystem a0f53245-1da9-4f46-990c-2f6a958947c8
Nov 4 12:19:45.338010 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Nov 4 12:19:45.338018 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Nov 4 12:19:45.338025 kernel: BTRFS info (device dm-0): enabling free space tree
Nov 4 12:19:45.338034 kernel: Invalid ELF header magic: != \u007fELF
Nov 4 12:19:45.338042 kernel: loop: module loaded
Nov 4 12:19:45.338050 kernel: loop0: detected capacity change from 0 to 91464
Nov 4 12:19:45.338058 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Nov 4 12:19:45.338066 systemd[1]: Successfully made /usr/ read-only.
Nov 4 12:19:45.338077 systemd[1]: systemd 257.7 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Nov 4 12:19:45.338096 systemd[1]: Detected virtualization kvm.
Nov 4 12:19:45.338104 systemd[1]: Detected architecture arm64.
Nov 4 12:19:45.338112 systemd[1]: Running in initrd.
Nov 4 12:19:45.338120 systemd[1]: No hostname configured, using default hostname.
Nov 4 12:19:45.338129 systemd[1]: Hostname set to .
Nov 4 12:19:45.338137 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID.
Nov 4 12:19:45.338147 systemd[1]: Queued start job for default target initrd.target.
Nov 4 12:19:45.338155 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr.
Nov 4 12:19:45.338163 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 4 12:19:45.338172 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 4 12:19:45.338180 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Nov 4 12:19:45.338188 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Nov 4 12:19:45.338198 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Nov 4 12:19:45.338212 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Nov 4 12:19:45.338222 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 4 12:19:45.338230 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Nov 4 12:19:45.338239 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Nov 4 12:19:45.338250 systemd[1]: Reached target paths.target - Path Units.
Nov 4 12:19:45.338258 systemd[1]: Reached target slices.target - Slice Units.
Nov 4 12:19:45.338267 systemd[1]: Reached target swap.target - Swaps.
Nov 4 12:19:45.338275 systemd[1]: Reached target timers.target - Timer Units.
Nov 4 12:19:45.338284 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Nov 4 12:19:45.338292 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Nov 4 12:19:45.338300 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Nov 4 12:19:45.338310 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Nov 4 12:19:45.338318 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Nov 4 12:19:45.338327 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Nov 4 12:19:45.338336 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 4 12:19:45.338344 systemd[1]: Reached target sockets.target - Socket Units.
Nov 4 12:19:45.338353 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Nov 4 12:19:45.338362 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Nov 4 12:19:45.338371 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Nov 4 12:19:45.338379 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Nov 4 12:19:45.338388 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Nov 4 12:19:45.338397 systemd[1]: Starting systemd-fsck-usr.service...
Nov 4 12:19:45.338405 systemd[1]: Starting systemd-journald.service - Journal Service...
Nov 4 12:19:45.338414 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Nov 4 12:19:45.338423 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 4 12:19:45.338432 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Nov 4 12:19:45.338441 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 4 12:19:45.338449 systemd[1]: Finished systemd-fsck-usr.service.
Nov 4 12:19:45.338459 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Nov 4 12:19:45.338485 systemd-journald[344]: Collecting audit messages is disabled.
Nov 4 12:19:45.338504 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Nov 4 12:19:45.338514 kernel: Bridge firewalling registered
Nov 4 12:19:45.338523 systemd-journald[344]: Journal started
Nov 4 12:19:45.338540 systemd-journald[344]: Runtime Journal (/run/log/journal/660623b0c7d14607ad00bd60f243731c) is 6M, max 48.5M, 42.4M free.
Nov 4 12:19:45.336902 systemd-modules-load[345]: Inserted module 'br_netfilter'
Nov 4 12:19:45.346961 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Nov 4 12:19:45.349323 systemd[1]: Started systemd-journald.service - Journal Service.
Nov 4 12:19:45.349902 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 4 12:19:45.353166 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 4 12:19:45.354784 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Nov 4 12:19:45.356500 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Nov 4 12:19:45.365913 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Nov 4 12:19:45.368112 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Nov 4 12:19:45.373538 systemd-tmpfiles[364]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Nov 4 12:19:45.377740 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 4 12:19:45.379846 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 4 12:19:45.382364 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 4 12:19:45.385143 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Nov 4 12:19:45.388104 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Nov 4 12:19:45.390106 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Nov 4 12:19:45.414994 dracut-cmdline[387]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=03857d169a2df39cb9cf428f5c3ec4e76f72bbd8ea41fdc44c442b7e7c3fbee3
Nov 4 12:19:45.436917 systemd-resolved[388]: Positive Trust Anchors:
Nov 4 12:19:45.436933 systemd-resolved[388]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Nov 4 12:19:45.436936 systemd-resolved[388]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16
Nov 4 12:19:45.436967 systemd-resolved[388]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Nov 4 12:19:45.459514 systemd-resolved[388]: Defaulting to hostname 'linux'.
Nov 4 12:19:45.460719 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Nov 4 12:19:45.461771 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Nov 4 12:19:45.493118 kernel: Loading iSCSI transport class v2.0-870.
Nov 4 12:19:45.502120 kernel: iscsi: registered transport (tcp)
Nov 4 12:19:45.514269 kernel: iscsi: registered transport (qla4xxx)
Nov 4 12:19:45.514297 kernel: QLogic iSCSI HBA Driver
Nov 4 12:19:45.534320 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Nov 4 12:19:45.553942 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Nov 4 12:19:45.555891 systemd[1]: Reached target network-pre.target - Preparation for Network.
Nov 4 12:19:45.600135 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Nov 4 12:19:45.602232 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Nov 4 12:19:45.603577 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Nov 4 12:19:45.637766 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Nov 4 12:19:45.639947 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 4 12:19:45.665425 systemd-udevd[627]: Using default interface naming scheme 'v257'.
Nov 4 12:19:45.673000 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 4 12:19:45.675481 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Nov 4 12:19:45.701221 dracut-pre-trigger[690]: rd.md=0: removing MD RAID activation
Nov 4 12:19:45.703921 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Nov 4 12:19:45.708449 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Nov 4 12:19:45.722754 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Nov 4 12:19:45.727239 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Nov 4 12:19:45.750643 systemd-networkd[746]: lo: Link UP
Nov 4 12:19:45.750650 systemd-networkd[746]: lo: Gained carrier
Nov 4 12:19:45.751099 systemd[1]: Started systemd-networkd.service - Network Configuration.
Nov 4 12:19:45.751954 systemd[1]: Reached target network.target - Network.
Nov 4 12:19:45.785130 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 4 12:19:45.787402 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Nov 4 12:19:45.824529 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Nov 4 12:19:45.834747 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Nov 4 12:19:45.841321 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Nov 4 12:19:45.848400 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Nov 4 12:19:45.855619 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Nov 4 12:19:45.856878 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Nov 4 12:19:45.858989 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 4 12:19:45.861512 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Nov 4 12:19:45.863957 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Nov 4 12:19:45.866128 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Nov 4 12:19:45.887362 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 4 12:19:45.887496 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 4 12:19:45.890448 systemd-networkd[746]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Nov 4 12:19:45.890459 systemd-networkd[746]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Nov 4 12:19:45.890608 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Nov 4 12:19:45.896773 disk-uuid[806]: Primary Header is updated.
Nov 4 12:19:45.896773 disk-uuid[806]: Secondary Entries is updated.
Nov 4 12:19:45.896773 disk-uuid[806]: Secondary Header is updated.
Nov 4 12:19:45.891445 systemd-networkd[746]: eth0: Link UP
Nov 4 12:19:45.892336 systemd-networkd[746]: eth0: Gained carrier
Nov 4 12:19:45.892350 systemd-networkd[746]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Nov 4 12:19:45.895097 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 4 12:19:45.896213 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Nov 4 12:19:45.909141 systemd-networkd[746]: eth0: DHCPv4 address 10.0.0.73/16, gateway 10.0.0.1 acquired from 10.0.0.1
Nov 4 12:19:45.931201 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 4 12:19:46.169456 systemd-resolved[388]: Detected conflict on linux IN A 10.0.0.73
Nov 4 12:19:46.169471 systemd-resolved[388]: Hostname conflict, changing published hostname from 'linux' to 'linux5'.
Nov 4 12:19:46.930225 disk-uuid[813]: Warning: The kernel is still using the old partition table.
Nov 4 12:19:46.930225 disk-uuid[813]: The new table will be used at the next reboot or after you
Nov 4 12:19:46.930225 disk-uuid[813]: run partprobe(8) or kpartx(8)
Nov 4 12:19:46.930225 disk-uuid[813]: The operation has completed successfully.
Nov 4 12:19:46.935729 systemd[1]: disk-uuid.service: Deactivated successfully.
Nov 4 12:19:46.935837 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Nov 4 12:19:46.938008 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Nov 4 12:19:46.969961 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (831)
Nov 4 12:19:46.970001 kernel: BTRFS info (device vda6): first mount of filesystem 316b847e-0ec6-41c0-b528-97fbff93a67a
Nov 4 12:19:46.970013 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Nov 4 12:19:46.973099 kernel: BTRFS info (device vda6): turning on async discard
Nov 4 12:19:46.973130 kernel: BTRFS info (device vda6): enabling free space tree
Nov 4 12:19:46.978107 kernel: BTRFS info (device vda6): last unmount of filesystem 316b847e-0ec6-41c0-b528-97fbff93a67a
Nov 4 12:19:46.980168 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Nov 4 12:19:46.982130 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Nov 4 12:19:47.084027 ignition[850]: Ignition 2.22.0
Nov 4 12:19:47.084728 ignition[850]: Stage: fetch-offline
Nov 4 12:19:47.084770 ignition[850]: no configs at "/usr/lib/ignition/base.d"
Nov 4 12:19:47.084785 ignition[850]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 4 12:19:47.084865 ignition[850]: parsed url from cmdline: ""
Nov 4 12:19:47.084868 ignition[850]: no config URL provided
Nov 4 12:19:47.084872 ignition[850]: reading system config file "/usr/lib/ignition/user.ign"
Nov 4 12:19:47.084880 ignition[850]: no config at "/usr/lib/ignition/user.ign"
Nov 4 12:19:47.084917 ignition[850]: op(1): [started] loading QEMU firmware config module
Nov 4 12:19:47.084921 ignition[850]: op(1): executing: "modprobe" "qemu_fw_cfg"
Nov 4 12:19:47.090859 ignition[850]: op(1): [finished] loading QEMU firmware config module
Nov 4 12:19:47.133919 ignition[850]: parsing config with SHA512: 45f55790a403bc9c8c7145b366b30a8c04bbe617868476135b5f9f1d55e07be8bdb7631578e24cc49b80c0ad038a10309ecaa302ba47edaf79aa3cbd4b372eef
Nov 4 12:19:47.139846 unknown[850]: fetched base config from "system"
Nov 4 12:19:47.139858 unknown[850]: fetched user config from "qemu"
Nov 4 12:19:47.140312 ignition[850]: fetch-offline: fetch-offline passed
Nov 4 12:19:47.140375 ignition[850]: Ignition finished successfully
Nov 4 12:19:47.142149 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Nov 4 12:19:47.143640 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Nov 4 12:19:47.144457 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Nov 4 12:19:47.174002 ignition[864]: Ignition 2.22.0
Nov 4 12:19:47.174019 ignition[864]: Stage: kargs
Nov 4 12:19:47.174170 ignition[864]: no configs at "/usr/lib/ignition/base.d"
Nov 4 12:19:47.174178 ignition[864]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 4 12:19:47.174912 ignition[864]: kargs: kargs passed
Nov 4 12:19:47.174956 ignition[864]: Ignition finished successfully
Nov 4 12:19:47.179768 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Nov 4 12:19:47.181570 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Nov 4 12:19:47.210196 ignition[872]: Ignition 2.22.0
Nov 4 12:19:47.210208 ignition[872]: Stage: disks
Nov 4 12:19:47.210346 ignition[872]: no configs at "/usr/lib/ignition/base.d"
Nov 4 12:19:47.210354 ignition[872]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 4 12:19:47.213315 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Nov 4 12:19:47.211137 ignition[872]: disks: disks passed
Nov 4 12:19:47.215101 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Nov 4 12:19:47.211178 ignition[872]: Ignition finished successfully
Nov 4 12:19:47.216054 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Nov 4 12:19:47.217461 systemd[1]: Reached target local-fs.target - Local File Systems.
Nov 4 12:19:47.218826 systemd[1]: Reached target sysinit.target - System Initialization.
Nov 4 12:19:47.220066 systemd[1]: Reached target basic.target - Basic System.
Nov 4 12:19:47.222750 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Nov 4 12:19:47.267760 systemd-fsck[882]: ROOT: clean, 15/456736 files, 38230/456704 blocks
Nov 4 12:19:47.272400 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Nov 4 12:19:47.274426 systemd[1]: Mounting sysroot.mount - /sysroot...
Nov 4 12:19:47.342095 kernel: EXT4-fs (vda9): mounted filesystem 9b363c44-0d55-4856-b006-3e673304a340 r/w with ordered data mode. Quota mode: none.
Nov 4 12:19:47.342609 systemd[1]: Mounted sysroot.mount - /sysroot.
Nov 4 12:19:47.343775 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Nov 4 12:19:47.346655 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Nov 4 12:19:47.348768 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Nov 4 12:19:47.349670 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Nov 4 12:19:47.349713 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Nov 4 12:19:47.349738 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Nov 4 12:19:47.359494 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Nov 4 12:19:47.361372 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Nov 4 12:19:47.365150 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (891)
Nov 4 12:19:47.365176 kernel: BTRFS info (device vda6): first mount of filesystem 316b847e-0ec6-41c0-b528-97fbff93a67a
Nov 4 12:19:47.367123 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Nov 4 12:19:47.369693 kernel: BTRFS info (device vda6): turning on async discard
Nov 4 12:19:47.369739 kernel: BTRFS info (device vda6): enabling free space tree
Nov 4 12:19:47.370642 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Nov 4 12:19:47.400867 initrd-setup-root[915]: cut: /sysroot/etc/passwd: No such file or directory
Nov 4 12:19:47.404398 initrd-setup-root[922]: cut: /sysroot/etc/group: No such file or directory
Nov 4 12:19:47.408896 initrd-setup-root[929]: cut: /sysroot/etc/shadow: No such file or directory
Nov 4 12:19:47.412597 initrd-setup-root[936]: cut: /sysroot/etc/gshadow: No such file or directory
Nov 4 12:19:47.478899 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Nov 4 12:19:47.482009 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Nov 4 12:19:47.483670 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Nov 4 12:19:47.503753 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Nov 4 12:19:47.506258 kernel: BTRFS info (device vda6): last unmount of filesystem 316b847e-0ec6-41c0-b528-97fbff93a67a
Nov 4 12:19:47.520346 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Nov 4 12:19:47.535297 ignition[1005]: INFO : Ignition 2.22.0
Nov 4 12:19:47.535297 ignition[1005]: INFO : Stage: mount
Nov 4 12:19:47.536741 ignition[1005]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 4 12:19:47.536741 ignition[1005]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 4 12:19:47.536741 ignition[1005]: INFO : mount: mount passed
Nov 4 12:19:47.536741 ignition[1005]: INFO : Ignition finished successfully
Nov 4 12:19:47.538837 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Nov 4 12:19:47.540822 systemd[1]: Starting ignition-files.service - Ignition (files)...
Nov 4 12:19:47.676404 systemd-networkd[746]: eth0: Gained IPv6LL
Nov 4 12:19:48.344316 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Nov 4 12:19:48.365515 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1017)
Nov 4 12:19:48.365562 kernel: BTRFS info (device vda6): first mount of filesystem 316b847e-0ec6-41c0-b528-97fbff93a67a
Nov 4 12:19:48.365574 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Nov 4 12:19:48.368542 kernel: BTRFS info (device vda6): turning on async discard
Nov 4 12:19:48.368568 kernel: BTRFS info (device vda6): enabling free space tree
Nov 4 12:19:48.369894 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Nov 4 12:19:48.400846 ignition[1034]: INFO : Ignition 2.22.0
Nov 4 12:19:48.400846 ignition[1034]: INFO : Stage: files
Nov 4 12:19:48.402249 ignition[1034]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 4 12:19:48.402249 ignition[1034]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 4 12:19:48.402249 ignition[1034]: DEBUG : files: compiled without relabeling support, skipping
Nov 4 12:19:48.405071 ignition[1034]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Nov 4 12:19:48.405071 ignition[1034]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Nov 4 12:19:48.407406 ignition[1034]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Nov 4 12:19:48.407406 ignition[1034]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Nov 4 12:19:48.407406 ignition[1034]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Nov 4 12:19:48.407406 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Nov 4 12:19:48.407406 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1
Nov 4 12:19:48.405710 unknown[1034]: wrote ssh authorized keys file for user: core
Nov 4 12:19:48.490796 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Nov 4 12:19:48.621242 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Nov 4 12:19:48.621242 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Nov 4 12:19:48.624916 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Nov 4 12:19:48.624916 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Nov 4 12:19:48.624916 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Nov 4 12:19:48.624916 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Nov 4 12:19:48.624916 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Nov 4 12:19:48.624916 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Nov 4 12:19:48.624916 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Nov 4 12:19:48.636226 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Nov 4 12:19:48.636226 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Nov 4 12:19:48.636226 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw"
Nov 4 12:19:48.636226 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw"
Nov 4 12:19:48.636226 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw"
Nov 4 12:19:48.636226 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.1-arm64.raw: attempt #1
Nov 4 12:19:49.037265 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Nov 4 12:19:49.354236 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw"
Nov 4 12:19:49.354236 ignition[1034]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Nov 4 12:19:49.357340 ignition[1034]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Nov 4 12:19:49.358937 ignition[1034]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Nov 4 12:19:49.358937 ignition[1034]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Nov 4 12:19:49.358937 ignition[1034]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Nov 4 12:19:49.358937 ignition[1034]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Nov 4 12:19:49.358937 ignition[1034]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Nov 4 12:19:49.358937 ignition[1034]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Nov 4 12:19:49.358937 ignition[1034]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Nov 4 12:19:49.374033 ignition[1034]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Nov 4 12:19:49.377700 ignition[1034]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Nov 4 12:19:49.378932 ignition[1034]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Nov 4 12:19:49.378932 ignition[1034]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Nov 4 12:19:49.378932 ignition[1034]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Nov 4 12:19:49.378932 ignition[1034]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Nov 4 12:19:49.378932 ignition[1034]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Nov 4 12:19:49.378932 ignition[1034]: INFO : files: files passed
Nov 4 12:19:49.378932 ignition[1034]: INFO : Ignition finished successfully
Nov 4 12:19:49.380561 systemd[1]: Finished ignition-files.service - Ignition (files).
Nov 4 12:19:49.385170 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Nov 4 12:19:49.387148 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Nov 4 12:19:49.401596 systemd[1]: ignition-quench.service: Deactivated successfully.
Nov 4 12:19:49.401711 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Nov 4 12:19:49.404839 initrd-setup-root-after-ignition[1066]: grep: /sysroot/oem/oem-release: No such file or directory
Nov 4 12:19:49.406971 initrd-setup-root-after-ignition[1068]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Nov 4 12:19:49.406971 initrd-setup-root-after-ignition[1068]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Nov 4 12:19:49.409800 initrd-setup-root-after-ignition[1072]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Nov 4 12:19:49.409810 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Nov 4 12:19:49.411391 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Nov 4 12:19:49.413957 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Nov 4 12:19:49.459590 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Nov 4 12:19:49.459739 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Nov 4 12:19:49.461624 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Nov 4 12:19:49.462422 systemd[1]: Reached target initrd.target - Initrd Default Target.
Nov 4 12:19:49.464295 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Nov 4 12:19:49.465134 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Nov 4 12:19:49.485683 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Nov 4 12:19:49.489223 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Nov 4 12:19:49.510397 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr.
Nov 4 12:19:49.510600 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Nov 4 12:19:49.512220 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 4 12:19:49.513794 systemd[1]: Stopped target timers.target - Timer Units.
Nov 4 12:19:49.515196 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Nov 4 12:19:49.515325 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Nov 4 12:19:49.517411 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Nov 4 12:19:49.519251 systemd[1]: Stopped target basic.target - Basic System.
Nov 4 12:19:49.520536 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Nov 4 12:19:49.522153 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Nov 4 12:19:49.523762 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Nov 4 12:19:49.525559 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Nov 4 12:19:49.527245 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Nov 4 12:19:49.528969 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Nov 4 12:19:49.530775 systemd[1]: Stopped target sysinit.target - System Initialization.
Nov 4 12:19:49.532457 systemd[1]: Stopped target local-fs.target - Local File Systems.
Nov 4 12:19:49.533903 systemd[1]: Stopped target swap.target - Swaps.
Nov 4 12:19:49.535387 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Nov 4 12:19:49.535523 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Nov 4 12:19:49.537454 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Nov 4 12:19:49.539106 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 4 12:19:49.540794 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Nov 4 12:19:49.544123 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 4 12:19:49.545149 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Nov 4 12:19:49.545277 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Nov 4 12:19:49.547702 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Nov 4 12:19:49.547819 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Nov 4 12:19:49.549517 systemd[1]: Stopped target paths.target - Path Units.
Nov 4 12:19:49.550811 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Nov 4 12:19:49.556123 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 4 12:19:49.557210 systemd[1]: Stopped target slices.target - Slice Units.
Nov 4 12:19:49.558870 systemd[1]: Stopped target sockets.target - Socket Units.
Nov 4 12:19:49.560331 systemd[1]: iscsid.socket: Deactivated successfully.
Nov 4 12:19:49.560421 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Nov 4 12:19:49.561639 systemd[1]: iscsiuio.socket: Deactivated successfully.
Nov 4 12:19:49.561725 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Nov 4 12:19:49.563101 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Nov 4 12:19:49.563218 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Nov 4 12:19:49.564774 systemd[1]: ignition-files.service: Deactivated successfully.
Nov 4 12:19:49.564876 systemd[1]: Stopped ignition-files.service - Ignition (files).
Nov 4 12:19:49.567004 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Nov 4 12:19:49.569250 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Nov 4 12:19:49.570101 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Nov 4 12:19:49.570233 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 4 12:19:49.572160 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Nov 4 12:19:49.572268 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 4 12:19:49.573743 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Nov 4 12:19:49.573839 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Nov 4 12:19:49.578901 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Nov 4 12:19:49.585143 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Nov 4 12:19:49.595548 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Nov 4 12:19:49.602887 ignition[1094]: INFO : Ignition 2.22.0
Nov 4 12:19:49.602887 ignition[1094]: INFO : Stage: umount
Nov 4 12:19:49.602887 ignition[1094]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 4 12:19:49.602887 ignition[1094]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 4 12:19:49.602887 ignition[1094]: INFO : umount: umount passed
Nov 4 12:19:49.602887 ignition[1094]: INFO : Ignition finished successfully
Nov 4 12:19:49.604507 systemd[1]: ignition-mount.service: Deactivated successfully.
Nov 4 12:19:49.604621 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Nov 4 12:19:49.606163 systemd[1]: Stopped target network.target - Network.
Nov 4 12:19:49.607503 systemd[1]: ignition-disks.service: Deactivated successfully.
Nov 4 12:19:49.607559 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Nov 4 12:19:49.608904 systemd[1]: ignition-kargs.service: Deactivated successfully.
Nov 4 12:19:49.608954 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Nov 4 12:19:49.610312 systemd[1]: ignition-setup.service: Deactivated successfully.
Nov 4 12:19:49.610358 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Nov 4 12:19:49.611809 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Nov 4 12:19:49.611850 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Nov 4 12:19:49.613420 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Nov 4 12:19:49.614804 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Nov 4 12:19:49.625017 systemd[1]: systemd-resolved.service: Deactivated successfully.
Nov 4 12:19:49.625163 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Nov 4 12:19:49.629075 systemd[1]: systemd-networkd.service: Deactivated successfully.
Nov 4 12:19:49.629198 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Nov 4 12:19:49.636128 systemd[1]: sysroot-boot.service: Deactivated successfully.
Nov 4 12:19:49.636997 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Nov 4 12:19:49.638500 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Nov 4 12:19:49.639549 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Nov 4 12:19:49.639593 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Nov 4 12:19:49.640979 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Nov 4 12:19:49.641027 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Nov 4 12:19:49.643696 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Nov 4 12:19:49.644539 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Nov 4 12:19:49.644598 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Nov 4 12:19:49.646263 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Nov 4 12:19:49.646309 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Nov 4 12:19:49.647696 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Nov 4 12:19:49.647734 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Nov 4 12:19:49.649338 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 4 12:19:49.658338 systemd[1]: systemd-udevd.service: Deactivated successfully. Nov 4 12:19:49.659389 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 4 12:19:49.661396 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Nov 4 12:19:49.661449 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Nov 4 12:19:49.663023 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Nov 4 12:19:49.663051 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Nov 4 12:19:49.664589 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Nov 4 12:19:49.664635 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Nov 4 12:19:49.666856 systemd[1]: dracut-cmdline.service: Deactivated successfully. Nov 4 12:19:49.666905 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Nov 4 12:19:49.668537 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Nov 4 12:19:49.668591 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 4 12:19:49.671760 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Nov 4 12:19:49.673365 systemd[1]: systemd-network-generator.service: Deactivated successfully. Nov 4 12:19:49.673429 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Nov 4 12:19:49.675182 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Nov 4 12:19:49.675228 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 4 12:19:49.677031 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Nov 4 12:19:49.677074 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 4 12:19:49.679128 systemd[1]: kmod-static-nodes.service: Deactivated successfully. 
Nov 4 12:19:49.679171 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Nov 4 12:19:49.681292 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 4 12:19:49.681334 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 4 12:19:49.697729 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Nov 4 12:19:49.697848 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Nov 4 12:19:49.699737 systemd[1]: network-cleanup.service: Deactivated successfully. Nov 4 12:19:49.699834 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Nov 4 12:19:49.701501 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Nov 4 12:19:49.703214 systemd[1]: Starting initrd-switch-root.service - Switch Root... Nov 4 12:19:49.712582 systemd[1]: Switching root. Nov 4 12:19:49.741982 systemd-journald[344]: Journal stopped Nov 4 12:19:50.480011 systemd-journald[344]: Received SIGTERM from PID 1 (systemd). Nov 4 12:19:50.480107 kernel: SELinux: policy capability network_peer_controls=1 Nov 4 12:19:50.480129 kernel: SELinux: policy capability open_perms=1 Nov 4 12:19:50.480141 kernel: SELinux: policy capability extended_socket_class=1 Nov 4 12:19:50.480152 kernel: SELinux: policy capability always_check_network=0 Nov 4 12:19:50.480162 kernel: SELinux: policy capability cgroup_seclabel=1 Nov 4 12:19:50.480171 kernel: SELinux: policy capability nnp_nosuid_transition=1 Nov 4 12:19:50.480184 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Nov 4 12:19:50.480194 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Nov 4 12:19:50.480205 kernel: SELinux: policy capability userspace_initial_context=0 Nov 4 12:19:50.480216 kernel: audit: type=1403 audit(1762258789.926:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Nov 4 12:19:50.480229 systemd[1]: Successfully loaded SELinux policy in 61.231ms. 
Nov 4 12:19:50.480245 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 5.649ms. Nov 4 12:19:50.480257 systemd[1]: systemd 257.7 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Nov 4 12:19:50.480269 systemd[1]: Detected virtualization kvm. Nov 4 12:19:50.480280 systemd[1]: Detected architecture arm64. Nov 4 12:19:50.480292 systemd[1]: Detected first boot. Nov 4 12:19:50.480305 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Nov 4 12:19:50.480316 zram_generator::config[1138]: No configuration found. Nov 4 12:19:50.480327 kernel: NET: Registered PF_VSOCK protocol family Nov 4 12:19:50.480337 systemd[1]: Populated /etc with preset unit settings. Nov 4 12:19:50.480348 systemd[1]: initrd-switch-root.service: Deactivated successfully. Nov 4 12:19:50.480359 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Nov 4 12:19:50.480479 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Nov 4 12:19:50.480492 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Nov 4 12:19:50.480504 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Nov 4 12:19:50.480515 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Nov 4 12:19:50.480526 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Nov 4 12:19:50.480537 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Nov 4 12:19:50.480548 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Nov 4 12:19:50.480560 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. 
Nov 4 12:19:50.480571 systemd[1]: Created slice user.slice - User and Session Slice. Nov 4 12:19:50.480582 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 4 12:19:50.480596 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 4 12:19:50.480606 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Nov 4 12:19:50.480617 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Nov 4 12:19:50.480628 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Nov 4 12:19:50.480640 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Nov 4 12:19:50.480652 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Nov 4 12:19:50.480673 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 4 12:19:50.480685 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 4 12:19:50.480696 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Nov 4 12:19:50.480706 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Nov 4 12:19:50.480718 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Nov 4 12:19:50.480728 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Nov 4 12:19:50.480739 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 4 12:19:50.480750 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 4 12:19:50.480761 systemd[1]: Reached target slices.target - Slice Units. Nov 4 12:19:50.480771 systemd[1]: Reached target swap.target - Swaps. Nov 4 12:19:50.480782 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. 
Nov 4 12:19:50.480797 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Nov 4 12:19:50.480807 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Nov 4 12:19:50.480819 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Nov 4 12:19:50.480830 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 4 12:19:50.480840 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Nov 4 12:19:50.480851 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Nov 4 12:19:50.480862 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Nov 4 12:19:50.480875 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Nov 4 12:19:50.480885 systemd[1]: Mounting media.mount - External Media Directory... Nov 4 12:19:50.480929 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Nov 4 12:19:50.480943 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Nov 4 12:19:50.480954 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Nov 4 12:19:50.480965 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Nov 4 12:19:50.480976 systemd[1]: Reached target machines.target - Containers. Nov 4 12:19:50.480988 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Nov 4 12:19:50.480999 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 4 12:19:50.481010 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Nov 4 12:19:50.481020 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... 
Nov 4 12:19:50.481031 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 4 12:19:50.481041 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 4 12:19:50.481052 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 4 12:19:50.481064 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Nov 4 12:19:50.481075 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 4 12:19:50.481152 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Nov 4 12:19:50.481165 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Nov 4 12:19:50.481176 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Nov 4 12:19:50.481188 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Nov 4 12:19:50.481198 systemd[1]: Stopped systemd-fsck-usr.service. Nov 4 12:19:50.481212 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 4 12:19:50.481222 kernel: fuse: init (API version 7.41) Nov 4 12:19:50.481233 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 4 12:19:50.481243 kernel: ACPI: bus type drm_connector registered Nov 4 12:19:50.481253 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Nov 4 12:19:50.481264 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Nov 4 12:19:50.481274 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Nov 4 12:19:50.481286 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... 
Nov 4 12:19:50.481296 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Nov 4 12:19:50.481327 systemd-journald[1220]: Collecting audit messages is disabled. Nov 4 12:19:50.481352 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Nov 4 12:19:50.481365 systemd-journald[1220]: Journal started Nov 4 12:19:50.481386 systemd-journald[1220]: Runtime Journal (/run/log/journal/660623b0c7d14607ad00bd60f243731c) is 6M, max 48.5M, 42.4M free. Nov 4 12:19:50.288471 systemd[1]: Queued start job for default target multi-user.target. Nov 4 12:19:50.303050 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Nov 4 12:19:50.303475 systemd[1]: systemd-journald.service: Deactivated successfully. Nov 4 12:19:50.484497 systemd[1]: Started systemd-journald.service - Journal Service. Nov 4 12:19:50.485437 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Nov 4 12:19:50.486604 systemd[1]: Mounted media.mount - External Media Directory. Nov 4 12:19:50.487512 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Nov 4 12:19:50.488475 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Nov 4 12:19:50.489519 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Nov 4 12:19:50.491214 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Nov 4 12:19:50.494129 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Nov 4 12:19:50.495704 systemd[1]: modprobe@configfs.service: Deactivated successfully. Nov 4 12:19:50.495863 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Nov 4 12:19:50.497352 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 4 12:19:50.497512 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 4 12:19:50.498867 systemd[1]: modprobe@drm.service: Deactivated successfully. 
Nov 4 12:19:50.499036 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 4 12:19:50.500429 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 4 12:19:50.500586 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 4 12:19:50.502223 systemd[1]: modprobe@fuse.service: Deactivated successfully. Nov 4 12:19:50.502388 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Nov 4 12:19:50.503837 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 4 12:19:50.503990 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 4 12:19:50.505405 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 4 12:19:50.506829 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Nov 4 12:19:50.509316 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Nov 4 12:19:50.510651 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Nov 4 12:19:50.522684 systemd[1]: Reached target network-pre.target - Preparation for Network. Nov 4 12:19:50.523949 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket. Nov 4 12:19:50.526056 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Nov 4 12:19:50.527814 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Nov 4 12:19:50.529045 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Nov 4 12:19:50.529132 systemd[1]: Reached target local-fs.target - Local File Systems. Nov 4 12:19:50.530948 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Nov 4 12:19:50.532330 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Nov 4 12:19:50.539824 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Nov 4 12:19:50.541872 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Nov 4 12:19:50.543132 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 4 12:19:50.543978 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Nov 4 12:19:50.545209 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 4 12:19:50.547289 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 4 12:19:50.551496 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Nov 4 12:19:50.553231 systemd-journald[1220]: Time spent on flushing to /var/log/journal/660623b0c7d14607ad00bd60f243731c is 16.102ms for 885 entries. Nov 4 12:19:50.553231 systemd-journald[1220]: System Journal (/var/log/journal/660623b0c7d14607ad00bd60f243731c) is 8M, max 163.5M, 155.5M free. Nov 4 12:19:50.573983 systemd-journald[1220]: Received client request to flush runtime journal. Nov 4 12:19:50.574018 kernel: loop1: detected capacity change from 0 to 119344 Nov 4 12:19:50.555251 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Nov 4 12:19:50.559234 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 4 12:19:50.560670 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Nov 4 12:19:50.565026 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Nov 4 12:19:50.567590 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Nov 4 12:19:50.573887 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Nov 4 12:19:50.575714 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Nov 4 12:19:50.577539 systemd-tmpfiles[1256]: ACLs are not supported, ignoring. Nov 4 12:19:50.577559 systemd-tmpfiles[1256]: ACLs are not supported, ignoring. Nov 4 12:19:50.579838 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Nov 4 12:19:50.581968 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Nov 4 12:19:50.583502 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 4 12:19:50.592130 kernel: loop2: detected capacity change from 0 to 100624 Nov 4 12:19:50.592742 systemd[1]: Starting systemd-sysusers.service - Create System Users... Nov 4 12:19:50.607353 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Nov 4 12:19:50.615110 kernel: loop3: detected capacity change from 0 to 200800 Nov 4 12:19:50.624733 systemd[1]: Finished systemd-sysusers.service - Create System Users. Nov 4 12:19:50.627632 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Nov 4 12:19:50.629404 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 4 12:19:50.639104 kernel: loop4: detected capacity change from 0 to 119344 Nov 4 12:19:50.641605 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Nov 4 12:19:50.647120 kernel: loop5: detected capacity change from 0 to 100624 Nov 4 12:19:50.649310 systemd-tmpfiles[1278]: ACLs are not supported, ignoring. Nov 4 12:19:50.649575 systemd-tmpfiles[1278]: ACLs are not supported, ignoring. Nov 4 12:19:50.652112 kernel: loop6: detected capacity change from 0 to 200800 Nov 4 12:19:50.652411 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Nov 4 12:19:50.659826 (sd-merge)[1279]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw'. Nov 4 12:19:50.662915 (sd-merge)[1279]: Merged extensions into '/usr'. Nov 4 12:19:50.666232 systemd[1]: Reload requested from client PID 1254 ('systemd-sysext') (unit systemd-sysext.service)... Nov 4 12:19:50.666251 systemd[1]: Reloading... Nov 4 12:19:50.712650 zram_generator::config[1311]: No configuration found. Nov 4 12:19:50.729754 systemd-resolved[1277]: Positive Trust Anchors: Nov 4 12:19:50.729773 systemd-resolved[1277]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 4 12:19:50.729776 systemd-resolved[1277]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Nov 4 12:19:50.729807 systemd-resolved[1277]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 4 12:19:50.736155 systemd-resolved[1277]: Defaulting to hostname 'linux'. Nov 4 12:19:50.863671 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Nov 4 12:19:50.863849 systemd[1]: Reloading finished in 197 ms. Nov 4 12:19:50.906806 systemd[1]: Started systemd-userdbd.service - User Database Manager. Nov 4 12:19:50.907982 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 4 12:19:50.909240 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Nov 4 12:19:50.911945 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. 
Nov 4 12:19:50.926485 systemd[1]: Starting ensure-sysext.service... Nov 4 12:19:50.928458 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 4 12:19:50.941746 systemd-tmpfiles[1349]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Nov 4 12:19:50.941785 systemd-tmpfiles[1349]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Nov 4 12:19:50.942008 systemd-tmpfiles[1349]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Nov 4 12:19:50.942219 systemd-tmpfiles[1349]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Nov 4 12:19:50.942836 systemd-tmpfiles[1349]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Nov 4 12:19:50.943021 systemd-tmpfiles[1349]: ACLs are not supported, ignoring. Nov 4 12:19:50.943064 systemd-tmpfiles[1349]: ACLs are not supported, ignoring. Nov 4 12:19:50.946384 systemd[1]: Reload requested from client PID 1348 ('systemctl') (unit ensure-sysext.service)... Nov 4 12:19:50.946397 systemd[1]: Reloading... Nov 4 12:19:50.946861 systemd-tmpfiles[1349]: Detected autofs mount point /boot during canonicalization of boot. Nov 4 12:19:50.946867 systemd-tmpfiles[1349]: Skipping /boot Nov 4 12:19:50.952736 systemd-tmpfiles[1349]: Detected autofs mount point /boot during canonicalization of boot. Nov 4 12:19:50.952749 systemd-tmpfiles[1349]: Skipping /boot Nov 4 12:19:50.992123 zram_generator::config[1382]: No configuration found. Nov 4 12:19:51.121245 systemd[1]: Reloading finished in 174 ms. Nov 4 12:19:51.130908 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Nov 4 12:19:51.144396 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 4 12:19:51.152855 systemd[1]: Starting audit-rules.service - Load Audit Rules... 
Nov 4 12:19:51.154838 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Nov 4 12:19:51.166602 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Nov 4 12:19:51.170299 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Nov 4 12:19:51.174293 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 4 12:19:51.176769 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Nov 4 12:19:51.182894 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 4 12:19:51.184273 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 4 12:19:51.187301 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 4 12:19:51.189450 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 4 12:19:51.190621 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 4 12:19:51.190747 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 4 12:19:51.191596 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 4 12:19:51.192158 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 4 12:19:51.199182 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Nov 4 12:19:51.211396 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 4 12:19:51.211607 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 4 12:19:51.216017 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. 
Nov 4 12:19:51.224615 systemd-udevd[1425]: Using default interface naming scheme 'v257'. Nov 4 12:19:51.225338 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Nov 4 12:19:51.228632 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 4 12:19:51.229116 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 4 12:19:51.234733 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 4 12:19:51.236297 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 4 12:19:51.241310 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 4 12:19:51.242508 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 4 12:19:51.242554 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 4 12:19:51.242600 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 4 12:19:51.242637 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Nov 4 12:19:51.242901 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 4 12:19:51.244474 systemd[1]: Finished ensure-sysext.service. Nov 4 12:19:51.250819 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 4 12:19:51.254284 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... 
Nov 4 12:19:51.269560 augenrules[1457]: No rules Nov 4 12:19:51.271046 systemd[1]: audit-rules.service: Deactivated successfully. Nov 4 12:19:51.275697 systemd[1]: Finished audit-rules.service - Load Audit Rules. Nov 4 12:19:51.277774 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 4 12:19:51.277947 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 4 12:19:51.280783 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 4 12:19:51.281486 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 4 12:19:51.287529 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 4 12:19:51.354467 systemd-networkd[1467]: lo: Link UP Nov 4 12:19:51.354477 systemd-networkd[1467]: lo: Gained carrier Nov 4 12:19:51.355280 systemd-networkd[1467]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Nov 4 12:19:51.355288 systemd-networkd[1467]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 4 12:19:51.356145 systemd-networkd[1467]: eth0: Link UP Nov 4 12:19:51.356272 systemd-networkd[1467]: eth0: Gained carrier Nov 4 12:19:51.356287 systemd-networkd[1467]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Nov 4 12:19:51.356563 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 4 12:19:51.357620 systemd[1]: Reached target network.target - Network. Nov 4 12:19:51.361315 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Nov 4 12:19:51.363897 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Nov 4 12:19:51.365561 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. 
Nov 4 12:19:51.367725 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Nov 4 12:19:51.374145 systemd-networkd[1467]: eth0: DHCPv4 address 10.0.0.73/16, gateway 10.0.0.1 acquired from 10.0.0.1 Nov 4 12:19:51.376197 systemd-timesyncd[1472]: Network configuration changed, trying to establish connection. Nov 4 12:19:51.377099 systemd[1]: Reached target time-set.target - System Time Set. Nov 4 12:19:51.378974 systemd-timesyncd[1472]: Contacted time server 10.0.0.1:123 (10.0.0.1). Nov 4 12:19:51.379030 systemd-timesyncd[1472]: Initial clock synchronization to Tue 2025-11-04 12:19:51.216787 UTC. Nov 4 12:19:51.386729 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Nov 4 12:19:51.398596 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Nov 4 12:19:51.402966 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Nov 4 12:19:51.427150 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Nov 4 12:19:51.452408 ldconfig[1417]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Nov 4 12:19:51.457451 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Nov 4 12:19:51.463395 systemd[1]: Starting systemd-update-done.service - Update is Completed... Nov 4 12:19:51.481376 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 4 12:19:51.489140 systemd[1]: Finished systemd-update-done.service - Update is Completed. Nov 4 12:19:51.518179 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 4 12:19:51.520388 systemd[1]: Reached target sysinit.target - System Initialization. Nov 4 12:19:51.521294 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. 
Nov 4 12:19:51.522190 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Nov 4 12:19:51.523281 systemd[1]: Started logrotate.timer - Daily rotation of log files. Nov 4 12:19:51.524166 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Nov 4 12:19:51.525077 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Nov 4 12:19:51.525969 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Nov 4 12:19:51.525998 systemd[1]: Reached target paths.target - Path Units. Nov 4 12:19:51.526831 systemd[1]: Reached target timers.target - Timer Units. Nov 4 12:19:51.528187 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Nov 4 12:19:51.530193 systemd[1]: Starting docker.socket - Docker Socket for the API... Nov 4 12:19:51.532629 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Nov 4 12:19:51.533766 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Nov 4 12:19:51.534827 systemd[1]: Reached target ssh-access.target - SSH Access Available. Nov 4 12:19:51.539847 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Nov 4 12:19:51.541226 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Nov 4 12:19:51.542596 systemd[1]: Listening on docker.socket - Docker Socket for the API. Nov 4 12:19:51.543530 systemd[1]: Reached target sockets.target - Socket Units. Nov 4 12:19:51.544244 systemd[1]: Reached target basic.target - Basic System. Nov 4 12:19:51.544945 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. 
Nov 4 12:19:51.544976 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Nov 4 12:19:51.545873 systemd[1]: Starting containerd.service - containerd container runtime... Nov 4 12:19:51.547619 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Nov 4 12:19:51.549254 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Nov 4 12:19:51.551759 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Nov 4 12:19:51.553815 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Nov 4 12:19:51.554705 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Nov 4 12:19:51.555619 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Nov 4 12:19:51.557269 jq[1530]: false Nov 4 12:19:51.558213 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Nov 4 12:19:51.559826 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Nov 4 12:19:51.563453 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Nov 4 12:19:51.565307 extend-filesystems[1531]: Found /dev/vda6 Nov 4 12:19:51.566347 systemd[1]: Starting systemd-logind.service - User Login Management... Nov 4 12:19:51.567214 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Nov 4 12:19:51.567287 extend-filesystems[1531]: Found /dev/vda9 Nov 4 12:19:51.567581 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Nov 4 12:19:51.568179 systemd[1]: Starting update-engine.service - Update Engine... 
Nov 4 12:19:51.569804 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Nov 4 12:19:51.575101 extend-filesystems[1531]: Checking size of /dev/vda9 Nov 4 12:19:51.574123 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Nov 4 12:19:51.575345 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Nov 4 12:19:51.575526 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Nov 4 12:19:51.576404 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Nov 4 12:19:51.578136 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Nov 4 12:19:51.585204 jq[1545]: true Nov 4 12:19:51.588063 systemd[1]: motdgen.service: Deactivated successfully. Nov 4 12:19:51.588285 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Nov 4 12:19:51.593258 extend-filesystems[1531]: Resized partition /dev/vda9 Nov 4 12:19:51.595837 extend-filesystems[1572]: resize2fs 1.47.3 (8-Jul-2025) Nov 4 12:19:51.600230 kernel: EXT4-fs (vda9): resizing filesystem from 456704 to 1784827 blocks Nov 4 12:19:51.608695 update_engine[1544]: I20251104 12:19:51.608469 1544 main.cc:92] Flatcar Update Engine starting Nov 4 12:19:51.611327 jq[1561]: true Nov 4 12:19:51.613461 dbus-daemon[1528]: [system] SELinux support is enabled Nov 4 12:19:51.614912 systemd[1]: Started dbus.service - D-Bus System Message Bus. Nov 4 12:19:51.619112 update_engine[1544]: I20251104 12:19:51.618903 1544 update_check_scheduler.cc:74] Next update check in 8m3s Nov 4 12:19:51.619463 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Nov 4 12:19:51.619490 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. 
Nov 4 12:19:51.620660 tar[1550]: linux-arm64/LICENSE Nov 4 12:19:51.622770 tar[1550]: linux-arm64/helm Nov 4 12:19:51.622553 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Nov 4 12:19:51.622567 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Nov 4 12:19:51.623998 systemd[1]: Started update-engine.service - Update Engine. Nov 4 12:19:51.629462 systemd[1]: Started locksmithd.service - Cluster reboot manager. Nov 4 12:19:51.649110 kernel: EXT4-fs (vda9): resized filesystem to 1784827 Nov 4 12:19:51.666924 systemd-logind[1542]: Watching system buttons on /dev/input/event0 (Power Button) Nov 4 12:19:51.668233 extend-filesystems[1572]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Nov 4 12:19:51.668233 extend-filesystems[1572]: old_desc_blocks = 1, new_desc_blocks = 1 Nov 4 12:19:51.668233 extend-filesystems[1572]: The filesystem on /dev/vda9 is now 1784827 (4k) blocks long. Nov 4 12:19:51.681692 extend-filesystems[1531]: Resized filesystem in /dev/vda9 Nov 4 12:19:51.668632 systemd-logind[1542]: New seat seat0. Nov 4 12:19:51.669619 systemd[1]: Started systemd-logind.service - User Login Management. Nov 4 12:19:51.679292 systemd[1]: extend-filesystems.service: Deactivated successfully. Nov 4 12:19:51.679482 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Nov 4 12:19:51.698686 bash[1604]: Updated "/home/core/.ssh/authorized_keys" Nov 4 12:19:51.700141 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Nov 4 12:19:51.700962 locksmithd[1580]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Nov 4 12:19:51.701696 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. 
Nov 4 12:19:51.766373 containerd[1562]: time="2025-11-04T12:19:51Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Nov 4 12:19:51.766913 containerd[1562]: time="2025-11-04T12:19:51.766878240Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5 Nov 4 12:19:51.775317 containerd[1562]: time="2025-11-04T12:19:51.775279640Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="9.44µs" Nov 4 12:19:51.775317 containerd[1562]: time="2025-11-04T12:19:51.775309960Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Nov 4 12:19:51.775390 containerd[1562]: time="2025-11-04T12:19:51.775327320Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Nov 4 12:19:51.775475 containerd[1562]: time="2025-11-04T12:19:51.775455280Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Nov 4 12:19:51.775475 containerd[1562]: time="2025-11-04T12:19:51.775473840Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Nov 4 12:19:51.775531 containerd[1562]: time="2025-11-04T12:19:51.775495560Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Nov 4 12:19:51.775553 containerd[1562]: time="2025-11-04T12:19:51.775543560Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Nov 4 12:19:51.775571 containerd[1562]: time="2025-11-04T12:19:51.775554800Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Nov 4 12:19:51.775734 
containerd[1562]: time="2025-11-04T12:19:51.775710320Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Nov 4 12:19:51.775734 containerd[1562]: time="2025-11-04T12:19:51.775729560Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Nov 4 12:19:51.775784 containerd[1562]: time="2025-11-04T12:19:51.775740560Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Nov 4 12:19:51.775784 containerd[1562]: time="2025-11-04T12:19:51.775748480Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Nov 4 12:19:51.775829 containerd[1562]: time="2025-11-04T12:19:51.775812880Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Nov 4 12:19:51.776002 containerd[1562]: time="2025-11-04T12:19:51.775981520Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Nov 4 12:19:51.776030 containerd[1562]: time="2025-11-04T12:19:51.776012080Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Nov 4 12:19:51.776030 containerd[1562]: time="2025-11-04T12:19:51.776022880Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Nov 4 12:19:51.776163 containerd[1562]: time="2025-11-04T12:19:51.776067360Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Nov 4 12:19:51.776342 containerd[1562]: 
time="2025-11-04T12:19:51.776321520Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Nov 4 12:19:51.776406 containerd[1562]: time="2025-11-04T12:19:51.776388680Z" level=info msg="metadata content store policy set" policy=shared Nov 4 12:19:51.779736 containerd[1562]: time="2025-11-04T12:19:51.779624680Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Nov 4 12:19:51.779736 containerd[1562]: time="2025-11-04T12:19:51.779692440Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Nov 4 12:19:51.779736 containerd[1562]: time="2025-11-04T12:19:51.779706760Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Nov 4 12:19:51.779736 containerd[1562]: time="2025-11-04T12:19:51.779717920Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Nov 4 12:19:51.779736 containerd[1562]: time="2025-11-04T12:19:51.779729200Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Nov 4 12:19:51.779866 containerd[1562]: time="2025-11-04T12:19:51.779741880Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Nov 4 12:19:51.779866 containerd[1562]: time="2025-11-04T12:19:51.779775680Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Nov 4 12:19:51.779866 containerd[1562]: time="2025-11-04T12:19:51.779787520Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Nov 4 12:19:51.779866 containerd[1562]: time="2025-11-04T12:19:51.779797960Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Nov 4 12:19:51.779866 containerd[1562]: time="2025-11-04T12:19:51.779807480Z" level=info 
msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Nov 4 12:19:51.779866 containerd[1562]: time="2025-11-04T12:19:51.779815680Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Nov 4 12:19:51.779866 containerd[1562]: time="2025-11-04T12:19:51.779826960Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Nov 4 12:19:51.779976 containerd[1562]: time="2025-11-04T12:19:51.779931120Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Nov 4 12:19:51.779976 containerd[1562]: time="2025-11-04T12:19:51.779950160Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Nov 4 12:19:51.779976 containerd[1562]: time="2025-11-04T12:19:51.779968160Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Nov 4 12:19:51.780024 containerd[1562]: time="2025-11-04T12:19:51.779978880Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Nov 4 12:19:51.780024 containerd[1562]: time="2025-11-04T12:19:51.779989840Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Nov 4 12:19:51.780024 containerd[1562]: time="2025-11-04T12:19:51.779999200Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Nov 4 12:19:51.780024 containerd[1562]: time="2025-11-04T12:19:51.780010080Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Nov 4 12:19:51.780024 containerd[1562]: time="2025-11-04T12:19:51.780020520Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Nov 4 12:19:51.780621 containerd[1562]: time="2025-11-04T12:19:51.780030640Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Nov 4 
12:19:51.780621 containerd[1562]: time="2025-11-04T12:19:51.780041840Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Nov 4 12:19:51.780621 containerd[1562]: time="2025-11-04T12:19:51.780051280Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Nov 4 12:19:51.780621 containerd[1562]: time="2025-11-04T12:19:51.780239880Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Nov 4 12:19:51.780621 containerd[1562]: time="2025-11-04T12:19:51.780254720Z" level=info msg="Start snapshots syncer" Nov 4 12:19:51.780621 containerd[1562]: time="2025-11-04T12:19:51.780290360Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Nov 4 12:19:51.780733 containerd[1562]: time="2025-11-04T12:19:51.780605120Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":fals
e,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Nov 4 12:19:51.780733 containerd[1562]: time="2025-11-04T12:19:51.780667480Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Nov 4 12:19:51.780823 containerd[1562]: time="2025-11-04T12:19:51.780746200Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Nov 4 12:19:51.781089 containerd[1562]: time="2025-11-04T12:19:51.780888640Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Nov 4 12:19:51.781089 containerd[1562]: time="2025-11-04T12:19:51.780918480Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Nov 4 12:19:51.781089 containerd[1562]: time="2025-11-04T12:19:51.780931920Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Nov 4 12:19:51.781089 containerd[1562]: time="2025-11-04T12:19:51.780944520Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Nov 4 12:19:51.781089 containerd[1562]: time="2025-11-04T12:19:51.780957040Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Nov 4 12:19:51.781089 containerd[1562]: 
time="2025-11-04T12:19:51.780968240Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Nov 4 12:19:51.781089 containerd[1562]: time="2025-11-04T12:19:51.780978480Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Nov 4 12:19:51.781089 containerd[1562]: time="2025-11-04T12:19:51.781000360Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Nov 4 12:19:51.781089 containerd[1562]: time="2025-11-04T12:19:51.781011200Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Nov 4 12:19:51.781089 containerd[1562]: time="2025-11-04T12:19:51.781020880Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Nov 4 12:19:51.781089 containerd[1562]: time="2025-11-04T12:19:51.781056320Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Nov 4 12:19:51.781089 containerd[1562]: time="2025-11-04T12:19:51.781070560Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Nov 4 12:19:51.781301 containerd[1562]: time="2025-11-04T12:19:51.781078560Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Nov 4 12:19:51.781301 containerd[1562]: time="2025-11-04T12:19:51.781118520Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Nov 4 12:19:51.781301 containerd[1562]: time="2025-11-04T12:19:51.781127000Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Nov 4 12:19:51.781301 containerd[1562]: time="2025-11-04T12:19:51.781140360Z" level=info msg="loading plugin" 
id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Nov 4 12:19:51.781301 containerd[1562]: time="2025-11-04T12:19:51.781151600Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Nov 4 12:19:51.781301 containerd[1562]: time="2025-11-04T12:19:51.781226320Z" level=info msg="runtime interface created" Nov 4 12:19:51.781301 containerd[1562]: time="2025-11-04T12:19:51.781231480Z" level=info msg="created NRI interface" Nov 4 12:19:51.781301 containerd[1562]: time="2025-11-04T12:19:51.781239160Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Nov 4 12:19:51.781301 containerd[1562]: time="2025-11-04T12:19:51.781249320Z" level=info msg="Connect containerd service" Nov 4 12:19:51.781301 containerd[1562]: time="2025-11-04T12:19:51.781274440Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Nov 4 12:19:51.782001 containerd[1562]: time="2025-11-04T12:19:51.781952320Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 4 12:19:51.846811 containerd[1562]: time="2025-11-04T12:19:51.846652800Z" level=info msg="Start subscribing containerd event" Nov 4 12:19:51.846811 containerd[1562]: time="2025-11-04T12:19:51.846710600Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Nov 4 12:19:51.846811 containerd[1562]: time="2025-11-04T12:19:51.846721800Z" level=info msg="Start recovering state" Nov 4 12:19:51.846811 containerd[1562]: time="2025-11-04T12:19:51.846757280Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Nov 4 12:19:51.846811 containerd[1562]: time="2025-11-04T12:19:51.846822520Z" level=info msg="Start event monitor" Nov 4 12:19:51.846983 containerd[1562]: time="2025-11-04T12:19:51.846836760Z" level=info msg="Start cni network conf syncer for default" Nov 4 12:19:51.846983 containerd[1562]: time="2025-11-04T12:19:51.846844160Z" level=info msg="Start streaming server" Nov 4 12:19:51.846983 containerd[1562]: time="2025-11-04T12:19:51.846853120Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Nov 4 12:19:51.846983 containerd[1562]: time="2025-11-04T12:19:51.846862760Z" level=info msg="runtime interface starting up..." Nov 4 12:19:51.846983 containerd[1562]: time="2025-11-04T12:19:51.846869560Z" level=info msg="starting plugins..." Nov 4 12:19:51.846983 containerd[1562]: time="2025-11-04T12:19:51.846882480Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Nov 4 12:19:51.847072 containerd[1562]: time="2025-11-04T12:19:51.846999080Z" level=info msg="containerd successfully booted in 0.080977s" Nov 4 12:19:51.847124 systemd[1]: Started containerd.service - containerd container runtime. Nov 4 12:19:51.953911 tar[1550]: linux-arm64/README.md Nov 4 12:19:51.971154 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Nov 4 12:19:52.548535 sshd_keygen[1570]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Nov 4 12:19:52.568154 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Nov 4 12:19:52.570530 systemd[1]: Starting issuegen.service - Generate /run/issue... Nov 4 12:19:52.589184 systemd[1]: issuegen.service: Deactivated successfully. Nov 4 12:19:52.589381 systemd[1]: Finished issuegen.service - Generate /run/issue. Nov 4 12:19:52.592702 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Nov 4 12:19:52.612634 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. 
Nov 4 12:19:52.615209 systemd[1]: Started getty@tty1.service - Getty on tty1. Nov 4 12:19:52.617182 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Nov 4 12:19:52.618398 systemd[1]: Reached target getty.target - Login Prompts. Nov 4 12:19:53.308200 systemd-networkd[1467]: eth0: Gained IPv6LL Nov 4 12:19:53.310530 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Nov 4 12:19:53.312066 systemd[1]: Reached target network-online.target - Network is Online. Nov 4 12:19:53.314093 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Nov 4 12:19:53.316068 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 4 12:19:53.317961 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Nov 4 12:19:53.342491 systemd[1]: coreos-metadata.service: Deactivated successfully. Nov 4 12:19:53.342677 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Nov 4 12:19:53.346193 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Nov 4 12:19:53.348349 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Nov 4 12:19:53.842812 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 4 12:19:53.844198 systemd[1]: Reached target multi-user.target - Multi-User System. Nov 4 12:19:53.845560 systemd[1]: Startup finished in 1.191s (kernel) + 4.803s (initrd) + 3.981s (userspace) = 9.976s. 
Nov 4 12:19:53.846805 (kubelet)[1668]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 4 12:19:54.151155 kubelet[1668]: E1104 12:19:54.151044 1668 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 4 12:19:54.153452 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 4 12:19:54.153587 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 4 12:19:54.153965 systemd[1]: kubelet.service: Consumed 683ms CPU time, 248.9M memory peak. Nov 4 12:19:56.090900 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Nov 4 12:19:56.092592 systemd[1]: Started sshd@0-10.0.0.73:22-10.0.0.1:46322.service - OpenSSH per-connection server daemon (10.0.0.1:46322). Nov 4 12:19:56.171174 sshd[1682]: Accepted publickey for core from 10.0.0.1 port 46322 ssh2: RSA SHA256:BU+2P8LonXUlklSP4qnprs25Z/jySPnCvqmOgDUDXeU Nov 4 12:19:56.171420 sshd-session[1682]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 12:19:56.186164 systemd-logind[1542]: New session 1 of user core. Nov 4 12:19:56.187035 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Nov 4 12:19:56.188705 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Nov 4 12:19:56.212127 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Nov 4 12:19:56.213988 systemd[1]: Starting user@500.service - User Manager for UID 500... 
Nov 4 12:19:56.230046 (systemd)[1687]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Nov 4 12:19:56.232488 systemd-logind[1542]: New session c1 of user core. Nov 4 12:19:56.347554 systemd[1687]: Queued start job for default target default.target. Nov 4 12:19:56.356998 systemd[1687]: Created slice app.slice - User Application Slice. Nov 4 12:19:56.357028 systemd[1687]: Reached target paths.target - Paths. Nov 4 12:19:56.357072 systemd[1687]: Reached target timers.target - Timers. Nov 4 12:19:56.358229 systemd[1687]: Starting dbus.socket - D-Bus User Message Bus Socket... Nov 4 12:19:56.367615 systemd[1687]: Listening on dbus.socket - D-Bus User Message Bus Socket. Nov 4 12:19:56.367680 systemd[1687]: Reached target sockets.target - Sockets. Nov 4 12:19:56.367716 systemd[1687]: Reached target basic.target - Basic System. Nov 4 12:19:56.367743 systemd[1687]: Reached target default.target - Main User Target. Nov 4 12:19:56.367768 systemd[1687]: Startup finished in 129ms. Nov 4 12:19:56.368064 systemd[1]: Started user@500.service - User Manager for UID 500. Nov 4 12:19:56.369581 systemd[1]: Started session-1.scope - Session 1 of User core. Nov 4 12:19:56.438771 systemd[1]: Started sshd@1-10.0.0.73:22-10.0.0.1:46332.service - OpenSSH per-connection server daemon (10.0.0.1:46332). Nov 4 12:19:56.486973 sshd[1698]: Accepted publickey for core from 10.0.0.1 port 46332 ssh2: RSA SHA256:BU+2P8LonXUlklSP4qnprs25Z/jySPnCvqmOgDUDXeU Nov 4 12:19:56.488163 sshd-session[1698]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 12:19:56.492238 systemd-logind[1542]: New session 2 of user core. Nov 4 12:19:56.501291 systemd[1]: Started session-2.scope - Session 2 of User core. 
Nov 4 12:19:56.554997 sshd[1701]: Connection closed by 10.0.0.1 port 46332 Nov 4 12:19:56.555390 sshd-session[1698]: pam_unix(sshd:session): session closed for user core Nov 4 12:19:56.567339 systemd[1]: sshd@1-10.0.0.73:22-10.0.0.1:46332.service: Deactivated successfully. Nov 4 12:19:56.569448 systemd[1]: session-2.scope: Deactivated successfully. Nov 4 12:19:56.572134 systemd-logind[1542]: Session 2 logged out. Waiting for processes to exit. Nov 4 12:19:56.578495 systemd[1]: Started sshd@2-10.0.0.73:22-10.0.0.1:46336.service - OpenSSH per-connection server daemon (10.0.0.1:46336). Nov 4 12:19:56.579378 systemd-logind[1542]: Removed session 2. Nov 4 12:19:56.659500 sshd[1707]: Accepted publickey for core from 10.0.0.1 port 46336 ssh2: RSA SHA256:BU+2P8LonXUlklSP4qnprs25Z/jySPnCvqmOgDUDXeU Nov 4 12:19:56.660792 sshd-session[1707]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 12:19:56.666429 systemd-logind[1542]: New session 3 of user core. Nov 4 12:19:56.681239 systemd[1]: Started session-3.scope - Session 3 of User core. Nov 4 12:19:56.727818 sshd[1710]: Connection closed by 10.0.0.1 port 46336 Nov 4 12:19:56.728193 sshd-session[1707]: pam_unix(sshd:session): session closed for user core Nov 4 12:19:56.748188 systemd[1]: sshd@2-10.0.0.73:22-10.0.0.1:46336.service: Deactivated successfully. Nov 4 12:19:56.750803 systemd[1]: session-3.scope: Deactivated successfully. Nov 4 12:19:56.752181 systemd-logind[1542]: Session 3 logged out. Waiting for processes to exit. Nov 4 12:19:56.753544 systemd[1]: Started sshd@3-10.0.0.73:22-10.0.0.1:46352.service - OpenSSH per-connection server daemon (10.0.0.1:46352). Nov 4 12:19:56.755379 systemd-logind[1542]: Removed session 3. 
Nov 4 12:19:56.806358 sshd[1716]: Accepted publickey for core from 10.0.0.1 port 46352 ssh2: RSA SHA256:BU+2P8LonXUlklSP4qnprs25Z/jySPnCvqmOgDUDXeU Nov 4 12:19:56.807565 sshd-session[1716]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 12:19:56.811374 systemd-logind[1542]: New session 4 of user core. Nov 4 12:19:56.834295 systemd[1]: Started session-4.scope - Session 4 of User core. Nov 4 12:19:56.885774 sshd[1719]: Connection closed by 10.0.0.1 port 46352 Nov 4 12:19:56.886207 sshd-session[1716]: pam_unix(sshd:session): session closed for user core Nov 4 12:19:56.902015 systemd[1]: sshd@3-10.0.0.73:22-10.0.0.1:46352.service: Deactivated successfully. Nov 4 12:19:56.903677 systemd[1]: session-4.scope: Deactivated successfully. Nov 4 12:19:56.905147 systemd-logind[1542]: Session 4 logged out. Waiting for processes to exit. Nov 4 12:19:56.906269 systemd[1]: Started sshd@4-10.0.0.73:22-10.0.0.1:46356.service - OpenSSH per-connection server daemon (10.0.0.1:46356). Nov 4 12:19:56.910163 systemd-logind[1542]: Removed session 4. Nov 4 12:19:56.965444 sshd[1725]: Accepted publickey for core from 10.0.0.1 port 46356 ssh2: RSA SHA256:BU+2P8LonXUlklSP4qnprs25Z/jySPnCvqmOgDUDXeU Nov 4 12:19:56.964182 sshd-session[1725]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 12:19:56.969293 systemd-logind[1542]: New session 5 of user core. Nov 4 12:19:56.979281 systemd[1]: Started session-5.scope - Session 5 of User core. 
Nov 4 12:19:57.035065 sudo[1729]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Nov 4 12:19:57.035337 sudo[1729]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 4 12:19:57.049184 sudo[1729]: pam_unix(sudo:session): session closed for user root Nov 4 12:19:57.050703 sshd[1728]: Connection closed by 10.0.0.1 port 46356 Nov 4 12:19:57.052120 sshd-session[1725]: pam_unix(sshd:session): session closed for user core Nov 4 12:19:57.064932 systemd[1]: sshd@4-10.0.0.73:22-10.0.0.1:46356.service: Deactivated successfully. Nov 4 12:19:57.068242 systemd[1]: session-5.scope: Deactivated successfully. Nov 4 12:19:57.069339 systemd-logind[1542]: Session 5 logged out. Waiting for processes to exit. Nov 4 12:19:57.072911 systemd[1]: Started sshd@5-10.0.0.73:22-10.0.0.1:46364.service - OpenSSH per-connection server daemon (10.0.0.1:46364). Nov 4 12:19:57.074313 systemd-logind[1542]: Removed session 5. Nov 4 12:19:57.139329 sshd[1735]: Accepted publickey for core from 10.0.0.1 port 46364 ssh2: RSA SHA256:BU+2P8LonXUlklSP4qnprs25Z/jySPnCvqmOgDUDXeU Nov 4 12:19:57.143439 sshd-session[1735]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 12:19:57.147336 systemd-logind[1542]: New session 6 of user core. Nov 4 12:19:57.153234 systemd[1]: Started session-6.scope - Session 6 of User core. 
Nov 4 12:19:57.204699 sudo[1740]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Nov 4 12:19:57.204949 sudo[1740]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Nov 4 12:19:57.258463 sudo[1740]: pam_unix(sudo:session): session closed for user root
Nov 4 12:19:57.264347 sudo[1739]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Nov 4 12:19:57.264601 sudo[1739]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Nov 4 12:19:57.272892 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Nov 4 12:19:57.309717 augenrules[1762]: No rules
Nov 4 12:19:57.310868 systemd[1]: audit-rules.service: Deactivated successfully.
Nov 4 12:19:57.313174 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Nov 4 12:19:57.314692 sudo[1739]: pam_unix(sudo:session): session closed for user root
Nov 4 12:19:57.317154 sshd[1738]: Connection closed by 10.0.0.1 port 46364
Nov 4 12:19:57.317007 sshd-session[1735]: pam_unix(sshd:session): session closed for user core
Nov 4 12:19:57.328111 systemd[1]: sshd@5-10.0.0.73:22-10.0.0.1:46364.service: Deactivated successfully.
Nov 4 12:19:57.329470 systemd[1]: session-6.scope: Deactivated successfully.
Nov 4 12:19:57.330180 systemd-logind[1542]: Session 6 logged out. Waiting for processes to exit.
Nov 4 12:19:57.332262 systemd[1]: Started sshd@6-10.0.0.73:22-10.0.0.1:46370.service - OpenSSH per-connection server daemon (10.0.0.1:46370).
Nov 4 12:19:57.336207 systemd-logind[1542]: Removed session 6.
Nov 4 12:19:57.382378 sshd[1771]: Accepted publickey for core from 10.0.0.1 port 46370 ssh2: RSA SHA256:BU+2P8LonXUlklSP4qnprs25Z/jySPnCvqmOgDUDXeU
Nov 4 12:19:57.383566 sshd-session[1771]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 4 12:19:57.387144 systemd-logind[1542]: New session 7 of user core.
Nov 4 12:19:57.397230 systemd[1]: Started session-7.scope - Session 7 of User core.
Nov 4 12:19:57.447360 sudo[1775]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Nov 4 12:19:57.447616 sudo[1775]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Nov 4 12:19:57.713815 systemd[1]: Starting docker.service - Docker Application Container Engine...
Nov 4 12:19:57.728333 (dockerd)[1795]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Nov 4 12:19:57.923419 dockerd[1795]: time="2025-11-04T12:19:57.923360632Z" level=info msg="Starting up"
Nov 4 12:19:57.924067 dockerd[1795]: time="2025-11-04T12:19:57.924048607Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Nov 4 12:19:57.933809 dockerd[1795]: time="2025-11-04T12:19:57.933761714Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s
Nov 4 12:19:58.071809 dockerd[1795]: time="2025-11-04T12:19:58.071704906Z" level=info msg="Loading containers: start."
Nov 4 12:19:58.079096 kernel: Initializing XFRM netlink socket
Nov 4 12:19:58.253188 systemd-networkd[1467]: docker0: Link UP
Nov 4 12:19:58.256634 dockerd[1795]: time="2025-11-04T12:19:58.256591791Z" level=info msg="Loading containers: done."
Nov 4 12:19:58.269570 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3878437041-merged.mount: Deactivated successfully.
Nov 4 12:19:58.273004 dockerd[1795]: time="2025-11-04T12:19:58.272963100Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Nov 4 12:19:58.273089 dockerd[1795]: time="2025-11-04T12:19:58.273037378Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4
Nov 4 12:19:58.273157 dockerd[1795]: time="2025-11-04T12:19:58.273126996Z" level=info msg="Initializing buildkit"
Nov 4 12:19:58.292891 dockerd[1795]: time="2025-11-04T12:19:58.292850222Z" level=info msg="Completed buildkit initialization"
Nov 4 12:19:58.298829 dockerd[1795]: time="2025-11-04T12:19:58.298627599Z" level=info msg="Daemon has completed initialization"
Nov 4 12:19:58.298829 dockerd[1795]: time="2025-11-04T12:19:58.298719000Z" level=info msg="API listen on /run/docker.sock"
Nov 4 12:19:58.299488 systemd[1]: Started docker.service - Docker Application Container Engine.
Nov 4 12:19:58.785468 containerd[1562]: time="2025-11-04T12:19:58.785259363Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.1\""
Nov 4 12:19:59.454181 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount49416975.mount: Deactivated successfully.
Nov 4 12:20:00.478904 containerd[1562]: time="2025-11-04T12:20:00.478849260Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 4 12:20:00.480703 containerd[1562]: time="2025-11-04T12:20:00.480663976Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.1: active requests=0, bytes read=24574512"
Nov 4 12:20:00.481424 containerd[1562]: time="2025-11-04T12:20:00.481392150Z" level=info msg="ImageCreate event name:\"sha256:43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 4 12:20:00.484498 containerd[1562]: time="2025-11-04T12:20:00.484464766Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 4 12:20:00.485433 containerd[1562]: time="2025-11-04T12:20:00.485401597Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.1\" with image id \"sha256:43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.1\", repo digest \"registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902\", size \"24571109\" in 1.700100996s"
Nov 4 12:20:00.485433 containerd[1562]: time="2025-11-04T12:20:00.485434328Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.1\" returns image reference \"sha256:43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196\""
Nov 4 12:20:00.486094 containerd[1562]: time="2025-11-04T12:20:00.486051640Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.1\""
Nov 4 12:20:01.485315 containerd[1562]: time="2025-11-04T12:20:01.485246399Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 4 12:20:01.485710 containerd[1562]: time="2025-11-04T12:20:01.485685984Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.1: active requests=0, bytes read=19132145"
Nov 4 12:20:01.486624 containerd[1562]: time="2025-11-04T12:20:01.486601731Z" level=info msg="ImageCreate event name:\"sha256:7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 4 12:20:01.488816 containerd[1562]: time="2025-11-04T12:20:01.488789719Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 4 12:20:01.490684 containerd[1562]: time="2025-11-04T12:20:01.490649678Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.1\" with image id \"sha256:7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.1\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89\", size \"20720058\" in 1.004558146s"
Nov 4 12:20:01.490739 containerd[1562]: time="2025-11-04T12:20:01.490687725Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.1\" returns image reference \"sha256:7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a\""
Nov 4 12:20:01.491195 containerd[1562]: time="2025-11-04T12:20:01.491170446Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.1\""
Nov 4 12:20:02.435110 containerd[1562]: time="2025-11-04T12:20:02.434974717Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 4 12:20:02.435516 containerd[1562]: time="2025-11-04T12:20:02.435474836Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.1: active requests=0, bytes read=14191886"
Nov 4 12:20:02.436391 containerd[1562]: time="2025-11-04T12:20:02.436348275Z" level=info msg="ImageCreate event name:\"sha256:b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 4 12:20:02.438798 containerd[1562]: time="2025-11-04T12:20:02.438769976Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 4 12:20:02.440380 containerd[1562]: time="2025-11-04T12:20:02.439753147Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.1\" with image id \"sha256:b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.1\", repo digest \"registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500\", size \"15779817\" in 948.550691ms"
Nov 4 12:20:02.440380 containerd[1562]: time="2025-11-04T12:20:02.439786170Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.1\" returns image reference \"sha256:b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0\""
Nov 4 12:20:02.440380 containerd[1562]: time="2025-11-04T12:20:02.440217936Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.1\""
Nov 4 12:20:03.554880 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3078100640.mount: Deactivated successfully.
Nov 4 12:20:03.715499 containerd[1562]: time="2025-11-04T12:20:03.715449269Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 4 12:20:03.716095 containerd[1562]: time="2025-11-04T12:20:03.716056106Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.1: active requests=0, bytes read=22789030"
Nov 4 12:20:03.717100 containerd[1562]: time="2025-11-04T12:20:03.716932120Z" level=info msg="ImageCreate event name:\"sha256:05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 4 12:20:03.721723 containerd[1562]: time="2025-11-04T12:20:03.721675730Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 4 12:20:03.722426 containerd[1562]: time="2025-11-04T12:20:03.722397109Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.1\" with image id \"sha256:05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9\", repo tag \"registry.k8s.io/kube-proxy:v1.34.1\", repo digest \"registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a\", size \"22788047\" in 1.28215251s"
Nov 4 12:20:03.722569 containerd[1562]: time="2025-11-04T12:20:03.722520929Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.1\" returns image reference \"sha256:05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9\""
Nov 4 12:20:03.723010 containerd[1562]: time="2025-11-04T12:20:03.722990329Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\""
Nov 4 12:20:04.233525 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Nov 4 12:20:04.235288 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 4 12:20:04.243780 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2006899363.mount: Deactivated successfully.
Nov 4 12:20:04.395542 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 4 12:20:04.407359 (kubelet)[2110]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Nov 4 12:20:04.445223 kubelet[2110]: E1104 12:20:04.445178 2110 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Nov 4 12:20:04.448032 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Nov 4 12:20:04.448178 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Nov 4 12:20:04.450171 systemd[1]: kubelet.service: Consumed 147ms CPU time, 107.9M memory peak.
Nov 4 12:20:05.162684 containerd[1562]: time="2025-11-04T12:20:05.162640980Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 4 12:20:05.163638 containerd[1562]: time="2025-11-04T12:20:05.163608871Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=20395408"
Nov 4 12:20:05.164366 containerd[1562]: time="2025-11-04T12:20:05.164322474Z" level=info msg="ImageCreate event name:\"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 4 12:20:05.167842 containerd[1562]: time="2025-11-04T12:20:05.167810971Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 4 12:20:05.169526 containerd[1562]: time="2025-11-04T12:20:05.169497885Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"20392204\" in 1.446476731s"
Nov 4 12:20:05.169578 containerd[1562]: time="2025-11-04T12:20:05.169532481Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc\""
Nov 4 12:20:05.170056 containerd[1562]: time="2025-11-04T12:20:05.170025035Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\""
Nov 4 12:20:05.560124 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4199365574.mount: Deactivated successfully.
Nov 4 12:20:05.566659 containerd[1562]: time="2025-11-04T12:20:05.566608096Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 4 12:20:05.567420 containerd[1562]: time="2025-11-04T12:20:05.567389973Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=268711"
Nov 4 12:20:05.568094 containerd[1562]: time="2025-11-04T12:20:05.568057939Z" level=info msg="ImageCreate event name:\"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 4 12:20:05.569975 containerd[1562]: time="2025-11-04T12:20:05.569933856Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 4 12:20:05.570542 containerd[1562]: time="2025-11-04T12:20:05.570504770Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"267939\" in 400.452073ms"
Nov 4 12:20:05.570580 containerd[1562]: time="2025-11-04T12:20:05.570540243Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\""
Nov 4 12:20:05.571114 containerd[1562]: time="2025-11-04T12:20:05.570962689Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\""
Nov 4 12:20:08.696744 containerd[1562]: time="2025-11-04T12:20:08.696694869Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.4-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 4 12:20:08.697757 containerd[1562]: time="2025-11-04T12:20:08.697535772Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.4-0: active requests=0, bytes read=97410768"
Nov 4 12:20:08.698414 containerd[1562]: time="2025-11-04T12:20:08.698392358Z" level=info msg="ImageCreate event name:\"sha256:a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 4 12:20:08.701100 containerd[1562]: time="2025-11-04T12:20:08.700963192Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 4 12:20:08.703059 containerd[1562]: time="2025-11-04T12:20:08.703023651Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.4-0\" with image id \"sha256:a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e\", repo tag \"registry.k8s.io/etcd:3.6.4-0\", repo digest \"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\", size \"98207481\" in 3.132027872s"
Nov 4 12:20:08.703142 containerd[1562]: time="2025-11-04T12:20:08.703060962Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\" returns image reference \"sha256:a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e\""
Nov 4 12:20:13.520164 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 4 12:20:13.520319 systemd[1]: kubelet.service: Consumed 147ms CPU time, 107.9M memory peak.
Nov 4 12:20:13.522261 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 4 12:20:13.549262 systemd[1]: Reload requested from client PID 2233 ('systemctl') (unit session-7.scope)...
Nov 4 12:20:13.549280 systemd[1]: Reloading...
Nov 4 12:20:13.617117 zram_generator::config[2275]: No configuration found.
Nov 4 12:20:13.840003 systemd[1]: Reloading finished in 290 ms.
Nov 4 12:20:13.899229 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 4 12:20:13.902063 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 4 12:20:13.903935 systemd[1]: kubelet.service: Deactivated successfully.
Nov 4 12:20:13.904166 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 4 12:20:13.904211 systemd[1]: kubelet.service: Consumed 96ms CPU time, 95.2M memory peak.
Nov 4 12:20:13.905824 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 4 12:20:14.041872 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 4 12:20:14.059379 (kubelet)[2324]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Nov 4 12:20:14.091242 kubelet[2324]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Nov 4 12:20:14.091242 kubelet[2324]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Nov 4 12:20:14.091765 kubelet[2324]: I1104 12:20:14.091720 2324 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Nov 4 12:20:15.132757 kubelet[2324]: I1104 12:20:15.132708 2324 server.go:529] "Kubelet version" kubeletVersion="v1.34.1"
Nov 4 12:20:15.132757 kubelet[2324]: I1104 12:20:15.132744 2324 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Nov 4 12:20:15.133912 kubelet[2324]: I1104 12:20:15.133871 2324 watchdog_linux.go:95] "Systemd watchdog is not enabled"
Nov 4 12:20:15.133912 kubelet[2324]: I1104 12:20:15.133896 2324 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Nov 4 12:20:15.134175 kubelet[2324]: I1104 12:20:15.134151 2324 server.go:956] "Client rotation is on, will bootstrap in background"
Nov 4 12:20:15.248108 kubelet[2324]: E1104 12:20:15.247339 2324 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.73:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.73:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Nov 4 12:20:15.248108 kubelet[2324]: I1104 12:20:15.248059 2324 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Nov 4 12:20:15.251273 kubelet[2324]: I1104 12:20:15.251253 2324 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Nov 4 12:20:15.253400 kubelet[2324]: I1104 12:20:15.253374 2324 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /"
Nov 4 12:20:15.253581 kubelet[2324]: I1104 12:20:15.253561 2324 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Nov 4 12:20:15.253727 kubelet[2324]: I1104 12:20:15.253583 2324 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Nov 4 12:20:15.253816 kubelet[2324]: I1104 12:20:15.253729 2324 topology_manager.go:138] "Creating topology manager with none policy"
Nov 4 12:20:15.253816 kubelet[2324]: I1104 12:20:15.253737 2324 container_manager_linux.go:306] "Creating device plugin manager"
Nov 4 12:20:15.253858 kubelet[2324]: I1104 12:20:15.253834 2324 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager"
Nov 4 12:20:15.256362 kubelet[2324]: I1104 12:20:15.256331 2324 state_mem.go:36] "Initialized new in-memory state store"
Nov 4 12:20:15.257451 kubelet[2324]: I1104 12:20:15.257419 2324 kubelet.go:475] "Attempting to sync node with API server"
Nov 4 12:20:15.257451 kubelet[2324]: I1104 12:20:15.257446 2324 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests"
Nov 4 12:20:15.258111 kubelet[2324]: I1104 12:20:15.257916 2324 kubelet.go:387] "Adding apiserver pod source"
Nov 4 12:20:15.258111 kubelet[2324]: I1104 12:20:15.257945 2324 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Nov 4 12:20:15.258111 kubelet[2324]: E1104 12:20:15.257913 2324 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.73:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.73:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Nov 4 12:20:15.258387 kubelet[2324]: E1104 12:20:15.258354 2324 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.73:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.73:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Nov 4 12:20:15.259456 kubelet[2324]: I1104 12:20:15.259433 2324 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1"
Nov 4 12:20:15.260975 kubelet[2324]: I1104 12:20:15.260384 2324 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Nov 4 12:20:15.260975 kubelet[2324]: I1104 12:20:15.260421 2324 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled"
Nov 4 12:20:15.260975 kubelet[2324]: W1104 12:20:15.260464 2324 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Nov 4 12:20:15.263471 kubelet[2324]: I1104 12:20:15.263437 2324 server.go:1262] "Started kubelet"
Nov 4 12:20:15.263983 kubelet[2324]: I1104 12:20:15.263951 2324 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Nov 4 12:20:15.264037 kubelet[2324]: I1104 12:20:15.263982 2324 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Nov 4 12:20:15.264891 kubelet[2324]: I1104 12:20:15.264859 2324 server.go:310] "Adding debug handlers to kubelet server"
Nov 4 12:20:15.267069 kubelet[2324]: E1104 12:20:15.265981 2324 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.73:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.73:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1874cd0e3e0ddd7a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-11-04 12:20:15.263038842 +0000 UTC m=+1.200941119,LastTimestamp:2025-11-04 12:20:15.263038842 +0000 UTC m=+1.200941119,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Nov 4 12:20:15.267319 kubelet[2324]: I1104 12:20:15.266974 2324 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Nov 4 12:20:15.267425 kubelet[2324]: I1104 12:20:15.267412 2324 server_v1.go:49] "podresources" method="list" useActivePods=true
Nov 4 12:20:15.267753 kubelet[2324]: E1104 12:20:15.267730 2324 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Nov 4 12:20:15.267753 kubelet[2324]: I1104 12:20:15.267756 2324 volume_manager.go:313] "Starting Kubelet Volume Manager"
Nov 4 12:20:15.267989 kubelet[2324]: I1104 12:20:15.267898 2324 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Nov 4 12:20:15.267989 kubelet[2324]: I1104 12:20:15.267950 2324 reconciler.go:29] "Reconciler: start to sync state"
Nov 4 12:20:15.268078 kubelet[2324]: I1104 12:20:15.268063 2324 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Nov 4 12:20:15.268412 kubelet[2324]: E1104 12:20:15.268375 2324 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.73:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.73:6443: connect: connection refused" interval="200ms"
Nov 4 12:20:15.268519 kubelet[2324]: I1104 12:20:15.268502 2324 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Nov 4 12:20:15.268708 kubelet[2324]: E1104 12:20:15.268687 2324 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.73:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.73:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Nov 4 12:20:15.269072 kubelet[2324]: I1104 12:20:15.269021 2324 factory.go:223] Registration of the systemd container factory successfully
Nov 4 12:20:15.269157 kubelet[2324]: I1104 12:20:15.269112 2324 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Nov 4 12:20:15.269786 kubelet[2324]: E1104 12:20:15.269764 2324 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Nov 4 12:20:15.270248 kubelet[2324]: I1104 12:20:15.270214 2324 factory.go:223] Registration of the containerd container factory successfully
Nov 4 12:20:15.280272 kubelet[2324]: I1104 12:20:15.280240 2324 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4"
Nov 4 12:20:15.281229 kubelet[2324]: I1104 12:20:15.281207 2324 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6"
Nov 4 12:20:15.281229 kubelet[2324]: I1104 12:20:15.281227 2324 status_manager.go:244] "Starting to sync pod status with apiserver"
Nov 4 12:20:15.281310 kubelet[2324]: I1104 12:20:15.281256 2324 kubelet.go:2427] "Starting kubelet main sync loop"
Nov 4 12:20:15.281310 kubelet[2324]: E1104 12:20:15.281290 2324 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Nov 4 12:20:15.284123 kubelet[2324]: E1104 12:20:15.284078 2324 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.73:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.73:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Nov 4 12:20:15.284274 kubelet[2324]: I1104 12:20:15.284260 2324 cpu_manager.go:221] "Starting CPU manager" policy="none"
Nov 4 12:20:15.284340 kubelet[2324]: I1104 12:20:15.284330 2324 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Nov 4 12:20:15.284413 kubelet[2324]: I1104 12:20:15.284405 2324 state_mem.go:36] "Initialized new in-memory state store"
Nov 4 12:20:15.286135 kubelet[2324]: I1104 12:20:15.286119 2324 policy_none.go:49] "None policy: Start"
Nov 4 12:20:15.286210 kubelet[2324]: I1104 12:20:15.286199 2324 memory_manager.go:187] "Starting memorymanager" policy="None"
Nov 4 12:20:15.286276 kubelet[2324]: I1104 12:20:15.286263 2324 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint"
Nov 4 12:20:15.288104 kubelet[2324]: I1104 12:20:15.287887 2324 policy_none.go:47] "Start"
Nov 4 12:20:15.291463 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Nov 4 12:20:15.304481 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Nov 4 12:20:15.307465 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Nov 4 12:20:15.326045 kubelet[2324]: E1104 12:20:15.326012 2324 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Nov 4 12:20:15.326274 kubelet[2324]: I1104 12:20:15.326251 2324 eviction_manager.go:189] "Eviction manager: starting control loop"
Nov 4 12:20:15.326309 kubelet[2324]: I1104 12:20:15.326270 2324 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Nov 4 12:20:15.326605 kubelet[2324]: I1104 12:20:15.326504 2324 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Nov 4 12:20:15.327445 kubelet[2324]: E1104 12:20:15.327425 2324 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Nov 4 12:20:15.327569 kubelet[2324]: E1104 12:20:15.327556 2324 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Nov 4 12:20:15.393148 systemd[1]: Created slice kubepods-burstable-podc81b20bfd83bd65029d67ba85a99a38d.slice - libcontainer container kubepods-burstable-podc81b20bfd83bd65029d67ba85a99a38d.slice.
Nov 4 12:20:15.405395 kubelet[2324]: E1104 12:20:15.405351 2324 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Nov 4 12:20:15.406535 systemd[1]: Created slice kubepods-burstable-podce161b3b11c90b0b844f2e4f86b4e8cd.slice - libcontainer container kubepods-burstable-podce161b3b11c90b0b844f2e4f86b4e8cd.slice.
Nov 4 12:20:15.424423 kubelet[2324]: E1104 12:20:15.424371 2324 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Nov 4 12:20:15.426835 systemd[1]: Created slice kubepods-burstable-pod72ae43bf624d285361487631af8a6ba6.slice - libcontainer container kubepods-burstable-pod72ae43bf624d285361487631af8a6ba6.slice.
Nov 4 12:20:15.428020 kubelet[2324]: I1104 12:20:15.428000 2324 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Nov 4 12:20:15.428651 kubelet[2324]: E1104 12:20:15.428614 2324 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Nov 4 12:20:15.428803 kubelet[2324]: E1104 12:20:15.428769 2324 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.73:6443/api/v1/nodes\": dial tcp 10.0.0.73:6443: connect: connection refused" node="localhost"
Nov 4 12:20:15.468982 kubelet[2324]: I1104 12:20:15.468922 2324 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c81b20bfd83bd65029d67ba85a99a38d-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"c81b20bfd83bd65029d67ba85a99a38d\") " pod="kube-system/kube-apiserver-localhost"
Nov 4 12:20:15.468982 kubelet[2324]: I1104 12:20:15.468976 2324 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost"
Nov 4 12:20:15.469075 kubelet[2324]: I1104 12:20:15.468995 2324 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost"
Nov 4 12:20:15.469075 kubelet[2324]: I1104 12:20:15.469010 2324 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost"
Nov 4 12:20:15.469075 kubelet[2324]: I1104 12:20:15.469026 2324 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/72ae43bf624d285361487631af8a6ba6-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"72ae43bf624d285361487631af8a6ba6\") " pod="kube-system/kube-scheduler-localhost"
Nov 4 12:20:15.469075 kubelet[2324]: E1104 12:20:15.469022 2324 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.73:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.73:6443: connect: connection refused" interval="400ms"
Nov 4 12:20:15.469075 kubelet[2324]: I1104 12:20:15.469039 2324 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c81b20bfd83bd65029d67ba85a99a38d-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"c81b20bfd83bd65029d67ba85a99a38d\") " pod="kube-system/kube-apiserver-localhost"
Nov 4 12:20:15.469247 kubelet[2324]: I1104 12:20:15.469110 2324 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c81b20bfd83bd65029d67ba85a99a38d-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"c81b20bfd83bd65029d67ba85a99a38d\") " pod="kube-system/kube-apiserver-localhost"
Nov 4 12:20:15.469247 kubelet[2324]: I1104 12:20:15.469147 2324 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost"
Nov 4 12:20:15.469247 kubelet[2324]: I1104 12:20:15.469167 2324 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost"
Nov 4 12:20:15.630804 kubelet[2324]: I1104 12:20:15.630753 2324 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Nov 4 12:20:15.631130 kubelet[2324]: E1104 12:20:15.631099 2324 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.73:6443/api/v1/nodes\": dial tcp 10.0.0.73:6443: connect: connection refused" node="localhost"
Nov 4 12:20:15.708621 kubelet[2324]: E1104 12:20:15.708522 2324 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 4 12:20:15.709463 containerd[1562]: time="2025-11-04T12:20:15.709405880Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:c81b20bfd83bd65029d67ba85a99a38d,Namespace:kube-system,Attempt:0,}"
Nov 4 12:20:15.727421 kubelet[2324]: E1104 12:20:15.727386 2324 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 4 12:20:15.727963 containerd[1562]: time="2025-11-04T12:20:15.727766814Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:ce161b3b11c90b0b844f2e4f86b4e8cd,Namespace:kube-system,Attempt:0,}"
Nov 4 12:20:15.730507 kubelet[2324]: E1104 12:20:15.730483 2324 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 4 12:20:15.730882 containerd[1562]: time="2025-11-04T12:20:15.730794247Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:72ae43bf624d285361487631af8a6ba6,Namespace:kube-system,Attempt:0,}"
Nov 4 12:20:15.870399 kubelet[2324]: E1104 12:20:15.870356 2324 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.73:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.73:6443: connect: connection refused" interval="800ms"
Nov 4 12:20:16.033116 kubelet[2324]: I1104 12:20:16.032994 2324 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Nov 4 12:20:16.033356 kubelet[2324]: E1104 12:20:16.033329 2324 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.73:6443/api/v1/nodes\": dial tcp 10.0.0.73:6443: connect: connection refused" node="localhost"
Nov 4 12:20:16.083281 kubelet[2324]: E1104 12:20:16.083246 2324 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.73:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.73:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Nov 4 12:20:16.118978 kubelet[2324]: E1104 12:20:16.118940 2324 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.73:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.73:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Nov 4 12:20:16.209230 kubelet[2324]: E1104 12:20:16.209184 2324 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.73:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.73:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Nov 4 12:20:16.230984 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4283100345.mount: Deactivated successfully.
Nov 4 12:20:16.238408 containerd[1562]: time="2025-11-04T12:20:16.238360987Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Nov 4 12:20:16.240603 containerd[1562]: time="2025-11-04T12:20:16.240578723Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0"
Nov 4 12:20:16.241381 containerd[1562]: time="2025-11-04T12:20:16.241346171Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Nov 4 12:20:16.242816 containerd[1562]: time="2025-11-04T12:20:16.242763445Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Nov 4 12:20:16.243694 containerd[1562]: time="2025-11-04T12:20:16.243493964Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705"
Nov 4 12:20:16.244221 containerd[1562]: time="2025-11-04T12:20:16.244195387Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Nov 4 12:20:16.245024 containerd[1562]: time="2025-11-04T12:20:16.244997727Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0"
Nov 4 12:20:16.245828 containerd[1562]: time="2025-11-04T12:20:16.245779644Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Nov 4 12:20:16.246702 containerd[1562]: time="2025-11-04T12:20:16.246402411Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 516.938232ms"
Nov 4 12:20:16.249065 containerd[1562]: time="2025-11-04T12:20:16.249033846Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 516.629793ms"
Nov 4 12:20:16.249831 containerd[1562]: time="2025-11-04T12:20:16.249799576Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 537.891607ms"
Nov 4 12:20:16.271466 containerd[1562]: time="2025-11-04T12:20:16.271400644Z" level=info msg="connecting to shim 218cffc0eff8a03bf6d295c8f690527471289d78100c52bf77ec0f1232805503" address="unix:///run/containerd/s/5426626fe874356835fffd454f99b8c74596fc76c3d190fb4ff0f971045e46c5" namespace=k8s.io protocol=ttrpc version=3
Nov 4 12:20:16.272625 containerd[1562]: time="2025-11-04T12:20:16.272577755Z" level=info msg="connecting to shim eb55e1fefe930b684f583f22e61e2efd94dfd5d5fd85a465648b71a19809d0b0" address="unix:///run/containerd/s/4cc5a880065ebc62c0cb6a5c68c22623d731058a35aa1e214f097d5b5c635686" namespace=k8s.io protocol=ttrpc version=3
Nov 4 12:20:16.276098 containerd[1562]: time="2025-11-04T12:20:16.275705942Z" level=info msg="connecting to shim d299de8ebb3f1f0cb698adb6920ae60e21d109420c9c8d0f7bc7ed034913937c" address="unix:///run/containerd/s/3605b0d3842f2d9e9d3d2629239ff4df63a45b7b6b15ff262b0260cade81bb7c" namespace=k8s.io protocol=ttrpc version=3
Nov 4 12:20:16.291232 systemd[1]: Started cri-containerd-218cffc0eff8a03bf6d295c8f690527471289d78100c52bf77ec0f1232805503.scope - libcontainer container 218cffc0eff8a03bf6d295c8f690527471289d78100c52bf77ec0f1232805503.
Nov 4 12:20:16.294584 systemd[1]: Started cri-containerd-eb55e1fefe930b684f583f22e61e2efd94dfd5d5fd85a465648b71a19809d0b0.scope - libcontainer container eb55e1fefe930b684f583f22e61e2efd94dfd5d5fd85a465648b71a19809d0b0.
Nov 4 12:20:16.298539 systemd[1]: Started cri-containerd-d299de8ebb3f1f0cb698adb6920ae60e21d109420c9c8d0f7bc7ed034913937c.scope - libcontainer container d299de8ebb3f1f0cb698adb6920ae60e21d109420c9c8d0f7bc7ed034913937c.
Nov 4 12:20:16.336186 containerd[1562]: time="2025-11-04T12:20:16.335802737Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:ce161b3b11c90b0b844f2e4f86b4e8cd,Namespace:kube-system,Attempt:0,} returns sandbox id \"218cffc0eff8a03bf6d295c8f690527471289d78100c52bf77ec0f1232805503\""
Nov 4 12:20:16.338028 kubelet[2324]: E1104 12:20:16.337987 2324 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 4 12:20:16.338749 containerd[1562]: time="2025-11-04T12:20:16.338718538Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:c81b20bfd83bd65029d67ba85a99a38d,Namespace:kube-system,Attempt:0,} returns sandbox id \"eb55e1fefe930b684f583f22e61e2efd94dfd5d5fd85a465648b71a19809d0b0\""
Nov 4 12:20:16.338919 containerd[1562]: time="2025-11-04T12:20:16.338890317Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:72ae43bf624d285361487631af8a6ba6,Namespace:kube-system,Attempt:0,} returns sandbox id \"d299de8ebb3f1f0cb698adb6920ae60e21d109420c9c8d0f7bc7ed034913937c\""
Nov 4 12:20:16.340045 kubelet[2324]: E1104 12:20:16.340015 2324 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 4 12:20:16.340322 kubelet[2324]: E1104 12:20:16.340181 2324 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 4 12:20:16.343271 containerd[1562]: time="2025-11-04T12:20:16.343236261Z" level=info msg="CreateContainer within sandbox \"218cffc0eff8a03bf6d295c8f690527471289d78100c52bf77ec0f1232805503\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Nov 4 12:20:16.344322 containerd[1562]: time="2025-11-04T12:20:16.344292472Z" level=info msg="CreateContainer within sandbox \"d299de8ebb3f1f0cb698adb6920ae60e21d109420c9c8d0f7bc7ed034913937c\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Nov 4 12:20:16.346450 containerd[1562]: time="2025-11-04T12:20:16.346423878Z" level=info msg="CreateContainer within sandbox \"eb55e1fefe930b684f583f22e61e2efd94dfd5d5fd85a465648b71a19809d0b0\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Nov 4 12:20:16.351880 containerd[1562]: time="2025-11-04T12:20:16.351852292Z" level=info msg="Container f70c715aff65dd0a506e7459328948db65ccbe28d0b09c87557d3d1adf85f6ea: CDI devices from CRI Config.CDIDevices: []"
Nov 4 12:20:16.354566 containerd[1562]: time="2025-11-04T12:20:16.354542359Z" level=info msg="Container e20047aee83323a07fd21f3915f75528877bf62ee21931bb231d991901bd32f2: CDI devices from CRI Config.CDIDevices: []"
Nov 4 12:20:16.358290 containerd[1562]: time="2025-11-04T12:20:16.358265256Z" level=info msg="Container df21de395e85979bd39d29d742bcb90481a8bd69e2d3bd045203686c7af7a52f: CDI devices from CRI Config.CDIDevices: []"
Nov 4 12:20:16.359510 containerd[1562]: time="2025-11-04T12:20:16.359481815Z" level=info msg="CreateContainer within sandbox \"218cffc0eff8a03bf6d295c8f690527471289d78100c52bf77ec0f1232805503\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"f70c715aff65dd0a506e7459328948db65ccbe28d0b09c87557d3d1adf85f6ea\""
Nov 4 12:20:16.360255 containerd[1562]: time="2025-11-04T12:20:16.360228960Z" level=info msg="StartContainer for \"f70c715aff65dd0a506e7459328948db65ccbe28d0b09c87557d3d1adf85f6ea\""
Nov 4 12:20:16.361395 containerd[1562]: time="2025-11-04T12:20:16.361372899Z" level=info msg="connecting to shim f70c715aff65dd0a506e7459328948db65ccbe28d0b09c87557d3d1adf85f6ea" address="unix:///run/containerd/s/5426626fe874356835fffd454f99b8c74596fc76c3d190fb4ff0f971045e46c5" protocol=ttrpc version=3
Nov 4 12:20:16.363118 containerd[1562]: time="2025-11-04T12:20:16.363066985Z" level=info msg="CreateContainer within sandbox \"d299de8ebb3f1f0cb698adb6920ae60e21d109420c9c8d0f7bc7ed034913937c\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"e20047aee83323a07fd21f3915f75528877bf62ee21931bb231d991901bd32f2\""
Nov 4 12:20:16.363511 containerd[1562]: time="2025-11-04T12:20:16.363476209Z" level=info msg="StartContainer for \"e20047aee83323a07fd21f3915f75528877bf62ee21931bb231d991901bd32f2\""
Nov 4 12:20:16.364726 containerd[1562]: time="2025-11-04T12:20:16.364691169Z" level=info msg="connecting to shim e20047aee83323a07fd21f3915f75528877bf62ee21931bb231d991901bd32f2" address="unix:///run/containerd/s/3605b0d3842f2d9e9d3d2629239ff4df63a45b7b6b15ff262b0260cade81bb7c" protocol=ttrpc version=3
Nov 4 12:20:16.366450 containerd[1562]: time="2025-11-04T12:20:16.366420306Z" level=info msg="CreateContainer within sandbox \"eb55e1fefe930b684f583f22e61e2efd94dfd5d5fd85a465648b71a19809d0b0\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"df21de395e85979bd39d29d742bcb90481a8bd69e2d3bd045203686c7af7a52f\""
Nov 4 12:20:16.366893 containerd[1562]: time="2025-11-04T12:20:16.366869337Z" level=info msg="StartContainer for \"df21de395e85979bd39d29d742bcb90481a8bd69e2d3bd045203686c7af7a52f\""
Nov 4 12:20:16.368193 containerd[1562]: time="2025-11-04T12:20:16.368169707Z" level=info msg="connecting to shim df21de395e85979bd39d29d742bcb90481a8bd69e2d3bd045203686c7af7a52f" address="unix:///run/containerd/s/4cc5a880065ebc62c0cb6a5c68c22623d731058a35aa1e214f097d5b5c635686" protocol=ttrpc version=3
Nov 4 12:20:16.382241 systemd[1]: Started cri-containerd-f70c715aff65dd0a506e7459328948db65ccbe28d0b09c87557d3d1adf85f6ea.scope - libcontainer container f70c715aff65dd0a506e7459328948db65ccbe28d0b09c87557d3d1adf85f6ea.
Nov 4 12:20:16.386353 systemd[1]: Started cri-containerd-df21de395e85979bd39d29d742bcb90481a8bd69e2d3bd045203686c7af7a52f.scope - libcontainer container df21de395e85979bd39d29d742bcb90481a8bd69e2d3bd045203686c7af7a52f.
Nov 4 12:20:16.387489 systemd[1]: Started cri-containerd-e20047aee83323a07fd21f3915f75528877bf62ee21931bb231d991901bd32f2.scope - libcontainer container e20047aee83323a07fd21f3915f75528877bf62ee21931bb231d991901bd32f2.
Nov 4 12:20:16.430937 containerd[1562]: time="2025-11-04T12:20:16.430758292Z" level=info msg="StartContainer for \"df21de395e85979bd39d29d742bcb90481a8bd69e2d3bd045203686c7af7a52f\" returns successfully"
Nov 4 12:20:16.432658 containerd[1562]: time="2025-11-04T12:20:16.432598818Z" level=info msg="StartContainer for \"f70c715aff65dd0a506e7459328948db65ccbe28d0b09c87557d3d1adf85f6ea\" returns successfully"
Nov 4 12:20:16.442859 containerd[1562]: time="2025-11-04T12:20:16.442824165Z" level=info msg="StartContainer for \"e20047aee83323a07fd21f3915f75528877bf62ee21931bb231d991901bd32f2\" returns successfully"
Nov 4 12:20:16.835410 kubelet[2324]: I1104 12:20:16.835138 2324 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Nov 4 12:20:17.301861 kubelet[2324]: E1104 12:20:17.301446 2324 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Nov 4 12:20:17.302786 kubelet[2324]: E1104 12:20:17.302752 2324 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 4 12:20:17.305276 kubelet[2324]: E1104 12:20:17.305255 2324 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Nov 4 12:20:17.305373 kubelet[2324]: E1104 12:20:17.305359 2324 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 4 12:20:17.306681 kubelet[2324]: E1104 12:20:17.306605 2324 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Nov 4 12:20:17.306834 kubelet[2324]: E1104 12:20:17.306809 2324 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 4 12:20:18.078594 kubelet[2324]: E1104 12:20:18.078547 2324 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost"
Nov 4 12:20:18.189068 kubelet[2324]: E1104 12:20:18.188972 2324 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.1874cd0e3e0ddd7a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-11-04 12:20:15.263038842 +0000 UTC m=+1.200941119,LastTimestamp:2025-11-04 12:20:15.263038842 +0000 UTC m=+1.200941119,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Nov 4 12:20:18.250511 kubelet[2324]: I1104 12:20:18.250465 2324 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Nov 4 12:20:18.250511 kubelet[2324]: E1104 12:20:18.250508 2324 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found"
Nov 4 12:20:18.271506 kubelet[2324]: E1104 12:20:18.271455 2324 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Nov 4 12:20:18.307713 kubelet[2324]: I1104 12:20:18.307683 2324 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Nov 4 12:20:18.308872 kubelet[2324]: I1104 12:20:18.308849 2324 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Nov 4 12:20:18.313175 kubelet[2324]: E1104 12:20:18.313123 2324 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost"
Nov 4 12:20:18.313510 kubelet[2324]: E1104 12:20:18.313431 2324 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 4 12:20:18.314210 kubelet[2324]: E1104 12:20:18.314161 2324 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost"
Nov 4 12:20:18.314347 kubelet[2324]: E1104 12:20:18.314319 2324 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 4 12:20:18.369610 kubelet[2324]: I1104 12:20:18.368523 2324 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Nov 4 12:20:18.371236 kubelet[2324]: E1104 12:20:18.371210 2324 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost"
Nov 4 12:20:18.371236 kubelet[2324]: I1104 12:20:18.371237 2324 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Nov 4 12:20:18.374969 kubelet[2324]: E1104 12:20:18.374785 2324 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost"
Nov 4 12:20:18.374969 kubelet[2324]: I1104 12:20:18.374807 2324 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Nov 4 12:20:18.376617 kubelet[2324]: E1104 12:20:18.376589 2324 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost"
Nov 4 12:20:19.260175 kubelet[2324]: I1104 12:20:19.260135 2324 apiserver.go:52] "Watching apiserver"
Nov 4 12:20:19.268981 kubelet[2324]: I1104 12:20:19.268937 2324 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Nov 4 12:20:19.983876 systemd[1]: Reload requested from client PID 2614 ('systemctl') (unit session-7.scope)...
Nov 4 12:20:19.983892 systemd[1]: Reloading...
Nov 4 12:20:20.060223 zram_generator::config[2661]: No configuration found.
Nov 4 12:20:20.224425 systemd[1]: Reloading finished in 240 ms.
Nov 4 12:20:20.246289 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 4 12:20:20.257915 systemd[1]: kubelet.service: Deactivated successfully.
Nov 4 12:20:20.258177 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 4 12:20:20.258233 systemd[1]: kubelet.service: Consumed 1.466s CPU time, 123.5M memory peak.
Nov 4 12:20:20.259929 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 4 12:20:20.435398 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 4 12:20:20.443423 (kubelet)[2700]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 4 12:20:20.485539 kubelet[2700]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 4 12:20:20.485539 kubelet[2700]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 4 12:20:20.485829 kubelet[2700]: I1104 12:20:20.485588 2700 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 4 12:20:20.491029 kubelet[2700]: I1104 12:20:20.490987 2700 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Nov 4 12:20:20.491029 kubelet[2700]: I1104 12:20:20.491018 2700 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 4 12:20:20.491121 kubelet[2700]: I1104 12:20:20.491045 2700 watchdog_linux.go:95] "Systemd watchdog is not enabled" Nov 4 12:20:20.491121 kubelet[2700]: I1104 12:20:20.491051 2700 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Nov 4 12:20:20.491377 kubelet[2700]: I1104 12:20:20.491340 2700 server.go:956] "Client rotation is on, will bootstrap in background"
Nov 4 12:20:20.492531 kubelet[2700]: I1104 12:20:20.492504 2700 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem"
Nov 4 12:20:20.494619 kubelet[2700]: I1104 12:20:20.494589 2700 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Nov 4 12:20:20.498630 kubelet[2700]: I1104 12:20:20.498550 2700 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Nov 4 12:20:20.501313 kubelet[2700]: I1104 12:20:20.501268 2700 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /"
Nov 4 12:20:20.501503 kubelet[2700]: I1104 12:20:20.501479 2700 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Nov 4 12:20:20.501644 kubelet[2700]: I1104 12:20:20.501502 2700 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Nov 4 12:20:20.501644 kubelet[2700]: I1104 12:20:20.501643 2700 topology_manager.go:138] "Creating topology manager with none policy"
Nov 4 12:20:20.501734 kubelet[2700]: I1104 12:20:20.501651 2700 container_manager_linux.go:306] "Creating device plugin manager"
Nov 4 12:20:20.501734 kubelet[2700]: I1104 12:20:20.501672 2700 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager"
Nov 4 12:20:20.502543 kubelet[2700]: I1104 12:20:20.502510 2700 state_mem.go:36] "Initialized new in-memory state store"
Nov 4 12:20:20.502664 kubelet[2700]: I1104 12:20:20.502651 2700 kubelet.go:475] "Attempting to sync node with API server"
Nov 4 12:20:20.502691 kubelet[2700]: I1104 12:20:20.502670 2700 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests"
Nov 4 12:20:20.502717 kubelet[2700]: I1104 12:20:20.502693 2700 kubelet.go:387] "Adding apiserver pod source"
Nov 4 12:20:20.502717 kubelet[2700]: I1104 12:20:20.502706 2700 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Nov 4 12:20:20.503801 kubelet[2700]: I1104 12:20:20.503782 2700 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1"
Nov 4 12:20:20.504728 kubelet[2700]: I1104 12:20:20.504690 2700 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Nov 4 12:20:20.504728 kubelet[2700]: I1104 12:20:20.504729 2700 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled"
Nov 4 12:20:20.506766 kubelet[2700]: I1104 12:20:20.506733 2700 server.go:1262] "Started kubelet"
Nov 4 12:20:20.506915 kubelet[2700]: I1104 12:20:20.506866 2700 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Nov 4 12:20:20.507856 kubelet[2700]: I1104 12:20:20.507740 2700 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Nov 4 12:20:20.507856 kubelet[2700]: I1104 12:20:20.507807 2700 server_v1.go:49] "podresources" method="list" useActivePods=true
Nov 4 12:20:20.507982 kubelet[2700]: I1104 12:20:20.507892 2700 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Nov 4 12:20:20.508022 kubelet[2700]: I1104 12:20:20.508003 2700 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Nov 4 12:20:20.509799 kubelet[2700]: I1104 12:20:20.509471 2700 server.go:310] "Adding debug handlers to kubelet server"
Nov 4 12:20:20.510070 kubelet[2700]: I1104 12:20:20.510042 2700 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Nov 4 12:20:20.511347 kubelet[2700]: E1104 12:20:20.511330 2700 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Nov 4 12:20:20.511385 kubelet[2700]: I1104 12:20:20.511357 2700 volume_manager.go:313] "Starting Kubelet Volume Manager"
Nov 4 12:20:20.511541 kubelet[2700]: I1104 12:20:20.511522 2700 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Nov 4 12:20:20.511783 kubelet[2700]: I1104 12:20:20.511758 2700 reconciler.go:29] "Reconciler: start to sync state"
Nov 4 12:20:20.515574 kubelet[2700]: I1104 12:20:20.515548 2700 factory.go:223] Registration of the systemd container factory successfully
Nov 4 12:20:20.515821 kubelet[2700]: I1104 12:20:20.515788 2700 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Nov 4 12:20:20.518093 kubelet[2700]: I1104 12:20:20.517234 2700 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4"
Nov 4 12:20:20.518093 kubelet[2700]: I1104 12:20:20.518026 2700 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6"
Nov 4 12:20:20.518093 kubelet[2700]: I1104 12:20:20.518041 2700 status_manager.go:244] "Starting to sync pod status with apiserver"
Nov 4 12:20:20.518093 kubelet[2700]: I1104 12:20:20.518058 2700 kubelet.go:2427] "Starting kubelet main sync loop"
Nov 4 12:20:20.518220 kubelet[2700]: E1104 12:20:20.518117 2700 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Nov 4 12:20:20.520629 kubelet[2700]: I1104 12:20:20.520609 2700 factory.go:223] Registration of the containerd container factory successfully
Nov 4 12:20:20.558759 kubelet[2700]: I1104 12:20:20.558732 2700 cpu_manager.go:221] "Starting CPU manager" policy="none"
Nov 4 12:20:20.558759 kubelet[2700]: I1104 12:20:20.558752 2700 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Nov 4 12:20:20.559129 kubelet[2700]: I1104 12:20:20.558773 2700 state_mem.go:36] "Initialized new in-memory state store"
Nov 4 12:20:20.559129 kubelet[2700]: I1104 12:20:20.558880 2700 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Nov 4 12:20:20.559129 kubelet[2700]: I1104 12:20:20.558889 2700 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Nov 4 12:20:20.559129 kubelet[2700]: I1104 12:20:20.558904 2700 policy_none.go:49] "None policy: Start"
Nov 4 12:20:20.559129 kubelet[2700]: I1104 12:20:20.558930 2700 memory_manager.go:187] "Starting memorymanager" policy="None"
Nov 4 12:20:20.559129 kubelet[2700]: I1104 12:20:20.558940 2700 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint"
Nov 4 12:20:20.559129 kubelet[2700]: I1104 12:20:20.559024 2700 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint"
Nov 4 12:20:20.559129 kubelet[2700]: I1104 12:20:20.559032 2700 policy_none.go:47] "Start"
Nov 4 12:20:20.563024 kubelet[2700]: E1104 12:20:20.562983 2700 manager.go:513] "Failed to read data from checkpoint"
err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Nov 4 12:20:20.563475 kubelet[2700]: I1104 12:20:20.563441 2700 eviction_manager.go:189] "Eviction manager: starting control loop"
Nov 4 12:20:20.563535 kubelet[2700]: I1104 12:20:20.563464 2700 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Nov 4 12:20:20.563697 kubelet[2700]: I1104 12:20:20.563680 2700 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Nov 4 12:20:20.564772 kubelet[2700]: E1104 12:20:20.564719 2700 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Nov 4 12:20:20.620478 kubelet[2700]: I1104 12:20:20.620426 2700 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Nov 4 12:20:20.621682 kubelet[2700]: I1104 12:20:20.621653 2700 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Nov 4 12:20:20.621761 kubelet[2700]: I1104 12:20:20.621702 2700 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Nov 4 12:20:20.668074 kubelet[2700]: I1104 12:20:20.668026 2700 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Nov 4 12:20:20.673525 kubelet[2700]: I1104 12:20:20.673501 2700 kubelet_node_status.go:124] "Node was previously registered" node="localhost"
Nov 4 12:20:20.673605 kubelet[2700]: I1104 12:20:20.673566 2700 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Nov 4 12:20:20.813936 kubelet[2700]: I1104 12:20:20.813752 2700 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost"
Nov 4 12:20:20.813936 kubelet[2700]: I1104 12:20:20.813870 2700 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/72ae43bf624d285361487631af8a6ba6-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"72ae43bf624d285361487631af8a6ba6\") " pod="kube-system/kube-scheduler-localhost"
Nov 4 12:20:20.814671 kubelet[2700]: I1104 12:20:20.813910 2700 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c81b20bfd83bd65029d67ba85a99a38d-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"c81b20bfd83bd65029d67ba85a99a38d\") " pod="kube-system/kube-apiserver-localhost"
Nov 4 12:20:20.814671 kubelet[2700]: I1104 12:20:20.813983 2700 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c81b20bfd83bd65029d67ba85a99a38d-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"c81b20bfd83bd65029d67ba85a99a38d\") " pod="kube-system/kube-apiserver-localhost"
Nov 4 12:20:20.814671 kubelet[2700]: I1104 12:20:20.814011 2700 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost"
Nov 4 12:20:20.814671 kubelet[2700]: I1104 12:20:20.814037 2700 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c81b20bfd83bd65029d67ba85a99a38d-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"c81b20bfd83bd65029d67ba85a99a38d\") " pod="kube-system/kube-apiserver-localhost"
Nov 4 12:20:20.814671 kubelet[2700]: I1104 12:20:20.814059 2700 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost"
Nov 4 12:20:20.814861 kubelet[2700]: I1104 12:20:20.814143 2700 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost"
Nov 4 12:20:20.814861 kubelet[2700]: I1104 12:20:20.814223 2700 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost"
Nov 4 12:20:20.928176 kubelet[2700]: E1104 12:20:20.928132 2700 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 4 12:20:20.928305 kubelet[2700]: E1104 12:20:20.928132 2700 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 4 12:20:20.928784 kubelet[2700]: E1104 12:20:20.928690 2700 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 4 12:20:21.503847 kubelet[2700]: I1104
12:20:21.503810 2700 apiserver.go:52] "Watching apiserver"
Nov 4 12:20:21.547750 kubelet[2700]: I1104 12:20:21.547674 2700 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Nov 4 12:20:21.550860 kubelet[2700]: I1104 12:20:21.548167 2700 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Nov 4 12:20:21.550860 kubelet[2700]: I1104 12:20:21.548250 2700 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Nov 4 12:20:21.553702 kubelet[2700]: E1104 12:20:21.553408 2700 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Nov 4 12:20:21.553970 kubelet[2700]: E1104 12:20:21.553934 2700 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 4 12:20:21.554413 kubelet[2700]: E1104 12:20:21.554333 2700 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost"
Nov 4 12:20:21.554616 kubelet[2700]: E1104 12:20:21.554587 2700 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 4 12:20:21.556363 kubelet[2700]: E1104 12:20:21.556334 2700 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost"
Nov 4 12:20:21.556522 kubelet[2700]: E1104 12:20:21.556498 2700 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 4 12:20:21.575869 kubelet[2700]: I1104 12:20:21.575803 2700 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.575769024 podStartE2EDuration="1.575769024s" podCreationTimestamp="2025-11-04 12:20:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-04 12:20:21.573544922 +0000 UTC m=+1.127053822" watchObservedRunningTime="2025-11-04 12:20:21.575769024 +0000 UTC m=+1.129277884"
Nov 4 12:20:21.587616 kubelet[2700]: I1104 12:20:21.587484 2700 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.5874700480000001 podStartE2EDuration="1.587470048s" podCreationTimestamp="2025-11-04 12:20:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-04 12:20:21.580789906 +0000 UTC m=+1.134298726" watchObservedRunningTime="2025-11-04 12:20:21.587470048 +0000 UTC m=+1.140978908"
Nov 4 12:20:21.596471 kubelet[2700]: I1104 12:20:21.596356 2700 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.596342306 podStartE2EDuration="1.596342306s" podCreationTimestamp="2025-11-04 12:20:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-04 12:20:21.587998066 +0000 UTC m=+1.141506926" watchObservedRunningTime="2025-11-04 12:20:21.596342306 +0000 UTC m=+1.149851166"
Nov 4 12:20:21.612095 kubelet[2700]: I1104 12:20:21.612042 2700 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Nov 4 12:20:22.549254 kubelet[2700]: E1104 12:20:22.548889 2700 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 4 12:20:22.549254 kubelet[2700]: E1104 12:20:22.549050 2700 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 4 12:20:22.549871 kubelet[2700]: E1104 12:20:22.549846 2700 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 4 12:20:23.550805 kubelet[2700]: E1104 12:20:23.550742 2700 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 4 12:20:27.234534 kubelet[2700]: I1104 12:20:27.234377 2700 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Nov 4 12:20:27.234838 containerd[1562]: time="2025-11-04T12:20:27.234747658Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Nov 4 12:20:27.235029 kubelet[2700]: I1104 12:20:27.234905 2700 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Nov 4 12:20:27.713028 kubelet[2700]: E1104 12:20:27.712982 2700 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 4 12:20:28.393126 systemd[1]: Created slice kubepods-besteffort-podb27b4c35_9cbe_433b_8bd7_80aad8b7b388.slice - libcontainer container kubepods-besteffort-podb27b4c35_9cbe_433b_8bd7_80aad8b7b388.slice.
Nov 4 12:20:28.462581 kubelet[2700]: I1104 12:20:28.462370 2700 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/b27b4c35-9cbe-433b-8bd7-80aad8b7b388-kube-proxy\") pod \"kube-proxy-mvfqb\" (UID: \"b27b4c35-9cbe-433b-8bd7-80aad8b7b388\") " pod="kube-system/kube-proxy-mvfqb"
Nov 4 12:20:28.462581 kubelet[2700]: I1104 12:20:28.462407 2700 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b27b4c35-9cbe-433b-8bd7-80aad8b7b388-xtables-lock\") pod \"kube-proxy-mvfqb\" (UID: \"b27b4c35-9cbe-433b-8bd7-80aad8b7b388\") " pod="kube-system/kube-proxy-mvfqb"
Nov 4 12:20:28.462581 kubelet[2700]: I1104 12:20:28.462424 2700 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b27b4c35-9cbe-433b-8bd7-80aad8b7b388-lib-modules\") pod \"kube-proxy-mvfqb\" (UID: \"b27b4c35-9cbe-433b-8bd7-80aad8b7b388\") " pod="kube-system/kube-proxy-mvfqb"
Nov 4 12:20:28.462581 kubelet[2700]: I1104 12:20:28.462439 2700 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g468d\" (UniqueName: \"kubernetes.io/projected/b27b4c35-9cbe-433b-8bd7-80aad8b7b388-kube-api-access-g468d\") pod \"kube-proxy-mvfqb\" (UID: \"b27b4c35-9cbe-433b-8bd7-80aad8b7b388\") " pod="kube-system/kube-proxy-mvfqb"
Nov 4 12:20:28.497352 systemd[1]: Created slice kubepods-besteffort-podb8ecec32_a317_4e58_ae1d_5a7bf4db9b8e.slice - libcontainer container kubepods-besteffort-podb8ecec32_a317_4e58_ae1d_5a7bf4db9b8e.slice.
Nov 4 12:20:28.558922 kubelet[2700]: E1104 12:20:28.558774 2700 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 4 12:20:28.563072 kubelet[2700]: I1104 12:20:28.563009 2700 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mcxsq\" (UniqueName: \"kubernetes.io/projected/b8ecec32-a317-4e58-ae1d-5a7bf4db9b8e-kube-api-access-mcxsq\") pod \"tigera-operator-65cdcdfd6d-r8fzn\" (UID: \"b8ecec32-a317-4e58-ae1d-5a7bf4db9b8e\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-r8fzn"
Nov 4 12:20:28.563308 kubelet[2700]: I1104 12:20:28.563251 2700 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/b8ecec32-a317-4e58-ae1d-5a7bf4db9b8e-var-lib-calico\") pod \"tigera-operator-65cdcdfd6d-r8fzn\" (UID: \"b8ecec32-a317-4e58-ae1d-5a7bf4db9b8e\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-r8fzn"
Nov 4 12:20:28.708437 kubelet[2700]: E1104 12:20:28.708332 2700 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 4 12:20:28.710590 containerd[1562]: time="2025-11-04T12:20:28.710553074Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-mvfqb,Uid:b27b4c35-9cbe-433b-8bd7-80aad8b7b388,Namespace:kube-system,Attempt:0,}"
Nov 4 12:20:28.725547 containerd[1562]: time="2025-11-04T12:20:28.725504647Z" level=info msg="connecting to shim d8352e6dd9da127e267f087164dc21708365ad67f39afc93f28a5a55a9406a56" address="unix:///run/containerd/s/ef84396ed10cb2fadb2bb8f8e5b53f3c33c2f2daca26fdc19a96b99119e0133e" namespace=k8s.io protocol=ttrpc version=3
Nov 4 12:20:28.754255 systemd[1]: Started cri-containerd-d8352e6dd9da127e267f087164dc21708365ad67f39afc93f28a5a55a9406a56.scope - libcontainer container d8352e6dd9da127e267f087164dc21708365ad67f39afc93f28a5a55a9406a56.
Nov 4 12:20:28.775669 containerd[1562]: time="2025-11-04T12:20:28.775632539Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-mvfqb,Uid:b27b4c35-9cbe-433b-8bd7-80aad8b7b388,Namespace:kube-system,Attempt:0,} returns sandbox id \"d8352e6dd9da127e267f087164dc21708365ad67f39afc93f28a5a55a9406a56\""
Nov 4 12:20:28.776317 kubelet[2700]: E1104 12:20:28.776292 2700 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 4 12:20:28.781437 containerd[1562]: time="2025-11-04T12:20:28.781390787Z" level=info msg="CreateContainer within sandbox \"d8352e6dd9da127e267f087164dc21708365ad67f39afc93f28a5a55a9406a56\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Nov 4 12:20:28.791212 containerd[1562]: time="2025-11-04T12:20:28.791170625Z" level=info msg="Container 01d1a33ec015f7c4f04451a744604329ad4f6bbcacbcf6a2a6e6e1f31b2a8063: CDI devices from CRI Config.CDIDevices: []"
Nov 4 12:20:28.799257 containerd[1562]: time="2025-11-04T12:20:28.799218684Z" level=info msg="CreateContainer within sandbox \"d8352e6dd9da127e267f087164dc21708365ad67f39afc93f28a5a55a9406a56\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"01d1a33ec015f7c4f04451a744604329ad4f6bbcacbcf6a2a6e6e1f31b2a8063\""
Nov 4 12:20:28.800099 containerd[1562]: time="2025-11-04T12:20:28.800022354Z" level=info msg="StartContainer for \"01d1a33ec015f7c4f04451a744604329ad4f6bbcacbcf6a2a6e6e1f31b2a8063\""
Nov 4 12:20:28.801275 containerd[1562]: time="2025-11-04T12:20:28.801242979Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-r8fzn,Uid:b8ecec32-a317-4e58-ae1d-5a7bf4db9b8e,Namespace:tigera-operator,Attempt:0,}"
Nov 4 12:20:28.802197 containerd[1562]: time="2025-11-04T12:20:28.802169287Z" level=info msg="connecting to shim 01d1a33ec015f7c4f04451a744604329ad4f6bbcacbcf6a2a6e6e1f31b2a8063" address="unix:///run/containerd/s/ef84396ed10cb2fadb2bb8f8e5b53f3c33c2f2daca26fdc19a96b99119e0133e" protocol=ttrpc version=3
Nov 4 12:20:28.822790 containerd[1562]: time="2025-11-04T12:20:28.822748869Z" level=info msg="connecting to shim 3d680b21bb178d261d976e703c286b2af7c006aad66e479120cc763ef5e4e675" address="unix:///run/containerd/s/7e76346c3ab80714de7e89bc2f76e66339fc4b851fc75138c990951185500b9a" namespace=k8s.io protocol=ttrpc version=3
Nov 4 12:20:28.825392 systemd[1]: Started cri-containerd-01d1a33ec015f7c4f04451a744604329ad4f6bbcacbcf6a2a6e6e1f31b2a8063.scope - libcontainer container 01d1a33ec015f7c4f04451a744604329ad4f6bbcacbcf6a2a6e6e1f31b2a8063.
Nov 4 12:20:28.846251 systemd[1]: Started cri-containerd-3d680b21bb178d261d976e703c286b2af7c006aad66e479120cc763ef5e4e675.scope - libcontainer container 3d680b21bb178d261d976e703c286b2af7c006aad66e479120cc763ef5e4e675.
Nov 4 12:20:28.868214 containerd[1562]: time="2025-11-04T12:20:28.868168101Z" level=info msg="StartContainer for \"01d1a33ec015f7c4f04451a744604329ad4f6bbcacbcf6a2a6e6e1f31b2a8063\" returns successfully"
Nov 4 12:20:28.883684 containerd[1562]: time="2025-11-04T12:20:28.883638947Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-r8fzn,Uid:b8ecec32-a317-4e58-ae1d-5a7bf4db9b8e,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"3d680b21bb178d261d976e703c286b2af7c006aad66e479120cc763ef5e4e675\""
Nov 4 12:20:28.885914 containerd[1562]: time="2025-11-04T12:20:28.885883799Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\""
Nov 4 12:20:29.562550 kubelet[2700]: E1104 12:20:29.562519 2700 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 4 12:20:29.729596 kubelet[2700]: E1104 12:20:29.729548 2700 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 4 12:20:29.744615 kubelet[2700]: I1104 12:20:29.744548 2700 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-mvfqb" podStartSLOduration=1.7445313919999998 podStartE2EDuration="1.744531392s" podCreationTimestamp="2025-11-04 12:20:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-04 12:20:29.572020395 +0000 UTC m=+9.125529295" watchObservedRunningTime="2025-11-04 12:20:29.744531392 +0000 UTC m=+9.298040252"
Nov 4 12:20:30.353208 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount494697227.mount: Deactivated successfully.
Nov 4 12:20:30.564645 kubelet[2700]: E1104 12:20:30.564604 2700 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 4 12:20:30.884744 containerd[1562]: time="2025-11-04T12:20:30.884357809Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 4 12:20:30.885147 containerd[1562]: time="2025-11-04T12:20:30.885120960Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=22152004"
Nov 4 12:20:30.885963 containerd[1562]: time="2025-11-04T12:20:30.885937191Z" level=info msg="ImageCreate event name:\"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 4 12:20:30.888620 containerd[1562]: time="2025-11-04T12:20:30.888140046Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 4 12:20:30.896187 containerd[1562]: time="2025-11-04T12:20:30.896148277Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"22147999\" in 2.010223078s"
Nov 4 12:20:30.896313 containerd[1562]: time="2025-11-04T12:20:30.896293955Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\""
Nov 4 12:20:30.905860 containerd[1562]: time="2025-11-04T12:20:30.905826208Z" level=info msg="CreateContainer within sandbox \"3d680b21bb178d261d976e703c286b2af7c006aad66e479120cc763ef5e4e675\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Nov 4 12:20:30.944773 containerd[1562]: time="2025-11-04T12:20:30.944009820Z" level=info msg="Container d83d03db2ffd8efdbc56578ed35c28df174758b88a5826a6d8fa8be3628a92d6: CDI devices from CRI Config.CDIDevices: []"
Nov 4 12:20:30.944580 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1457334241.mount: Deactivated successfully.
Nov 4 12:20:30.951548 containerd[1562]: time="2025-11-04T12:20:30.951392577Z" level=info msg="CreateContainer within sandbox \"3d680b21bb178d261d976e703c286b2af7c006aad66e479120cc763ef5e4e675\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"d83d03db2ffd8efdbc56578ed35c28df174758b88a5826a6d8fa8be3628a92d6\""
Nov 4 12:20:30.952585 containerd[1562]: time="2025-11-04T12:20:30.952554604Z" level=info msg="StartContainer for \"d83d03db2ffd8efdbc56578ed35c28df174758b88a5826a6d8fa8be3628a92d6\""
Nov 4 12:20:30.953629 containerd[1562]: time="2025-11-04T12:20:30.953602232Z" level=info msg="connecting to shim d83d03db2ffd8efdbc56578ed35c28df174758b88a5826a6d8fa8be3628a92d6" address="unix:///run/containerd/s/7e76346c3ab80714de7e89bc2f76e66339fc4b851fc75138c990951185500b9a" protocol=ttrpc version=3
Nov 4 12:20:31.004312 systemd[1]: Started cri-containerd-d83d03db2ffd8efdbc56578ed35c28df174758b88a5826a6d8fa8be3628a92d6.scope - libcontainer container d83d03db2ffd8efdbc56578ed35c28df174758b88a5826a6d8fa8be3628a92d6.
Nov 4 12:20:31.032021 containerd[1562]: time="2025-11-04T12:20:31.031970932Z" level=info msg="StartContainer for \"d83d03db2ffd8efdbc56578ed35c28df174758b88a5826a6d8fa8be3628a92d6\" returns successfully"
Nov 4 12:20:31.567588 kubelet[2700]: E1104 12:20:31.567548 2700 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 4 12:20:31.577653 kubelet[2700]: I1104 12:20:31.577602 2700 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-65cdcdfd6d-r8fzn" podStartSLOduration=1.565818997 podStartE2EDuration="3.577588297s" podCreationTimestamp="2025-11-04 12:20:28 +0000 UTC" firstStartedPulling="2025-11-04 12:20:28.885386805 +0000 UTC m=+8.438895665" lastFinishedPulling="2025-11-04 12:20:30.897156105 +0000 UTC m=+10.450664965" observedRunningTime="2025-11-04 12:20:31.577194902 +0000 UTC m=+11.130703762" watchObservedRunningTime="2025-11-04 12:20:31.577588297 +0000 UTC m=+11.131097157"
Nov 4 12:20:32.462827 kubelet[2700]: E1104 12:20:32.462616 2700 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 4 12:20:36.350646 sudo[1775]: pam_unix(sudo:session): session closed for user root
Nov 4 12:20:36.352937 sshd[1774]: Connection closed by 10.0.0.1 port 46370
Nov 4 12:20:36.353501 sshd-session[1771]: pam_unix(sshd:session): session closed for user core
Nov 4 12:20:36.357313 systemd[1]: sshd@6-10.0.0.73:22-10.0.0.1:46370.service: Deactivated successfully.
Nov 4 12:20:36.359760 systemd[1]: session-7.scope: Deactivated successfully.
Nov 4 12:20:36.360108 systemd[1]: session-7.scope: Consumed 6.687s CPU time, 211.3M memory peak.
Nov 4 12:20:36.361490 systemd-logind[1542]: Session 7 logged out. Waiting for processes to exit.
Nov 4 12:20:36.363918 systemd-logind[1542]: Removed session 7.
Nov 4 12:20:36.869193 update_engine[1544]: I20251104 12:20:36.869109 1544 update_attempter.cc:509] Updating boot flags... Nov 4 12:20:43.915946 systemd[1]: Created slice kubepods-besteffort-pod6ca4d886_1168_4605_b5bf_936666e70163.slice - libcontainer container kubepods-besteffort-pod6ca4d886_1168_4605_b5bf_936666e70163.slice. Nov 4 12:20:43.955347 kubelet[2700]: I1104 12:20:43.955295 2700 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6ca4d886-1168-4605-b5bf-936666e70163-tigera-ca-bundle\") pod \"calico-typha-849f9b664-l2zmx\" (UID: \"6ca4d886-1168-4605-b5bf-936666e70163\") " pod="calico-system/calico-typha-849f9b664-l2zmx" Nov 4 12:20:43.955347 kubelet[2700]: I1104 12:20:43.955345 2700 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nfrbf\" (UniqueName: \"kubernetes.io/projected/6ca4d886-1168-4605-b5bf-936666e70163-kube-api-access-nfrbf\") pod \"calico-typha-849f9b664-l2zmx\" (UID: \"6ca4d886-1168-4605-b5bf-936666e70163\") " pod="calico-system/calico-typha-849f9b664-l2zmx" Nov 4 12:20:43.955746 kubelet[2700]: I1104 12:20:43.955364 2700 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/6ca4d886-1168-4605-b5bf-936666e70163-typha-certs\") pod \"calico-typha-849f9b664-l2zmx\" (UID: \"6ca4d886-1168-4605-b5bf-936666e70163\") " pod="calico-system/calico-typha-849f9b664-l2zmx" Nov 4 12:20:44.109238 systemd[1]: Created slice kubepods-besteffort-poda1f47bfc_2ac3_4a20_811e_65757211059e.slice - libcontainer container kubepods-besteffort-poda1f47bfc_2ac3_4a20_811e_65757211059e.slice. 
Nov 4 12:20:44.156625 kubelet[2700]: I1104 12:20:44.156582 2700 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/a1f47bfc-2ac3-4a20-811e-65757211059e-policysync\") pod \"calico-node-db9pn\" (UID: \"a1f47bfc-2ac3-4a20-811e-65757211059e\") " pod="calico-system/calico-node-db9pn" Nov 4 12:20:44.156625 kubelet[2700]: I1104 12:20:44.156632 2700 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bk88z\" (UniqueName: \"kubernetes.io/projected/a1f47bfc-2ac3-4a20-811e-65757211059e-kube-api-access-bk88z\") pod \"calico-node-db9pn\" (UID: \"a1f47bfc-2ac3-4a20-811e-65757211059e\") " pod="calico-system/calico-node-db9pn" Nov 4 12:20:44.156796 kubelet[2700]: I1104 12:20:44.156651 2700 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/a1f47bfc-2ac3-4a20-811e-65757211059e-cni-bin-dir\") pod \"calico-node-db9pn\" (UID: \"a1f47bfc-2ac3-4a20-811e-65757211059e\") " pod="calico-system/calico-node-db9pn" Nov 4 12:20:44.156796 kubelet[2700]: I1104 12:20:44.156666 2700 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a1f47bfc-2ac3-4a20-811e-65757211059e-lib-modules\") pod \"calico-node-db9pn\" (UID: \"a1f47bfc-2ac3-4a20-811e-65757211059e\") " pod="calico-system/calico-node-db9pn" Nov 4 12:20:44.156796 kubelet[2700]: I1104 12:20:44.156695 2700 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a1f47bfc-2ac3-4a20-811e-65757211059e-tigera-ca-bundle\") pod \"calico-node-db9pn\" (UID: \"a1f47bfc-2ac3-4a20-811e-65757211059e\") " pod="calico-system/calico-node-db9pn" Nov 4 12:20:44.156796 kubelet[2700]: I1104 12:20:44.156708 2700 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a1f47bfc-2ac3-4a20-811e-65757211059e-xtables-lock\") pod \"calico-node-db9pn\" (UID: \"a1f47bfc-2ac3-4a20-811e-65757211059e\") " pod="calico-system/calico-node-db9pn" Nov 4 12:20:44.156796 kubelet[2700]: I1104 12:20:44.156723 2700 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/a1f47bfc-2ac3-4a20-811e-65757211059e-node-certs\") pod \"calico-node-db9pn\" (UID: \"a1f47bfc-2ac3-4a20-811e-65757211059e\") " pod="calico-system/calico-node-db9pn" Nov 4 12:20:44.156906 kubelet[2700]: I1104 12:20:44.156737 2700 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/a1f47bfc-2ac3-4a20-811e-65757211059e-cni-log-dir\") pod \"calico-node-db9pn\" (UID: \"a1f47bfc-2ac3-4a20-811e-65757211059e\") " pod="calico-system/calico-node-db9pn" Nov 4 12:20:44.156906 kubelet[2700]: I1104 12:20:44.156757 2700 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/a1f47bfc-2ac3-4a20-811e-65757211059e-flexvol-driver-host\") pod \"calico-node-db9pn\" (UID: \"a1f47bfc-2ac3-4a20-811e-65757211059e\") " pod="calico-system/calico-node-db9pn" Nov 4 12:20:44.156906 kubelet[2700]: I1104 12:20:44.156773 2700 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/a1f47bfc-2ac3-4a20-811e-65757211059e-var-run-calico\") pod \"calico-node-db9pn\" (UID: \"a1f47bfc-2ac3-4a20-811e-65757211059e\") " pod="calico-system/calico-node-db9pn" Nov 4 12:20:44.156906 kubelet[2700]: I1104 12:20:44.156790 2700 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/a1f47bfc-2ac3-4a20-811e-65757211059e-var-lib-calico\") pod \"calico-node-db9pn\" (UID: \"a1f47bfc-2ac3-4a20-811e-65757211059e\") " pod="calico-system/calico-node-db9pn" Nov 4 12:20:44.156906 kubelet[2700]: I1104 12:20:44.156810 2700 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/a1f47bfc-2ac3-4a20-811e-65757211059e-cni-net-dir\") pod \"calico-node-db9pn\" (UID: \"a1f47bfc-2ac3-4a20-811e-65757211059e\") " pod="calico-system/calico-node-db9pn" Nov 4 12:20:44.225469 kubelet[2700]: E1104 12:20:44.224890 2700 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 12:20:44.226385 containerd[1562]: time="2025-11-04T12:20:44.226341843Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-849f9b664-l2zmx,Uid:6ca4d886-1168-4605-b5bf-936666e70163,Namespace:calico-system,Attempt:0,}" Nov 4 12:20:44.268415 kubelet[2700]: E1104 12:20:44.267768 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 12:20:44.269144 kubelet[2700]: W1104 12:20:44.268983 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 12:20:44.269144 kubelet[2700]: E1104 12:20:44.269026 2700 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 12:20:44.280562 kubelet[2700]: E1104 12:20:44.280519 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 12:20:44.280562 kubelet[2700]: W1104 12:20:44.280543 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 12:20:44.280674 kubelet[2700]: E1104 12:20:44.280580 2700 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 12:20:44.283807 containerd[1562]: time="2025-11-04T12:20:44.283690283Z" level=info msg="connecting to shim 5bba698f8aca64e9933af9b4a24ebf2cf5deef682cf5c2764da55068ffedeb5e" address="unix:///run/containerd/s/f80b93da04b8dddd4793c07e00d786af6e505a80a420ea74de2db375d35ebc19" namespace=k8s.io protocol=ttrpc version=3 Nov 4 12:20:44.331037 kubelet[2700]: E1104 12:20:44.330970 2700 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mvvz6" podUID="5213a2cb-c20a-4f3b-8d44-0dd43d58dc01" Nov 4 12:20:44.334355 systemd[1]: Started cri-containerd-5bba698f8aca64e9933af9b4a24ebf2cf5deef682cf5c2764da55068ffedeb5e.scope - libcontainer container 5bba698f8aca64e9933af9b4a24ebf2cf5deef682cf5c2764da55068ffedeb5e. 
Nov 4 12:20:44.352950 kubelet[2700]: E1104 12:20:44.352922 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 12:20:44.353650 kubelet[2700]: W1104 12:20:44.353621 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 12:20:44.353710 kubelet[2700]: E1104 12:20:44.353656 2700 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 12:20:44.353866 kubelet[2700]: E1104 12:20:44.353848 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 12:20:44.353900 kubelet[2700]: W1104 12:20:44.353859 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 12:20:44.353923 kubelet[2700]: E1104 12:20:44.353899 2700 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 12:20:44.354043 kubelet[2700]: E1104 12:20:44.354028 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 12:20:44.354043 kubelet[2700]: W1104 12:20:44.354038 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 12:20:44.354166 kubelet[2700]: E1104 12:20:44.354047 2700 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 12:20:44.354230 kubelet[2700]: E1104 12:20:44.354213 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 12:20:44.354258 kubelet[2700]: W1104 12:20:44.354238 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 12:20:44.354258 kubelet[2700]: E1104 12:20:44.354247 2700 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 12:20:44.354408 kubelet[2700]: E1104 12:20:44.354391 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 12:20:44.354408 kubelet[2700]: W1104 12:20:44.354403 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 12:20:44.354465 kubelet[2700]: E1104 12:20:44.354411 2700 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 12:20:44.354544 kubelet[2700]: E1104 12:20:44.354529 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 12:20:44.354544 kubelet[2700]: W1104 12:20:44.354538 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 12:20:44.354593 kubelet[2700]: E1104 12:20:44.354546 2700 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 12:20:44.354673 kubelet[2700]: E1104 12:20:44.354658 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 12:20:44.354673 kubelet[2700]: W1104 12:20:44.354668 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 12:20:44.354728 kubelet[2700]: E1104 12:20:44.354675 2700 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 12:20:44.354839 kubelet[2700]: E1104 12:20:44.354822 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 12:20:44.354839 kubelet[2700]: W1104 12:20:44.354834 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 12:20:44.354890 kubelet[2700]: E1104 12:20:44.354843 2700 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 12:20:44.354998 kubelet[2700]: E1104 12:20:44.354982 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 12:20:44.354998 kubelet[2700]: W1104 12:20:44.354995 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 12:20:44.355054 kubelet[2700]: E1104 12:20:44.355004 2700 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 12:20:44.355148 kubelet[2700]: E1104 12:20:44.355134 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 12:20:44.355148 kubelet[2700]: W1104 12:20:44.355144 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 12:20:44.355217 kubelet[2700]: E1104 12:20:44.355151 2700 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 12:20:44.355293 kubelet[2700]: E1104 12:20:44.355278 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 12:20:44.355293 kubelet[2700]: W1104 12:20:44.355288 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 12:20:44.355350 kubelet[2700]: E1104 12:20:44.355295 2700 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 12:20:44.355432 kubelet[2700]: E1104 12:20:44.355416 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 12:20:44.355432 kubelet[2700]: W1104 12:20:44.355426 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 12:20:44.355432 kubelet[2700]: E1104 12:20:44.355434 2700 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 12:20:44.355573 kubelet[2700]: E1104 12:20:44.355557 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 12:20:44.355573 kubelet[2700]: W1104 12:20:44.355567 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 12:20:44.355573 kubelet[2700]: E1104 12:20:44.355574 2700 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 12:20:44.355704 kubelet[2700]: E1104 12:20:44.355690 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 12:20:44.355704 kubelet[2700]: W1104 12:20:44.355698 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 12:20:44.355704 kubelet[2700]: E1104 12:20:44.355705 2700 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 12:20:44.355835 kubelet[2700]: E1104 12:20:44.355820 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 12:20:44.355835 kubelet[2700]: W1104 12:20:44.355829 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 12:20:44.355835 kubelet[2700]: E1104 12:20:44.355836 2700 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 12:20:44.355968 kubelet[2700]: E1104 12:20:44.355953 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 12:20:44.355968 kubelet[2700]: W1104 12:20:44.355962 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 12:20:44.355968 kubelet[2700]: E1104 12:20:44.355969 2700 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 12:20:44.356146 kubelet[2700]: E1104 12:20:44.356104 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 12:20:44.356146 kubelet[2700]: W1104 12:20:44.356115 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 12:20:44.356146 kubelet[2700]: E1104 12:20:44.356122 2700 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 12:20:44.357307 kubelet[2700]: E1104 12:20:44.356241 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 12:20:44.357307 kubelet[2700]: W1104 12:20:44.356251 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 12:20:44.357307 kubelet[2700]: E1104 12:20:44.356258 2700 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 12:20:44.357307 kubelet[2700]: E1104 12:20:44.356368 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 12:20:44.357307 kubelet[2700]: W1104 12:20:44.356374 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 12:20:44.357307 kubelet[2700]: E1104 12:20:44.356381 2700 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 12:20:44.357307 kubelet[2700]: E1104 12:20:44.356493 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 12:20:44.357307 kubelet[2700]: W1104 12:20:44.356499 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 12:20:44.357307 kubelet[2700]: E1104 12:20:44.356507 2700 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 12:20:44.358861 kubelet[2700]: E1104 12:20:44.358838 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 12:20:44.358861 kubelet[2700]: W1104 12:20:44.358857 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 12:20:44.358952 kubelet[2700]: E1104 12:20:44.358870 2700 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 12:20:44.358952 kubelet[2700]: I1104 12:20:44.358913 2700 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5213a2cb-c20a-4f3b-8d44-0dd43d58dc01-kubelet-dir\") pod \"csi-node-driver-mvvz6\" (UID: \"5213a2cb-c20a-4f3b-8d44-0dd43d58dc01\") " pod="calico-system/csi-node-driver-mvvz6" Nov 4 12:20:44.359120 kubelet[2700]: E1104 12:20:44.359103 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 12:20:44.359120 kubelet[2700]: W1104 12:20:44.359117 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 12:20:44.359176 kubelet[2700]: E1104 12:20:44.359128 2700 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 12:20:44.359176 kubelet[2700]: I1104 12:20:44.359152 2700 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/5213a2cb-c20a-4f3b-8d44-0dd43d58dc01-varrun\") pod \"csi-node-driver-mvvz6\" (UID: \"5213a2cb-c20a-4f3b-8d44-0dd43d58dc01\") " pod="calico-system/csi-node-driver-mvvz6" Nov 4 12:20:44.359387 kubelet[2700]: E1104 12:20:44.359365 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 12:20:44.359410 kubelet[2700]: W1104 12:20:44.359386 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 12:20:44.359410 kubelet[2700]: E1104 12:20:44.359398 2700 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 12:20:44.359945 kubelet[2700]: E1104 12:20:44.359894 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 12:20:44.359945 kubelet[2700]: W1104 12:20:44.359911 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 12:20:44.359945 kubelet[2700]: E1104 12:20:44.359922 2700 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 12:20:44.360227 kubelet[2700]: E1104 12:20:44.360204 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 12:20:44.360227 kubelet[2700]: W1104 12:20:44.360220 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 12:20:44.360337 kubelet[2700]: E1104 12:20:44.360312 2700 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 12:20:44.360368 kubelet[2700]: I1104 12:20:44.360346 2700 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k9cs6\" (UniqueName: \"kubernetes.io/projected/5213a2cb-c20a-4f3b-8d44-0dd43d58dc01-kube-api-access-k9cs6\") pod \"csi-node-driver-mvvz6\" (UID: \"5213a2cb-c20a-4f3b-8d44-0dd43d58dc01\") " pod="calico-system/csi-node-driver-mvvz6" Nov 4 12:20:44.360580 kubelet[2700]: E1104 12:20:44.360560 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 12:20:44.360692 kubelet[2700]: W1104 12:20:44.360578 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 12:20:44.360730 kubelet[2700]: E1104 12:20:44.360692 2700 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 12:20:44.360929 kubelet[2700]: E1104 12:20:44.360913 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 12:20:44.360929 kubelet[2700]: W1104 12:20:44.360928 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 12:20:44.360990 kubelet[2700]: E1104 12:20:44.360940 2700 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 12:20:44.361240 kubelet[2700]: E1104 12:20:44.361223 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 12:20:44.361295 kubelet[2700]: W1104 12:20:44.361240 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 12:20:44.361295 kubelet[2700]: E1104 12:20:44.361252 2700 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 12:20:44.361295 kubelet[2700]: I1104 12:20:44.361285 2700 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/5213a2cb-c20a-4f3b-8d44-0dd43d58dc01-registration-dir\") pod \"csi-node-driver-mvvz6\" (UID: \"5213a2cb-c20a-4f3b-8d44-0dd43d58dc01\") " pod="calico-system/csi-node-driver-mvvz6" Nov 4 12:20:44.361501 kubelet[2700]: E1104 12:20:44.361485 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 12:20:44.361501 kubelet[2700]: W1104 12:20:44.361499 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 12:20:44.361559 kubelet[2700]: E1104 12:20:44.361516 2700 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 12:20:44.361559 kubelet[2700]: I1104 12:20:44.361535 2700 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/5213a2cb-c20a-4f3b-8d44-0dd43d58dc01-socket-dir\") pod \"csi-node-driver-mvvz6\" (UID: \"5213a2cb-c20a-4f3b-8d44-0dd43d58dc01\") " pod="calico-system/csi-node-driver-mvvz6" Nov 4 12:20:44.361871 kubelet[2700]: E1104 12:20:44.361855 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 12:20:44.361897 kubelet[2700]: W1104 12:20:44.361871 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 12:20:44.361897 kubelet[2700]: E1104 12:20:44.361895 2700 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 12:20:44.362110 kubelet[2700]: E1104 12:20:44.362096 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 12:20:44.362110 kubelet[2700]: W1104 12:20:44.362109 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 12:20:44.362159 kubelet[2700]: E1104 12:20:44.362119 2700 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 12:20:44.362402 kubelet[2700]: E1104 12:20:44.362386 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 12:20:44.362433 kubelet[2700]: W1104 12:20:44.362412 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 12:20:44.362433 kubelet[2700]: E1104 12:20:44.362423 2700 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 12:20:44.362623 kubelet[2700]: E1104 12:20:44.362610 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 12:20:44.362623 kubelet[2700]: W1104 12:20:44.362623 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 12:20:44.362676 kubelet[2700]: E1104 12:20:44.362632 2700 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 12:20:44.362931 kubelet[2700]: E1104 12:20:44.362914 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 12:20:44.362931 kubelet[2700]: W1104 12:20:44.362928 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 12:20:44.362994 kubelet[2700]: E1104 12:20:44.362938 2700 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 12:20:44.363437 kubelet[2700]: E1104 12:20:44.363419 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 12:20:44.363437 kubelet[2700]: W1104 12:20:44.363437 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 12:20:44.363504 kubelet[2700]: E1104 12:20:44.363449 2700 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 12:20:44.387780 containerd[1562]: time="2025-11-04T12:20:44.387734383Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-849f9b664-l2zmx,Uid:6ca4d886-1168-4605-b5bf-936666e70163,Namespace:calico-system,Attempt:0,} returns sandbox id \"5bba698f8aca64e9933af9b4a24ebf2cf5deef682cf5c2764da55068ffedeb5e\"" Nov 4 12:20:44.388500 kubelet[2700]: E1104 12:20:44.388451 2700 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 12:20:44.390958 containerd[1562]: time="2025-11-04T12:20:44.390928845Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Nov 4 12:20:44.413073 kubelet[2700]: E1104 12:20:44.413038 2700 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 12:20:44.414668 containerd[1562]: time="2025-11-04T12:20:44.414614393Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-db9pn,Uid:a1f47bfc-2ac3-4a20-811e-65757211059e,Namespace:calico-system,Attempt:0,}" Nov 4 12:20:44.448814 containerd[1562]: time="2025-11-04T12:20:44.448754242Z" level=info msg="connecting to shim f743ec4a3deb49bdaee6829c3216a8536812be33968fc0e349b4d1db17927715" address="unix:///run/containerd/s/80fad01f35df285b3756d53c8e426ed92c55a48a65c9b2a1cbffb485482afbc7" namespace=k8s.io protocol=ttrpc version=3 Nov 4 12:20:44.462888 kubelet[2700]: E1104 12:20:44.462862 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 12:20:44.462888 kubelet[2700]: W1104 12:20:44.462882 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 
12:20:44.463005 kubelet[2700]: E1104 12:20:44.462903 2700 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 12:20:44.463167 kubelet[2700]: E1104 12:20:44.463139 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 12:20:44.463167 kubelet[2700]: W1104 12:20:44.463148 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 12:20:44.463167 kubelet[2700]: E1104 12:20:44.463158 2700 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 12:20:44.463771 kubelet[2700]: E1104 12:20:44.463755 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 12:20:44.463974 kubelet[2700]: W1104 12:20:44.463819 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 12:20:44.463974 kubelet[2700]: E1104 12:20:44.463835 2700 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 12:20:44.464645 kubelet[2700]: E1104 12:20:44.464352 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 12:20:44.464645 kubelet[2700]: W1104 12:20:44.464573 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 12:20:44.464645 kubelet[2700]: E1104 12:20:44.464592 2700 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 12:20:44.465119 kubelet[2700]: E1104 12:20:44.465096 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 12:20:44.465218 kubelet[2700]: W1104 12:20:44.465179 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 12:20:44.465439 kubelet[2700]: E1104 12:20:44.465263 2700 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 12:20:44.466029 kubelet[2700]: E1104 12:20:44.465880 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 12:20:44.466029 kubelet[2700]: W1104 12:20:44.465896 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 12:20:44.466029 kubelet[2700]: E1104 12:20:44.465908 2700 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 12:20:44.466337 kubelet[2700]: E1104 12:20:44.466321 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 12:20:44.466466 kubelet[2700]: W1104 12:20:44.466399 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 12:20:44.466562 kubelet[2700]: E1104 12:20:44.466542 2700 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 12:20:44.467303 kubelet[2700]: E1104 12:20:44.467283 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 12:20:44.467499 kubelet[2700]: W1104 12:20:44.467377 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 12:20:44.467499 kubelet[2700]: E1104 12:20:44.467395 2700 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 12:20:44.467670 kubelet[2700]: E1104 12:20:44.467645 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 12:20:44.468179 kubelet[2700]: W1104 12:20:44.468160 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 12:20:44.468321 kubelet[2700]: E1104 12:20:44.468301 2700 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 12:20:44.468795 kubelet[2700]: E1104 12:20:44.468777 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 12:20:44.469110 kubelet[2700]: W1104 12:20:44.468982 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 12:20:44.469110 kubelet[2700]: E1104 12:20:44.469006 2700 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 12:20:44.469440 kubelet[2700]: E1104 12:20:44.469327 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 12:20:44.469440 kubelet[2700]: W1104 12:20:44.469343 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 12:20:44.469440 kubelet[2700]: E1104 12:20:44.469354 2700 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 12:20:44.470045 kubelet[2700]: E1104 12:20:44.469792 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 12:20:44.470045 kubelet[2700]: W1104 12:20:44.469822 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 12:20:44.470045 kubelet[2700]: E1104 12:20:44.469836 2700 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 12:20:44.471263 kubelet[2700]: E1104 12:20:44.471021 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 12:20:44.471263 kubelet[2700]: W1104 12:20:44.471039 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 12:20:44.471263 kubelet[2700]: E1104 12:20:44.471053 2700 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 12:20:44.471705 kubelet[2700]: E1104 12:20:44.471573 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 12:20:44.471705 kubelet[2700]: W1104 12:20:44.471589 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 12:20:44.471705 kubelet[2700]: E1104 12:20:44.471602 2700 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 12:20:44.472002 kubelet[2700]: E1104 12:20:44.471866 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 12:20:44.472002 kubelet[2700]: W1104 12:20:44.471877 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 12:20:44.472002 kubelet[2700]: E1104 12:20:44.471888 2700 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 12:20:44.472315 kubelet[2700]: E1104 12:20:44.472199 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 12:20:44.472315 kubelet[2700]: W1104 12:20:44.472214 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 12:20:44.472315 kubelet[2700]: E1104 12:20:44.472224 2700 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 12:20:44.472473 kubelet[2700]: E1104 12:20:44.472461 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 12:20:44.472546 kubelet[2700]: W1104 12:20:44.472534 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 12:20:44.472628 kubelet[2700]: E1104 12:20:44.472616 2700 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 12:20:44.474127 kubelet[2700]: E1104 12:20:44.472837 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 12:20:44.473253 systemd[1]: Started cri-containerd-f743ec4a3deb49bdaee6829c3216a8536812be33968fc0e349b4d1db17927715.scope - libcontainer container f743ec4a3deb49bdaee6829c3216a8536812be33968fc0e349b4d1db17927715. 
Nov 4 12:20:44.474265 kubelet[2700]: W1104 12:20:44.474124 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 12:20:44.474265 kubelet[2700]: E1104 12:20:44.474146 2700 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 12:20:44.474356 kubelet[2700]: E1104 12:20:44.474325 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 12:20:44.474356 kubelet[2700]: W1104 12:20:44.474334 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 12:20:44.474356 kubelet[2700]: E1104 12:20:44.474343 2700 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 12:20:44.475006 kubelet[2700]: E1104 12:20:44.474551 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 12:20:44.475006 kubelet[2700]: W1104 12:20:44.474562 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 12:20:44.475006 kubelet[2700]: E1104 12:20:44.474572 2700 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 12:20:44.475006 kubelet[2700]: E1104 12:20:44.474716 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 12:20:44.475006 kubelet[2700]: W1104 12:20:44.474727 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 12:20:44.475006 kubelet[2700]: E1104 12:20:44.474735 2700 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 12:20:44.475006 kubelet[2700]: E1104 12:20:44.474937 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 12:20:44.475006 kubelet[2700]: W1104 12:20:44.474946 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 12:20:44.475006 kubelet[2700]: E1104 12:20:44.474971 2700 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 12:20:44.475352 kubelet[2700]: E1104 12:20:44.475328 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 12:20:44.475352 kubelet[2700]: W1104 12:20:44.475343 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 12:20:44.475405 kubelet[2700]: E1104 12:20:44.475354 2700 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 12:20:44.475910 kubelet[2700]: E1104 12:20:44.475528 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 12:20:44.475910 kubelet[2700]: W1104 12:20:44.475543 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 12:20:44.475910 kubelet[2700]: E1104 12:20:44.475553 2700 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 12:20:44.476282 kubelet[2700]: E1104 12:20:44.476001 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 12:20:44.476282 kubelet[2700]: W1104 12:20:44.476016 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 12:20:44.476282 kubelet[2700]: E1104 12:20:44.476028 2700 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 12:20:44.491758 kubelet[2700]: E1104 12:20:44.491712 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 12:20:44.491758 kubelet[2700]: W1104 12:20:44.491755 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 12:20:44.491868 kubelet[2700]: E1104 12:20:44.491775 2700 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 12:20:44.500060 containerd[1562]: time="2025-11-04T12:20:44.500020476Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-db9pn,Uid:a1f47bfc-2ac3-4a20-811e-65757211059e,Namespace:calico-system,Attempt:0,} returns sandbox id \"f743ec4a3deb49bdaee6829c3216a8536812be33968fc0e349b4d1db17927715\"" Nov 4 12:20:44.500839 kubelet[2700]: E1104 12:20:44.500818 2700 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 12:20:45.413307 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2725982799.mount: Deactivated successfully. Nov 4 12:20:45.947876 containerd[1562]: time="2025-11-04T12:20:45.947817105Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 12:20:45.948749 containerd[1562]: time="2025-11-04T12:20:45.948279662Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=33090687" Nov 4 12:20:45.949500 containerd[1562]: time="2025-11-04T12:20:45.949463376Z" level=info msg="ImageCreate event name:\"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 12:20:45.951307 containerd[1562]: time="2025-11-04T12:20:45.951279326Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 12:20:45.952048 containerd[1562]: time="2025-11-04T12:20:45.951727844Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest 
\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"33090541\" in 1.560765239s" Nov 4 12:20:45.952048 containerd[1562]: time="2025-11-04T12:20:45.951751244Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\"" Nov 4 12:20:45.952931 containerd[1562]: time="2025-11-04T12:20:45.952827558Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Nov 4 12:20:45.965346 containerd[1562]: time="2025-11-04T12:20:45.965303251Z" level=info msg="CreateContainer within sandbox \"5bba698f8aca64e9933af9b4a24ebf2cf5deef682cf5c2764da55068ffedeb5e\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Nov 4 12:20:45.971814 containerd[1562]: time="2025-11-04T12:20:45.971782777Z" level=info msg="Container 96b1f5fff7d1c42f1ebf6f72ae65a1b4bafc862222b3f02cb5d026e08ff9949b: CDI devices from CRI Config.CDIDevices: []" Nov 4 12:20:45.977760 containerd[1562]: time="2025-11-04T12:20:45.977717705Z" level=info msg="CreateContainer within sandbox \"5bba698f8aca64e9933af9b4a24ebf2cf5deef682cf5c2764da55068ffedeb5e\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"96b1f5fff7d1c42f1ebf6f72ae65a1b4bafc862222b3f02cb5d026e08ff9949b\"" Nov 4 12:20:45.978477 containerd[1562]: time="2025-11-04T12:20:45.978452821Z" level=info msg="StartContainer for \"96b1f5fff7d1c42f1ebf6f72ae65a1b4bafc862222b3f02cb5d026e08ff9949b\"" Nov 4 12:20:45.979698 containerd[1562]: time="2025-11-04T12:20:45.979670015Z" level=info msg="connecting to shim 96b1f5fff7d1c42f1ebf6f72ae65a1b4bafc862222b3f02cb5d026e08ff9949b" address="unix:///run/containerd/s/f80b93da04b8dddd4793c07e00d786af6e505a80a420ea74de2db375d35ebc19" protocol=ttrpc version=3 Nov 4 12:20:45.997262 systemd[1]: Started cri-containerd-96b1f5fff7d1c42f1ebf6f72ae65a1b4bafc862222b3f02cb5d026e08ff9949b.scope - libcontainer container 
96b1f5fff7d1c42f1ebf6f72ae65a1b4bafc862222b3f02cb5d026e08ff9949b. Nov 4 12:20:46.038335 containerd[1562]: time="2025-11-04T12:20:46.038300670Z" level=info msg="StartContainer for \"96b1f5fff7d1c42f1ebf6f72ae65a1b4bafc862222b3f02cb5d026e08ff9949b\" returns successfully" Nov 4 12:20:46.521929 kubelet[2700]: E1104 12:20:46.521544 2700 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mvvz6" podUID="5213a2cb-c20a-4f3b-8d44-0dd43d58dc01" Nov 4 12:20:46.637655 kubelet[2700]: E1104 12:20:46.636739 2700 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 12:20:46.648015 kubelet[2700]: I1104 12:20:46.647897 2700 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-849f9b664-l2zmx" podStartSLOduration=2.085837479 podStartE2EDuration="3.647879272s" podCreationTimestamp="2025-11-04 12:20:43 +0000 UTC" firstStartedPulling="2025-11-04 12:20:44.390661766 +0000 UTC m=+23.944170626" lastFinishedPulling="2025-11-04 12:20:45.952703559 +0000 UTC m=+25.506212419" observedRunningTime="2025-11-04 12:20:46.646926557 +0000 UTC m=+26.200435417" watchObservedRunningTime="2025-11-04 12:20:46.647879272 +0000 UTC m=+26.201388132" Nov 4 12:20:46.676745 kubelet[2700]: E1104 12:20:46.676711 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 12:20:46.676745 kubelet[2700]: W1104 12:20:46.676738 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 12:20:46.676915 kubelet[2700]: E1104 
12:20:46.676761 2700 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 12:20:46.677291 kubelet[2700]: E1104 12:20:46.676978 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 12:20:46.677291 kubelet[2700]: W1104 12:20:46.676990 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 12:20:46.677291 kubelet[2700]: E1104 12:20:46.677050 2700 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 12:20:46.677291 kubelet[2700]: E1104 12:20:46.677211 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 12:20:46.677291 kubelet[2700]: W1104 12:20:46.677219 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 12:20:46.677291 kubelet[2700]: E1104 12:20:46.677262 2700 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 12:20:46.677594 kubelet[2700]: E1104 12:20:46.677580 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 12:20:46.677594 kubelet[2700]: W1104 12:20:46.677594 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 12:20:46.677648 kubelet[2700]: E1104 12:20:46.677605 2700 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 12:20:46.677756 kubelet[2700]: E1104 12:20:46.677744 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 12:20:46.677756 kubelet[2700]: W1104 12:20:46.677754 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 12:20:46.677809 kubelet[2700]: E1104 12:20:46.677762 2700 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 12:20:46.677882 kubelet[2700]: E1104 12:20:46.677873 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 12:20:46.677905 kubelet[2700]: W1104 12:20:46.677882 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 12:20:46.677905 kubelet[2700]: E1104 12:20:46.677889 2700 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 12:20:46.678006 kubelet[2700]: E1104 12:20:46.677997 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 12:20:46.678035 kubelet[2700]: W1104 12:20:46.678006 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 12:20:46.678035 kubelet[2700]: E1104 12:20:46.678013 2700 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 12:20:46.678143 kubelet[2700]: E1104 12:20:46.678132 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 12:20:46.678143 kubelet[2700]: W1104 12:20:46.678142 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 12:20:46.678202 kubelet[2700]: E1104 12:20:46.678159 2700 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 12:20:46.678301 kubelet[2700]: E1104 12:20:46.678290 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 12:20:46.678301 kubelet[2700]: W1104 12:20:46.678300 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 12:20:46.678350 kubelet[2700]: E1104 12:20:46.678307 2700 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 12:20:46.678429 kubelet[2700]: E1104 12:20:46.678420 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 12:20:46.678452 kubelet[2700]: W1104 12:20:46.678429 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 12:20:46.678452 kubelet[2700]: E1104 12:20:46.678437 2700 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 12:20:46.678551 kubelet[2700]: E1104 12:20:46.678541 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 12:20:46.678580 kubelet[2700]: W1104 12:20:46.678550 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 12:20:46.678580 kubelet[2700]: E1104 12:20:46.678557 2700 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 12:20:46.678677 kubelet[2700]: E1104 12:20:46.678667 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 12:20:46.678701 kubelet[2700]: W1104 12:20:46.678676 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 12:20:46.678701 kubelet[2700]: E1104 12:20:46.678684 2700 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 12:20:46.678820 kubelet[2700]: E1104 12:20:46.678810 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 12:20:46.678848 kubelet[2700]: W1104 12:20:46.678820 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 12:20:46.678848 kubelet[2700]: E1104 12:20:46.678828 2700 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 12:20:46.678952 kubelet[2700]: E1104 12:20:46.678943 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 12:20:46.678977 kubelet[2700]: W1104 12:20:46.678952 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 12:20:46.678977 kubelet[2700]: E1104 12:20:46.678959 2700 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 12:20:46.679075 kubelet[2700]: E1104 12:20:46.679067 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 12:20:46.679117 kubelet[2700]: W1104 12:20:46.679075 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 12:20:46.679117 kubelet[2700]: E1104 12:20:46.679091 2700 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 12:20:46.686918 kubelet[2700]: E1104 12:20:46.686885 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 12:20:46.686918 kubelet[2700]: W1104 12:20:46.686903 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 12:20:46.686918 kubelet[2700]: E1104 12:20:46.686917 2700 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 12:20:46.687135 kubelet[2700]: E1104 12:20:46.687097 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 12:20:46.687135 kubelet[2700]: W1104 12:20:46.687110 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 12:20:46.687135 kubelet[2700]: E1104 12:20:46.687119 2700 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 12:20:46.687306 kubelet[2700]: E1104 12:20:46.687292 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 12:20:46.687306 kubelet[2700]: W1104 12:20:46.687302 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 12:20:46.687349 kubelet[2700]: E1104 12:20:46.687311 2700 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 12:20:46.687556 kubelet[2700]: E1104 12:20:46.687538 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 12:20:46.687583 kubelet[2700]: W1104 12:20:46.687556 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 12:20:46.687583 kubelet[2700]: E1104 12:20:46.687570 2700 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 12:20:46.687744 kubelet[2700]: E1104 12:20:46.687733 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 12:20:46.687770 kubelet[2700]: W1104 12:20:46.687743 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 12:20:46.687770 kubelet[2700]: E1104 12:20:46.687752 2700 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 12:20:46.687889 kubelet[2700]: E1104 12:20:46.687879 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 12:20:46.687916 kubelet[2700]: W1104 12:20:46.687889 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 12:20:46.687916 kubelet[2700]: E1104 12:20:46.687897 2700 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 12:20:46.688125 kubelet[2700]: E1104 12:20:46.688076 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 12:20:46.688125 kubelet[2700]: W1104 12:20:46.688123 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 12:20:46.688186 kubelet[2700]: E1104 12:20:46.688134 2700 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 12:20:46.688381 kubelet[2700]: E1104 12:20:46.688364 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 12:20:46.688381 kubelet[2700]: W1104 12:20:46.688379 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 12:20:46.688431 kubelet[2700]: E1104 12:20:46.688391 2700 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 12:20:46.688547 kubelet[2700]: E1104 12:20:46.688537 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 12:20:46.688570 kubelet[2700]: W1104 12:20:46.688546 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 12:20:46.688570 kubelet[2700]: E1104 12:20:46.688554 2700 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 12:20:46.688695 kubelet[2700]: E1104 12:20:46.688684 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 12:20:46.688723 kubelet[2700]: W1104 12:20:46.688695 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 12:20:46.688723 kubelet[2700]: E1104 12:20:46.688702 2700 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 12:20:46.688841 kubelet[2700]: E1104 12:20:46.688829 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 12:20:46.688865 kubelet[2700]: W1104 12:20:46.688840 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 12:20:46.688865 kubelet[2700]: E1104 12:20:46.688848 2700 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 12:20:46.688991 kubelet[2700]: E1104 12:20:46.688980 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 12:20:46.689019 kubelet[2700]: W1104 12:20:46.688992 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 12:20:46.689019 kubelet[2700]: E1104 12:20:46.689000 2700 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 12:20:46.689174 kubelet[2700]: E1104 12:20:46.689160 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 12:20:46.689200 kubelet[2700]: W1104 12:20:46.689173 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 12:20:46.689200 kubelet[2700]: E1104 12:20:46.689182 2700 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 12:20:46.689405 kubelet[2700]: E1104 12:20:46.689390 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 12:20:46.689434 kubelet[2700]: W1104 12:20:46.689404 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 12:20:46.689434 kubelet[2700]: E1104 12:20:46.689415 2700 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 12:20:46.689544 kubelet[2700]: E1104 12:20:46.689534 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 12:20:46.689566 kubelet[2700]: W1104 12:20:46.689543 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 12:20:46.689566 kubelet[2700]: E1104 12:20:46.689553 2700 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 12:20:46.689727 kubelet[2700]: E1104 12:20:46.689716 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 12:20:46.689752 kubelet[2700]: W1104 12:20:46.689726 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 12:20:46.689752 kubelet[2700]: E1104 12:20:46.689735 2700 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 12:20:46.689951 kubelet[2700]: E1104 12:20:46.689939 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 12:20:46.689976 kubelet[2700]: W1104 12:20:46.689951 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 12:20:46.689976 kubelet[2700]: E1104 12:20:46.689960 2700 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 12:20:46.690336 kubelet[2700]: E1104 12:20:46.690321 2700 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 12:20:46.690367 kubelet[2700]: W1104 12:20:46.690335 2700 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 12:20:46.690367 kubelet[2700]: E1104 12:20:46.690345 2700 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 12:20:47.162730 containerd[1562]: time="2025-11-04T12:20:47.162489553Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 12:20:47.163128 containerd[1562]: time="2025-11-04T12:20:47.163062151Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4266741" Nov 4 12:20:47.163906 containerd[1562]: time="2025-11-04T12:20:47.163877787Z" level=info msg="ImageCreate event name:\"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 12:20:47.166111 containerd[1562]: time="2025-11-04T12:20:47.166052216Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 12:20:47.167457 containerd[1562]: time="2025-11-04T12:20:47.166610573Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5636392\" in 1.213676735s" Nov 4 12:20:47.167457 containerd[1562]: time="2025-11-04T12:20:47.166636133Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\"" Nov 4 12:20:47.169802 containerd[1562]: time="2025-11-04T12:20:47.169767158Z" level=info msg="CreateContainer within sandbox \"f743ec4a3deb49bdaee6829c3216a8536812be33968fc0e349b4d1db17927715\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Nov 4 12:20:47.181103 containerd[1562]: time="2025-11-04T12:20:47.180758584Z" level=info msg="Container 8a40f6f3e85c824696b930d0b59115ec3a8e25a4d592ac597922fa8aec7f80e7: CDI devices from CRI Config.CDIDevices: []" Nov 4 12:20:47.189267 containerd[1562]: time="2025-11-04T12:20:47.189223182Z" level=info msg="CreateContainer within sandbox \"f743ec4a3deb49bdaee6829c3216a8536812be33968fc0e349b4d1db17927715\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"8a40f6f3e85c824696b930d0b59115ec3a8e25a4d592ac597922fa8aec7f80e7\"" Nov 4 12:20:47.189923 containerd[1562]: time="2025-11-04T12:20:47.189865979Z" level=info msg="StartContainer for \"8a40f6f3e85c824696b930d0b59115ec3a8e25a4d592ac597922fa8aec7f80e7\"" Nov 4 12:20:47.191569 containerd[1562]: time="2025-11-04T12:20:47.191533451Z" level=info msg="connecting to shim 8a40f6f3e85c824696b930d0b59115ec3a8e25a4d592ac597922fa8aec7f80e7" address="unix:///run/containerd/s/80fad01f35df285b3756d53c8e426ed92c55a48a65c9b2a1cbffb485482afbc7" protocol=ttrpc version=3 Nov 4 12:20:47.216466 systemd[1]: Started cri-containerd-8a40f6f3e85c824696b930d0b59115ec3a8e25a4d592ac597922fa8aec7f80e7.scope - libcontainer container 8a40f6f3e85c824696b930d0b59115ec3a8e25a4d592ac597922fa8aec7f80e7. Nov 4 12:20:47.255733 containerd[1562]: time="2025-11-04T12:20:47.255696096Z" level=info msg="StartContainer for \"8a40f6f3e85c824696b930d0b59115ec3a8e25a4d592ac597922fa8aec7f80e7\" returns successfully" Nov 4 12:20:47.265048 systemd[1]: cri-containerd-8a40f6f3e85c824696b930d0b59115ec3a8e25a4d592ac597922fa8aec7f80e7.scope: Deactivated successfully. 
Nov 4 12:20:47.288625 containerd[1562]: time="2025-11-04T12:20:47.288301976Z" level=info msg="received exit event container_id:\"8a40f6f3e85c824696b930d0b59115ec3a8e25a4d592ac597922fa8aec7f80e7\" id:\"8a40f6f3e85c824696b930d0b59115ec3a8e25a4d592ac597922fa8aec7f80e7\" pid:3408 exited_at:{seconds:1762258847 nanos:282737564}" Nov 4 12:20:47.288625 containerd[1562]: time="2025-11-04T12:20:47.288517455Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8a40f6f3e85c824696b930d0b59115ec3a8e25a4d592ac597922fa8aec7f80e7\" id:\"8a40f6f3e85c824696b930d0b59115ec3a8e25a4d592ac597922fa8aec7f80e7\" pid:3408 exited_at:{seconds:1762258847 nanos:282737564}" Nov 4 12:20:47.318493 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8a40f6f3e85c824696b930d0b59115ec3a8e25a4d592ac597922fa8aec7f80e7-rootfs.mount: Deactivated successfully. Nov 4 12:20:47.641108 kubelet[2700]: I1104 12:20:47.640424 2700 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 4 12:20:47.641108 kubelet[2700]: E1104 12:20:47.640737 2700 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 12:20:47.649594 kubelet[2700]: E1104 12:20:47.649568 2700 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 12:20:47.651328 containerd[1562]: time="2025-11-04T12:20:47.651258356Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Nov 4 12:20:48.520145 kubelet[2700]: E1104 12:20:48.519137 2700 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mvvz6" podUID="5213a2cb-c20a-4f3b-8d44-0dd43d58dc01" Nov 4 
12:20:50.247127 containerd[1562]: time="2025-11-04T12:20:50.246866543Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 12:20:50.247628 containerd[1562]: time="2025-11-04T12:20:50.247598220Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=65925816" Nov 4 12:20:50.248423 containerd[1562]: time="2025-11-04T12:20:50.248393937Z" level=info msg="ImageCreate event name:\"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 12:20:50.250631 containerd[1562]: time="2025-11-04T12:20:50.250607487Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 12:20:50.251286 containerd[1562]: time="2025-11-04T12:20:50.251256804Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"67295507\" in 2.599955288s" Nov 4 12:20:50.251286 containerd[1562]: time="2025-11-04T12:20:50.251289044Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\"" Nov 4 12:20:50.254692 containerd[1562]: time="2025-11-04T12:20:50.254663069Z" level=info msg="CreateContainer within sandbox \"f743ec4a3deb49bdaee6829c3216a8536812be33968fc0e349b4d1db17927715\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Nov 4 12:20:50.263393 containerd[1562]: time="2025-11-04T12:20:50.263344032Z" level=info msg="Container 
ef09ad39570eca1e2e94b8ab11945143f6e600c9691a5a4d6ef6694576d12863: CDI devices from CRI Config.CDIDevices: []" Nov 4 12:20:50.265879 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1873716627.mount: Deactivated successfully. Nov 4 12:20:50.272283 containerd[1562]: time="2025-11-04T12:20:50.272234033Z" level=info msg="CreateContainer within sandbox \"f743ec4a3deb49bdaee6829c3216a8536812be33968fc0e349b4d1db17927715\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"ef09ad39570eca1e2e94b8ab11945143f6e600c9691a5a4d6ef6694576d12863\"" Nov 4 12:20:50.272816 containerd[1562]: time="2025-11-04T12:20:50.272792031Z" level=info msg="StartContainer for \"ef09ad39570eca1e2e94b8ab11945143f6e600c9691a5a4d6ef6694576d12863\"" Nov 4 12:20:50.277629 containerd[1562]: time="2025-11-04T12:20:50.277594410Z" level=info msg="connecting to shim ef09ad39570eca1e2e94b8ab11945143f6e600c9691a5a4d6ef6694576d12863" address="unix:///run/containerd/s/80fad01f35df285b3756d53c8e426ed92c55a48a65c9b2a1cbffb485482afbc7" protocol=ttrpc version=3 Nov 4 12:20:50.303277 systemd[1]: Started cri-containerd-ef09ad39570eca1e2e94b8ab11945143f6e600c9691a5a4d6ef6694576d12863.scope - libcontainer container ef09ad39570eca1e2e94b8ab11945143f6e600c9691a5a4d6ef6694576d12863. 
Nov 4 12:20:50.351102 containerd[1562]: time="2025-11-04T12:20:50.351054970Z" level=info msg="StartContainer for \"ef09ad39570eca1e2e94b8ab11945143f6e600c9691a5a4d6ef6694576d12863\" returns successfully" Nov 4 12:20:50.519978 kubelet[2700]: E1104 12:20:50.519860 2700 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mvvz6" podUID="5213a2cb-c20a-4f3b-8d44-0dd43d58dc01" Nov 4 12:20:50.652187 kubelet[2700]: E1104 12:20:50.651491 2700 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 12:20:50.859773 systemd[1]: cri-containerd-ef09ad39570eca1e2e94b8ab11945143f6e600c9691a5a4d6ef6694576d12863.scope: Deactivated successfully. Nov 4 12:20:50.860053 systemd[1]: cri-containerd-ef09ad39570eca1e2e94b8ab11945143f6e600c9691a5a4d6ef6694576d12863.scope: Consumed 462ms CPU time, 177.8M memory peak, 2.3M read from disk, 165.9M written to disk. 
Nov 4 12:20:50.872858 containerd[1562]: time="2025-11-04T12:20:50.872812462Z" level=info msg="received exit event container_id:\"ef09ad39570eca1e2e94b8ab11945143f6e600c9691a5a4d6ef6694576d12863\" id:\"ef09ad39570eca1e2e94b8ab11945143f6e600c9691a5a4d6ef6694576d12863\" pid:3467 exited_at:{seconds:1762258850 nanos:872601142}" Nov 4 12:20:50.872989 containerd[1562]: time="2025-11-04T12:20:50.872890381Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ef09ad39570eca1e2e94b8ab11945143f6e600c9691a5a4d6ef6694576d12863\" id:\"ef09ad39570eca1e2e94b8ab11945143f6e600c9691a5a4d6ef6694576d12863\" pid:3467 exited_at:{seconds:1762258850 nanos:872601142}" Nov 4 12:20:50.889803 kubelet[2700]: I1104 12:20:50.888895 2700 kubelet_node_status.go:439] "Fast updating node status as it just became ready" Nov 4 12:20:50.891519 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ef09ad39570eca1e2e94b8ab11945143f6e600c9691a5a4d6ef6694576d12863-rootfs.mount: Deactivated successfully. Nov 4 12:20:50.962616 systemd[1]: Created slice kubepods-burstable-pod84f2fa61_0710_4d49_a317_ce2fe80e0242.slice - libcontainer container kubepods-burstable-pod84f2fa61_0710_4d49_a317_ce2fe80e0242.slice. Nov 4 12:20:50.971529 systemd[1]: Created slice kubepods-besteffort-podec4bc564_6f37_4bcf_aa99_073adb5a7f1c.slice - libcontainer container kubepods-besteffort-podec4bc564_6f37_4bcf_aa99_073adb5a7f1c.slice. Nov 4 12:20:50.980619 systemd[1]: Created slice kubepods-besteffort-pod0e882339_ed27_44c5_8412_376423a1bb7c.slice - libcontainer container kubepods-besteffort-pod0e882339_ed27_44c5_8412_376423a1bb7c.slice. Nov 4 12:20:50.987039 systemd[1]: Created slice kubepods-burstable-pod2006762c_6423_4745_9e5c_3ba279f65ad7.slice - libcontainer container kubepods-burstable-pod2006762c_6423_4745_9e5c_3ba279f65ad7.slice. 
Nov 4 12:20:50.991824 systemd[1]: Created slice kubepods-besteffort-podb4ff7fc7_ff2d_4f65_af99_cb993f59efe6.slice - libcontainer container kubepods-besteffort-podb4ff7fc7_ff2d_4f65_af99_cb993f59efe6.slice.
Nov 4 12:20:50.998841 systemd[1]: Created slice kubepods-besteffort-pod13e1fa9a_e131_4fe2_8e0a_623c05fa039d.slice - libcontainer container kubepods-besteffort-pod13e1fa9a_e131_4fe2_8e0a_623c05fa039d.slice.
Nov 4 12:20:51.002623 systemd[1]: Created slice kubepods-besteffort-pod27d501a2_434d_4c01_adef_352f89d7e050.slice - libcontainer container kubepods-besteffort-pod27d501a2_434d_4c01_adef_352f89d7e050.slice.
Nov 4 12:20:51.018702 kubelet[2700]: I1104 12:20:51.018652 2700 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tlslv\" (UniqueName: \"kubernetes.io/projected/13e1fa9a-e131-4fe2-8e0a-623c05fa039d-kube-api-access-tlslv\") pod \"calico-apiserver-56dfc9fd7-8lpx9\" (UID: \"13e1fa9a-e131-4fe2-8e0a-623c05fa039d\") " pod="calico-apiserver/calico-apiserver-56dfc9fd7-8lpx9"
Nov 4 12:20:51.018840 kubelet[2700]: I1104 12:20:51.018696 2700 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/27d501a2-434d-4c01-adef-352f89d7e050-config\") pod \"goldmane-7c778bb748-42zmt\" (UID: \"27d501a2-434d-4c01-adef-352f89d7e050\") " pod="calico-system/goldmane-7c778bb748-42zmt"
Nov 4 12:20:51.018840 kubelet[2700]: I1104 12:20:51.018745 2700 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xd9l9\" (UniqueName: \"kubernetes.io/projected/ec4bc564-6f37-4bcf-aa99-073adb5a7f1c-kube-api-access-xd9l9\") pod \"calico-kube-controllers-8577ffc656-mj25s\" (UID: \"ec4bc564-6f37-4bcf-aa99-073adb5a7f1c\") " pod="calico-system/calico-kube-controllers-8577ffc656-mj25s"
Nov 4 12:20:51.018840 kubelet[2700]: I1104 12:20:51.018788 2700 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/27d501a2-434d-4c01-adef-352f89d7e050-goldmane-key-pair\") pod \"goldmane-7c778bb748-42zmt\" (UID: \"27d501a2-434d-4c01-adef-352f89d7e050\") " pod="calico-system/goldmane-7c778bb748-42zmt"
Nov 4 12:20:51.018909 kubelet[2700]: I1104 12:20:51.018835 2700 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/0e882339-ed27-44c5-8412-376423a1bb7c-whisker-backend-key-pair\") pod \"whisker-6dc6d954bb-c5q7f\" (UID: \"0e882339-ed27-44c5-8412-376423a1bb7c\") " pod="calico-system/whisker-6dc6d954bb-c5q7f"
Nov 4 12:20:51.018909 kubelet[2700]: I1104 12:20:51.018872 2700 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2006762c-6423-4745-9e5c-3ba279f65ad7-config-volume\") pod \"coredns-66bc5c9577-kzj5x\" (UID: \"2006762c-6423-4745-9e5c-3ba279f65ad7\") " pod="kube-system/coredns-66bc5c9577-kzj5x"
Nov 4 12:20:51.018982 kubelet[2700]: I1104 12:20:51.018939 2700 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xkn9q\" (UniqueName: \"kubernetes.io/projected/84f2fa61-0710-4d49-a317-ce2fe80e0242-kube-api-access-xkn9q\") pod \"coredns-66bc5c9577-dmb25\" (UID: \"84f2fa61-0710-4d49-a317-ce2fe80e0242\") " pod="kube-system/coredns-66bc5c9577-dmb25"
Nov 4 12:20:51.019015 kubelet[2700]: I1104 12:20:51.018984 2700 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ec4bc564-6f37-4bcf-aa99-073adb5a7f1c-tigera-ca-bundle\") pod \"calico-kube-controllers-8577ffc656-mj25s\" (UID: \"ec4bc564-6f37-4bcf-aa99-073adb5a7f1c\") " pod="calico-system/calico-kube-controllers-8577ffc656-mj25s"
Nov 4 12:20:51.019015 kubelet[2700]: I1104 12:20:51.019001 2700 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-67tvr\" (UniqueName: \"kubernetes.io/projected/27d501a2-434d-4c01-adef-352f89d7e050-kube-api-access-67tvr\") pod \"goldmane-7c778bb748-42zmt\" (UID: \"27d501a2-434d-4c01-adef-352f89d7e050\") " pod="calico-system/goldmane-7c778bb748-42zmt"
Nov 4 12:20:51.019646 kubelet[2700]: I1104 12:20:51.019115 2700 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/b4ff7fc7-ff2d-4f65-af99-cb993f59efe6-calico-apiserver-certs\") pod \"calico-apiserver-56dfc9fd7-xr6bn\" (UID: \"b4ff7fc7-ff2d-4f65-af99-cb993f59efe6\") " pod="calico-apiserver/calico-apiserver-56dfc9fd7-xr6bn"
Nov 4 12:20:51.019646 kubelet[2700]: I1104 12:20:51.019144 2700 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0e882339-ed27-44c5-8412-376423a1bb7c-whisker-ca-bundle\") pod \"whisker-6dc6d954bb-c5q7f\" (UID: \"0e882339-ed27-44c5-8412-376423a1bb7c\") " pod="calico-system/whisker-6dc6d954bb-c5q7f"
Nov 4 12:20:51.019646 kubelet[2700]: I1104 12:20:51.019202 2700 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t8g59\" (UniqueName: \"kubernetes.io/projected/2006762c-6423-4745-9e5c-3ba279f65ad7-kube-api-access-t8g59\") pod \"coredns-66bc5c9577-kzj5x\" (UID: \"2006762c-6423-4745-9e5c-3ba279f65ad7\") " pod="kube-system/coredns-66bc5c9577-kzj5x"
Nov 4 12:20:51.019646 kubelet[2700]: I1104 12:20:51.019243 2700 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z5xlq\" (UniqueName: \"kubernetes.io/projected/b4ff7fc7-ff2d-4f65-af99-cb993f59efe6-kube-api-access-z5xlq\") pod \"calico-apiserver-56dfc9fd7-xr6bn\" (UID: \"b4ff7fc7-ff2d-4f65-af99-cb993f59efe6\") " pod="calico-apiserver/calico-apiserver-56dfc9fd7-xr6bn"
Nov 4 12:20:51.019646 kubelet[2700]: I1104 12:20:51.019273 2700 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/13e1fa9a-e131-4fe2-8e0a-623c05fa039d-calico-apiserver-certs\") pod \"calico-apiserver-56dfc9fd7-8lpx9\" (UID: \"13e1fa9a-e131-4fe2-8e0a-623c05fa039d\") " pod="calico-apiserver/calico-apiserver-56dfc9fd7-8lpx9"
Nov 4 12:20:51.020551 kubelet[2700]: I1104 12:20:51.019290 2700 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5zhg4\" (UniqueName: \"kubernetes.io/projected/0e882339-ed27-44c5-8412-376423a1bb7c-kube-api-access-5zhg4\") pod \"whisker-6dc6d954bb-c5q7f\" (UID: \"0e882339-ed27-44c5-8412-376423a1bb7c\") " pod="calico-system/whisker-6dc6d954bb-c5q7f"
Nov 4 12:20:51.020551 kubelet[2700]: I1104 12:20:51.019326 2700 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/27d501a2-434d-4c01-adef-352f89d7e050-goldmane-ca-bundle\") pod \"goldmane-7c778bb748-42zmt\" (UID: \"27d501a2-434d-4c01-adef-352f89d7e050\") " pod="calico-system/goldmane-7c778bb748-42zmt"
Nov 4 12:20:51.020843 kubelet[2700]: I1104 12:20:51.019347 2700 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/84f2fa61-0710-4d49-a317-ce2fe80e0242-config-volume\") pod \"coredns-66bc5c9577-dmb25\" (UID: \"84f2fa61-0710-4d49-a317-ce2fe80e0242\") " pod="kube-system/coredns-66bc5c9577-dmb25"
Nov 4 12:20:51.269951 kubelet[2700]: E1104 12:20:51.269915 2700 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 4 12:20:51.270638 containerd[1562]: time="2025-11-04T12:20:51.270601776Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-dmb25,Uid:84f2fa61-0710-4d49-a317-ce2fe80e0242,Namespace:kube-system,Attempt:0,}"
Nov 4 12:20:51.278275 containerd[1562]: time="2025-11-04T12:20:51.278241904Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-8577ffc656-mj25s,Uid:ec4bc564-6f37-4bcf-aa99-073adb5a7f1c,Namespace:calico-system,Attempt:0,}"
Nov 4 12:20:51.287391 containerd[1562]: time="2025-11-04T12:20:51.287226986Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6dc6d954bb-c5q7f,Uid:0e882339-ed27-44c5-8412-376423a1bb7c,Namespace:calico-system,Attempt:0,}"
Nov 4 12:20:51.291797 kubelet[2700]: E1104 12:20:51.291759 2700 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 4 12:20:51.297990 containerd[1562]: time="2025-11-04T12:20:51.297452183Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-kzj5x,Uid:2006762c-6423-4745-9e5c-3ba279f65ad7,Namespace:kube-system,Attempt:0,}"
Nov 4 12:20:51.299419 containerd[1562]: time="2025-11-04T12:20:51.299363895Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-56dfc9fd7-xr6bn,Uid:b4ff7fc7-ff2d-4f65-af99-cb993f59efe6,Namespace:calico-apiserver,Attempt:0,}"
Nov 4 12:20:51.304445 containerd[1562]: time="2025-11-04T12:20:51.304394754Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-56dfc9fd7-8lpx9,Uid:13e1fa9a-e131-4fe2-8e0a-623c05fa039d,Namespace:calico-apiserver,Attempt:0,}"
Nov 4 12:20:51.313416 containerd[1562]: time="2025-11-04T12:20:51.313379317Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-42zmt,Uid:27d501a2-434d-4c01-adef-352f89d7e050,Namespace:calico-system,Attempt:0,}"
Nov 4 12:20:51.399434 containerd[1562]: time="2025-11-04T12:20:51.399375277Z" level=error msg="Failed to destroy network for sandbox \"60466d16aef2f8e48346557b92ff569861d1d6d5ccbc6dbf9bb58879b36b7b41\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 4 12:20:51.401137 containerd[1562]: time="2025-11-04T12:20:51.401073230Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-8577ffc656-mj25s,Uid:ec4bc564-6f37-4bcf-aa99-073adb5a7f1c,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"60466d16aef2f8e48346557b92ff569861d1d6d5ccbc6dbf9bb58879b36b7b41\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 4 12:20:51.401389 kubelet[2700]: E1104 12:20:51.401325 2700 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"60466d16aef2f8e48346557b92ff569861d1d6d5ccbc6dbf9bb58879b36b7b41\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 4 12:20:51.401468 kubelet[2700]: E1104 12:20:51.401414 2700 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"60466d16aef2f8e48346557b92ff569861d1d6d5ccbc6dbf9bb58879b36b7b41\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-8577ffc656-mj25s"
Nov 4 12:20:51.401468 kubelet[2700]: E1104 12:20:51.401434 2700 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"60466d16aef2f8e48346557b92ff569861d1d6d5ccbc6dbf9bb58879b36b7b41\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-8577ffc656-mj25s"
Nov 4 12:20:51.401516 kubelet[2700]: E1104 12:20:51.401487 2700 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-8577ffc656-mj25s_calico-system(ec4bc564-6f37-4bcf-aa99-073adb5a7f1c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-8577ffc656-mj25s_calico-system(ec4bc564-6f37-4bcf-aa99-073adb5a7f1c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"60466d16aef2f8e48346557b92ff569861d1d6d5ccbc6dbf9bb58879b36b7b41\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-8577ffc656-mj25s" podUID="ec4bc564-6f37-4bcf-aa99-073adb5a7f1c"
Nov 4 12:20:51.409886 containerd[1562]: time="2025-11-04T12:20:51.409830353Z" level=error msg="Failed to destroy network for sandbox \"56f2dacbe983866afc0bf45c585582d4fe9ce92228d4534cc49d8eb334e5c973\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 4 12:20:51.411147 containerd[1562]: time="2025-11-04T12:20:51.411078028Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-dmb25,Uid:84f2fa61-0710-4d49-a317-ce2fe80e0242,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"56f2dacbe983866afc0bf45c585582d4fe9ce92228d4534cc49d8eb334e5c973\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 4 12:20:51.412034 kubelet[2700]: E1104 12:20:51.411408 2700 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"56f2dacbe983866afc0bf45c585582d4fe9ce92228d4534cc49d8eb334e5c973\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 4 12:20:51.412034 kubelet[2700]: E1104 12:20:51.411473 2700 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"56f2dacbe983866afc0bf45c585582d4fe9ce92228d4534cc49d8eb334e5c973\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-dmb25"
Nov 4 12:20:51.412034 kubelet[2700]: E1104 12:20:51.411495 2700 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"56f2dacbe983866afc0bf45c585582d4fe9ce92228d4534cc49d8eb334e5c973\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-dmb25"
Nov 4 12:20:51.412194 kubelet[2700]: E1104 12:20:51.411546 2700 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-dmb25_kube-system(84f2fa61-0710-4d49-a317-ce2fe80e0242)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-dmb25_kube-system(84f2fa61-0710-4d49-a317-ce2fe80e0242)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"56f2dacbe983866afc0bf45c585582d4fe9ce92228d4534cc49d8eb334e5c973\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-dmb25" podUID="84f2fa61-0710-4d49-a317-ce2fe80e0242"
Nov 4 12:20:51.416930 containerd[1562]: time="2025-11-04T12:20:51.416893084Z" level=error msg="Failed to destroy network for sandbox \"ae5f6bd7df04595f2c84b58ee7d31da9603d4896c19018db6119742acb352bf4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 4 12:20:51.418251 containerd[1562]: time="2025-11-04T12:20:51.418183878Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6dc6d954bb-c5q7f,Uid:0e882339-ed27-44c5-8412-376423a1bb7c,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ae5f6bd7df04595f2c84b58ee7d31da9603d4896c19018db6119742acb352bf4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 4 12:20:51.418251 containerd[1562]: time="2025-11-04T12:20:51.418242798Z" level=error msg="Failed to destroy network for sandbox \"3e45ffe429eaa809a625ecc68dfdc5636962825f91f24e3a7b9a70679dae4e39\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 4 12:20:51.418478 kubelet[2700]: E1104 12:20:51.418437 2700 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ae5f6bd7df04595f2c84b58ee7d31da9603d4896c19018db6119742acb352bf4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 4 12:20:51.418524 kubelet[2700]: E1104 12:20:51.418493 2700 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ae5f6bd7df04595f2c84b58ee7d31da9603d4896c19018db6119742acb352bf4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6dc6d954bb-c5q7f"
Nov 4 12:20:51.418819 kubelet[2700]: E1104 12:20:51.418784 2700 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ae5f6bd7df04595f2c84b58ee7d31da9603d4896c19018db6119742acb352bf4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6dc6d954bb-c5q7f"
Nov 4 12:20:51.418904 kubelet[2700]: E1104 12:20:51.418850 2700 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-6dc6d954bb-c5q7f_calico-system(0e882339-ed27-44c5-8412-376423a1bb7c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-6dc6d954bb-c5q7f_calico-system(0e882339-ed27-44c5-8412-376423a1bb7c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ae5f6bd7df04595f2c84b58ee7d31da9603d4896c19018db6119742acb352bf4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6dc6d954bb-c5q7f" podUID="0e882339-ed27-44c5-8412-376423a1bb7c"
Nov 4 12:20:51.419218 containerd[1562]: time="2025-11-04T12:20:51.419158554Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-56dfc9fd7-8lpx9,Uid:13e1fa9a-e131-4fe2-8e0a-623c05fa039d,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"3e45ffe429eaa809a625ecc68dfdc5636962825f91f24e3a7b9a70679dae4e39\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 4 12:20:51.420180 kubelet[2700]: E1104 12:20:51.419331 2700 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3e45ffe429eaa809a625ecc68dfdc5636962825f91f24e3a7b9a70679dae4e39\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 4 12:20:51.420180 kubelet[2700]: E1104 12:20:51.419367 2700 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3e45ffe429eaa809a625ecc68dfdc5636962825f91f24e3a7b9a70679dae4e39\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-56dfc9fd7-8lpx9"
Nov 4 12:20:51.420180 kubelet[2700]: E1104 12:20:51.419382 2700 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3e45ffe429eaa809a625ecc68dfdc5636962825f91f24e3a7b9a70679dae4e39\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-56dfc9fd7-8lpx9"
Nov 4 12:20:51.420260 kubelet[2700]: E1104 12:20:51.419421 2700 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-56dfc9fd7-8lpx9_calico-apiserver(13e1fa9a-e131-4fe2-8e0a-623c05fa039d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-56dfc9fd7-8lpx9_calico-apiserver(13e1fa9a-e131-4fe2-8e0a-623c05fa039d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3e45ffe429eaa809a625ecc68dfdc5636962825f91f24e3a7b9a70679dae4e39\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-56dfc9fd7-8lpx9" podUID="13e1fa9a-e131-4fe2-8e0a-623c05fa039d"
Nov 4 12:20:51.428426 containerd[1562]: time="2025-11-04T12:20:51.428375235Z" level=error msg="Failed to destroy network for sandbox \"3f12a1c77f0399c9f56e11af454c36bda2d6ea80c8dd6cd4c7f3574b05f8ce33\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 4 12:20:51.431458 containerd[1562]: time="2025-11-04T12:20:51.431414783Z" level=error msg="Failed to destroy network for sandbox \"e9cc131ea6c3a6e2413c55543daba13482c9b3d32388fd54be54e96423d22b14\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 4 12:20:51.432022 containerd[1562]: time="2025-11-04T12:20:51.431988300Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-56dfc9fd7-xr6bn,Uid:b4ff7fc7-ff2d-4f65-af99-cb993f59efe6,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"3f12a1c77f0399c9f56e11af454c36bda2d6ea80c8dd6cd4c7f3574b05f8ce33\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 4 12:20:51.432280 kubelet[2700]: E1104 12:20:51.432212 2700 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3f12a1c77f0399c9f56e11af454c36bda2d6ea80c8dd6cd4c7f3574b05f8ce33\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 4 12:20:51.432280 kubelet[2700]: E1104 12:20:51.432273 2700 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3f12a1c77f0399c9f56e11af454c36bda2d6ea80c8dd6cd4c7f3574b05f8ce33\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-56dfc9fd7-xr6bn"
Nov 4 12:20:51.432354 kubelet[2700]: E1104 12:20:51.432291 2700 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3f12a1c77f0399c9f56e11af454c36bda2d6ea80c8dd6cd4c7f3574b05f8ce33\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-56dfc9fd7-xr6bn"
Nov 4 12:20:51.432382 kubelet[2700]: E1104 12:20:51.432342 2700 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-56dfc9fd7-xr6bn_calico-apiserver(b4ff7fc7-ff2d-4f65-af99-cb993f59efe6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-56dfc9fd7-xr6bn_calico-apiserver(b4ff7fc7-ff2d-4f65-af99-cb993f59efe6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3f12a1c77f0399c9f56e11af454c36bda2d6ea80c8dd6cd4c7f3574b05f8ce33\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-56dfc9fd7-xr6bn" podUID="b4ff7fc7-ff2d-4f65-af99-cb993f59efe6"
Nov 4 12:20:51.433032 containerd[1562]: time="2025-11-04T12:20:51.432998736Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-kzj5x,Uid:2006762c-6423-4745-9e5c-3ba279f65ad7,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e9cc131ea6c3a6e2413c55543daba13482c9b3d32388fd54be54e96423d22b14\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 4 12:20:51.433383 kubelet[2700]: E1104 12:20:51.433207 2700 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e9cc131ea6c3a6e2413c55543daba13482c9b3d32388fd54be54e96423d22b14\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 4 12:20:51.433383 kubelet[2700]: E1104 12:20:51.433242 2700 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e9cc131ea6c3a6e2413c55543daba13482c9b3d32388fd54be54e96423d22b14\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-kzj5x"
Nov 4 12:20:51.433383 kubelet[2700]: E1104 12:20:51.433258 2700 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e9cc131ea6c3a6e2413c55543daba13482c9b3d32388fd54be54e96423d22b14\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-kzj5x"
Nov 4 12:20:51.433481 kubelet[2700]: E1104 12:20:51.433297 2700 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-kzj5x_kube-system(2006762c-6423-4745-9e5c-3ba279f65ad7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-kzj5x_kube-system(2006762c-6423-4745-9e5c-3ba279f65ad7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e9cc131ea6c3a6e2413c55543daba13482c9b3d32388fd54be54e96423d22b14\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-kzj5x" podUID="2006762c-6423-4745-9e5c-3ba279f65ad7"
Nov 4 12:20:51.437626 containerd[1562]: time="2025-11-04T12:20:51.437590317Z" level=error msg="Failed to destroy network for sandbox \"342a2c665c7143927c9972508807e9532d43023ffb0dd158109a1d30a4bc4629\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 4 12:20:51.438963 containerd[1562]: time="2025-11-04T12:20:51.438929511Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-42zmt,Uid:27d501a2-434d-4c01-adef-352f89d7e050,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"342a2c665c7143927c9972508807e9532d43023ffb0dd158109a1d30a4bc4629\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 4 12:20:51.439141 kubelet[2700]: E1104 12:20:51.439113 2700 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"342a2c665c7143927c9972508807e9532d43023ffb0dd158109a1d30a4bc4629\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 4 12:20:51.439205 kubelet[2700]: E1104 12:20:51.439151 2700 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"342a2c665c7143927c9972508807e9532d43023ffb0dd158109a1d30a4bc4629\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-42zmt"
Nov 4 12:20:51.439205 kubelet[2700]: E1104 12:20:51.439170 2700 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"342a2c665c7143927c9972508807e9532d43023ffb0dd158109a1d30a4bc4629\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-42zmt"
Nov 4 12:20:51.439253 kubelet[2700]: E1104 12:20:51.439209 2700 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7c778bb748-42zmt_calico-system(27d501a2-434d-4c01-adef-352f89d7e050)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7c778bb748-42zmt_calico-system(27d501a2-434d-4c01-adef-352f89d7e050)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"342a2c665c7143927c9972508807e9532d43023ffb0dd158109a1d30a4bc4629\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7c778bb748-42zmt" podUID="27d501a2-434d-4c01-adef-352f89d7e050"
Nov 4 12:20:51.656493 kubelet[2700]: E1104 12:20:51.656457 2700 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 4 12:20:51.657966 containerd[1562]: time="2025-11-04T12:20:51.657931755Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\""
Nov 4 12:20:52.263763 systemd[1]: run-netns-cni\x2dd42e6b16\x2dc99d\x2d0a8b\x2dc12b\x2d9b2e4171bcdb.mount: Deactivated successfully.
Nov 4 12:20:52.263854 systemd[1]: run-netns-cni\x2d99cbca05\x2ddec7\x2d982d\x2d48d3\x2ddbdbfa036075.mount: Deactivated successfully.
Nov 4 12:20:52.263901 systemd[1]: run-netns-cni\x2dd03a267e\x2ddbdc\x2d9659\x2da5a4\x2d65771afc7fd8.mount: Deactivated successfully.
Nov 4 12:20:52.263948 systemd[1]: run-netns-cni\x2dd9e738b6\x2d76f4\x2d2332\x2d666b\x2d0a703c19c34e.mount: Deactivated successfully.
Nov 4 12:20:52.526858 systemd[1]: Created slice kubepods-besteffort-pod5213a2cb_c20a_4f3b_8d44_0dd43d58dc01.slice - libcontainer container kubepods-besteffort-pod5213a2cb_c20a_4f3b_8d44_0dd43d58dc01.slice.
Nov 4 12:20:52.530104 containerd[1562]: time="2025-11-04T12:20:52.530047465Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mvvz6,Uid:5213a2cb-c20a-4f3b-8d44-0dd43d58dc01,Namespace:calico-system,Attempt:0,}"
Nov 4 12:20:52.572888 containerd[1562]: time="2025-11-04T12:20:52.572753293Z" level=error msg="Failed to destroy network for sandbox \"406cf39722d686d19b8b38ad80829f1663391ec2e561945089daec8a464736d8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 4 12:20:52.574103 containerd[1562]: time="2025-11-04T12:20:52.573976528Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mvvz6,Uid:5213a2cb-c20a-4f3b-8d44-0dd43d58dc01,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"406cf39722d686d19b8b38ad80829f1663391ec2e561945089daec8a464736d8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 4 12:20:52.574707 kubelet[2700]: E1104 12:20:52.574350 2700 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"406cf39722d686d19b8b38ad80829f1663391ec2e561945089daec8a464736d8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 4 12:20:52.574707 kubelet[2700]: E1104 12:20:52.574401 2700 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"406cf39722d686d19b8b38ad80829f1663391ec2e561945089daec8a464736d8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-mvvz6"
Nov 4 12:20:52.574707 kubelet[2700]: E1104 12:20:52.574419 2700 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"406cf39722d686d19b8b38ad80829f1663391ec2e561945089daec8a464736d8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-mvvz6"
Nov 4 12:20:52.574496 systemd[1]: run-netns-cni\x2d0ffde6ea\x2d6f20\x2de139\x2d942d\x2dbe3858e11d64.mount: Deactivated successfully.
Nov 4 12:20:52.574912 kubelet[2700]: E1104 12:20:52.574468 2700 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-mvvz6_calico-system(5213a2cb-c20a-4f3b-8d44-0dd43d58dc01)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-mvvz6_calico-system(5213a2cb-c20a-4f3b-8d44-0dd43d58dc01)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"406cf39722d686d19b8b38ad80829f1663391ec2e561945089daec8a464736d8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-mvvz6" podUID="5213a2cb-c20a-4f3b-8d44-0dd43d58dc01"
Nov 4 12:20:55.584640 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3761862731.mount: Deactivated successfully.
Nov 4 12:20:55.633194 containerd[1562]: time="2025-11-04T12:20:55.633139147Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=150934562" Nov 4 12:20:55.636914 containerd[1562]: time="2025-11-04T12:20:55.636791333Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"150934424\" in 3.978822938s" Nov 4 12:20:55.636914 containerd[1562]: time="2025-11-04T12:20:55.636824693Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\"" Nov 4 12:20:55.645227 containerd[1562]: time="2025-11-04T12:20:55.645184103Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 12:20:55.645752 containerd[1562]: time="2025-11-04T12:20:55.645718901Z" level=info msg="ImageCreate event name:\"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 12:20:55.646437 containerd[1562]: time="2025-11-04T12:20:55.646259779Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 12:20:55.654018 containerd[1562]: time="2025-11-04T12:20:55.653976631Z" level=info msg="CreateContainer within sandbox \"f743ec4a3deb49bdaee6829c3216a8536812be33968fc0e349b4d1db17927715\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Nov 4 12:20:55.664679 containerd[1562]: time="2025-11-04T12:20:55.663514436Z" level=info msg="Container 
9c9895d37b3598b87588ceab6ac248d43c628fa4f7fbaad9cebd1323a8a790ce: CDI devices from CRI Config.CDIDevices: []" Nov 4 12:20:55.673589 containerd[1562]: time="2025-11-04T12:20:55.673546040Z" level=info msg="CreateContainer within sandbox \"f743ec4a3deb49bdaee6829c3216a8536812be33968fc0e349b4d1db17927715\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"9c9895d37b3598b87588ceab6ac248d43c628fa4f7fbaad9cebd1323a8a790ce\"" Nov 4 12:20:55.675380 containerd[1562]: time="2025-11-04T12:20:55.675351273Z" level=info msg="StartContainer for \"9c9895d37b3598b87588ceab6ac248d43c628fa4f7fbaad9cebd1323a8a790ce\"" Nov 4 12:20:55.679332 containerd[1562]: time="2025-11-04T12:20:55.679279499Z" level=info msg="connecting to shim 9c9895d37b3598b87588ceab6ac248d43c628fa4f7fbaad9cebd1323a8a790ce" address="unix:///run/containerd/s/80fad01f35df285b3756d53c8e426ed92c55a48a65c9b2a1cbffb485482afbc7" protocol=ttrpc version=3 Nov 4 12:20:55.698319 systemd[1]: Started cri-containerd-9c9895d37b3598b87588ceab6ac248d43c628fa4f7fbaad9cebd1323a8a790ce.scope - libcontainer container 9c9895d37b3598b87588ceab6ac248d43c628fa4f7fbaad9cebd1323a8a790ce. Nov 4 12:20:55.777324 containerd[1562]: time="2025-11-04T12:20:55.777265743Z" level=info msg="StartContainer for \"9c9895d37b3598b87588ceab6ac248d43c628fa4f7fbaad9cebd1323a8a790ce\" returns successfully" Nov 4 12:20:55.897054 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Nov 4 12:20:55.897166 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Nov 4 12:20:56.054337 kubelet[2700]: I1104 12:20:56.054288 2700 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0e882339-ed27-44c5-8412-376423a1bb7c-whisker-ca-bundle\") pod \"0e882339-ed27-44c5-8412-376423a1bb7c\" (UID: \"0e882339-ed27-44c5-8412-376423a1bb7c\") " Nov 4 12:20:56.054337 kubelet[2700]: I1104 12:20:56.054344 2700 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5zhg4\" (UniqueName: \"kubernetes.io/projected/0e882339-ed27-44c5-8412-376423a1bb7c-kube-api-access-5zhg4\") pod \"0e882339-ed27-44c5-8412-376423a1bb7c\" (UID: \"0e882339-ed27-44c5-8412-376423a1bb7c\") " Nov 4 12:20:56.055210 kubelet[2700]: I1104 12:20:56.054366 2700 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/0e882339-ed27-44c5-8412-376423a1bb7c-whisker-backend-key-pair\") pod \"0e882339-ed27-44c5-8412-376423a1bb7c\" (UID: \"0e882339-ed27-44c5-8412-376423a1bb7c\") " Nov 4 12:20:56.062977 kubelet[2700]: I1104 12:20:56.062767 2700 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0e882339-ed27-44c5-8412-376423a1bb7c-kube-api-access-5zhg4" (OuterVolumeSpecName: "kube-api-access-5zhg4") pod "0e882339-ed27-44c5-8412-376423a1bb7c" (UID: "0e882339-ed27-44c5-8412-376423a1bb7c"). InnerVolumeSpecName "kube-api-access-5zhg4". PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 4 12:20:56.065016 kubelet[2700]: I1104 12:20:56.064948 2700 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e882339-ed27-44c5-8412-376423a1bb7c-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "0e882339-ed27-44c5-8412-376423a1bb7c" (UID: "0e882339-ed27-44c5-8412-376423a1bb7c"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Nov 4 12:20:56.070367 kubelet[2700]: I1104 12:20:56.070304 2700 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0e882339-ed27-44c5-8412-376423a1bb7c-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "0e882339-ed27-44c5-8412-376423a1bb7c" (UID: "0e882339-ed27-44c5-8412-376423a1bb7c"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Nov 4 12:20:56.155475 kubelet[2700]: I1104 12:20:56.155341 2700 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0e882339-ed27-44c5-8412-376423a1bb7c-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Nov 4 12:20:56.155475 kubelet[2700]: I1104 12:20:56.155375 2700 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-5zhg4\" (UniqueName: \"kubernetes.io/projected/0e882339-ed27-44c5-8412-376423a1bb7c-kube-api-access-5zhg4\") on node \"localhost\" DevicePath \"\"" Nov 4 12:20:56.155475 kubelet[2700]: I1104 12:20:56.155385 2700 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/0e882339-ed27-44c5-8412-376423a1bb7c-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Nov 4 12:20:56.524940 systemd[1]: Removed slice kubepods-besteffort-pod0e882339_ed27_44c5_8412_376423a1bb7c.slice - libcontainer container kubepods-besteffort-pod0e882339_ed27_44c5_8412_376423a1bb7c.slice. Nov 4 12:20:56.584739 systemd[1]: var-lib-kubelet-pods-0e882339\x2ded27\x2d44c5\x2d8412\x2d376423a1bb7c-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d5zhg4.mount: Deactivated successfully. Nov 4 12:20:56.584850 systemd[1]: var-lib-kubelet-pods-0e882339\x2ded27\x2d44c5\x2d8412\x2d376423a1bb7c-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Nov 4 12:20:56.681274 kubelet[2700]: E1104 12:20:56.681242 2700 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 12:20:56.709788 kubelet[2700]: I1104 12:20:56.708600 2700 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-db9pn" podStartSLOduration=1.573278219 podStartE2EDuration="12.708584006s" podCreationTimestamp="2025-11-04 12:20:44 +0000 UTC" firstStartedPulling="2025-11-04 12:20:44.502204944 +0000 UTC m=+24.055713764" lastFinishedPulling="2025-11-04 12:20:55.637510691 +0000 UTC m=+35.191019551" observedRunningTime="2025-11-04 12:20:56.698334082 +0000 UTC m=+36.251842942" watchObservedRunningTime="2025-11-04 12:20:56.708584006 +0000 UTC m=+36.262092866" Nov 4 12:20:56.759613 systemd[1]: Created slice kubepods-besteffort-pod8616b2d0_9f60_46fa_9838_630417416267.slice - libcontainer container kubepods-besteffort-pod8616b2d0_9f60_46fa_9838_630417416267.slice. 
Nov 4 12:20:56.860995 kubelet[2700]: I1104 12:20:56.860881 2700 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-slg59\" (UniqueName: \"kubernetes.io/projected/8616b2d0-9f60-46fa-9838-630417416267-kube-api-access-slg59\") pod \"whisker-7b66cf4bbd-klt24\" (UID: \"8616b2d0-9f60-46fa-9838-630417416267\") " pod="calico-system/whisker-7b66cf4bbd-klt24" Nov 4 12:20:56.861271 kubelet[2700]: I1104 12:20:56.861179 2700 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8616b2d0-9f60-46fa-9838-630417416267-whisker-ca-bundle\") pod \"whisker-7b66cf4bbd-klt24\" (UID: \"8616b2d0-9f60-46fa-9838-630417416267\") " pod="calico-system/whisker-7b66cf4bbd-klt24" Nov 4 12:20:56.861271 kubelet[2700]: I1104 12:20:56.861215 2700 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/8616b2d0-9f60-46fa-9838-630417416267-whisker-backend-key-pair\") pod \"whisker-7b66cf4bbd-klt24\" (UID: \"8616b2d0-9f60-46fa-9838-630417416267\") " pod="calico-system/whisker-7b66cf4bbd-klt24" Nov 4 12:20:57.068542 containerd[1562]: time="2025-11-04T12:20:57.068474710Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7b66cf4bbd-klt24,Uid:8616b2d0-9f60-46fa-9838-630417416267,Namespace:calico-system,Attempt:0,}" Nov 4 12:20:57.246733 systemd-networkd[1467]: cali2f8aaf64c68: Link UP Nov 4 12:20:57.246957 systemd-networkd[1467]: cali2f8aaf64c68: Gained carrier Nov 4 12:20:57.265679 containerd[1562]: 2025-11-04 12:20:57.091 [INFO][3844] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 4 12:20:57.265679 containerd[1562]: 2025-11-04 12:20:57.124 [INFO][3844] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--7b66cf4bbd--klt24-eth0 
whisker-7b66cf4bbd- calico-system 8616b2d0-9f60-46fa-9838-630417416267 940 0 2025-11-04 12:20:56 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:7b66cf4bbd projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-7b66cf4bbd-klt24 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali2f8aaf64c68 [] [] }} ContainerID="9f8e00ae16fedc977ca4031d0ee4ab5afb418ae9eec9a2300d4d3b4cc32e3384" Namespace="calico-system" Pod="whisker-7b66cf4bbd-klt24" WorkloadEndpoint="localhost-k8s-whisker--7b66cf4bbd--klt24-" Nov 4 12:20:57.265679 containerd[1562]: 2025-11-04 12:20:57.124 [INFO][3844] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9f8e00ae16fedc977ca4031d0ee4ab5afb418ae9eec9a2300d4d3b4cc32e3384" Namespace="calico-system" Pod="whisker-7b66cf4bbd-klt24" WorkloadEndpoint="localhost-k8s-whisker--7b66cf4bbd--klt24-eth0" Nov 4 12:20:57.265679 containerd[1562]: 2025-11-04 12:20:57.184 [INFO][3859] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9f8e00ae16fedc977ca4031d0ee4ab5afb418ae9eec9a2300d4d3b4cc32e3384" HandleID="k8s-pod-network.9f8e00ae16fedc977ca4031d0ee4ab5afb418ae9eec9a2300d4d3b4cc32e3384" Workload="localhost-k8s-whisker--7b66cf4bbd--klt24-eth0" Nov 4 12:20:57.266101 containerd[1562]: 2025-11-04 12:20:57.184 [INFO][3859] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="9f8e00ae16fedc977ca4031d0ee4ab5afb418ae9eec9a2300d4d3b4cc32e3384" HandleID="k8s-pod-network.9f8e00ae16fedc977ca4031d0ee4ab5afb418ae9eec9a2300d4d3b4cc32e3384" Workload="localhost-k8s-whisker--7b66cf4bbd--klt24-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004d610), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-7b66cf4bbd-klt24", "timestamp":"2025-11-04 12:20:57.184370036 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, 
IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 4 12:20:57.266101 containerd[1562]: 2025-11-04 12:20:57.184 [INFO][3859] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 4 12:20:57.266101 containerd[1562]: 2025-11-04 12:20:57.184 [INFO][3859] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 4 12:20:57.266101 containerd[1562]: 2025-11-04 12:20:57.184 [INFO][3859] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 4 12:20:57.266101 containerd[1562]: 2025-11-04 12:20:57.195 [INFO][3859] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9f8e00ae16fedc977ca4031d0ee4ab5afb418ae9eec9a2300d4d3b4cc32e3384" host="localhost" Nov 4 12:20:57.266101 containerd[1562]: 2025-11-04 12:20:57.203 [INFO][3859] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 4 12:20:57.266101 containerd[1562]: 2025-11-04 12:20:57.208 [INFO][3859] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 4 12:20:57.266101 containerd[1562]: 2025-11-04 12:20:57.211 [INFO][3859] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 4 12:20:57.266101 containerd[1562]: 2025-11-04 12:20:57.214 [INFO][3859] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 4 12:20:57.266101 containerd[1562]: 2025-11-04 12:20:57.215 [INFO][3859] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.9f8e00ae16fedc977ca4031d0ee4ab5afb418ae9eec9a2300d4d3b4cc32e3384" host="localhost" Nov 4 12:20:57.267474 containerd[1562]: 2025-11-04 12:20:57.216 [INFO][3859] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.9f8e00ae16fedc977ca4031d0ee4ab5afb418ae9eec9a2300d4d3b4cc32e3384 Nov 4 12:20:57.267474 containerd[1562]: 
2025-11-04 12:20:57.220 [INFO][3859] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.9f8e00ae16fedc977ca4031d0ee4ab5afb418ae9eec9a2300d4d3b4cc32e3384" host="localhost" Nov 4 12:20:57.267474 containerd[1562]: 2025-11-04 12:20:57.225 [INFO][3859] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.9f8e00ae16fedc977ca4031d0ee4ab5afb418ae9eec9a2300d4d3b4cc32e3384" host="localhost" Nov 4 12:20:57.267474 containerd[1562]: 2025-11-04 12:20:57.225 [INFO][3859] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.9f8e00ae16fedc977ca4031d0ee4ab5afb418ae9eec9a2300d4d3b4cc32e3384" host="localhost" Nov 4 12:20:57.267474 containerd[1562]: 2025-11-04 12:20:57.225 [INFO][3859] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 4 12:20:57.267474 containerd[1562]: 2025-11-04 12:20:57.225 [INFO][3859] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="9f8e00ae16fedc977ca4031d0ee4ab5afb418ae9eec9a2300d4d3b4cc32e3384" HandleID="k8s-pod-network.9f8e00ae16fedc977ca4031d0ee4ab5afb418ae9eec9a2300d4d3b4cc32e3384" Workload="localhost-k8s-whisker--7b66cf4bbd--klt24-eth0" Nov 4 12:20:57.267592 containerd[1562]: 2025-11-04 12:20:57.230 [INFO][3844] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9f8e00ae16fedc977ca4031d0ee4ab5afb418ae9eec9a2300d4d3b4cc32e3384" Namespace="calico-system" Pod="whisker-7b66cf4bbd-klt24" WorkloadEndpoint="localhost-k8s-whisker--7b66cf4bbd--klt24-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--7b66cf4bbd--klt24-eth0", GenerateName:"whisker-7b66cf4bbd-", Namespace:"calico-system", SelfLink:"", UID:"8616b2d0-9f60-46fa-9838-630417416267", ResourceVersion:"940", Generation:0, CreationTimestamp:time.Date(2025, time.November, 4, 
12, 20, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7b66cf4bbd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-7b66cf4bbd-klt24", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali2f8aaf64c68", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 4 12:20:57.267592 containerd[1562]: 2025-11-04 12:20:57.230 [INFO][3844] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="9f8e00ae16fedc977ca4031d0ee4ab5afb418ae9eec9a2300d4d3b4cc32e3384" Namespace="calico-system" Pod="whisker-7b66cf4bbd-klt24" WorkloadEndpoint="localhost-k8s-whisker--7b66cf4bbd--klt24-eth0" Nov 4 12:20:57.267677 containerd[1562]: 2025-11-04 12:20:57.230 [INFO][3844] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2f8aaf64c68 ContainerID="9f8e00ae16fedc977ca4031d0ee4ab5afb418ae9eec9a2300d4d3b4cc32e3384" Namespace="calico-system" Pod="whisker-7b66cf4bbd-klt24" WorkloadEndpoint="localhost-k8s-whisker--7b66cf4bbd--klt24-eth0" Nov 4 12:20:57.267677 containerd[1562]: 2025-11-04 12:20:57.242 [INFO][3844] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9f8e00ae16fedc977ca4031d0ee4ab5afb418ae9eec9a2300d4d3b4cc32e3384" Namespace="calico-system" Pod="whisker-7b66cf4bbd-klt24" 
WorkloadEndpoint="localhost-k8s-whisker--7b66cf4bbd--klt24-eth0" Nov 4 12:20:57.267714 containerd[1562]: 2025-11-04 12:20:57.243 [INFO][3844] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9f8e00ae16fedc977ca4031d0ee4ab5afb418ae9eec9a2300d4d3b4cc32e3384" Namespace="calico-system" Pod="whisker-7b66cf4bbd-klt24" WorkloadEndpoint="localhost-k8s-whisker--7b66cf4bbd--klt24-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--7b66cf4bbd--klt24-eth0", GenerateName:"whisker-7b66cf4bbd-", Namespace:"calico-system", SelfLink:"", UID:"8616b2d0-9f60-46fa-9838-630417416267", ResourceVersion:"940", Generation:0, CreationTimestamp:time.Date(2025, time.November, 4, 12, 20, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7b66cf4bbd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9f8e00ae16fedc977ca4031d0ee4ab5afb418ae9eec9a2300d4d3b4cc32e3384", Pod:"whisker-7b66cf4bbd-klt24", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali2f8aaf64c68", MAC:"1e:dd:9c:ef:38:9e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 4 12:20:57.267759 containerd[1562]: 2025-11-04 12:20:57.260 [INFO][3844] cni-plugin/k8s.go 532: 
Wrote updated endpoint to datastore ContainerID="9f8e00ae16fedc977ca4031d0ee4ab5afb418ae9eec9a2300d4d3b4cc32e3384" Namespace="calico-system" Pod="whisker-7b66cf4bbd-klt24" WorkloadEndpoint="localhost-k8s-whisker--7b66cf4bbd--klt24-eth0" Nov 4 12:20:57.441126 containerd[1562]: time="2025-11-04T12:20:57.440993843Z" level=info msg="connecting to shim 9f8e00ae16fedc977ca4031d0ee4ab5afb418ae9eec9a2300d4d3b4cc32e3384" address="unix:///run/containerd/s/043d869d919846df08b1fa46245014783cc8d6f51867c76683ebaaf183b9c48e" namespace=k8s.io protocol=ttrpc version=3 Nov 4 12:20:57.472269 systemd[1]: Started cri-containerd-9f8e00ae16fedc977ca4031d0ee4ab5afb418ae9eec9a2300d4d3b4cc32e3384.scope - libcontainer container 9f8e00ae16fedc977ca4031d0ee4ab5afb418ae9eec9a2300d4d3b4cc32e3384. Nov 4 12:20:57.483795 systemd-resolved[1277]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 4 12:20:57.522524 containerd[1562]: time="2025-11-04T12:20:57.521407889Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7b66cf4bbd-klt24,Uid:8616b2d0-9f60-46fa-9838-630417416267,Namespace:calico-system,Attempt:0,} returns sandbox id \"9f8e00ae16fedc977ca4031d0ee4ab5afb418ae9eec9a2300d4d3b4cc32e3384\"" Nov 4 12:20:57.524345 containerd[1562]: time="2025-11-04T12:20:57.524294880Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 4 12:20:57.682947 kubelet[2700]: I1104 12:20:57.682916 2700 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 4 12:20:57.688834 kubelet[2700]: E1104 12:20:57.688810 2700 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 12:20:57.737102 containerd[1562]: time="2025-11-04T12:20:57.737038156Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 12:20:57.737951 containerd[1562]: time="2025-11-04T12:20:57.737916193Z" 
level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 4 12:20:57.737951 containerd[1562]: time="2025-11-04T12:20:57.737976513Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 4 12:20:57.738202 kubelet[2700]: E1104 12:20:57.738145 2700 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 4 12:20:57.740140 kubelet[2700]: E1104 12:20:57.740098 2700 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 4 12:20:57.742189 kubelet[2700]: E1104 12:20:57.742149 2700 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-7b66cf4bbd-klt24_calico-system(8616b2d0-9f60-46fa-9838-630417416267): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 4 12:20:57.743359 containerd[1562]: time="2025-11-04T12:20:57.743324495Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 4 12:20:57.992610 containerd[1562]: 
time="2025-11-04T12:20:57.992550607Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 12:20:57.993441 containerd[1562]: time="2025-11-04T12:20:57.993397964Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 4 12:20:57.993568 containerd[1562]: time="2025-11-04T12:20:57.993470924Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 4 12:20:57.993656 kubelet[2700]: E1104 12:20:57.993614 2700 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 4 12:20:57.993717 kubelet[2700]: E1104 12:20:57.993664 2700 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 4 12:20:57.993798 kubelet[2700]: E1104 12:20:57.993740 2700 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-7b66cf4bbd-klt24_calico-system(8616b2d0-9f60-46fa-9838-630417416267): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 4 12:20:57.993834 kubelet[2700]: E1104 12:20:57.993784 2700 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7b66cf4bbd-klt24" podUID="8616b2d0-9f60-46fa-9838-630417416267" Nov 4 12:20:58.521220 kubelet[2700]: I1104 12:20:58.521184 2700 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0e882339-ed27-44c5-8412-376423a1bb7c" path="/var/lib/kubelet/pods/0e882339-ed27-44c5-8412-376423a1bb7c/volumes" Nov 4 12:20:58.687617 kubelet[2700]: E1104 12:20:58.687000 2700 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7b66cf4bbd-klt24" podUID="8616b2d0-9f60-46fa-9838-630417416267" Nov 4 12:20:58.844214 systemd-networkd[1467]: cali2f8aaf64c68: Gained IPv6LL Nov 4 12:20:59.554024 systemd[1]: Started sshd@7-10.0.0.73:22-10.0.0.1:57666.service - OpenSSH per-connection server daemon (10.0.0.1:57666). Nov 4 12:20:59.621050 sshd[4074]: Accepted publickey for core from 10.0.0.1 port 57666 ssh2: RSA SHA256:BU+2P8LonXUlklSP4qnprs25Z/jySPnCvqmOgDUDXeU Nov 4 12:20:59.622698 sshd-session[4074]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 12:20:59.628171 systemd-logind[1542]: New session 8 of user core. Nov 4 12:20:59.635249 systemd[1]: Started session-8.scope - Session 8 of User core. Nov 4 12:20:59.774926 sshd[4077]: Connection closed by 10.0.0.1 port 57666 Nov 4 12:20:59.775238 sshd-session[4074]: pam_unix(sshd:session): session closed for user core Nov 4 12:20:59.779335 systemd-logind[1542]: Session 8 logged out. Waiting for processes to exit. Nov 4 12:20:59.779542 systemd[1]: sshd@7-10.0.0.73:22-10.0.0.1:57666.service: Deactivated successfully. Nov 4 12:20:59.781989 systemd[1]: session-8.scope: Deactivated successfully. Nov 4 12:20:59.783918 systemd-logind[1542]: Removed session 8. 
Nov 4 12:21:02.525396 kubelet[2700]: E1104 12:21:02.525263 2700 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 12:21:02.526724 containerd[1562]: time="2025-11-04T12:21:02.526504319Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-kzj5x,Uid:2006762c-6423-4745-9e5c-3ba279f65ad7,Namespace:kube-system,Attempt:0,}" Nov 4 12:21:02.630344 systemd-networkd[1467]: cali4810b9227f4: Link UP Nov 4 12:21:02.630629 systemd-networkd[1467]: cali4810b9227f4: Gained carrier Nov 4 12:21:02.642033 containerd[1562]: 2025-11-04 12:21:02.553 [INFO][4140] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 4 12:21:02.642033 containerd[1562]: 2025-11-04 12:21:02.568 [INFO][4140] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--66bc5c9577--kzj5x-eth0 coredns-66bc5c9577- kube-system 2006762c-6423-4745-9e5c-3ba279f65ad7 875 0 2025-11-04 12:20:28 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-66bc5c9577-kzj5x eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali4810b9227f4 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="37eec1e4d7d5be110a08d0c25b2cca6b07dc2a7772cfb7515938f61e12f87e30" Namespace="kube-system" Pod="coredns-66bc5c9577-kzj5x" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--kzj5x-" Nov 4 12:21:02.642033 containerd[1562]: 2025-11-04 12:21:02.569 [INFO][4140] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="37eec1e4d7d5be110a08d0c25b2cca6b07dc2a7772cfb7515938f61e12f87e30" Namespace="kube-system" Pod="coredns-66bc5c9577-kzj5x" 
WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--kzj5x-eth0" Nov 4 12:21:02.642033 containerd[1562]: 2025-11-04 12:21:02.592 [INFO][4155] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="37eec1e4d7d5be110a08d0c25b2cca6b07dc2a7772cfb7515938f61e12f87e30" HandleID="k8s-pod-network.37eec1e4d7d5be110a08d0c25b2cca6b07dc2a7772cfb7515938f61e12f87e30" Workload="localhost-k8s-coredns--66bc5c9577--kzj5x-eth0" Nov 4 12:21:02.642257 containerd[1562]: 2025-11-04 12:21:02.592 [INFO][4155] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="37eec1e4d7d5be110a08d0c25b2cca6b07dc2a7772cfb7515938f61e12f87e30" HandleID="k8s-pod-network.37eec1e4d7d5be110a08d0c25b2cca6b07dc2a7772cfb7515938f61e12f87e30" Workload="localhost-k8s-coredns--66bc5c9577--kzj5x-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002dd590), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-66bc5c9577-kzj5x", "timestamp":"2025-11-04 12:21:02.592363085 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 4 12:21:02.642257 containerd[1562]: 2025-11-04 12:21:02.592 [INFO][4155] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 4 12:21:02.642257 containerd[1562]: 2025-11-04 12:21:02.592 [INFO][4155] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 4 12:21:02.642257 containerd[1562]: 2025-11-04 12:21:02.592 [INFO][4155] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 4 12:21:02.642257 containerd[1562]: 2025-11-04 12:21:02.601 [INFO][4155] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.37eec1e4d7d5be110a08d0c25b2cca6b07dc2a7772cfb7515938f61e12f87e30" host="localhost" Nov 4 12:21:02.642257 containerd[1562]: 2025-11-04 12:21:02.605 [INFO][4155] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 4 12:21:02.642257 containerd[1562]: 2025-11-04 12:21:02.610 [INFO][4155] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 4 12:21:02.642257 containerd[1562]: 2025-11-04 12:21:02.612 [INFO][4155] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 4 12:21:02.642257 containerd[1562]: 2025-11-04 12:21:02.614 [INFO][4155] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 4 12:21:02.642257 containerd[1562]: 2025-11-04 12:21:02.614 [INFO][4155] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.37eec1e4d7d5be110a08d0c25b2cca6b07dc2a7772cfb7515938f61e12f87e30" host="localhost" Nov 4 12:21:02.642449 containerd[1562]: 2025-11-04 12:21:02.615 [INFO][4155] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.37eec1e4d7d5be110a08d0c25b2cca6b07dc2a7772cfb7515938f61e12f87e30 Nov 4 12:21:02.642449 containerd[1562]: 2025-11-04 12:21:02.619 [INFO][4155] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.37eec1e4d7d5be110a08d0c25b2cca6b07dc2a7772cfb7515938f61e12f87e30" host="localhost" Nov 4 12:21:02.642449 containerd[1562]: 2025-11-04 12:21:02.625 [INFO][4155] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 
handle="k8s-pod-network.37eec1e4d7d5be110a08d0c25b2cca6b07dc2a7772cfb7515938f61e12f87e30" host="localhost" Nov 4 12:21:02.642449 containerd[1562]: 2025-11-04 12:21:02.625 [INFO][4155] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.37eec1e4d7d5be110a08d0c25b2cca6b07dc2a7772cfb7515938f61e12f87e30" host="localhost" Nov 4 12:21:02.642449 containerd[1562]: 2025-11-04 12:21:02.625 [INFO][4155] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 4 12:21:02.642449 containerd[1562]: 2025-11-04 12:21:02.625 [INFO][4155] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="37eec1e4d7d5be110a08d0c25b2cca6b07dc2a7772cfb7515938f61e12f87e30" HandleID="k8s-pod-network.37eec1e4d7d5be110a08d0c25b2cca6b07dc2a7772cfb7515938f61e12f87e30" Workload="localhost-k8s-coredns--66bc5c9577--kzj5x-eth0" Nov 4 12:21:02.642548 containerd[1562]: 2025-11-04 12:21:02.627 [INFO][4140] cni-plugin/k8s.go 418: Populated endpoint ContainerID="37eec1e4d7d5be110a08d0c25b2cca6b07dc2a7772cfb7515938f61e12f87e30" Namespace="kube-system" Pod="coredns-66bc5c9577-kzj5x" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--kzj5x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--kzj5x-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"2006762c-6423-4745-9e5c-3ba279f65ad7", ResourceVersion:"875", Generation:0, CreationTimestamp:time.Date(2025, time.November, 4, 12, 20, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-66bc5c9577-kzj5x", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4810b9227f4", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 4 12:21:02.642548 containerd[1562]: 2025-11-04 12:21:02.627 [INFO][4140] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="37eec1e4d7d5be110a08d0c25b2cca6b07dc2a7772cfb7515938f61e12f87e30" Namespace="kube-system" Pod="coredns-66bc5c9577-kzj5x" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--kzj5x-eth0" Nov 4 12:21:02.642548 containerd[1562]: 2025-11-04 12:21:02.627 [INFO][4140] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4810b9227f4 ContainerID="37eec1e4d7d5be110a08d0c25b2cca6b07dc2a7772cfb7515938f61e12f87e30" Namespace="kube-system" Pod="coredns-66bc5c9577-kzj5x" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--kzj5x-eth0" Nov 4 
12:21:02.642548 containerd[1562]: 2025-11-04 12:21:02.630 [INFO][4140] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="37eec1e4d7d5be110a08d0c25b2cca6b07dc2a7772cfb7515938f61e12f87e30" Namespace="kube-system" Pod="coredns-66bc5c9577-kzj5x" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--kzj5x-eth0" Nov 4 12:21:02.642548 containerd[1562]: 2025-11-04 12:21:02.631 [INFO][4140] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="37eec1e4d7d5be110a08d0c25b2cca6b07dc2a7772cfb7515938f61e12f87e30" Namespace="kube-system" Pod="coredns-66bc5c9577-kzj5x" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--kzj5x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--kzj5x-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"2006762c-6423-4745-9e5c-3ba279f65ad7", ResourceVersion:"875", Generation:0, CreationTimestamp:time.Date(2025, time.November, 4, 12, 20, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"37eec1e4d7d5be110a08d0c25b2cca6b07dc2a7772cfb7515938f61e12f87e30", Pod:"coredns-66bc5c9577-kzj5x", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4810b9227f4", 
MAC:"fa:ff:8b:71:44:6b", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 4 12:21:02.642548 containerd[1562]: 2025-11-04 12:21:02.640 [INFO][4140] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="37eec1e4d7d5be110a08d0c25b2cca6b07dc2a7772cfb7515938f61e12f87e30" Namespace="kube-system" Pod="coredns-66bc5c9577-kzj5x" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--kzj5x-eth0" Nov 4 12:21:02.664555 containerd[1562]: time="2025-11-04T12:21:02.664467673Z" level=info msg="connecting to shim 37eec1e4d7d5be110a08d0c25b2cca6b07dc2a7772cfb7515938f61e12f87e30" address="unix:///run/containerd/s/1bc719aee78900402b6345136c2f8d8ec13d0fa3cb802e3f9841eeed16cbc0ce" namespace=k8s.io protocol=ttrpc version=3 Nov 4 12:21:02.689277 systemd[1]: Started cri-containerd-37eec1e4d7d5be110a08d0c25b2cca6b07dc2a7772cfb7515938f61e12f87e30.scope - libcontainer container 37eec1e4d7d5be110a08d0c25b2cca6b07dc2a7772cfb7515938f61e12f87e30. 
Nov 4 12:21:02.703477 systemd-resolved[1277]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 4 12:21:02.728993 containerd[1562]: time="2025-11-04T12:21:02.728934283Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-kzj5x,Uid:2006762c-6423-4745-9e5c-3ba279f65ad7,Namespace:kube-system,Attempt:0,} returns sandbox id \"37eec1e4d7d5be110a08d0c25b2cca6b07dc2a7772cfb7515938f61e12f87e30\"" Nov 4 12:21:02.730181 kubelet[2700]: E1104 12:21:02.730143 2700 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 12:21:02.735481 containerd[1562]: time="2025-11-04T12:21:02.735429664Z" level=info msg="CreateContainer within sandbox \"37eec1e4d7d5be110a08d0c25b2cca6b07dc2a7772cfb7515938f61e12f87e30\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 4 12:21:02.747888 containerd[1562]: time="2025-11-04T12:21:02.747838468Z" level=info msg="Container 3b9bee1a57dea7e87c2f8954da2d5957f2f0f315f3127f3a5285136ecb0fe890: CDI devices from CRI Config.CDIDevices: []" Nov 4 12:21:02.761985 containerd[1562]: time="2025-11-04T12:21:02.761928346Z" level=info msg="CreateContainer within sandbox \"37eec1e4d7d5be110a08d0c25b2cca6b07dc2a7772cfb7515938f61e12f87e30\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"3b9bee1a57dea7e87c2f8954da2d5957f2f0f315f3127f3a5285136ecb0fe890\"" Nov 4 12:21:02.762572 containerd[1562]: time="2025-11-04T12:21:02.762548784Z" level=info msg="StartContainer for \"3b9bee1a57dea7e87c2f8954da2d5957f2f0f315f3127f3a5285136ecb0fe890\"" Nov 4 12:21:02.763835 containerd[1562]: time="2025-11-04T12:21:02.763753781Z" level=info msg="connecting to shim 3b9bee1a57dea7e87c2f8954da2d5957f2f0f315f3127f3a5285136ecb0fe890" address="unix:///run/containerd/s/1bc719aee78900402b6345136c2f8d8ec13d0fa3cb802e3f9841eeed16cbc0ce" protocol=ttrpc version=3 Nov 4 
12:21:02.790236 systemd[1]: Started cri-containerd-3b9bee1a57dea7e87c2f8954da2d5957f2f0f315f3127f3a5285136ecb0fe890.scope - libcontainer container 3b9bee1a57dea7e87c2f8954da2d5957f2f0f315f3127f3a5285136ecb0fe890. Nov 4 12:21:02.824519 containerd[1562]: time="2025-11-04T12:21:02.824454282Z" level=info msg="StartContainer for \"3b9bee1a57dea7e87c2f8954da2d5957f2f0f315f3127f3a5285136ecb0fe890\" returns successfully" Nov 4 12:21:03.521493 containerd[1562]: time="2025-11-04T12:21:03.521436789Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mvvz6,Uid:5213a2cb-c20a-4f3b-8d44-0dd43d58dc01,Namespace:calico-system,Attempt:0,}" Nov 4 12:21:03.522712 containerd[1562]: time="2025-11-04T12:21:03.522671706Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-56dfc9fd7-xr6bn,Uid:b4ff7fc7-ff2d-4f65-af99-cb993f59efe6,Namespace:calico-apiserver,Attempt:0,}" Nov 4 12:21:03.640633 systemd-networkd[1467]: cali9f3820a00bf: Link UP Nov 4 12:21:03.641675 systemd-networkd[1467]: cali9f3820a00bf: Gained carrier Nov 4 12:21:03.653696 containerd[1562]: 2025-11-04 12:21:03.550 [INFO][4275] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 4 12:21:03.653696 containerd[1562]: 2025-11-04 12:21:03.572 [INFO][4275] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--56dfc9fd7--xr6bn-eth0 calico-apiserver-56dfc9fd7- calico-apiserver b4ff7fc7-ff2d-4f65-af99-cb993f59efe6 876 0 2025-11-04 12:20:38 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:56dfc9fd7 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-56dfc9fd7-xr6bn eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali9f3820a00bf [] [] }} 
ContainerID="31edca66d1157341d46ddddec96e6ba31b6c6e92c0f4e15356eff9b8bb5d1391" Namespace="calico-apiserver" Pod="calico-apiserver-56dfc9fd7-xr6bn" WorkloadEndpoint="localhost-k8s-calico--apiserver--56dfc9fd7--xr6bn-" Nov 4 12:21:03.653696 containerd[1562]: 2025-11-04 12:21:03.572 [INFO][4275] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="31edca66d1157341d46ddddec96e6ba31b6c6e92c0f4e15356eff9b8bb5d1391" Namespace="calico-apiserver" Pod="calico-apiserver-56dfc9fd7-xr6bn" WorkloadEndpoint="localhost-k8s-calico--apiserver--56dfc9fd7--xr6bn-eth0" Nov 4 12:21:03.653696 containerd[1562]: 2025-11-04 12:21:03.598 [INFO][4303] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="31edca66d1157341d46ddddec96e6ba31b6c6e92c0f4e15356eff9b8bb5d1391" HandleID="k8s-pod-network.31edca66d1157341d46ddddec96e6ba31b6c6e92c0f4e15356eff9b8bb5d1391" Workload="localhost-k8s-calico--apiserver--56dfc9fd7--xr6bn-eth0" Nov 4 12:21:03.653696 containerd[1562]: 2025-11-04 12:21:03.599 [INFO][4303] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="31edca66d1157341d46ddddec96e6ba31b6c6e92c0f4e15356eff9b8bb5d1391" HandleID="k8s-pod-network.31edca66d1157341d46ddddec96e6ba31b6c6e92c0f4e15356eff9b8bb5d1391" Workload="localhost-k8s-calico--apiserver--56dfc9fd7--xr6bn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002c3080), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-56dfc9fd7-xr6bn", "timestamp":"2025-11-04 12:21:03.598907647 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 4 12:21:03.653696 containerd[1562]: 2025-11-04 12:21:03.599 [INFO][4303] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Nov 4 12:21:03.653696 containerd[1562]: 2025-11-04 12:21:03.599 [INFO][4303] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 4 12:21:03.653696 containerd[1562]: 2025-11-04 12:21:03.599 [INFO][4303] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 4 12:21:03.653696 containerd[1562]: 2025-11-04 12:21:03.611 [INFO][4303] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.31edca66d1157341d46ddddec96e6ba31b6c6e92c0f4e15356eff9b8bb5d1391" host="localhost" Nov 4 12:21:03.653696 containerd[1562]: 2025-11-04 12:21:03.615 [INFO][4303] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 4 12:21:03.653696 containerd[1562]: 2025-11-04 12:21:03.618 [INFO][4303] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 4 12:21:03.653696 containerd[1562]: 2025-11-04 12:21:03.620 [INFO][4303] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 4 12:21:03.653696 containerd[1562]: 2025-11-04 12:21:03.622 [INFO][4303] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 4 12:21:03.653696 containerd[1562]: 2025-11-04 12:21:03.622 [INFO][4303] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.31edca66d1157341d46ddddec96e6ba31b6c6e92c0f4e15356eff9b8bb5d1391" host="localhost" Nov 4 12:21:03.653696 containerd[1562]: 2025-11-04 12:21:03.624 [INFO][4303] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.31edca66d1157341d46ddddec96e6ba31b6c6e92c0f4e15356eff9b8bb5d1391 Nov 4 12:21:03.653696 containerd[1562]: 2025-11-04 12:21:03.627 [INFO][4303] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.31edca66d1157341d46ddddec96e6ba31b6c6e92c0f4e15356eff9b8bb5d1391" host="localhost" Nov 4 12:21:03.653696 containerd[1562]: 2025-11-04 12:21:03.632 [INFO][4303] ipam/ipam.go 1262: 
Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.31edca66d1157341d46ddddec96e6ba31b6c6e92c0f4e15356eff9b8bb5d1391" host="localhost" Nov 4 12:21:03.653696 containerd[1562]: 2025-11-04 12:21:03.632 [INFO][4303] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.31edca66d1157341d46ddddec96e6ba31b6c6e92c0f4e15356eff9b8bb5d1391" host="localhost" Nov 4 12:21:03.653696 containerd[1562]: 2025-11-04 12:21:03.632 [INFO][4303] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 4 12:21:03.653696 containerd[1562]: 2025-11-04 12:21:03.632 [INFO][4303] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="31edca66d1157341d46ddddec96e6ba31b6c6e92c0f4e15356eff9b8bb5d1391" HandleID="k8s-pod-network.31edca66d1157341d46ddddec96e6ba31b6c6e92c0f4e15356eff9b8bb5d1391" Workload="localhost-k8s-calico--apiserver--56dfc9fd7--xr6bn-eth0" Nov 4 12:21:03.654464 containerd[1562]: 2025-11-04 12:21:03.636 [INFO][4275] cni-plugin/k8s.go 418: Populated endpoint ContainerID="31edca66d1157341d46ddddec96e6ba31b6c6e92c0f4e15356eff9b8bb5d1391" Namespace="calico-apiserver" Pod="calico-apiserver-56dfc9fd7-xr6bn" WorkloadEndpoint="localhost-k8s-calico--apiserver--56dfc9fd7--xr6bn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--56dfc9fd7--xr6bn-eth0", GenerateName:"calico-apiserver-56dfc9fd7-", Namespace:"calico-apiserver", SelfLink:"", UID:"b4ff7fc7-ff2d-4f65-af99-cb993f59efe6", ResourceVersion:"876", Generation:0, CreationTimestamp:time.Date(2025, time.November, 4, 12, 20, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"56dfc9fd7", 
"projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-56dfc9fd7-xr6bn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9f3820a00bf", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 4 12:21:03.654464 containerd[1562]: 2025-11-04 12:21:03.636 [INFO][4275] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="31edca66d1157341d46ddddec96e6ba31b6c6e92c0f4e15356eff9b8bb5d1391" Namespace="calico-apiserver" Pod="calico-apiserver-56dfc9fd7-xr6bn" WorkloadEndpoint="localhost-k8s-calico--apiserver--56dfc9fd7--xr6bn-eth0" Nov 4 12:21:03.654464 containerd[1562]: 2025-11-04 12:21:03.636 [INFO][4275] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9f3820a00bf ContainerID="31edca66d1157341d46ddddec96e6ba31b6c6e92c0f4e15356eff9b8bb5d1391" Namespace="calico-apiserver" Pod="calico-apiserver-56dfc9fd7-xr6bn" WorkloadEndpoint="localhost-k8s-calico--apiserver--56dfc9fd7--xr6bn-eth0" Nov 4 12:21:03.654464 containerd[1562]: 2025-11-04 12:21:03.642 [INFO][4275] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="31edca66d1157341d46ddddec96e6ba31b6c6e92c0f4e15356eff9b8bb5d1391" Namespace="calico-apiserver" Pod="calico-apiserver-56dfc9fd7-xr6bn" WorkloadEndpoint="localhost-k8s-calico--apiserver--56dfc9fd7--xr6bn-eth0" Nov 4 12:21:03.654464 containerd[1562]: 2025-11-04 12:21:03.642 
[INFO][4275] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="31edca66d1157341d46ddddec96e6ba31b6c6e92c0f4e15356eff9b8bb5d1391" Namespace="calico-apiserver" Pod="calico-apiserver-56dfc9fd7-xr6bn" WorkloadEndpoint="localhost-k8s-calico--apiserver--56dfc9fd7--xr6bn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--56dfc9fd7--xr6bn-eth0", GenerateName:"calico-apiserver-56dfc9fd7-", Namespace:"calico-apiserver", SelfLink:"", UID:"b4ff7fc7-ff2d-4f65-af99-cb993f59efe6", ResourceVersion:"876", Generation:0, CreationTimestamp:time.Date(2025, time.November, 4, 12, 20, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"56dfc9fd7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"31edca66d1157341d46ddddec96e6ba31b6c6e92c0f4e15356eff9b8bb5d1391", Pod:"calico-apiserver-56dfc9fd7-xr6bn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9f3820a00bf", MAC:"ba:47:f9:71:04:5f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 4 12:21:03.654464 containerd[1562]: 2025-11-04 12:21:03.651 [INFO][4275] cni-plugin/k8s.go 532: Wrote 
updated endpoint to datastore ContainerID="31edca66d1157341d46ddddec96e6ba31b6c6e92c0f4e15356eff9b8bb5d1391" Namespace="calico-apiserver" Pod="calico-apiserver-56dfc9fd7-xr6bn" WorkloadEndpoint="localhost-k8s-calico--apiserver--56dfc9fd7--xr6bn-eth0" Nov 4 12:21:03.670409 containerd[1562]: time="2025-11-04T12:21:03.670369002Z" level=info msg="connecting to shim 31edca66d1157341d46ddddec96e6ba31b6c6e92c0f4e15356eff9b8bb5d1391" address="unix:///run/containerd/s/1a68902d5b1d264653c73c6e9f629347cefd1f66d704edf8738d936be8931227" namespace=k8s.io protocol=ttrpc version=3 Nov 4 12:21:03.695814 kubelet[2700]: E1104 12:21:03.695778 2700 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 12:21:03.698233 systemd[1]: Started cri-containerd-31edca66d1157341d46ddddec96e6ba31b6c6e92c0f4e15356eff9b8bb5d1391.scope - libcontainer container 31edca66d1157341d46ddddec96e6ba31b6c6e92c0f4e15356eff9b8bb5d1391. 
Nov 4 12:21:03.720350 systemd-resolved[1277]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 4 12:21:03.724125 kubelet[2700]: I1104 12:21:03.724041 2700 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-kzj5x" podStartSLOduration=35.724021888 podStartE2EDuration="35.724021888s" podCreationTimestamp="2025-11-04 12:20:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-04 12:21:03.708902731 +0000 UTC m=+43.262411591" watchObservedRunningTime="2025-11-04 12:21:03.724021888 +0000 UTC m=+43.277530748" Nov 4 12:21:03.761723 containerd[1562]: time="2025-11-04T12:21:03.761609620Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-56dfc9fd7-xr6bn,Uid:b4ff7fc7-ff2d-4f65-af99-cb993f59efe6,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"31edca66d1157341d46ddddec96e6ba31b6c6e92c0f4e15356eff9b8bb5d1391\"" Nov 4 12:21:03.764101 containerd[1562]: time="2025-11-04T12:21:03.764056653Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 4 12:21:03.768527 systemd-networkd[1467]: cali68532063b41: Link UP Nov 4 12:21:03.769294 systemd-networkd[1467]: cali68532063b41: Gained carrier Nov 4 12:21:03.782550 containerd[1562]: 2025-11-04 12:21:03.559 [INFO][4293] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 4 12:21:03.782550 containerd[1562]: 2025-11-04 12:21:03.577 [INFO][4293] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--mvvz6-eth0 csi-node-driver- calico-system 5213a2cb-c20a-4f3b-8d44-0dd43d58dc01 777 0 2025-11-04 12:20:44 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:9d99788f7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 
projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-mvvz6 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali68532063b41 [] [] }} ContainerID="a725c7fabf0d35476baf3199941bd31af507122a413a1f3e86d40258c8c555f8" Namespace="calico-system" Pod="csi-node-driver-mvvz6" WorkloadEndpoint="localhost-k8s-csi--node--driver--mvvz6-" Nov 4 12:21:03.782550 containerd[1562]: 2025-11-04 12:21:03.577 [INFO][4293] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a725c7fabf0d35476baf3199941bd31af507122a413a1f3e86d40258c8c555f8" Namespace="calico-system" Pod="csi-node-driver-mvvz6" WorkloadEndpoint="localhost-k8s-csi--node--driver--mvvz6-eth0" Nov 4 12:21:03.782550 containerd[1562]: 2025-11-04 12:21:03.607 [INFO][4309] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a725c7fabf0d35476baf3199941bd31af507122a413a1f3e86d40258c8c555f8" HandleID="k8s-pod-network.a725c7fabf0d35476baf3199941bd31af507122a413a1f3e86d40258c8c555f8" Workload="localhost-k8s-csi--node--driver--mvvz6-eth0" Nov 4 12:21:03.782550 containerd[1562]: 2025-11-04 12:21:03.607 [INFO][4309] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="a725c7fabf0d35476baf3199941bd31af507122a413a1f3e86d40258c8c555f8" HandleID="k8s-pod-network.a725c7fabf0d35476baf3199941bd31af507122a413a1f3e86d40258c8c555f8" Workload="localhost-k8s-csi--node--driver--mvvz6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000439720), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-mvvz6", "timestamp":"2025-11-04 12:21:03.607022024 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 4 
12:21:03.782550 containerd[1562]: 2025-11-04 12:21:03.607 [INFO][4309] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 4 12:21:03.782550 containerd[1562]: 2025-11-04 12:21:03.632 [INFO][4309] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 4 12:21:03.782550 containerd[1562]: 2025-11-04 12:21:03.632 [INFO][4309] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 4 12:21:03.782550 containerd[1562]: 2025-11-04 12:21:03.712 [INFO][4309] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a725c7fabf0d35476baf3199941bd31af507122a413a1f3e86d40258c8c555f8" host="localhost" Nov 4 12:21:03.782550 containerd[1562]: 2025-11-04 12:21:03.725 [INFO][4309] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 4 12:21:03.782550 containerd[1562]: 2025-11-04 12:21:03.733 [INFO][4309] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 4 12:21:03.782550 containerd[1562]: 2025-11-04 12:21:03.736 [INFO][4309] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 4 12:21:03.782550 containerd[1562]: 2025-11-04 12:21:03.740 [INFO][4309] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 4 12:21:03.782550 containerd[1562]: 2025-11-04 12:21:03.740 [INFO][4309] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.a725c7fabf0d35476baf3199941bd31af507122a413a1f3e86d40258c8c555f8" host="localhost" Nov 4 12:21:03.782550 containerd[1562]: 2025-11-04 12:21:03.744 [INFO][4309] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.a725c7fabf0d35476baf3199941bd31af507122a413a1f3e86d40258c8c555f8 Nov 4 12:21:03.782550 containerd[1562]: 2025-11-04 12:21:03.751 [INFO][4309] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 
handle="k8s-pod-network.a725c7fabf0d35476baf3199941bd31af507122a413a1f3e86d40258c8c555f8" host="localhost" Nov 4 12:21:03.782550 containerd[1562]: 2025-11-04 12:21:03.759 [INFO][4309] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.a725c7fabf0d35476baf3199941bd31af507122a413a1f3e86d40258c8c555f8" host="localhost" Nov 4 12:21:03.782550 containerd[1562]: 2025-11-04 12:21:03.759 [INFO][4309] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.a725c7fabf0d35476baf3199941bd31af507122a413a1f3e86d40258c8c555f8" host="localhost" Nov 4 12:21:03.782550 containerd[1562]: 2025-11-04 12:21:03.759 [INFO][4309] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 4 12:21:03.782550 containerd[1562]: 2025-11-04 12:21:03.759 [INFO][4309] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="a725c7fabf0d35476baf3199941bd31af507122a413a1f3e86d40258c8c555f8" HandleID="k8s-pod-network.a725c7fabf0d35476baf3199941bd31af507122a413a1f3e86d40258c8c555f8" Workload="localhost-k8s-csi--node--driver--mvvz6-eth0" Nov 4 12:21:03.783065 containerd[1562]: 2025-11-04 12:21:03.762 [INFO][4293] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a725c7fabf0d35476baf3199941bd31af507122a413a1f3e86d40258c8c555f8" Namespace="calico-system" Pod="csi-node-driver-mvvz6" WorkloadEndpoint="localhost-k8s-csi--node--driver--mvvz6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--mvvz6-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"5213a2cb-c20a-4f3b-8d44-0dd43d58dc01", ResourceVersion:"777", Generation:0, CreationTimestamp:time.Date(2025, time.November, 4, 12, 20, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-mvvz6", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali68532063b41", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 4 12:21:03.783065 containerd[1562]: 2025-11-04 12:21:03.762 [INFO][4293] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="a725c7fabf0d35476baf3199941bd31af507122a413a1f3e86d40258c8c555f8" Namespace="calico-system" Pod="csi-node-driver-mvvz6" WorkloadEndpoint="localhost-k8s-csi--node--driver--mvvz6-eth0" Nov 4 12:21:03.783065 containerd[1562]: 2025-11-04 12:21:03.762 [INFO][4293] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali68532063b41 ContainerID="a725c7fabf0d35476baf3199941bd31af507122a413a1f3e86d40258c8c555f8" Namespace="calico-system" Pod="csi-node-driver-mvvz6" WorkloadEndpoint="localhost-k8s-csi--node--driver--mvvz6-eth0" Nov 4 12:21:03.783065 containerd[1562]: 2025-11-04 12:21:03.769 [INFO][4293] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a725c7fabf0d35476baf3199941bd31af507122a413a1f3e86d40258c8c555f8" Namespace="calico-system" Pod="csi-node-driver-mvvz6" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--mvvz6-eth0" Nov 4 12:21:03.783065 containerd[1562]: 2025-11-04 12:21:03.770 [INFO][4293] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a725c7fabf0d35476baf3199941bd31af507122a413a1f3e86d40258c8c555f8" Namespace="calico-system" Pod="csi-node-driver-mvvz6" WorkloadEndpoint="localhost-k8s-csi--node--driver--mvvz6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--mvvz6-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"5213a2cb-c20a-4f3b-8d44-0dd43d58dc01", ResourceVersion:"777", Generation:0, CreationTimestamp:time.Date(2025, time.November, 4, 12, 20, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a725c7fabf0d35476baf3199941bd31af507122a413a1f3e86d40258c8c555f8", Pod:"csi-node-driver-mvvz6", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali68532063b41", MAC:"f2:83:c3:79:e2:9a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 4 
12:21:03.783065 containerd[1562]: 2025-11-04 12:21:03.779 [INFO][4293] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a725c7fabf0d35476baf3199941bd31af507122a413a1f3e86d40258c8c555f8" Namespace="calico-system" Pod="csi-node-driver-mvvz6" WorkloadEndpoint="localhost-k8s-csi--node--driver--mvvz6-eth0" Nov 4 12:21:03.818829 containerd[1562]: time="2025-11-04T12:21:03.818655016Z" level=info msg="connecting to shim a725c7fabf0d35476baf3199941bd31af507122a413a1f3e86d40258c8c555f8" address="unix:///run/containerd/s/c1065c59d8f13c5276512e73565757a9a7f2b0e10d26520e7d848328a01d396c" namespace=k8s.io protocol=ttrpc version=3 Nov 4 12:21:03.843559 systemd[1]: Started cri-containerd-a725c7fabf0d35476baf3199941bd31af507122a413a1f3e86d40258c8c555f8.scope - libcontainer container a725c7fabf0d35476baf3199941bd31af507122a413a1f3e86d40258c8c555f8. Nov 4 12:21:03.864848 systemd-resolved[1277]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 4 12:21:03.896139 containerd[1562]: time="2025-11-04T12:21:03.896071554Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mvvz6,Uid:5213a2cb-c20a-4f3b-8d44-0dd43d58dc01,Namespace:calico-system,Attempt:0,} returns sandbox id \"a725c7fabf0d35476baf3199941bd31af507122a413a1f3e86d40258c8c555f8\"" Nov 4 12:21:03.983851 containerd[1562]: time="2025-11-04T12:21:03.983772663Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 12:21:03.984735 containerd[1562]: time="2025-11-04T12:21:03.984687460Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 4 12:21:03.984858 containerd[1562]: time="2025-11-04T12:21:03.984744100Z" level=info msg="stop pulling image 
ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 4 12:21:03.985110 kubelet[2700]: E1104 12:21:03.985001 2700 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 4 12:21:03.985110 kubelet[2700]: E1104 12:21:03.985050 2700 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 4 12:21:03.985344 kubelet[2700]: E1104 12:21:03.985238 2700 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-56dfc9fd7-xr6bn_calico-apiserver(b4ff7fc7-ff2d-4f65-af99-cb993f59efe6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 4 12:21:03.985344 kubelet[2700]: E1104 12:21:03.985285 2700 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-56dfc9fd7-xr6bn" podUID="b4ff7fc7-ff2d-4f65-af99-cb993f59efe6" Nov 4 12:21:03.985576 
containerd[1562]: time="2025-11-04T12:21:03.985547498Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 4 12:21:04.476263 systemd-networkd[1467]: cali4810b9227f4: Gained IPv6LL Nov 4 12:21:04.523491 containerd[1562]: time="2025-11-04T12:21:04.523431191Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-8577ffc656-mj25s,Uid:ec4bc564-6f37-4bcf-aa99-073adb5a7f1c,Namespace:calico-system,Attempt:0,}" Nov 4 12:21:04.524533 containerd[1562]: time="2025-11-04T12:21:04.524504628Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-56dfc9fd7-8lpx9,Uid:13e1fa9a-e131-4fe2-8e0a-623c05fa039d,Namespace:calico-apiserver,Attempt:0,}" Nov 4 12:21:04.597039 containerd[1562]: time="2025-11-04T12:21:04.596514827Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 12:21:04.597518 containerd[1562]: time="2025-11-04T12:21:04.597477264Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 4 12:21:04.598595 kubelet[2700]: E1104 12:21:04.598509 2700 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 4 12:21:04.598595 kubelet[2700]: E1104 12:21:04.598557 2700 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" 
Nov 4 12:21:04.598714 kubelet[2700]: E1104 12:21:04.598622 2700 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-mvvz6_calico-system(5213a2cb-c20a-4f3b-8d44-0dd43d58dc01): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 4 12:21:04.599171 containerd[1562]: time="2025-11-04T12:21:04.597558584Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 4 12:21:04.599996 containerd[1562]: time="2025-11-04T12:21:04.599969257Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 4 12:21:04.701494 kubelet[2700]: E1104 12:21:04.700612 2700 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 12:21:04.702112 kubelet[2700]: E1104 12:21:04.702048 2700 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-56dfc9fd7-xr6bn" podUID="b4ff7fc7-ff2d-4f65-af99-cb993f59efe6" Nov 4 12:21:04.717794 systemd-networkd[1467]: cali9b4c6609e0a: Link UP Nov 4 12:21:04.718073 systemd-networkd[1467]: cali9b4c6609e0a: Gained carrier Nov 4 12:21:04.731423 containerd[1562]: 2025-11-04 12:21:04.550 [INFO][4461] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist 
Nov 4 12:21:04.731423 containerd[1562]: 2025-11-04 12:21:04.564 [INFO][4461] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--56dfc9fd7--8lpx9-eth0 calico-apiserver-56dfc9fd7- calico-apiserver 13e1fa9a-e131-4fe2-8e0a-623c05fa039d 873 0 2025-11-04 12:20:38 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:56dfc9fd7 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-56dfc9fd7-8lpx9 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali9b4c6609e0a [] [] }} ContainerID="362e2db460eb059f326d0a638ec15c0ebc4a69dea8ca63784b9df45e9dd96bb8" Namespace="calico-apiserver" Pod="calico-apiserver-56dfc9fd7-8lpx9" WorkloadEndpoint="localhost-k8s-calico--apiserver--56dfc9fd7--8lpx9-" Nov 4 12:21:04.731423 containerd[1562]: 2025-11-04 12:21:04.564 [INFO][4461] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="362e2db460eb059f326d0a638ec15c0ebc4a69dea8ca63784b9df45e9dd96bb8" Namespace="calico-apiserver" Pod="calico-apiserver-56dfc9fd7-8lpx9" WorkloadEndpoint="localhost-k8s-calico--apiserver--56dfc9fd7--8lpx9-eth0" Nov 4 12:21:04.731423 containerd[1562]: 2025-11-04 12:21:04.590 [INFO][4488] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="362e2db460eb059f326d0a638ec15c0ebc4a69dea8ca63784b9df45e9dd96bb8" HandleID="k8s-pod-network.362e2db460eb059f326d0a638ec15c0ebc4a69dea8ca63784b9df45e9dd96bb8" Workload="localhost-k8s-calico--apiserver--56dfc9fd7--8lpx9-eth0" Nov 4 12:21:04.731423 containerd[1562]: 2025-11-04 12:21:04.590 [INFO][4488] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="362e2db460eb059f326d0a638ec15c0ebc4a69dea8ca63784b9df45e9dd96bb8" 
HandleID="k8s-pod-network.362e2db460eb059f326d0a638ec15c0ebc4a69dea8ca63784b9df45e9dd96bb8" Workload="localhost-k8s-calico--apiserver--56dfc9fd7--8lpx9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003ae0d0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-56dfc9fd7-8lpx9", "timestamp":"2025-11-04 12:21:04.590731523 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 4 12:21:04.731423 containerd[1562]: 2025-11-04 12:21:04.590 [INFO][4488] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 4 12:21:04.731423 containerd[1562]: 2025-11-04 12:21:04.590 [INFO][4488] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 4 12:21:04.731423 containerd[1562]: 2025-11-04 12:21:04.590 [INFO][4488] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 4 12:21:04.731423 containerd[1562]: 2025-11-04 12:21:04.603 [INFO][4488] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.362e2db460eb059f326d0a638ec15c0ebc4a69dea8ca63784b9df45e9dd96bb8" host="localhost" Nov 4 12:21:04.731423 containerd[1562]: 2025-11-04 12:21:04.689 [INFO][4488] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 4 12:21:04.731423 containerd[1562]: 2025-11-04 12:21:04.694 [INFO][4488] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 4 12:21:04.731423 containerd[1562]: 2025-11-04 12:21:04.696 [INFO][4488] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 4 12:21:04.731423 containerd[1562]: 2025-11-04 12:21:04.699 [INFO][4488] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 4 12:21:04.731423 containerd[1562]: 2025-11-04 
12:21:04.699 [INFO][4488] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.362e2db460eb059f326d0a638ec15c0ebc4a69dea8ca63784b9df45e9dd96bb8" host="localhost" Nov 4 12:21:04.731423 containerd[1562]: 2025-11-04 12:21:04.701 [INFO][4488] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.362e2db460eb059f326d0a638ec15c0ebc4a69dea8ca63784b9df45e9dd96bb8 Nov 4 12:21:04.731423 containerd[1562]: 2025-11-04 12:21:04.704 [INFO][4488] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.362e2db460eb059f326d0a638ec15c0ebc4a69dea8ca63784b9df45e9dd96bb8" host="localhost" Nov 4 12:21:04.731423 containerd[1562]: 2025-11-04 12:21:04.712 [INFO][4488] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.362e2db460eb059f326d0a638ec15c0ebc4a69dea8ca63784b9df45e9dd96bb8" host="localhost" Nov 4 12:21:04.731423 containerd[1562]: 2025-11-04 12:21:04.712 [INFO][4488] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.362e2db460eb059f326d0a638ec15c0ebc4a69dea8ca63784b9df45e9dd96bb8" host="localhost" Nov 4 12:21:04.731423 containerd[1562]: 2025-11-04 12:21:04.712 [INFO][4488] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 4 12:21:04.731423 containerd[1562]: 2025-11-04 12:21:04.713 [INFO][4488] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="362e2db460eb059f326d0a638ec15c0ebc4a69dea8ca63784b9df45e9dd96bb8" HandleID="k8s-pod-network.362e2db460eb059f326d0a638ec15c0ebc4a69dea8ca63784b9df45e9dd96bb8" Workload="localhost-k8s-calico--apiserver--56dfc9fd7--8lpx9-eth0" Nov 4 12:21:04.732168 containerd[1562]: 2025-11-04 12:21:04.715 [INFO][4461] cni-plugin/k8s.go 418: Populated endpoint ContainerID="362e2db460eb059f326d0a638ec15c0ebc4a69dea8ca63784b9df45e9dd96bb8" Namespace="calico-apiserver" Pod="calico-apiserver-56dfc9fd7-8lpx9" WorkloadEndpoint="localhost-k8s-calico--apiserver--56dfc9fd7--8lpx9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--56dfc9fd7--8lpx9-eth0", GenerateName:"calico-apiserver-56dfc9fd7-", Namespace:"calico-apiserver", SelfLink:"", UID:"13e1fa9a-e131-4fe2-8e0a-623c05fa039d", ResourceVersion:"873", Generation:0, CreationTimestamp:time.Date(2025, time.November, 4, 12, 20, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"56dfc9fd7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-56dfc9fd7-8lpx9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9b4c6609e0a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 4 12:21:04.732168 containerd[1562]: 2025-11-04 12:21:04.716 [INFO][4461] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="362e2db460eb059f326d0a638ec15c0ebc4a69dea8ca63784b9df45e9dd96bb8" Namespace="calico-apiserver" Pod="calico-apiserver-56dfc9fd7-8lpx9" WorkloadEndpoint="localhost-k8s-calico--apiserver--56dfc9fd7--8lpx9-eth0" Nov 4 12:21:04.732168 containerd[1562]: 2025-11-04 12:21:04.716 [INFO][4461] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9b4c6609e0a ContainerID="362e2db460eb059f326d0a638ec15c0ebc4a69dea8ca63784b9df45e9dd96bb8" Namespace="calico-apiserver" Pod="calico-apiserver-56dfc9fd7-8lpx9" WorkloadEndpoint="localhost-k8s-calico--apiserver--56dfc9fd7--8lpx9-eth0" Nov 4 12:21:04.732168 containerd[1562]: 2025-11-04 12:21:04.718 [INFO][4461] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="362e2db460eb059f326d0a638ec15c0ebc4a69dea8ca63784b9df45e9dd96bb8" Namespace="calico-apiserver" Pod="calico-apiserver-56dfc9fd7-8lpx9" WorkloadEndpoint="localhost-k8s-calico--apiserver--56dfc9fd7--8lpx9-eth0" Nov 4 12:21:04.732168 containerd[1562]: 2025-11-04 12:21:04.719 [INFO][4461] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="362e2db460eb059f326d0a638ec15c0ebc4a69dea8ca63784b9df45e9dd96bb8" Namespace="calico-apiserver" Pod="calico-apiserver-56dfc9fd7-8lpx9" WorkloadEndpoint="localhost-k8s-calico--apiserver--56dfc9fd7--8lpx9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--56dfc9fd7--8lpx9-eth0", GenerateName:"calico-apiserver-56dfc9fd7-", 
Namespace:"calico-apiserver", SelfLink:"", UID:"13e1fa9a-e131-4fe2-8e0a-623c05fa039d", ResourceVersion:"873", Generation:0, CreationTimestamp:time.Date(2025, time.November, 4, 12, 20, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"56dfc9fd7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"362e2db460eb059f326d0a638ec15c0ebc4a69dea8ca63784b9df45e9dd96bb8", Pod:"calico-apiserver-56dfc9fd7-8lpx9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9b4c6609e0a", MAC:"16:08:68:56:21:23", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 4 12:21:04.732168 containerd[1562]: 2025-11-04 12:21:04.729 [INFO][4461] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="362e2db460eb059f326d0a638ec15c0ebc4a69dea8ca63784b9df45e9dd96bb8" Namespace="calico-apiserver" Pod="calico-apiserver-56dfc9fd7-8lpx9" WorkloadEndpoint="localhost-k8s-calico--apiserver--56dfc9fd7--8lpx9-eth0" Nov 4 12:21:04.754821 containerd[1562]: time="2025-11-04T12:21:04.754751584Z" level=info msg="connecting to shim 362e2db460eb059f326d0a638ec15c0ebc4a69dea8ca63784b9df45e9dd96bb8" address="unix:///run/containerd/s/116725e344a2b951c0cf7e97824ce4e5d1d79f202ba9ab1d8ee4631cc907eba4" namespace=k8s.io protocol=ttrpc 
version=3 Nov 4 12:21:04.777244 systemd[1]: Started cri-containerd-362e2db460eb059f326d0a638ec15c0ebc4a69dea8ca63784b9df45e9dd96bb8.scope - libcontainer container 362e2db460eb059f326d0a638ec15c0ebc4a69dea8ca63784b9df45e9dd96bb8. Nov 4 12:21:04.783541 systemd[1]: Started sshd@8-10.0.0.73:22-10.0.0.1:57668.service - OpenSSH per-connection server daemon (10.0.0.1:57668). Nov 4 12:21:04.790580 containerd[1562]: time="2025-11-04T12:21:04.790541244Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 12:21:04.792010 containerd[1562]: time="2025-11-04T12:21:04.791952600Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 4 12:21:04.792581 systemd-resolved[1277]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 4 12:21:04.794003 containerd[1562]: time="2025-11-04T12:21:04.792196119Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 4 12:21:04.794559 kubelet[2700]: E1104 12:21:04.794434 2700 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 4 12:21:04.794559 kubelet[2700]: E1104 12:21:04.794482 2700 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 4 12:21:04.794701 kubelet[2700]: E1104 12:21:04.794664 2700 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-mvvz6_calico-system(5213a2cb-c20a-4f3b-8d44-0dd43d58dc01): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 4 12:21:04.794953 kubelet[2700]: E1104 12:21:04.794713 2700 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-mvvz6" podUID="5213a2cb-c20a-4f3b-8d44-0dd43d58dc01" Nov 4 12:21:04.796364 systemd-networkd[1467]: cali9f3820a00bf: Gained IPv6LL Nov 4 12:21:04.828853 containerd[1562]: time="2025-11-04T12:21:04.828817136Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-56dfc9fd7-8lpx9,Uid:13e1fa9a-e131-4fe2-8e0a-623c05fa039d,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"362e2db460eb059f326d0a638ec15c0ebc4a69dea8ca63784b9df45e9dd96bb8\"" Nov 4 12:21:04.832008 systemd-networkd[1467]: cali307b75010f5: Link UP Nov 4 12:21:04.833640 containerd[1562]: time="2025-11-04T12:21:04.833603923Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 4 12:21:04.835933 systemd-networkd[1467]: cali307b75010f5: Gained carrier Nov 4 12:21:04.851106 containerd[1562]: 2025-11-04 12:21:04.546 [INFO][4451] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 4 12:21:04.851106 containerd[1562]: 2025-11-04 12:21:04.559 [INFO][4451] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--8577ffc656--mj25s-eth0 calico-kube-controllers-8577ffc656- calico-system ec4bc564-6f37-4bcf-aa99-073adb5a7f1c 872 0 2025-11-04 12:20:44 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:8577ffc656 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-8577ffc656-mj25s eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali307b75010f5 [] [] }} ContainerID="da43b98c7b5ad08807f36f0b67fdd6065234660ae92d364cd017b9c6e186879d" Namespace="calico-system" Pod="calico-kube-controllers-8577ffc656-mj25s" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--8577ffc656--mj25s-" Nov 4 12:21:04.851106 containerd[1562]: 2025-11-04 12:21:04.559 [INFO][4451] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="da43b98c7b5ad08807f36f0b67fdd6065234660ae92d364cd017b9c6e186879d" Namespace="calico-system" Pod="calico-kube-controllers-8577ffc656-mj25s" 
WorkloadEndpoint="localhost-k8s-calico--kube--controllers--8577ffc656--mj25s-eth0" Nov 4 12:21:04.851106 containerd[1562]: 2025-11-04 12:21:04.596 [INFO][4481] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="da43b98c7b5ad08807f36f0b67fdd6065234660ae92d364cd017b9c6e186879d" HandleID="k8s-pod-network.da43b98c7b5ad08807f36f0b67fdd6065234660ae92d364cd017b9c6e186879d" Workload="localhost-k8s-calico--kube--controllers--8577ffc656--mj25s-eth0" Nov 4 12:21:04.851106 containerd[1562]: 2025-11-04 12:21:04.596 [INFO][4481] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="da43b98c7b5ad08807f36f0b67fdd6065234660ae92d364cd017b9c6e186879d" HandleID="k8s-pod-network.da43b98c7b5ad08807f36f0b67fdd6065234660ae92d364cd017b9c6e186879d" Workload="localhost-k8s-calico--kube--controllers--8577ffc656--mj25s-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002cb080), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-8577ffc656-mj25s", "timestamp":"2025-11-04 12:21:04.596820426 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 4 12:21:04.851106 containerd[1562]: 2025-11-04 12:21:04.598 [INFO][4481] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 4 12:21:04.851106 containerd[1562]: 2025-11-04 12:21:04.712 [INFO][4481] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 4 12:21:04.851106 containerd[1562]: 2025-11-04 12:21:04.712 [INFO][4481] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 4 12:21:04.851106 containerd[1562]: 2025-11-04 12:21:04.729 [INFO][4481] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.da43b98c7b5ad08807f36f0b67fdd6065234660ae92d364cd017b9c6e186879d" host="localhost" Nov 4 12:21:04.851106 containerd[1562]: 2025-11-04 12:21:04.790 [INFO][4481] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 4 12:21:04.851106 containerd[1562]: 2025-11-04 12:21:04.803 [INFO][4481] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 4 12:21:04.851106 containerd[1562]: 2025-11-04 12:21:04.806 [INFO][4481] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 4 12:21:04.851106 containerd[1562]: 2025-11-04 12:21:04.809 [INFO][4481] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 4 12:21:04.851106 containerd[1562]: 2025-11-04 12:21:04.809 [INFO][4481] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.da43b98c7b5ad08807f36f0b67fdd6065234660ae92d364cd017b9c6e186879d" host="localhost" Nov 4 12:21:04.851106 containerd[1562]: 2025-11-04 12:21:04.810 [INFO][4481] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.da43b98c7b5ad08807f36f0b67fdd6065234660ae92d364cd017b9c6e186879d Nov 4 12:21:04.851106 containerd[1562]: 2025-11-04 12:21:04.815 [INFO][4481] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.da43b98c7b5ad08807f36f0b67fdd6065234660ae92d364cd017b9c6e186879d" host="localhost" Nov 4 12:21:04.851106 containerd[1562]: 2025-11-04 12:21:04.823 [INFO][4481] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 
handle="k8s-pod-network.da43b98c7b5ad08807f36f0b67fdd6065234660ae92d364cd017b9c6e186879d" host="localhost" Nov 4 12:21:04.851106 containerd[1562]: 2025-11-04 12:21:04.823 [INFO][4481] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.da43b98c7b5ad08807f36f0b67fdd6065234660ae92d364cd017b9c6e186879d" host="localhost" Nov 4 12:21:04.851106 containerd[1562]: 2025-11-04 12:21:04.823 [INFO][4481] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 4 12:21:04.851106 containerd[1562]: 2025-11-04 12:21:04.823 [INFO][4481] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="da43b98c7b5ad08807f36f0b67fdd6065234660ae92d364cd017b9c6e186879d" HandleID="k8s-pod-network.da43b98c7b5ad08807f36f0b67fdd6065234660ae92d364cd017b9c6e186879d" Workload="localhost-k8s-calico--kube--controllers--8577ffc656--mj25s-eth0" Nov 4 12:21:04.851590 containerd[1562]: 2025-11-04 12:21:04.826 [INFO][4451] cni-plugin/k8s.go 418: Populated endpoint ContainerID="da43b98c7b5ad08807f36f0b67fdd6065234660ae92d364cd017b9c6e186879d" Namespace="calico-system" Pod="calico-kube-controllers-8577ffc656-mj25s" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--8577ffc656--mj25s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--8577ffc656--mj25s-eth0", GenerateName:"calico-kube-controllers-8577ffc656-", Namespace:"calico-system", SelfLink:"", UID:"ec4bc564-6f37-4bcf-aa99-073adb5a7f1c", ResourceVersion:"872", Generation:0, CreationTimestamp:time.Date(2025, time.November, 4, 12, 20, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"8577ffc656", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-8577ffc656-mj25s", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali307b75010f5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 4 12:21:04.851590 containerd[1562]: 2025-11-04 12:21:04.826 [INFO][4451] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="da43b98c7b5ad08807f36f0b67fdd6065234660ae92d364cd017b9c6e186879d" Namespace="calico-system" Pod="calico-kube-controllers-8577ffc656-mj25s" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--8577ffc656--mj25s-eth0" Nov 4 12:21:04.851590 containerd[1562]: 2025-11-04 12:21:04.826 [INFO][4451] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali307b75010f5 ContainerID="da43b98c7b5ad08807f36f0b67fdd6065234660ae92d364cd017b9c6e186879d" Namespace="calico-system" Pod="calico-kube-controllers-8577ffc656-mj25s" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--8577ffc656--mj25s-eth0" Nov 4 12:21:04.851590 containerd[1562]: 2025-11-04 12:21:04.836 [INFO][4451] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="da43b98c7b5ad08807f36f0b67fdd6065234660ae92d364cd017b9c6e186879d" Namespace="calico-system" Pod="calico-kube-controllers-8577ffc656-mj25s" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--8577ffc656--mj25s-eth0" Nov 4 12:21:04.851590 containerd[1562]: 2025-11-04 
12:21:04.837 [INFO][4451] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="da43b98c7b5ad08807f36f0b67fdd6065234660ae92d364cd017b9c6e186879d" Namespace="calico-system" Pod="calico-kube-controllers-8577ffc656-mj25s" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--8577ffc656--mj25s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--8577ffc656--mj25s-eth0", GenerateName:"calico-kube-controllers-8577ffc656-", Namespace:"calico-system", SelfLink:"", UID:"ec4bc564-6f37-4bcf-aa99-073adb5a7f1c", ResourceVersion:"872", Generation:0, CreationTimestamp:time.Date(2025, time.November, 4, 12, 20, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"8577ffc656", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"da43b98c7b5ad08807f36f0b67fdd6065234660ae92d364cd017b9c6e186879d", Pod:"calico-kube-controllers-8577ffc656-mj25s", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali307b75010f5", MAC:"4a:fd:2d:6a:6d:6d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 4 12:21:04.851590 containerd[1562]: 2025-11-04 
12:21:04.847 [INFO][4451] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="da43b98c7b5ad08807f36f0b67fdd6065234660ae92d364cd017b9c6e186879d" Namespace="calico-system" Pod="calico-kube-controllers-8577ffc656-mj25s" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--8577ffc656--mj25s-eth0" Nov 4 12:21:04.853322 sshd[4547]: Accepted publickey for core from 10.0.0.1 port 57668 ssh2: RSA SHA256:BU+2P8LonXUlklSP4qnprs25Z/jySPnCvqmOgDUDXeU Nov 4 12:21:04.855855 sshd-session[4547]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 12:21:04.860592 systemd-logind[1542]: New session 9 of user core. Nov 4 12:21:04.867276 systemd[1]: Started session-9.scope - Session 9 of User core. Nov 4 12:21:04.878801 containerd[1562]: time="2025-11-04T12:21:04.878730317Z" level=info msg="connecting to shim da43b98c7b5ad08807f36f0b67fdd6065234660ae92d364cd017b9c6e186879d" address="unix:///run/containerd/s/f389364f1e22c9f03c2c0c40e2149331c946f57c7eef622a094ace9b145da071" namespace=k8s.io protocol=ttrpc version=3 Nov 4 12:21:04.885605 kubelet[2700]: I1104 12:21:04.885069 2700 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 4 12:21:04.885605 kubelet[2700]: E1104 12:21:04.885606 2700 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 12:21:04.918259 systemd[1]: Started cri-containerd-da43b98c7b5ad08807f36f0b67fdd6065234660ae92d364cd017b9c6e186879d.scope - libcontainer container da43b98c7b5ad08807f36f0b67fdd6065234660ae92d364cd017b9c6e186879d. 
Nov 4 12:21:04.954473 systemd-resolved[1277]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 4 12:21:05.026732 containerd[1562]: time="2025-11-04T12:21:05.026622104Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-8577ffc656-mj25s,Uid:ec4bc564-6f37-4bcf-aa99-073adb5a7f1c,Namespace:calico-system,Attempt:0,} returns sandbox id \"da43b98c7b5ad08807f36f0b67fdd6065234660ae92d364cd017b9c6e186879d\"" Nov 4 12:21:05.053011 containerd[1562]: time="2025-11-04T12:21:05.052879313Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9c9895d37b3598b87588ceab6ac248d43c628fa4f7fbaad9cebd1323a8a790ce\" id:\"3c1a8bd886de9853d7bf1436ab0138718bd8ffb27ecf2b1a2cd0491dcb6a1bd2\" pid:4639 exit_status:1 exited_at:{seconds:1762258865 nanos:52584194}" Nov 4 12:21:05.081919 sshd[4580]: Connection closed by 10.0.0.1 port 57668 Nov 4 12:21:05.082259 sshd-session[4547]: pam_unix(sshd:session): session closed for user core Nov 4 12:21:05.085669 systemd-logind[1542]: Session 9 logged out. Waiting for processes to exit. Nov 4 12:21:05.085811 systemd[1]: sshd@8-10.0.0.73:22-10.0.0.1:57668.service: Deactivated successfully. Nov 4 12:21:05.088627 systemd[1]: session-9.scope: Deactivated successfully. Nov 4 12:21:05.090789 systemd-logind[1542]: Removed session 9. 
Nov 4 12:21:05.114682 containerd[1562]: time="2025-11-04T12:21:05.114639344Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 12:21:05.115634 containerd[1562]: time="2025-11-04T12:21:05.115594501Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 4 12:21:05.116727 kubelet[2700]: E1104 12:21:05.115878 2700 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 4 12:21:05.116928 containerd[1562]: time="2025-11-04T12:21:05.115671021Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 4 12:21:05.117125 kubelet[2700]: E1104 12:21:05.116898 2700 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 4 12:21:05.117166 kubelet[2700]: E1104 12:21:05.117123 2700 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-56dfc9fd7-8lpx9_calico-apiserver(13e1fa9a-e131-4fe2-8e0a-623c05fa039d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 4 12:21:05.117358 containerd[1562]: time="2025-11-04T12:21:05.117326737Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 4 12:21:05.117574 kubelet[2700]: E1104 12:21:05.117537 2700 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-56dfc9fd7-8lpx9" podUID="13e1fa9a-e131-4fe2-8e0a-623c05fa039d" Nov 4 12:21:05.126742 containerd[1562]: time="2025-11-04T12:21:05.126687231Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9c9895d37b3598b87588ceab6ac248d43c628fa4f7fbaad9cebd1323a8a790ce\" id:\"0f9c607204a5f57c68ecfc952a3df923e7910257f7f0b6dc8c0b7793ecc73cc0\" pid:4681 exit_status:1 exited_at:{seconds:1762258865 nanos:126429912}" Nov 4 12:21:05.351541 kubelet[2700]: I1104 12:21:05.351425 2700 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 4 12:21:05.351895 kubelet[2700]: E1104 12:21:05.351764 2700 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 12:21:05.410617 containerd[1562]: time="2025-11-04T12:21:05.410554655Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 12:21:05.412699 containerd[1562]: time="2025-11-04T12:21:05.412645690Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": 
failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 4 12:21:05.412793 containerd[1562]: time="2025-11-04T12:21:05.412737809Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 4 12:21:05.412917 kubelet[2700]: E1104 12:21:05.412884 2700 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 4 12:21:05.412977 kubelet[2700]: E1104 12:21:05.412927 2700 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 4 12:21:05.413043 kubelet[2700]: E1104 12:21:05.413020 2700 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-8577ffc656-mj25s_calico-system(ec4bc564-6f37-4bcf-aa99-073adb5a7f1c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 4 12:21:05.413165 kubelet[2700]: E1104 12:21:05.413056 2700 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc 
= failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-8577ffc656-mj25s" podUID="ec4bc564-6f37-4bcf-aa99-073adb5a7f1c" Nov 4 12:21:05.436237 systemd-networkd[1467]: cali68532063b41: Gained IPv6LL Nov 4 12:21:05.714188 kubelet[2700]: E1104 12:21:05.714130 2700 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-8577ffc656-mj25s" podUID="ec4bc564-6f37-4bcf-aa99-073adb5a7f1c" Nov 4 12:21:05.715023 kubelet[2700]: E1104 12:21:05.714365 2700 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 12:21:05.715023 kubelet[2700]: E1104 12:21:05.714440 2700 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 12:21:05.716639 kubelet[2700]: E1104 12:21:05.716478 2700 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve 
reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-56dfc9fd7-xr6bn" podUID="b4ff7fc7-ff2d-4f65-af99-cb993f59efe6" Nov 4 12:21:05.717263 kubelet[2700]: E1104 12:21:05.717230 2700 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-56dfc9fd7-8lpx9" podUID="13e1fa9a-e131-4fe2-8e0a-623c05fa039d" Nov 4 12:21:05.717473 kubelet[2700]: E1104 12:21:05.717435 2700 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-mvvz6" podUID="5213a2cb-c20a-4f3b-8d44-0dd43d58dc01" Nov 4 12:21:05.919004 systemd-networkd[1467]: vxlan.calico: Link UP Nov 4 
12:21:05.919012 systemd-networkd[1467]: vxlan.calico: Gained carrier Nov 4 12:21:06.076196 systemd-networkd[1467]: cali9b4c6609e0a: Gained IPv6LL Nov 4 12:21:06.332247 systemd-networkd[1467]: cali307b75010f5: Gained IPv6LL Nov 4 12:21:06.524294 kubelet[2700]: E1104 12:21:06.524258 2700 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 12:21:06.524795 containerd[1562]: time="2025-11-04T12:21:06.524759963Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-dmb25,Uid:84f2fa61-0710-4d49-a317-ce2fe80e0242,Namespace:kube-system,Attempt:0,}" Nov 4 12:21:06.526230 containerd[1562]: time="2025-11-04T12:21:06.526138479Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-42zmt,Uid:27d501a2-434d-4c01-adef-352f89d7e050,Namespace:calico-system,Attempt:0,}" Nov 4 12:21:06.717400 kubelet[2700]: E1104 12:21:06.717356 2700 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-8577ffc656-mj25s" podUID="ec4bc564-6f37-4bcf-aa99-073adb5a7f1c" Nov 4 12:21:06.718430 kubelet[2700]: E1104 12:21:06.718404 2700 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-56dfc9fd7-8lpx9" podUID="13e1fa9a-e131-4fe2-8e0a-623c05fa039d" Nov 4 12:21:06.813712 systemd-networkd[1467]: cali4023e525666: Link UP Nov 4 12:21:06.813992 systemd-networkd[1467]: cali4023e525666: Gained carrier Nov 4 12:21:06.839071 containerd[1562]: 2025-11-04 12:21:06.579 [INFO][4837] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--66bc5c9577--dmb25-eth0 coredns-66bc5c9577- kube-system 84f2fa61-0710-4d49-a317-ce2fe80e0242 865 0 2025-11-04 12:20:28 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-66bc5c9577-dmb25 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali4023e525666 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="a72e80d7a017bc0cb9584a44257386734bdd03024ac8b0d9136227dee3fee7c3" Namespace="kube-system" Pod="coredns-66bc5c9577-dmb25" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--dmb25-" Nov 4 12:21:06.839071 containerd[1562]: 2025-11-04 12:21:06.579 [INFO][4837] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a72e80d7a017bc0cb9584a44257386734bdd03024ac8b0d9136227dee3fee7c3" Namespace="kube-system" Pod="coredns-66bc5c9577-dmb25" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--dmb25-eth0" Nov 4 12:21:06.839071 containerd[1562]: 2025-11-04 12:21:06.603 [INFO][4869] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a72e80d7a017bc0cb9584a44257386734bdd03024ac8b0d9136227dee3fee7c3" 
HandleID="k8s-pod-network.a72e80d7a017bc0cb9584a44257386734bdd03024ac8b0d9136227dee3fee7c3" Workload="localhost-k8s-coredns--66bc5c9577--dmb25-eth0" Nov 4 12:21:06.839071 containerd[1562]: 2025-11-04 12:21:06.603 [INFO][4869] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="a72e80d7a017bc0cb9584a44257386734bdd03024ac8b0d9136227dee3fee7c3" HandleID="k8s-pod-network.a72e80d7a017bc0cb9584a44257386734bdd03024ac8b0d9136227dee3fee7c3" Workload="localhost-k8s-coredns--66bc5c9577--dmb25-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000136dd0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-66bc5c9577-dmb25", "timestamp":"2025-11-04 12:21:06.603539472 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 4 12:21:06.839071 containerd[1562]: 2025-11-04 12:21:06.603 [INFO][4869] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 4 12:21:06.839071 containerd[1562]: 2025-11-04 12:21:06.603 [INFO][4869] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 4 12:21:06.839071 containerd[1562]: 2025-11-04 12:21:06.603 [INFO][4869] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 4 12:21:06.839071 containerd[1562]: 2025-11-04 12:21:06.613 [INFO][4869] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a72e80d7a017bc0cb9584a44257386734bdd03024ac8b0d9136227dee3fee7c3" host="localhost" Nov 4 12:21:06.839071 containerd[1562]: 2025-11-04 12:21:06.617 [INFO][4869] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 4 12:21:06.839071 containerd[1562]: 2025-11-04 12:21:06.625 [INFO][4869] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 4 12:21:06.839071 containerd[1562]: 2025-11-04 12:21:06.627 [INFO][4869] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 4 12:21:06.839071 containerd[1562]: 2025-11-04 12:21:06.629 [INFO][4869] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 4 12:21:06.839071 containerd[1562]: 2025-11-04 12:21:06.630 [INFO][4869] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.a72e80d7a017bc0cb9584a44257386734bdd03024ac8b0d9136227dee3fee7c3" host="localhost" Nov 4 12:21:06.839071 containerd[1562]: 2025-11-04 12:21:06.631 [INFO][4869] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.a72e80d7a017bc0cb9584a44257386734bdd03024ac8b0d9136227dee3fee7c3 Nov 4 12:21:06.839071 containerd[1562]: 2025-11-04 12:21:06.647 [INFO][4869] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.a72e80d7a017bc0cb9584a44257386734bdd03024ac8b0d9136227dee3fee7c3" host="localhost" Nov 4 12:21:06.839071 containerd[1562]: 2025-11-04 12:21:06.805 [INFO][4869] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 
handle="k8s-pod-network.a72e80d7a017bc0cb9584a44257386734bdd03024ac8b0d9136227dee3fee7c3" host="localhost" Nov 4 12:21:06.839071 containerd[1562]: 2025-11-04 12:21:06.805 [INFO][4869] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.a72e80d7a017bc0cb9584a44257386734bdd03024ac8b0d9136227dee3fee7c3" host="localhost" Nov 4 12:21:06.839071 containerd[1562]: 2025-11-04 12:21:06.805 [INFO][4869] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 4 12:21:06.839071 containerd[1562]: 2025-11-04 12:21:06.805 [INFO][4869] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="a72e80d7a017bc0cb9584a44257386734bdd03024ac8b0d9136227dee3fee7c3" HandleID="k8s-pod-network.a72e80d7a017bc0cb9584a44257386734bdd03024ac8b0d9136227dee3fee7c3" Workload="localhost-k8s-coredns--66bc5c9577--dmb25-eth0" Nov 4 12:21:06.840146 containerd[1562]: 2025-11-04 12:21:06.811 [INFO][4837] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a72e80d7a017bc0cb9584a44257386734bdd03024ac8b0d9136227dee3fee7c3" Namespace="kube-system" Pod="coredns-66bc5c9577-dmb25" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--dmb25-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--dmb25-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"84f2fa61-0710-4d49-a317-ce2fe80e0242", ResourceVersion:"865", Generation:0, CreationTimestamp:time.Date(2025, time.November, 4, 12, 20, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-66bc5c9577-dmb25", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4023e525666", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 4 12:21:06.840146 containerd[1562]: 2025-11-04 12:21:06.811 [INFO][4837] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="a72e80d7a017bc0cb9584a44257386734bdd03024ac8b0d9136227dee3fee7c3" Namespace="kube-system" Pod="coredns-66bc5c9577-dmb25" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--dmb25-eth0" Nov 4 12:21:06.840146 containerd[1562]: 2025-11-04 12:21:06.811 [INFO][4837] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4023e525666 ContainerID="a72e80d7a017bc0cb9584a44257386734bdd03024ac8b0d9136227dee3fee7c3" Namespace="kube-system" Pod="coredns-66bc5c9577-dmb25" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--dmb25-eth0" Nov 4 
12:21:06.840146 containerd[1562]: 2025-11-04 12:21:06.814 [INFO][4837] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a72e80d7a017bc0cb9584a44257386734bdd03024ac8b0d9136227dee3fee7c3" Namespace="kube-system" Pod="coredns-66bc5c9577-dmb25" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--dmb25-eth0" Nov 4 12:21:06.840146 containerd[1562]: 2025-11-04 12:21:06.815 [INFO][4837] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a72e80d7a017bc0cb9584a44257386734bdd03024ac8b0d9136227dee3fee7c3" Namespace="kube-system" Pod="coredns-66bc5c9577-dmb25" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--dmb25-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--dmb25-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"84f2fa61-0710-4d49-a317-ce2fe80e0242", ResourceVersion:"865", Generation:0, CreationTimestamp:time.Date(2025, time.November, 4, 12, 20, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a72e80d7a017bc0cb9584a44257386734bdd03024ac8b0d9136227dee3fee7c3", Pod:"coredns-66bc5c9577-dmb25", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4023e525666", 
MAC:"22:a9:96:db:a0:e1", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 4 12:21:06.840146 containerd[1562]: 2025-11-04 12:21:06.834 [INFO][4837] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a72e80d7a017bc0cb9584a44257386734bdd03024ac8b0d9136227dee3fee7c3" Namespace="kube-system" Pod="coredns-66bc5c9577-dmb25" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--dmb25-eth0" Nov 4 12:21:06.893050 systemd-networkd[1467]: cali8f4f8abb172: Link UP Nov 4 12:21:06.893830 systemd-networkd[1467]: cali8f4f8abb172: Gained carrier Nov 4 12:21:06.911212 containerd[1562]: 2025-11-04 12:21:06.579 [INFO][4838] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--7c778bb748--42zmt-eth0 goldmane-7c778bb748- calico-system 27d501a2-434d-4c01-adef-352f89d7e050 874 0 2025-11-04 12:20:41 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:7c778bb748 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-7c778bb748-42zmt eth0 goldmane [] [] [kns.calico-system 
ksa.calico-system.goldmane] cali8f4f8abb172 [] [] }} ContainerID="487b5ec9113aa9e6bb3aa9c80db16b171d82dbbbe2c6abdab23bc9051a43ebbe" Namespace="calico-system" Pod="goldmane-7c778bb748-42zmt" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--42zmt-" Nov 4 12:21:06.911212 containerd[1562]: 2025-11-04 12:21:06.580 [INFO][4838] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="487b5ec9113aa9e6bb3aa9c80db16b171d82dbbbe2c6abdab23bc9051a43ebbe" Namespace="calico-system" Pod="goldmane-7c778bb748-42zmt" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--42zmt-eth0" Nov 4 12:21:06.911212 containerd[1562]: 2025-11-04 12:21:06.610 [INFO][4870] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="487b5ec9113aa9e6bb3aa9c80db16b171d82dbbbe2c6abdab23bc9051a43ebbe" HandleID="k8s-pod-network.487b5ec9113aa9e6bb3aa9c80db16b171d82dbbbe2c6abdab23bc9051a43ebbe" Workload="localhost-k8s-goldmane--7c778bb748--42zmt-eth0" Nov 4 12:21:06.911212 containerd[1562]: 2025-11-04 12:21:06.610 [INFO][4870] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="487b5ec9113aa9e6bb3aa9c80db16b171d82dbbbe2c6abdab23bc9051a43ebbe" HandleID="k8s-pod-network.487b5ec9113aa9e6bb3aa9c80db16b171d82dbbbe2c6abdab23bc9051a43ebbe" Workload="localhost-k8s-goldmane--7c778bb748--42zmt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002c2fd0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-7c778bb748-42zmt", "timestamp":"2025-11-04 12:21:06.610020775 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 4 12:21:06.911212 containerd[1562]: 2025-11-04 12:21:06.610 [INFO][4870] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Nov 4 12:21:06.911212 containerd[1562]: 2025-11-04 12:21:06.805 [INFO][4870] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 4 12:21:06.911212 containerd[1562]: 2025-11-04 12:21:06.806 [INFO][4870] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 4 12:21:06.911212 containerd[1562]: 2025-11-04 12:21:06.834 [INFO][4870] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.487b5ec9113aa9e6bb3aa9c80db16b171d82dbbbe2c6abdab23bc9051a43ebbe" host="localhost" Nov 4 12:21:06.911212 containerd[1562]: 2025-11-04 12:21:06.842 [INFO][4870] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 4 12:21:06.911212 containerd[1562]: 2025-11-04 12:21:06.850 [INFO][4870] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 4 12:21:06.911212 containerd[1562]: 2025-11-04 12:21:06.852 [INFO][4870] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 4 12:21:06.911212 containerd[1562]: 2025-11-04 12:21:06.855 [INFO][4870] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 4 12:21:06.911212 containerd[1562]: 2025-11-04 12:21:06.855 [INFO][4870] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.487b5ec9113aa9e6bb3aa9c80db16b171d82dbbbe2c6abdab23bc9051a43ebbe" host="localhost" Nov 4 12:21:06.911212 containerd[1562]: 2025-11-04 12:21:06.870 [INFO][4870] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.487b5ec9113aa9e6bb3aa9c80db16b171d82dbbbe2c6abdab23bc9051a43ebbe Nov 4 12:21:06.911212 containerd[1562]: 2025-11-04 12:21:06.877 [INFO][4870] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.487b5ec9113aa9e6bb3aa9c80db16b171d82dbbbe2c6abdab23bc9051a43ebbe" host="localhost" Nov 4 12:21:06.911212 containerd[1562]: 2025-11-04 12:21:06.887 [INFO][4870] ipam/ipam.go 1262: 
Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.487b5ec9113aa9e6bb3aa9c80db16b171d82dbbbe2c6abdab23bc9051a43ebbe" host="localhost" Nov 4 12:21:06.911212 containerd[1562]: 2025-11-04 12:21:06.887 [INFO][4870] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.487b5ec9113aa9e6bb3aa9c80db16b171d82dbbbe2c6abdab23bc9051a43ebbe" host="localhost" Nov 4 12:21:06.911212 containerd[1562]: 2025-11-04 12:21:06.887 [INFO][4870] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 4 12:21:06.911212 containerd[1562]: 2025-11-04 12:21:06.887 [INFO][4870] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="487b5ec9113aa9e6bb3aa9c80db16b171d82dbbbe2c6abdab23bc9051a43ebbe" HandleID="k8s-pod-network.487b5ec9113aa9e6bb3aa9c80db16b171d82dbbbe2c6abdab23bc9051a43ebbe" Workload="localhost-k8s-goldmane--7c778bb748--42zmt-eth0" Nov 4 12:21:06.911694 containerd[1562]: 2025-11-04 12:21:06.889 [INFO][4838] cni-plugin/k8s.go 418: Populated endpoint ContainerID="487b5ec9113aa9e6bb3aa9c80db16b171d82dbbbe2c6abdab23bc9051a43ebbe" Namespace="calico-system" Pod="goldmane-7c778bb748-42zmt" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--42zmt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7c778bb748--42zmt-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"27d501a2-434d-4c01-adef-352f89d7e050", ResourceVersion:"874", Generation:0, CreationTimestamp:time.Date(2025, time.November, 4, 12, 20, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-7c778bb748-42zmt", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali8f4f8abb172", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 4 12:21:06.911694 containerd[1562]: 2025-11-04 12:21:06.889 [INFO][4838] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="487b5ec9113aa9e6bb3aa9c80db16b171d82dbbbe2c6abdab23bc9051a43ebbe" Namespace="calico-system" Pod="goldmane-7c778bb748-42zmt" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--42zmt-eth0" Nov 4 12:21:06.911694 containerd[1562]: 2025-11-04 12:21:06.889 [INFO][4838] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8f4f8abb172 ContainerID="487b5ec9113aa9e6bb3aa9c80db16b171d82dbbbe2c6abdab23bc9051a43ebbe" Namespace="calico-system" Pod="goldmane-7c778bb748-42zmt" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--42zmt-eth0" Nov 4 12:21:06.911694 containerd[1562]: 2025-11-04 12:21:06.894 [INFO][4838] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="487b5ec9113aa9e6bb3aa9c80db16b171d82dbbbe2c6abdab23bc9051a43ebbe" Namespace="calico-system" Pod="goldmane-7c778bb748-42zmt" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--42zmt-eth0" Nov 4 12:21:06.911694 containerd[1562]: 2025-11-04 12:21:06.894 [INFO][4838] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="487b5ec9113aa9e6bb3aa9c80db16b171d82dbbbe2c6abdab23bc9051a43ebbe" 
Namespace="calico-system" Pod="goldmane-7c778bb748-42zmt" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--42zmt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7c778bb748--42zmt-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"27d501a2-434d-4c01-adef-352f89d7e050", ResourceVersion:"874", Generation:0, CreationTimestamp:time.Date(2025, time.November, 4, 12, 20, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"487b5ec9113aa9e6bb3aa9c80db16b171d82dbbbe2c6abdab23bc9051a43ebbe", Pod:"goldmane-7c778bb748-42zmt", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali8f4f8abb172", MAC:"c6:7f:57:66:3c:11", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 4 12:21:06.911694 containerd[1562]: 2025-11-04 12:21:06.905 [INFO][4838] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="487b5ec9113aa9e6bb3aa9c80db16b171d82dbbbe2c6abdab23bc9051a43ebbe" Namespace="calico-system" Pod="goldmane-7c778bb748-42zmt" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--42zmt-eth0" Nov 4 12:21:06.911694 containerd[1562]: 
time="2025-11-04T12:21:06.911156371Z" level=info msg="connecting to shim a72e80d7a017bc0cb9584a44257386734bdd03024ac8b0d9136227dee3fee7c3" address="unix:///run/containerd/s/a4ec21cf9b4281d0ccb09140550eba7c8014f4d280a4210ace1b935ac7fe53e4" namespace=k8s.io protocol=ttrpc version=3 Nov 4 12:21:06.939796 containerd[1562]: time="2025-11-04T12:21:06.939755294Z" level=info msg="connecting to shim 487b5ec9113aa9e6bb3aa9c80db16b171d82dbbbe2c6abdab23bc9051a43ebbe" address="unix:///run/containerd/s/1b21df4da7a6d34f2554e2c326ae868f28b6226ee097630689c512b743e44111" namespace=k8s.io protocol=ttrpc version=3 Nov 4 12:21:06.945398 systemd[1]: Started cri-containerd-a72e80d7a017bc0cb9584a44257386734bdd03024ac8b0d9136227dee3fee7c3.scope - libcontainer container a72e80d7a017bc0cb9584a44257386734bdd03024ac8b0d9136227dee3fee7c3. Nov 4 12:21:06.958592 systemd-resolved[1277]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 4 12:21:06.968256 systemd[1]: Started cri-containerd-487b5ec9113aa9e6bb3aa9c80db16b171d82dbbbe2c6abdab23bc9051a43ebbe.scope - libcontainer container 487b5ec9113aa9e6bb3aa9c80db16b171d82dbbbe2c6abdab23bc9051a43ebbe. 
Nov 4 12:21:06.981927 containerd[1562]: time="2025-11-04T12:21:06.981883342Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-dmb25,Uid:84f2fa61-0710-4d49-a317-ce2fe80e0242,Namespace:kube-system,Attempt:0,} returns sandbox id \"a72e80d7a017bc0cb9584a44257386734bdd03024ac8b0d9136227dee3fee7c3\"" Nov 4 12:21:06.982607 kubelet[2700]: E1104 12:21:06.982580 2700 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 12:21:06.987389 containerd[1562]: time="2025-11-04T12:21:06.987352807Z" level=info msg="CreateContainer within sandbox \"a72e80d7a017bc0cb9584a44257386734bdd03024ac8b0d9136227dee3fee7c3\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 4 12:21:06.991332 systemd-resolved[1277]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 4 12:21:06.998652 containerd[1562]: time="2025-11-04T12:21:06.998623617Z" level=info msg="Container 5f49c469ea5981399dcfa83d76c749d281c1f3b3949ad85c598eb40307a88aa1: CDI devices from CRI Config.CDIDevices: []" Nov 4 12:21:07.003922 containerd[1562]: time="2025-11-04T12:21:07.003881563Z" level=info msg="CreateContainer within sandbox \"a72e80d7a017bc0cb9584a44257386734bdd03024ac8b0d9136227dee3fee7c3\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"5f49c469ea5981399dcfa83d76c749d281c1f3b3949ad85c598eb40307a88aa1\"" Nov 4 12:21:07.004341 containerd[1562]: time="2025-11-04T12:21:07.004315322Z" level=info msg="StartContainer for \"5f49c469ea5981399dcfa83d76c749d281c1f3b3949ad85c598eb40307a88aa1\"" Nov 4 12:21:07.006910 containerd[1562]: time="2025-11-04T12:21:07.006812796Z" level=info msg="connecting to shim 5f49c469ea5981399dcfa83d76c749d281c1f3b3949ad85c598eb40307a88aa1" address="unix:///run/containerd/s/a4ec21cf9b4281d0ccb09140550eba7c8014f4d280a4210ace1b935ac7fe53e4" protocol=ttrpc version=3 Nov 4 
12:21:07.015619 containerd[1562]: time="2025-11-04T12:21:07.015560573Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-42zmt,Uid:27d501a2-434d-4c01-adef-352f89d7e050,Namespace:calico-system,Attempt:0,} returns sandbox id \"487b5ec9113aa9e6bb3aa9c80db16b171d82dbbbe2c6abdab23bc9051a43ebbe\"" Nov 4 12:21:07.017564 containerd[1562]: time="2025-11-04T12:21:07.017537008Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 4 12:21:07.034244 systemd[1]: Started cri-containerd-5f49c469ea5981399dcfa83d76c749d281c1f3b3949ad85c598eb40307a88aa1.scope - libcontainer container 5f49c469ea5981399dcfa83d76c749d281c1f3b3949ad85c598eb40307a88aa1. Nov 4 12:21:07.037767 systemd-networkd[1467]: vxlan.calico: Gained IPv6LL Nov 4 12:21:07.058526 containerd[1562]: time="2025-11-04T12:21:07.058492901Z" level=info msg="StartContainer for \"5f49c469ea5981399dcfa83d76c749d281c1f3b3949ad85c598eb40307a88aa1\" returns successfully" Nov 4 12:21:07.261310 containerd[1562]: time="2025-11-04T12:21:07.261202491Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 12:21:07.262275 containerd[1562]: time="2025-11-04T12:21:07.262242408Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 4 12:21:07.262275 containerd[1562]: time="2025-11-04T12:21:07.262298688Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 4 12:21:07.262493 kubelet[2700]: E1104 12:21:07.262443 2700 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 4 12:21:07.262539 kubelet[2700]: E1104 12:21:07.262501 2700 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 4 12:21:07.262593 kubelet[2700]: E1104 12:21:07.262575 2700 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-42zmt_calico-system(27d501a2-434d-4c01-adef-352f89d7e050): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 4 12:21:07.262626 kubelet[2700]: E1104 12:21:07.262608 2700 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-42zmt" podUID="27d501a2-434d-4c01-adef-352f89d7e050" Nov 4 12:21:07.721729 kubelet[2700]: E1104 12:21:07.721679 2700 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-42zmt" podUID="27d501a2-434d-4c01-adef-352f89d7e050" Nov 4 12:21:07.730130 kubelet[2700]: E1104 12:21:07.729496 2700 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 12:21:07.741570 kubelet[2700]: I1104 12:21:07.741321 2700 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-dmb25" podStartSLOduration=39.741304477 podStartE2EDuration="39.741304477s" podCreationTimestamp="2025-11-04 12:20:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-04 12:21:07.740986637 +0000 UTC m=+47.294495497" watchObservedRunningTime="2025-11-04 12:21:07.741304477 +0000 UTC m=+47.294813337" Nov 4 12:21:07.868286 systemd-networkd[1467]: cali4023e525666: Gained IPv6LL Nov 4 12:21:08.316225 systemd-networkd[1467]: cali8f4f8abb172: Gained IPv6LL Nov 4 12:21:08.744023 kubelet[2700]: E1104 12:21:08.743954 2700 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 12:21:08.745437 kubelet[2700]: E1104 12:21:08.744900 2700 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-42zmt" podUID="27d501a2-434d-4c01-adef-352f89d7e050" 
Nov 4 12:21:09.746450 kubelet[2700]: E1104 12:21:09.746073 2700 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 12:21:10.098494 systemd[1]: Started sshd@9-10.0.0.73:22-10.0.0.1:38162.service - OpenSSH per-connection server daemon (10.0.0.1:38162). Nov 4 12:21:10.156336 sshd[5038]: Accepted publickey for core from 10.0.0.1 port 38162 ssh2: RSA SHA256:BU+2P8LonXUlklSP4qnprs25Z/jySPnCvqmOgDUDXeU Nov 4 12:21:10.157684 sshd-session[5038]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 12:21:10.161739 systemd-logind[1542]: New session 10 of user core. Nov 4 12:21:10.168259 systemd[1]: Started session-10.scope - Session 10 of User core. Nov 4 12:21:10.350936 sshd[5041]: Connection closed by 10.0.0.1 port 38162 Nov 4 12:21:10.351225 sshd-session[5038]: pam_unix(sshd:session): session closed for user core Nov 4 12:21:10.360846 systemd[1]: sshd@9-10.0.0.73:22-10.0.0.1:38162.service: Deactivated successfully. Nov 4 12:21:10.362631 systemd[1]: session-10.scope: Deactivated successfully. Nov 4 12:21:10.363420 systemd-logind[1542]: Session 10 logged out. Waiting for processes to exit. Nov 4 12:21:10.366226 systemd[1]: Started sshd@10-10.0.0.73:22-10.0.0.1:38178.service - OpenSSH per-connection server daemon (10.0.0.1:38178). Nov 4 12:21:10.366757 systemd-logind[1542]: Removed session 10. Nov 4 12:21:10.422066 sshd[5055]: Accepted publickey for core from 10.0.0.1 port 38178 ssh2: RSA SHA256:BU+2P8LonXUlklSP4qnprs25Z/jySPnCvqmOgDUDXeU Nov 4 12:21:10.423226 sshd-session[5055]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 12:21:10.427261 systemd-logind[1542]: New session 11 of user core. Nov 4 12:21:10.442276 systemd[1]: Started session-11.scope - Session 11 of User core. 
Nov 4 12:21:10.653820 sshd[5058]: Connection closed by 10.0.0.1 port 38178 Nov 4 12:21:10.654550 sshd-session[5055]: pam_unix(sshd:session): session closed for user core Nov 4 12:21:10.667696 systemd[1]: sshd@10-10.0.0.73:22-10.0.0.1:38178.service: Deactivated successfully. Nov 4 12:21:10.673650 systemd[1]: session-11.scope: Deactivated successfully. Nov 4 12:21:10.674364 systemd-logind[1542]: Session 11 logged out. Waiting for processes to exit. Nov 4 12:21:10.680076 systemd[1]: Started sshd@11-10.0.0.73:22-10.0.0.1:38184.service - OpenSSH per-connection server daemon (10.0.0.1:38184). Nov 4 12:21:10.680628 systemd-logind[1542]: Removed session 11. Nov 4 12:21:10.743543 sshd[5075]: Accepted publickey for core from 10.0.0.1 port 38184 ssh2: RSA SHA256:BU+2P8LonXUlklSP4qnprs25Z/jySPnCvqmOgDUDXeU Nov 4 12:21:10.744724 sshd-session[5075]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 12:21:10.748787 systemd-logind[1542]: New session 12 of user core. Nov 4 12:21:10.758246 systemd[1]: Started session-12.scope - Session 12 of User core. Nov 4 12:21:10.908798 sshd[5080]: Connection closed by 10.0.0.1 port 38184 Nov 4 12:21:10.909235 sshd-session[5075]: pam_unix(sshd:session): session closed for user core Nov 4 12:21:10.913065 systemd[1]: sshd@11-10.0.0.73:22-10.0.0.1:38184.service: Deactivated successfully. Nov 4 12:21:10.914800 systemd[1]: session-12.scope: Deactivated successfully. Nov 4 12:21:10.915588 systemd-logind[1542]: Session 12 logged out. Waiting for processes to exit. Nov 4 12:21:10.916607 systemd-logind[1542]: Removed session 12. 
Nov 4 12:21:12.520744 containerd[1562]: time="2025-11-04T12:21:12.520247065Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 4 12:21:12.743481 containerd[1562]: time="2025-11-04T12:21:12.743429535Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 12:21:12.744333 containerd[1562]: time="2025-11-04T12:21:12.744297413Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 4 12:21:12.744421 containerd[1562]: time="2025-11-04T12:21:12.744339333Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 4 12:21:12.744636 kubelet[2700]: E1104 12:21:12.744572 2700 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 4 12:21:12.744636 kubelet[2700]: E1104 12:21:12.744633 2700 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 4 12:21:12.744977 kubelet[2700]: E1104 12:21:12.744712 2700 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-7b66cf4bbd-klt24_calico-system(8616b2d0-9f60-46fa-9838-630417416267): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 4 12:21:12.746128 containerd[1562]: time="2025-11-04T12:21:12.746096769Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 4 12:21:12.959291 containerd[1562]: time="2025-11-04T12:21:12.959233503Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 12:21:12.963048 containerd[1562]: time="2025-11-04T12:21:12.963000654Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 4 12:21:12.963132 containerd[1562]: time="2025-11-04T12:21:12.963097334Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 4 12:21:12.963284 kubelet[2700]: E1104 12:21:12.963239 2700 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 4 12:21:12.963355 kubelet[2700]: E1104 12:21:12.963292 2700 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 4 12:21:12.963407 
kubelet[2700]: E1104 12:21:12.963386 2700 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-7b66cf4bbd-klt24_calico-system(8616b2d0-9f60-46fa-9838-630417416267): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 4 12:21:12.963459 kubelet[2700]: E1104 12:21:12.963432 2700 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7b66cf4bbd-klt24" podUID="8616b2d0-9f60-46fa-9838-630417416267" Nov 4 12:21:15.920336 systemd[1]: Started sshd@12-10.0.0.73:22-10.0.0.1:38190.service - OpenSSH per-connection server daemon (10.0.0.1:38190). Nov 4 12:21:15.976718 sshd[5102]: Accepted publickey for core from 10.0.0.1 port 38190 ssh2: RSA SHA256:BU+2P8LonXUlklSP4qnprs25Z/jySPnCvqmOgDUDXeU Nov 4 12:21:15.977758 sshd-session[5102]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 12:21:15.981823 systemd-logind[1542]: New session 13 of user core. Nov 4 12:21:15.987274 systemd[1]: Started session-13.scope - Session 13 of User core. 
Nov 4 12:21:16.111175 sshd[5105]: Connection closed by 10.0.0.1 port 38190 Nov 4 12:21:16.111665 sshd-session[5102]: pam_unix(sshd:session): session closed for user core Nov 4 12:21:16.123269 systemd[1]: sshd@12-10.0.0.73:22-10.0.0.1:38190.service: Deactivated successfully. Nov 4 12:21:16.125599 systemd[1]: session-13.scope: Deactivated successfully. Nov 4 12:21:16.126313 systemd-logind[1542]: Session 13 logged out. Waiting for processes to exit. Nov 4 12:21:16.128548 systemd[1]: Started sshd@13-10.0.0.73:22-10.0.0.1:38202.service - OpenSSH per-connection server daemon (10.0.0.1:38202). Nov 4 12:21:16.129459 systemd-logind[1542]: Removed session 13. Nov 4 12:21:16.176573 sshd[5118]: Accepted publickey for core from 10.0.0.1 port 38202 ssh2: RSA SHA256:BU+2P8LonXUlklSP4qnprs25Z/jySPnCvqmOgDUDXeU Nov 4 12:21:16.177550 sshd-session[5118]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 12:21:16.181427 systemd-logind[1542]: New session 14 of user core. Nov 4 12:21:16.189259 systemd[1]: Started session-14.scope - Session 14 of User core. Nov 4 12:21:16.389145 sshd[5127]: Connection closed by 10.0.0.1 port 38202 Nov 4 12:21:16.389497 sshd-session[5118]: pam_unix(sshd:session): session closed for user core Nov 4 12:21:16.401043 systemd[1]: sshd@13-10.0.0.73:22-10.0.0.1:38202.service: Deactivated successfully. Nov 4 12:21:16.404124 systemd[1]: session-14.scope: Deactivated successfully. Nov 4 12:21:16.405172 systemd-logind[1542]: Session 14 logged out. Waiting for processes to exit. Nov 4 12:21:16.408922 systemd[1]: Started sshd@14-10.0.0.73:22-10.0.0.1:38206.service - OpenSSH per-connection server daemon (10.0.0.1:38206). Nov 4 12:21:16.410562 systemd-logind[1542]: Removed session 14. 
Nov 4 12:21:16.474276 sshd[5139]: Accepted publickey for core from 10.0.0.1 port 38206 ssh2: RSA SHA256:BU+2P8LonXUlklSP4qnprs25Z/jySPnCvqmOgDUDXeU Nov 4 12:21:16.475628 sshd-session[5139]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 12:21:16.479703 systemd-logind[1542]: New session 15 of user core. Nov 4 12:21:16.494298 systemd[1]: Started session-15.scope - Session 15 of User core. Nov 4 12:21:17.126287 sshd[5142]: Connection closed by 10.0.0.1 port 38206 Nov 4 12:21:17.126859 sshd-session[5139]: pam_unix(sshd:session): session closed for user core Nov 4 12:21:17.137749 systemd[1]: sshd@14-10.0.0.73:22-10.0.0.1:38206.service: Deactivated successfully. Nov 4 12:21:17.142600 systemd[1]: session-15.scope: Deactivated successfully. Nov 4 12:21:17.143597 systemd-logind[1542]: Session 15 logged out. Waiting for processes to exit. Nov 4 12:21:17.149383 systemd[1]: Started sshd@15-10.0.0.73:22-10.0.0.1:38222.service - OpenSSH per-connection server daemon (10.0.0.1:38222). Nov 4 12:21:17.150574 systemd-logind[1542]: Removed session 15. Nov 4 12:21:17.204297 sshd[5161]: Accepted publickey for core from 10.0.0.1 port 38222 ssh2: RSA SHA256:BU+2P8LonXUlklSP4qnprs25Z/jySPnCvqmOgDUDXeU Nov 4 12:21:17.206324 sshd-session[5161]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 12:21:17.212588 systemd-logind[1542]: New session 16 of user core. Nov 4 12:21:17.221248 systemd[1]: Started session-16.scope - Session 16 of User core. Nov 4 12:21:17.518665 sshd[5164]: Connection closed by 10.0.0.1 port 38222 Nov 4 12:21:17.519891 sshd-session[5161]: pam_unix(sshd:session): session closed for user core Nov 4 12:21:17.532815 systemd[1]: sshd@15-10.0.0.73:22-10.0.0.1:38222.service: Deactivated successfully. Nov 4 12:21:17.537402 systemd[1]: session-16.scope: Deactivated successfully. Nov 4 12:21:17.540012 systemd-logind[1542]: Session 16 logged out. Waiting for processes to exit. 
Nov 4 12:21:17.542986 systemd[1]: Started sshd@16-10.0.0.73:22-10.0.0.1:38228.service - OpenSSH per-connection server daemon (10.0.0.1:38228).
Nov 4 12:21:17.543673 systemd-logind[1542]: Removed session 16.
Nov 4 12:21:17.597851 sshd[5175]: Accepted publickey for core from 10.0.0.1 port 38228 ssh2: RSA SHA256:BU+2P8LonXUlklSP4qnprs25Z/jySPnCvqmOgDUDXeU
Nov 4 12:21:17.599375 sshd-session[5175]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 4 12:21:17.603524 systemd-logind[1542]: New session 17 of user core.
Nov 4 12:21:17.613278 systemd[1]: Started session-17.scope - Session 17 of User core.
Nov 4 12:21:17.778031 sshd[5178]: Connection closed by 10.0.0.1 port 38228
Nov 4 12:21:17.778520 sshd-session[5175]: pam_unix(sshd:session): session closed for user core
Nov 4 12:21:17.783049 systemd[1]: sshd@16-10.0.0.73:22-10.0.0.1:38228.service: Deactivated successfully.
Nov 4 12:21:17.785052 systemd[1]: session-17.scope: Deactivated successfully.
Nov 4 12:21:17.785893 systemd-logind[1542]: Session 17 logged out. Waiting for processes to exit.
Nov 4 12:21:17.786840 systemd-logind[1542]: Removed session 17.
Nov 4 12:21:18.523574 containerd[1562]: time="2025-11-04T12:21:18.523534307Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Nov 4 12:21:18.801814 containerd[1562]: time="2025-11-04T12:21:18.801615343Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Nov 4 12:21:18.803249 containerd[1562]: time="2025-11-04T12:21:18.803175620Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Nov 4 12:21:18.803453 containerd[1562]: time="2025-11-04T12:21:18.803215060Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77"
Nov 4 12:21:18.803503 kubelet[2700]: E1104 12:21:18.803443 2700 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Nov 4 12:21:18.803885 kubelet[2700]: E1104 12:21:18.803508 2700 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Nov 4 12:21:18.803885 kubelet[2700]: E1104 12:21:18.803730 2700 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-56dfc9fd7-xr6bn_calico-apiserver(b4ff7fc7-ff2d-4f65-af99-cb993f59efe6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Nov 4 12:21:18.803885 kubelet[2700]: E1104 12:21:18.803780 2700 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-56dfc9fd7-xr6bn" podUID="b4ff7fc7-ff2d-4f65-af99-cb993f59efe6"
Nov 4 12:21:18.804307 containerd[1562]: time="2025-11-04T12:21:18.804282018Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\""
Nov 4 12:21:19.045512 containerd[1562]: time="2025-11-04T12:21:19.045430055Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Nov 4 12:21:19.046874 containerd[1562]: time="2025-11-04T12:21:19.046737053Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found"
Nov 4 12:21:19.046874 containerd[1562]: time="2025-11-04T12:21:19.046799452Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85"
Nov 4 12:21:19.047164 kubelet[2700]: E1104 12:21:19.047120 2700 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Nov 4 12:21:19.047272 kubelet[2700]: E1104 12:21:19.047252 2700 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Nov 4 12:21:19.047429 kubelet[2700]: E1104 12:21:19.047398 2700 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-8577ffc656-mj25s_calico-system(ec4bc564-6f37-4bcf-aa99-073adb5a7f1c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError"
Nov 4 12:21:19.047561 kubelet[2700]: E1104 12:21:19.047538 2700 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-8577ffc656-mj25s" podUID="ec4bc564-6f37-4bcf-aa99-073adb5a7f1c"
Nov 4 12:21:19.519449 containerd[1562]: time="2025-11-04T12:21:19.519371719Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\""
Nov 4 12:21:19.738836 containerd[1562]: time="2025-11-04T12:21:19.738738649Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Nov 4 12:21:19.739662 containerd[1562]: time="2025-11-04T12:21:19.739621127Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found"
Nov 4 12:21:19.739744 containerd[1562]: time="2025-11-04T12:21:19.739707247Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77"
Nov 4 12:21:19.739888 kubelet[2700]: E1104 12:21:19.739849 2700 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Nov 4 12:21:19.739929 kubelet[2700]: E1104 12:21:19.739900 2700 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Nov 4 12:21:19.739995 kubelet[2700]: E1104 12:21:19.739977 2700 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-42zmt_calico-system(27d501a2-434d-4c01-adef-352f89d7e050): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError"
Nov 4 12:21:19.740054 kubelet[2700]: E1104 12:21:19.740022 2700 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-42zmt" podUID="27d501a2-434d-4c01-adef-352f89d7e050"
Nov 4 12:21:20.520188 containerd[1562]: time="2025-11-04T12:21:20.520076466Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\""
Nov 4 12:21:20.739493 containerd[1562]: time="2025-11-04T12:21:20.739446761Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Nov 4 12:21:20.740429 containerd[1562]: time="2025-11-04T12:21:20.740391559Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found"
Nov 4 12:21:20.740487 containerd[1562]: time="2025-11-04T12:21:20.740459079Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69"
Nov 4 12:21:20.740590 kubelet[2700]: E1104 12:21:20.740558 2700 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Nov 4 12:21:20.740804 kubelet[2700]: E1104 12:21:20.740600 2700 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Nov 4 12:21:20.740804 kubelet[2700]: E1104 12:21:20.740736 2700 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-mvvz6_calico-system(5213a2cb-c20a-4f3b-8d44-0dd43d58dc01): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError"
Nov 4 12:21:20.741469 containerd[1562]: time="2025-11-04T12:21:20.741440397Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Nov 4 12:21:20.970527 containerd[1562]: time="2025-11-04T12:21:20.970479752Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Nov 4 12:21:20.971546 containerd[1562]: time="2025-11-04T12:21:20.971508270Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Nov 4 12:21:20.971546 containerd[1562]: time="2025-11-04T12:21:20.971575470Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77"
Nov 4 12:21:20.971845 kubelet[2700]: E1104 12:21:20.971789 2700 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Nov 4 12:21:20.971904 kubelet[2700]: E1104 12:21:20.971852 2700 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Nov 4 12:21:20.972036 kubelet[2700]: E1104 12:21:20.972011 2700 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-56dfc9fd7-8lpx9_calico-apiserver(13e1fa9a-e131-4fe2-8e0a-623c05fa039d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Nov 4 12:21:20.972213 kubelet[2700]: E1104 12:21:20.972187 2700 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-56dfc9fd7-8lpx9" podUID="13e1fa9a-e131-4fe2-8e0a-623c05fa039d"
Nov 4 12:21:20.972307 containerd[1562]: time="2025-11-04T12:21:20.972198668Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\""
Nov 4 12:21:21.222232 containerd[1562]: time="2025-11-04T12:21:21.222111824Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Nov 4 12:21:21.223154 containerd[1562]: time="2025-11-04T12:21:21.223105742Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found"
Nov 4 12:21:21.223266 containerd[1562]: time="2025-11-04T12:21:21.223183902Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93"
Nov 4 12:21:21.223350 kubelet[2700]: E1104 12:21:21.223305 2700 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Nov 4 12:21:21.223423 kubelet[2700]: E1104 12:21:21.223354 2700 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Nov 4 12:21:21.223471 kubelet[2700]: E1104 12:21:21.223426 2700 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-mvvz6_calico-system(5213a2cb-c20a-4f3b-8d44-0dd43d58dc01): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError"
Nov 4 12:21:21.223583 kubelet[2700]: E1104 12:21:21.223467 2700 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-mvvz6" podUID="5213a2cb-c20a-4f3b-8d44-0dd43d58dc01"
Nov 4 12:21:22.794488 systemd[1]: Started sshd@17-10.0.0.73:22-10.0.0.1:38894.service - OpenSSH per-connection server daemon (10.0.0.1:38894).
Nov 4 12:21:22.845422 sshd[5198]: Accepted publickey for core from 10.0.0.1 port 38894 ssh2: RSA SHA256:BU+2P8LonXUlklSP4qnprs25Z/jySPnCvqmOgDUDXeU
Nov 4 12:21:22.846727 sshd-session[5198]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 4 12:21:22.851168 systemd-logind[1542]: New session 18 of user core.
Nov 4 12:21:22.865236 systemd[1]: Started session-18.scope - Session 18 of User core.
Nov 4 12:21:22.992440 sshd[5201]: Connection closed by 10.0.0.1 port 38894
Nov 4 12:21:22.992961 sshd-session[5198]: pam_unix(sshd:session): session closed for user core
Nov 4 12:21:22.996629 systemd-logind[1542]: Session 18 logged out. Waiting for processes to exit.
Nov 4 12:21:22.996882 systemd[1]: sshd@17-10.0.0.73:22-10.0.0.1:38894.service: Deactivated successfully.
Nov 4 12:21:22.998714 systemd[1]: session-18.scope: Deactivated successfully.
Nov 4 12:21:22.999958 systemd-logind[1542]: Removed session 18.
Nov 4 12:21:26.521942 kubelet[2700]: E1104 12:21:26.521798 2700 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7b66cf4bbd-klt24" podUID="8616b2d0-9f60-46fa-9838-630417416267"
Nov 4 12:21:28.011515 systemd[1]: Started sshd@18-10.0.0.73:22-10.0.0.1:38900.service - OpenSSH per-connection server daemon (10.0.0.1:38900).
Nov 4 12:21:28.071744 sshd[5222]: Accepted publickey for core from 10.0.0.1 port 38900 ssh2: RSA SHA256:BU+2P8LonXUlklSP4qnprs25Z/jySPnCvqmOgDUDXeU
Nov 4 12:21:28.072966 sshd-session[5222]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 4 12:21:28.077024 systemd-logind[1542]: New session 19 of user core.
Nov 4 12:21:28.089251 systemd[1]: Started session-19.scope - Session 19 of User core.
Nov 4 12:21:28.205216 sshd[5225]: Connection closed by 10.0.0.1 port 38900
Nov 4 12:21:28.205542 sshd-session[5222]: pam_unix(sshd:session): session closed for user core
Nov 4 12:21:28.208662 systemd[1]: sshd@18-10.0.0.73:22-10.0.0.1:38900.service: Deactivated successfully.
Nov 4 12:21:28.210369 systemd[1]: session-19.scope: Deactivated successfully.
Nov 4 12:21:28.211577 systemd-logind[1542]: Session 19 logged out. Waiting for processes to exit.
Nov 4 12:21:28.213455 systemd-logind[1542]: Removed session 19.
Nov 4 12:21:30.520860 kubelet[2700]: E1104 12:21:30.520632 2700 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-56dfc9fd7-xr6bn" podUID="b4ff7fc7-ff2d-4f65-af99-cb993f59efe6"
Nov 4 12:21:30.520860 kubelet[2700]: E1104 12:21:30.520632 2700 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-8577ffc656-mj25s" podUID="ec4bc564-6f37-4bcf-aa99-073adb5a7f1c"
Nov 4 12:21:32.519627 kubelet[2700]: E1104 12:21:32.519566 2700 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-56dfc9fd7-8lpx9" podUID="13e1fa9a-e131-4fe2-8e0a-623c05fa039d"
Nov 4 12:21:33.221824 systemd[1]: Started sshd@19-10.0.0.73:22-10.0.0.1:42538.service - OpenSSH per-connection server daemon (10.0.0.1:42538).
Nov 4 12:21:33.279096 sshd[5243]: Accepted publickey for core from 10.0.0.1 port 42538 ssh2: RSA SHA256:BU+2P8LonXUlklSP4qnprs25Z/jySPnCvqmOgDUDXeU
Nov 4 12:21:33.279642 sshd-session[5243]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 4 12:21:33.283584 systemd-logind[1542]: New session 20 of user core.
Nov 4 12:21:33.294216 systemd[1]: Started session-20.scope - Session 20 of User core.
Nov 4 12:21:33.411850 sshd[5246]: Connection closed by 10.0.0.1 port 42538
Nov 4 12:21:33.409436 sshd-session[5243]: pam_unix(sshd:session): session closed for user core
Nov 4 12:21:33.415254 systemd[1]: sshd@19-10.0.0.73:22-10.0.0.1:42538.service: Deactivated successfully.
Nov 4 12:21:33.417072 systemd[1]: session-20.scope: Deactivated successfully.
Nov 4 12:21:33.417873 systemd-logind[1542]: Session 20 logged out. Waiting for processes to exit.
Nov 4 12:21:33.422226 systemd-logind[1542]: Removed session 20.
Nov 4 12:21:33.520288 kubelet[2700]: E1104 12:21:33.519990 2700 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-42zmt" podUID="27d501a2-434d-4c01-adef-352f89d7e050"
Nov 4 12:21:33.521179 kubelet[2700]: E1104 12:21:33.520891 2700 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-mvvz6" podUID="5213a2cb-c20a-4f3b-8d44-0dd43d58dc01"
Nov 4 12:21:35.126162 containerd[1562]: time="2025-11-04T12:21:35.126124642Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9c9895d37b3598b87588ceab6ac248d43c628fa4f7fbaad9cebd1323a8a790ce\" id:\"9148a34a3cb353b3c821c74cd5f37179612764624760efb1a79cf8e85c90913e\" pid:5271 exited_at:{seconds:1762258895 nanos:125808276}"
Nov 4 12:21:35.128687 kubelet[2700]: E1104 12:21:35.128633 2700 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"