Nov 5 23:55:51.794288 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Nov 5 23:55:51.794308 kernel: Linux version 6.12.54-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT Wed Nov 5 22:12:41 -00 2025
Nov 5 23:55:51.794318 kernel: KASLR enabled
Nov 5 23:55:51.794323 kernel: efi: EFI v2.7 by EDK II
Nov 5 23:55:51.794329 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdb832018 ACPI 2.0=0xdbfd0018 RNG=0xdbfd0a18 MEMRESERVE=0xdb838218
Nov 5 23:55:51.794334 kernel: random: crng init done
Nov 5 23:55:51.794341 kernel: secureboot: Secure boot disabled
Nov 5 23:55:51.794361 kernel: ACPI: Early table checksum verification disabled
Nov 5 23:55:51.794367 kernel: ACPI: RSDP 0x00000000DBFD0018 000024 (v02 BOCHS )
Nov 5 23:55:51.794376 kernel: ACPI: XSDT 0x00000000DBFD0F18 000064 (v01 BOCHS BXPC 00000001 01000013)
Nov 5 23:55:51.794381 kernel: ACPI: FACP 0x00000000DBFD0B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Nov 5 23:55:51.794387 kernel: ACPI: DSDT 0x00000000DBF0E018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Nov 5 23:55:51.794393 kernel: ACPI: APIC 0x00000000DBFD0C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Nov 5 23:55:51.794398 kernel: ACPI: PPTT 0x00000000DBFD0098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Nov 5 23:55:51.794405 kernel: ACPI: GTDT 0x00000000DBFD0818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Nov 5 23:55:51.794413 kernel: ACPI: MCFG 0x00000000DBFD0A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 5 23:55:51.794419 kernel: ACPI: SPCR 0x00000000DBFD0918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Nov 5 23:55:51.794425 kernel: ACPI: DBG2 0x00000000DBFD0998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Nov 5 23:55:51.794431 kernel: ACPI: IORT 0x00000000DBFD0198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Nov 5 23:55:51.794437 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Nov 5 23:55:51.794443 kernel: ACPI: Use ACPI SPCR as default console: No
Nov 5 23:55:51.794449 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Nov 5 23:55:51.794455 kernel: NODE_DATA(0) allocated [mem 0xdc965a00-0xdc96cfff]
Nov 5 23:55:51.794461 kernel: Zone ranges:
Nov 5 23:55:51.794467 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Nov 5 23:55:51.794474 kernel: DMA32 empty
Nov 5 23:55:51.794480 kernel: Normal empty
Nov 5 23:55:51.794486 kernel: Device empty
Nov 5 23:55:51.794492 kernel: Movable zone start for each node
Nov 5 23:55:51.794498 kernel: Early memory node ranges
Nov 5 23:55:51.794503 kernel: node 0: [mem 0x0000000040000000-0x00000000db81ffff]
Nov 5 23:55:51.794509 kernel: node 0: [mem 0x00000000db820000-0x00000000db82ffff]
Nov 5 23:55:51.794515 kernel: node 0: [mem 0x00000000db830000-0x00000000dc09ffff]
Nov 5 23:55:51.794521 kernel: node 0: [mem 0x00000000dc0a0000-0x00000000dc2dffff]
Nov 5 23:55:51.794527 kernel: node 0: [mem 0x00000000dc2e0000-0x00000000dc36ffff]
Nov 5 23:55:51.794533 kernel: node 0: [mem 0x00000000dc370000-0x00000000dc45ffff]
Nov 5 23:55:51.794539 kernel: node 0: [mem 0x00000000dc460000-0x00000000dc52ffff]
Nov 5 23:55:51.794546 kernel: node 0: [mem 0x00000000dc530000-0x00000000dc5cffff]
Nov 5 23:55:51.794552 kernel: node 0: [mem 0x00000000dc5d0000-0x00000000dce1ffff]
Nov 5 23:55:51.794558 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Nov 5 23:55:51.794566 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Nov 5 23:55:51.794573 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Nov 5 23:55:51.794579 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Nov 5 23:55:51.794586 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Nov 5 23:55:51.794593 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Nov 5 23:55:51.794599 kernel: cma: Reserved 16 MiB at 0x00000000d8000000 on node -1
Nov 5 23:55:51.794605 kernel: psci: probing for conduit method from ACPI.
Nov 5 23:55:51.794612 kernel: psci: PSCIv1.1 detected in firmware.
Nov 5 23:55:51.794618 kernel: psci: Using standard PSCI v0.2 function IDs
Nov 5 23:55:51.794624 kernel: psci: Trusted OS migration not required
Nov 5 23:55:51.794630 kernel: psci: SMC Calling Convention v1.1
Nov 5 23:55:51.794637 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Nov 5 23:55:51.794643 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168
Nov 5 23:55:51.794651 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096
Nov 5 23:55:51.794657 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Nov 5 23:55:51.794663 kernel: Detected PIPT I-cache on CPU0
Nov 5 23:55:51.794670 kernel: CPU features: detected: GIC system register CPU interface
Nov 5 23:55:51.794676 kernel: CPU features: detected: Spectre-v4
Nov 5 23:55:51.794682 kernel: CPU features: detected: Spectre-BHB
Nov 5 23:55:51.794689 kernel: CPU features: kernel page table isolation forced ON by KASLR
Nov 5 23:55:51.794695 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Nov 5 23:55:51.794701 kernel: CPU features: detected: ARM erratum 1418040
Nov 5 23:55:51.794707 kernel: CPU features: detected: SSBS not fully self-synchronizing
Nov 5 23:55:51.794714 kernel: alternatives: applying boot alternatives
Nov 5 23:55:51.794721 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=daaa5e51b65832b359eb98eae08cea627c611d87c128e20a83873de5c8d1aca5
Nov 5 23:55:51.794729 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Nov 5 23:55:51.794736 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Nov 5 23:55:51.794742 kernel: Fallback order for Node 0: 0
Nov 5 23:55:51.794748 kernel: Built 1 zonelists, mobility grouping on. Total pages: 643072
Nov 5 23:55:51.794755 kernel: Policy zone: DMA
Nov 5 23:55:51.794761 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Nov 5 23:55:51.794767 kernel: software IO TLB: SWIOTLB bounce buffer size adjusted to 2MB
Nov 5 23:55:51.794774 kernel: software IO TLB: area num 4.
Nov 5 23:55:51.794780 kernel: software IO TLB: SWIOTLB bounce buffer size roundup to 4MB
Nov 5 23:55:51.794787 kernel: software IO TLB: mapped [mem 0x00000000d7c00000-0x00000000d8000000] (4MB)
Nov 5 23:55:51.794793 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Nov 5 23:55:51.794801 kernel: rcu: Preemptible hierarchical RCU implementation.
Nov 5 23:55:51.794808 kernel: rcu: RCU event tracing is enabled.
Nov 5 23:55:51.794814 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Nov 5 23:55:51.794821 kernel: Trampoline variant of Tasks RCU enabled.
Nov 5 23:55:51.794827 kernel: Tracing variant of Tasks RCU enabled.
Nov 5 23:55:51.794834 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Nov 5 23:55:51.794840 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Nov 5 23:55:51.794846 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Nov 5 23:55:51.794853 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Nov 5 23:55:51.794859 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Nov 5 23:55:51.794865 kernel: GICv3: 256 SPIs implemented
Nov 5 23:55:51.794873 kernel: GICv3: 0 Extended SPIs implemented
Nov 5 23:55:51.794880 kernel: Root IRQ handler: gic_handle_irq
Nov 5 23:55:51.794886 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Nov 5 23:55:51.794892 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0
Nov 5 23:55:51.794898 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Nov 5 23:55:51.794905 kernel: ITS [mem 0x08080000-0x0809ffff]
Nov 5 23:55:51.794911 kernel: ITS@0x0000000008080000: allocated 8192 Devices @40110000 (indirect, esz 8, psz 64K, shr 1)
Nov 5 23:55:51.794918 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @40120000 (flat, esz 8, psz 64K, shr 1)
Nov 5 23:55:51.794924 kernel: GICv3: using LPI property table @0x0000000040130000
Nov 5 23:55:51.794930 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040140000
Nov 5 23:55:51.794937 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Nov 5 23:55:51.794943 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Nov 5 23:55:51.794951 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Nov 5 23:55:51.794958 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Nov 5 23:55:51.794965 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Nov 5 23:55:51.794971 kernel: arm-pv: using stolen time PV
Nov 5 23:55:51.794978 kernel: Console: colour dummy device 80x25
Nov 5 23:55:51.794985 kernel: ACPI: Core revision 20240827
Nov 5 23:55:51.794991 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Nov 5 23:55:51.794998 kernel: pid_max: default: 32768 minimum: 301
Nov 5 23:55:51.795005 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Nov 5 23:55:51.795011 kernel: landlock: Up and running.
Nov 5 23:55:51.795019 kernel: SELinux: Initializing.
Nov 5 23:55:51.795026 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Nov 5 23:55:51.795033 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Nov 5 23:55:51.795040 kernel: rcu: Hierarchical SRCU implementation.
Nov 5 23:55:51.795046 kernel: rcu: Max phase no-delay instances is 400.
Nov 5 23:55:51.795053 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Nov 5 23:55:51.795059 kernel: Remapping and enabling EFI services.
Nov 5 23:55:51.795066 kernel: smp: Bringing up secondary CPUs ...
Nov 5 23:55:51.795073 kernel: Detected PIPT I-cache on CPU1
Nov 5 23:55:51.795085 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Nov 5 23:55:51.795091 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040150000
Nov 5 23:55:51.795099 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Nov 5 23:55:51.795107 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Nov 5 23:55:51.795114 kernel: Detected PIPT I-cache on CPU2
Nov 5 23:55:51.795121 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Nov 5 23:55:51.795128 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040160000
Nov 5 23:55:51.795135 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Nov 5 23:55:51.795150 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Nov 5 23:55:51.795158 kernel: Detected PIPT I-cache on CPU3
Nov 5 23:55:51.795165 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Nov 5 23:55:51.795172 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040170000
Nov 5 23:55:51.795179 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Nov 5 23:55:51.795185 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Nov 5 23:55:51.795192 kernel: smp: Brought up 1 node, 4 CPUs
Nov 5 23:55:51.795200 kernel: SMP: Total of 4 processors activated.
Nov 5 23:55:51.795206 kernel: CPU: All CPU(s) started at EL1
Nov 5 23:55:51.795215 kernel: CPU features: detected: 32-bit EL0 Support
Nov 5 23:55:51.795222 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Nov 5 23:55:51.795229 kernel: CPU features: detected: Common not Private translations
Nov 5 23:55:51.795236 kernel: CPU features: detected: CRC32 instructions
Nov 5 23:55:51.795243 kernel: CPU features: detected: Enhanced Virtualization Traps
Nov 5 23:55:51.795250 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Nov 5 23:55:51.795257 kernel: CPU features: detected: LSE atomic instructions
Nov 5 23:55:51.795263 kernel: CPU features: detected: Privileged Access Never
Nov 5 23:55:51.795270 kernel: CPU features: detected: RAS Extension Support
Nov 5 23:55:51.795278 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Nov 5 23:55:51.795285 kernel: alternatives: applying system-wide alternatives
Nov 5 23:55:51.795292 kernel: CPU features: detected: Hardware dirty bit management on CPU0-3
Nov 5 23:55:51.795299 kernel: Memory: 2424416K/2572288K available (11136K kernel code, 2450K rwdata, 9076K rodata, 38976K init, 1038K bss, 125536K reserved, 16384K cma-reserved)
Nov 5 23:55:51.795306 kernel: devtmpfs: initialized
Nov 5 23:55:51.795313 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Nov 5 23:55:51.795320 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Nov 5 23:55:51.795327 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Nov 5 23:55:51.795334 kernel: 0 pages in range for non-PLT usage
Nov 5 23:55:51.795342 kernel: 508560 pages in range for PLT usage
Nov 5 23:55:51.795357 kernel: pinctrl core: initialized pinctrl subsystem
Nov 5 23:55:51.795364 kernel: SMBIOS 3.0.0 present.
Nov 5 23:55:51.795370 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022
Nov 5 23:55:51.795377 kernel: DMI: Memory slots populated: 1/1
Nov 5 23:55:51.795384 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Nov 5 23:55:51.795391 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Nov 5 23:55:51.795398 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Nov 5 23:55:51.795405 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Nov 5 23:55:51.795414 kernel: audit: initializing netlink subsys (disabled)
Nov 5 23:55:51.795421 kernel: audit: type=2000 audit(0.020:1): state=initialized audit_enabled=0 res=1
Nov 5 23:55:51.795428 kernel: thermal_sys: Registered thermal governor 'step_wise'
Nov 5 23:55:51.795434 kernel: cpuidle: using governor menu
Nov 5 23:55:51.795441 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Nov 5 23:55:51.795448 kernel: ASID allocator initialised with 32768 entries
Nov 5 23:55:51.795455 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Nov 5 23:55:51.795462 kernel: Serial: AMBA PL011 UART driver
Nov 5 23:55:51.795469 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Nov 5 23:55:51.795477 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Nov 5 23:55:51.795484 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Nov 5 23:55:51.795491 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Nov 5 23:55:51.795498 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Nov 5 23:55:51.795505 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Nov 5 23:55:51.795512 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Nov 5 23:55:51.795519 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Nov 5 23:55:51.795525 kernel: ACPI: Added _OSI(Module Device)
Nov 5 23:55:51.795532 kernel: ACPI: Added _OSI(Processor Device)
Nov 5 23:55:51.795540 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Nov 5 23:55:51.795547 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Nov 5 23:55:51.795554 kernel: ACPI: Interpreter enabled
Nov 5 23:55:51.795561 kernel: ACPI: Using GIC for interrupt routing
Nov 5 23:55:51.795568 kernel: ACPI: MCFG table detected, 1 entries
Nov 5 23:55:51.795575 kernel: ACPI: CPU0 has been hot-added
Nov 5 23:55:51.795582 kernel: ACPI: CPU1 has been hot-added
Nov 5 23:55:51.795589 kernel: ACPI: CPU2 has been hot-added
Nov 5 23:55:51.795596 kernel: ACPI: CPU3 has been hot-added
Nov 5 23:55:51.795603 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Nov 5 23:55:51.795612 kernel: printk: legacy console [ttyAMA0] enabled
Nov 5 23:55:51.795619 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Nov 5 23:55:51.795774 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Nov 5 23:55:51.795859 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Nov 5 23:55:51.795921 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Nov 5 23:55:51.795979 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Nov 5 23:55:51.796036 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Nov 5 23:55:51.796048 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Nov 5 23:55:51.796055 kernel: PCI host bridge to bus 0000:00
Nov 5 23:55:51.796124 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Nov 5 23:55:51.796188 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Nov 5 23:55:51.796242 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Nov 5 23:55:51.796292 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Nov 5 23:55:51.796385 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 conventional PCI endpoint
Nov 5 23:55:51.796461 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Nov 5 23:55:51.796521 kernel: pci 0000:00:01.0: BAR 0 [io 0x0000-0x001f]
Nov 5 23:55:51.796580 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]
Nov 5 23:55:51.796640 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]
Nov 5 23:55:51.796700 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]: assigned
Nov 5 23:55:51.796767 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]: assigned
Nov 5 23:55:51.796833 kernel: pci 0000:00:01.0: BAR 0 [io 0x1000-0x101f]: assigned
Nov 5 23:55:51.796891 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Nov 5 23:55:51.796959 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Nov 5 23:55:51.797012 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Nov 5 23:55:51.797022 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Nov 5 23:55:51.797029 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Nov 5 23:55:51.797036 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Nov 5 23:55:51.797043 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Nov 5 23:55:51.797051 kernel: iommu: Default domain type: Translated
Nov 5 23:55:51.797059 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Nov 5 23:55:51.797066 kernel: efivars: Registered efivars operations
Nov 5 23:55:51.797073 kernel: vgaarb: loaded
Nov 5 23:55:51.797079 kernel: clocksource: Switched to clocksource arch_sys_counter
Nov 5 23:55:51.797087 kernel: VFS: Disk quotas dquot_6.6.0
Nov 5 23:55:51.797094 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Nov 5 23:55:51.797101 kernel: pnp: PnP ACPI init
Nov 5 23:55:51.797175 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Nov 5 23:55:51.797188 kernel: pnp: PnP ACPI: found 1 devices
Nov 5 23:55:51.797195 kernel: NET: Registered PF_INET protocol family
Nov 5 23:55:51.797202 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Nov 5 23:55:51.797210 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Nov 5 23:55:51.797217 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Nov 5 23:55:51.797224 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Nov 5 23:55:51.797231 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Nov 5 23:55:51.797238 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Nov 5 23:55:51.797246 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Nov 5 23:55:51.797253 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Nov 5 23:55:51.797260 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Nov 5 23:55:51.797267 kernel: PCI: CLS 0 bytes, default 64
Nov 5 23:55:51.797274 kernel: kvm [1]: HYP mode not available
Nov 5 23:55:51.797281 kernel: Initialise system trusted keyrings
Nov 5 23:55:51.797288 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Nov 5 23:55:51.797295 kernel: Key type asymmetric registered
Nov 5 23:55:51.797302 kernel: Asymmetric key parser 'x509' registered
Nov 5 23:55:51.797310 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Nov 5 23:55:51.797317 kernel: io scheduler mq-deadline registered
Nov 5 23:55:51.797324 kernel: io scheduler kyber registered
Nov 5 23:55:51.797357 kernel: io scheduler bfq registered
Nov 5 23:55:51.797366 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Nov 5 23:55:51.797373 kernel: ACPI: button: Power Button [PWRB]
Nov 5 23:55:51.797381 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Nov 5 23:55:51.797447 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Nov 5 23:55:51.797456 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Nov 5 23:55:51.797466 kernel: thunder_xcv, ver 1.0
Nov 5 23:55:51.797473 kernel: thunder_bgx, ver 1.0
Nov 5 23:55:51.797479 kernel: nicpf, ver 1.0
Nov 5 23:55:51.797486 kernel: nicvf, ver 1.0
Nov 5 23:55:51.797554 kernel: rtc-efi rtc-efi.0: registered as rtc0
Nov 5 23:55:51.797610 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-11-05T23:55:51 UTC (1762386951)
Nov 5 23:55:51.797619 kernel: hid: raw HID events driver (C) Jiri Kosina
Nov 5 23:55:51.797626 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available
Nov 5 23:55:51.797635 kernel: watchdog: NMI not fully supported
Nov 5 23:55:51.797642 kernel: watchdog: Hard watchdog permanently disabled
Nov 5 23:55:51.797649 kernel: NET: Registered PF_INET6 protocol family
Nov 5 23:55:51.797656 kernel: Segment Routing with IPv6
Nov 5 23:55:51.797662 kernel: In-situ OAM (IOAM) with IPv6
Nov 5 23:55:51.797669 kernel: NET: Registered PF_PACKET protocol family
Nov 5 23:55:51.797677 kernel: Key type dns_resolver registered
Nov 5 23:55:51.797684 kernel: registered taskstats version 1
Nov 5 23:55:51.797690 kernel: Loading compiled-in X.509 certificates
Nov 5 23:55:51.797697 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.54-flatcar: 9d5732f5af196e4cfd06fc38e62e061c2a702dfd'
Nov 5 23:55:51.797706 kernel: Demotion targets for Node 0: null
Nov 5 23:55:51.797712 kernel: Key type .fscrypt registered
Nov 5 23:55:51.797719 kernel: Key type fscrypt-provisioning registered
Nov 5 23:55:51.797726 kernel: ima: No TPM chip found, activating TPM-bypass!
Nov 5 23:55:51.797733 kernel: ima: Allocated hash algorithm: sha1
Nov 5 23:55:51.797739 kernel: ima: No architecture policies found
Nov 5 23:55:51.797747 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Nov 5 23:55:51.797754 kernel: clk: Disabling unused clocks
Nov 5 23:55:51.797760 kernel: PM: genpd: Disabling unused power domains
Nov 5 23:55:51.797768 kernel: Warning: unable to open an initial console.
Nov 5 23:55:51.797775 kernel: Freeing unused kernel memory: 38976K
Nov 5 23:55:51.797782 kernel: Run /init as init process
Nov 5 23:55:51.797789 kernel: with arguments:
Nov 5 23:55:51.797796 kernel: /init
Nov 5 23:55:51.797802 kernel: with environment:
Nov 5 23:55:51.797809 kernel: HOME=/
Nov 5 23:55:51.797816 kernel: TERM=linux
Nov 5 23:55:51.797824 systemd[1]: Successfully made /usr/ read-only.
Nov 5 23:55:51.797835 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Nov 5 23:55:51.797843 systemd[1]: Detected virtualization kvm.
Nov 5 23:55:51.797850 systemd[1]: Detected architecture arm64.
Nov 5 23:55:51.797857 systemd[1]: Running in initrd.
Nov 5 23:55:51.797865 systemd[1]: No hostname configured, using default hostname.
Nov 5 23:55:51.797872 systemd[1]: Hostname set to .
Nov 5 23:55:51.797879 systemd[1]: Initializing machine ID from VM UUID.
Nov 5 23:55:51.797888 systemd[1]: Queued start job for default target initrd.target.
Nov 5 23:55:51.797895 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 5 23:55:51.797903 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 5 23:55:51.797911 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Nov 5 23:55:51.797919 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Nov 5 23:55:51.797926 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Nov 5 23:55:51.797934 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Nov 5 23:55:51.797944 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Nov 5 23:55:51.797951 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Nov 5 23:55:51.797959 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 5 23:55:51.797966 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Nov 5 23:55:51.797974 systemd[1]: Reached target paths.target - Path Units.
Nov 5 23:55:51.797981 systemd[1]: Reached target slices.target - Slice Units.
Nov 5 23:55:51.797989 systemd[1]: Reached target swap.target - Swaps.
Nov 5 23:55:51.797996 systemd[1]: Reached target timers.target - Timer Units.
Nov 5 23:55:51.798005 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Nov 5 23:55:51.798012 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Nov 5 23:55:51.798020 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Nov 5 23:55:51.798027 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Nov 5 23:55:51.798035 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Nov 5 23:55:51.798042 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Nov 5 23:55:51.798050 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 5 23:55:51.798057 systemd[1]: Reached target sockets.target - Socket Units.
Nov 5 23:55:51.798065 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Nov 5 23:55:51.798073 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Nov 5 23:55:51.798080 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Nov 5 23:55:51.798088 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Nov 5 23:55:51.798095 systemd[1]: Starting systemd-fsck-usr.service...
Nov 5 23:55:51.798103 systemd[1]: Starting systemd-journald.service - Journal Service...
Nov 5 23:55:51.798110 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Nov 5 23:55:51.798118 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 5 23:55:51.798125 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Nov 5 23:55:51.798134 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 5 23:55:51.798150 systemd[1]: Finished systemd-fsck-usr.service.
Nov 5 23:55:51.798173 systemd-journald[245]: Collecting audit messages is disabled.
Nov 5 23:55:51.798194 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Nov 5 23:55:51.798202 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 5 23:55:51.798210 systemd-journald[245]: Journal started
Nov 5 23:55:51.798229 systemd-journald[245]: Runtime Journal (/run/log/journal/67fe830be471455197cac8de76d3cf5e) is 6M, max 48.5M, 42.4M free.
Nov 5 23:55:51.787956 systemd-modules-load[246]: Inserted module 'overlay'
Nov 5 23:55:51.801381 systemd[1]: Started systemd-journald.service - Journal Service.
Nov 5 23:55:51.804371 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Nov 5 23:55:51.806297 systemd-modules-load[246]: Inserted module 'br_netfilter'
Nov 5 23:55:51.807218 kernel: Bridge firewalling registered
Nov 5 23:55:51.809998 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 5 23:55:51.811763 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Nov 5 23:55:51.815990 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Nov 5 23:55:51.817995 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Nov 5 23:55:51.828062 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Nov 5 23:55:51.829753 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Nov 5 23:55:51.836172 systemd-tmpfiles[266]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Nov 5 23:55:51.839669 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 5 23:55:51.843768 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Nov 5 23:55:51.845036 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 5 23:55:51.849472 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Nov 5 23:55:51.850723 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 5 23:55:51.853601 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Nov 5 23:55:51.878994 dracut-cmdline[292]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=daaa5e51b65832b359eb98eae08cea627c611d87c128e20a83873de5c8d1aca5
Nov 5 23:55:51.894701 systemd-resolved[291]: Positive Trust Anchors:
Nov 5 23:55:51.894724 systemd-resolved[291]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Nov 5 23:55:51.894754 systemd-resolved[291]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Nov 5 23:55:51.899563 systemd-resolved[291]: Defaulting to hostname 'linux'.
Nov 5 23:55:51.900983 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Nov 5 23:55:51.905629 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Nov 5 23:55:51.956367 kernel: SCSI subsystem initialized
Nov 5 23:55:51.960365 kernel: Loading iSCSI transport class v2.0-870.
Nov 5 23:55:51.968386 kernel: iscsi: registered transport (tcp)
Nov 5 23:55:51.981396 kernel: iscsi: registered transport (qla4xxx)
Nov 5 23:55:51.981435 kernel: QLogic iSCSI HBA Driver
Nov 5 23:55:51.997505 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Nov 5 23:55:52.010640 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Nov 5 23:55:52.014242 systemd[1]: Reached target network-pre.target - Preparation for Network.
Nov 5 23:55:52.060946 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Nov 5 23:55:52.063113 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Nov 5 23:55:52.121390 kernel: raid6: neonx8 gen() 16340 MB/s
Nov 5 23:55:52.138382 kernel: raid6: neonx4 gen() 15818 MB/s
Nov 5 23:55:52.155376 kernel: raid6: neonx2 gen() 13261 MB/s
Nov 5 23:55:52.172379 kernel: raid6: neonx1 gen() 10425 MB/s
Nov 5 23:55:52.189377 kernel: raid6: int64x8 gen() 6897 MB/s
Nov 5 23:55:52.206385 kernel: raid6: int64x4 gen() 7350 MB/s
Nov 5 23:55:52.223391 kernel: raid6: int64x2 gen() 6102 MB/s
Nov 5 23:55:52.240690 kernel: raid6: int64x1 gen() 5039 MB/s
Nov 5 23:55:52.240714 kernel: raid6: using algorithm neonx8 gen() 16340 MB/s
Nov 5 23:55:52.258615 kernel: raid6: .... xor() 12066 MB/s, rmw enabled
Nov 5 23:55:52.258636 kernel: raid6: using neon recovery algorithm
Nov 5 23:55:52.264899 kernel: xor: measuring software checksum speed
Nov 5 23:55:52.264923 kernel: 8regs : 21636 MB/sec
Nov 5 23:55:52.264933 kernel: 32regs : 21653 MB/sec
Nov 5 23:55:52.265580 kernel: arm64_neon : 27974 MB/sec
Nov 5 23:55:52.265596 kernel: xor: using function: arm64_neon (27974 MB/sec)
Nov 5 23:55:52.321381 kernel: Btrfs loaded, zoned=no, fsverity=no
Nov 5 23:55:52.327990 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Nov 5 23:55:52.330732 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 5 23:55:52.364365 systemd-udevd[503]: Using default interface naming scheme 'v255'.
Nov 5 23:55:52.369071 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 5 23:55:52.371498 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Nov 5 23:55:52.403388 dracut-pre-trigger[511]: rd.md=0: removing MD RAID activation
Nov 5 23:55:52.425901 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Nov 5 23:55:52.428382 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Nov 5 23:55:52.483280 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 5 23:55:52.486582 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Nov 5 23:55:52.543385 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Nov 5 23:55:52.543578 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Nov 5 23:55:52.551571 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 5 23:55:52.554826 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Nov 5 23:55:52.554845 kernel: GPT:9289727 != 19775487
Nov 5 23:55:52.554854 kernel: GPT:Alternate GPT header not at the end of the disk.
Nov 5 23:55:52.554863 kernel: GPT:9289727 != 19775487
Nov 5 23:55:52.554872 kernel: GPT: Use GNU Parted to correct GPT errors.
Nov 5 23:55:52.554880 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Nov 5 23:55:52.551692 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 5 23:55:52.557297 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Nov 5 23:55:52.559455 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 5 23:55:52.587392 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 5 23:55:52.595338 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Nov 5 23:55:52.596812 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Nov 5 23:55:52.605802 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Nov 5 23:55:52.618045 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Nov 5 23:55:52.624194 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Nov 5 23:55:52.625554 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Nov 5 23:55:52.628809 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Nov 5 23:55:52.631319 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 5 23:55:52.633691 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Nov 5 23:55:52.637416 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Nov 5 23:55:52.639983 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Nov 5 23:55:52.662031 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Nov 5 23:55:52.664634 disk-uuid[594]: Primary Header is updated.
Nov 5 23:55:52.664634 disk-uuid[594]: Secondary Entries is updated.
Nov 5 23:55:52.664634 disk-uuid[594]: Secondary Header is updated.
Nov 5 23:55:52.668186 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Nov 5 23:55:53.675401 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Nov 5 23:55:53.676173 disk-uuid[602]: The operation has completed successfully.
Nov 5 23:55:53.702037 systemd[1]: disk-uuid.service: Deactivated successfully.
Nov 5 23:55:53.702163 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Nov 5 23:55:53.729970 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Nov 5 23:55:53.742998 sh[613]: Success
Nov 5 23:55:53.756413 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Nov 5 23:55:53.756450 kernel: device-mapper: uevent: version 1.0.3
Nov 5 23:55:53.756460 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Nov 5 23:55:53.763375 kernel: device-mapper: verity: sha256 using shash "sha256-ce"
Nov 5 23:55:53.786778 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Nov 5 23:55:53.789440 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Nov 5 23:55:53.814451 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Nov 5 23:55:53.822053 kernel: BTRFS: device fsid 223300c7-37a4-4131-896a-4d331c3aa134 devid 1 transid 37 /dev/mapper/usr (253:0) scanned by mount (625)
Nov 5 23:55:53.822077 kernel: BTRFS info (device dm-0): first mount of filesystem 223300c7-37a4-4131-896a-4d331c3aa134
Nov 5 23:55:53.822093 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Nov 5 23:55:53.826370 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Nov 5 23:55:53.826392 kernel: BTRFS info (device dm-0): enabling free space tree
Nov 5 23:55:53.828077 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Nov 5 23:55:53.829380 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Nov 5 23:55:53.830935 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Nov 5 23:55:53.831641 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Nov 5 23:55:53.833365 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Nov 5 23:55:53.854365 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (657)
Nov 5 23:55:53.856651 kernel: BTRFS info (device vda6): first mount of filesystem 7724fea6-57ae-4252-b021-4aac39807031
Nov 5 23:55:53.856684 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Nov 5 23:55:53.859582 kernel: BTRFS info (device vda6): turning on async discard
Nov 5 23:55:53.859612 kernel: BTRFS info (device vda6): enabling free space tree
Nov 5 23:55:53.864388 kernel: BTRFS info (device vda6): last unmount of filesystem 7724fea6-57ae-4252-b021-4aac39807031
Nov 5 23:55:53.867369 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Nov 5 23:55:53.869179 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Nov 5 23:55:53.928021 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Nov 5 23:55:53.931242 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Nov 5 23:55:53.973870 ignition[711]: Ignition 2.22.0
Nov 5 23:55:53.973883 ignition[711]: Stage: fetch-offline
Nov 5 23:55:53.973909 ignition[711]: no configs at "/usr/lib/ignition/base.d"
Nov 5 23:55:53.973916 ignition[711]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 5 23:55:53.979069 systemd-networkd[803]: lo: Link UP
Nov 5 23:55:53.973989 ignition[711]: parsed url from cmdline: ""
Nov 5 23:55:53.979072 systemd-networkd[803]: lo: Gained carrier
Nov 5 23:55:53.973992 ignition[711]: no config URL provided
Nov 5 23:55:53.979788 systemd-networkd[803]: Enumeration completed
Nov 5 23:55:53.973996 ignition[711]: reading system config file "/usr/lib/ignition/user.ign"
Nov 5 23:55:53.979886 systemd[1]: Started systemd-networkd.service - Network Configuration.
Nov 5 23:55:53.974002 ignition[711]: no config at "/usr/lib/ignition/user.ign"
Nov 5 23:55:53.980190 systemd-networkd[803]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 5 23:55:53.974022 ignition[711]: op(1): [started] loading QEMU firmware config module
Nov 5 23:55:53.980194 systemd-networkd[803]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Nov 5 23:55:53.974028 ignition[711]: op(1): executing: "modprobe" "qemu_fw_cfg"
Nov 5 23:55:53.980867 systemd-networkd[803]: eth0: Link UP
Nov 5 23:55:53.978837 ignition[711]: op(1): [finished] loading QEMU firmware config module
Nov 5 23:55:53.981207 systemd-networkd[803]: eth0: Gained carrier
Nov 5 23:55:53.981217 systemd-networkd[803]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 5 23:55:53.981781 systemd[1]: Reached target network.target - Network.
Nov 5 23:55:53.994385 systemd-networkd[803]: eth0: DHCPv4 address 10.0.0.117/16, gateway 10.0.0.1 acquired from 10.0.0.1
Nov 5 23:55:54.040452 ignition[711]: parsing config with SHA512: b9bf7537eb485a856bd103fd7fbc3449eedf52965fa39adc986b1742bca1d0137b53b225f17a734a6d1f8b2887df1e75ee95b17cc5405e5b9a64d978911ce360
Nov 5 23:55:54.045490 unknown[711]: fetched base config from "system"
Nov 5 23:55:54.045502 unknown[711]: fetched user config from "qemu"
Nov 5 23:55:54.046364 ignition[711]: fetch-offline: fetch-offline passed
Nov 5 23:55:54.048143 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Nov 5 23:55:54.046433 ignition[711]: Ignition finished successfully
Nov 5 23:55:54.049682 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Nov 5 23:55:54.050473 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Nov 5 23:55:54.084873 ignition[813]: Ignition 2.22.0
Nov 5 23:55:54.084893 ignition[813]: Stage: kargs
Nov 5 23:55:54.085014 ignition[813]: no configs at "/usr/lib/ignition/base.d"
Nov 5 23:55:54.085023 ignition[813]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 5 23:55:54.085742 ignition[813]: kargs: kargs passed
Nov 5 23:55:54.087823 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Nov 5 23:55:54.085783 ignition[813]: Ignition finished successfully
Nov 5 23:55:54.090320 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Nov 5 23:55:54.117621 ignition[821]: Ignition 2.22.0
Nov 5 23:55:54.117641 ignition[821]: Stage: disks
Nov 5 23:55:54.117760 ignition[821]: no configs at "/usr/lib/ignition/base.d"
Nov 5 23:55:54.117768 ignition[821]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 5 23:55:54.118480 ignition[821]: disks: disks passed
Nov 5 23:55:54.120957 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Nov 5 23:55:54.118522 ignition[821]: Ignition finished successfully
Nov 5 23:55:54.122376 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Nov 5 23:55:54.123863 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Nov 5 23:55:54.125927 systemd[1]: Reached target local-fs.target - Local File Systems.
Nov 5 23:55:54.127618 systemd[1]: Reached target sysinit.target - System Initialization.
Nov 5 23:55:54.129648 systemd[1]: Reached target basic.target - Basic System.
Nov 5 23:55:54.132575 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Nov 5 23:55:54.155884 systemd-fsck[831]: ROOT: clean, 15/553520 files, 52789/553472 blocks
Nov 5 23:55:54.160918 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Nov 5 23:55:54.163215 systemd[1]: Mounting sysroot.mount - /sysroot...
Nov 5 23:55:54.239375 kernel: EXT4-fs (vda9): mounted filesystem de3d89fd-ab21-4d05-b3c1-f0d3e7ce9725 r/w with ordered data mode. Quota mode: none.
Nov 5 23:55:54.239474 systemd[1]: Mounted sysroot.mount - /sysroot.
Nov 5 23:55:54.240745 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Nov 5 23:55:54.243323 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Nov 5 23:55:54.244981 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Nov 5 23:55:54.246128 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Nov 5 23:55:54.246179 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Nov 5 23:55:54.246203 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Nov 5 23:55:54.258809 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Nov 5 23:55:54.260880 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Nov 5 23:55:54.268364 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (839)
Nov 5 23:55:54.270991 kernel: BTRFS info (device vda6): first mount of filesystem 7724fea6-57ae-4252-b021-4aac39807031
Nov 5 23:55:54.271009 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Nov 5 23:55:54.274071 kernel: BTRFS info (device vda6): turning on async discard
Nov 5 23:55:54.274088 kernel: BTRFS info (device vda6): enabling free space tree
Nov 5 23:55:54.275433 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Nov 5 23:55:54.296313 initrd-setup-root[863]: cut: /sysroot/etc/passwd: No such file or directory
Nov 5 23:55:54.300378 initrd-setup-root[870]: cut: /sysroot/etc/group: No such file or directory
Nov 5 23:55:54.304076 initrd-setup-root[877]: cut: /sysroot/etc/shadow: No such file or directory
Nov 5 23:55:54.308042 initrd-setup-root[884]: cut: /sysroot/etc/gshadow: No such file or directory
Nov 5 23:55:54.370382 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Nov 5 23:55:54.372612 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Nov 5 23:55:54.375555 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Nov 5 23:55:54.391367 kernel: BTRFS info (device vda6): last unmount of filesystem 7724fea6-57ae-4252-b021-4aac39807031
Nov 5 23:55:54.402239 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Nov 5 23:55:54.411672 ignition[952]: INFO : Ignition 2.22.0
Nov 5 23:55:54.411672 ignition[952]: INFO : Stage: mount
Nov 5 23:55:54.414415 ignition[952]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 5 23:55:54.414415 ignition[952]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 5 23:55:54.414415 ignition[952]: INFO : mount: mount passed
Nov 5 23:55:54.414415 ignition[952]: INFO : Ignition finished successfully
Nov 5 23:55:54.414646 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Nov 5 23:55:54.417458 systemd[1]: Starting ignition-files.service - Ignition (files)...
Nov 5 23:55:54.820254 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Nov 5 23:55:54.821721 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Nov 5 23:55:54.843450 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (965)
Nov 5 23:55:54.843485 kernel: BTRFS info (device vda6): first mount of filesystem 7724fea6-57ae-4252-b021-4aac39807031
Nov 5 23:55:54.843496 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Nov 5 23:55:54.847807 kernel: BTRFS info (device vda6): turning on async discard
Nov 5 23:55:54.847831 kernel: BTRFS info (device vda6): enabling free space tree
Nov 5 23:55:54.849575 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Nov 5 23:55:54.884816 ignition[982]: INFO : Ignition 2.22.0
Nov 5 23:55:54.884816 ignition[982]: INFO : Stage: files
Nov 5 23:55:54.886703 ignition[982]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 5 23:55:54.886703 ignition[982]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 5 23:55:54.886703 ignition[982]: DEBUG : files: compiled without relabeling support, skipping
Nov 5 23:55:54.890448 ignition[982]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Nov 5 23:55:54.890448 ignition[982]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Nov 5 23:55:54.890448 ignition[982]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Nov 5 23:55:54.890448 ignition[982]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Nov 5 23:55:54.890448 ignition[982]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Nov 5 23:55:54.890448 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Nov 5 23:55:54.890448 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1
Nov 5 23:55:54.888894 unknown[982]: wrote ssh authorized keys file for user: core
Nov 5 23:55:54.944257 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Nov 5 23:55:55.053262 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Nov 5 23:55:55.053262 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Nov 5 23:55:55.053262 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Nov 5 23:55:55.053262 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Nov 5 23:55:55.061169 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Nov 5 23:55:55.061169 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Nov 5 23:55:55.061169 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Nov 5 23:55:55.061169 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Nov 5 23:55:55.061169 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Nov 5 23:55:55.061169 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Nov 5 23:55:55.061169 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Nov 5 23:55:55.061169 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw"
Nov 5 23:55:55.061169 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw"
Nov 5 23:55:55.061169 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw"
Nov 5 23:55:55.061169 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.1-arm64.raw: attempt #1
Nov 5 23:55:55.120474 systemd-networkd[803]: eth0: Gained IPv6LL
Nov 5 23:55:55.439236 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Nov 5 23:55:55.651187 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw"
Nov 5 23:55:55.651187 ignition[982]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Nov 5 23:55:55.655160 ignition[982]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Nov 5 23:55:55.655160 ignition[982]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Nov 5 23:55:55.655160 ignition[982]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Nov 5 23:55:55.655160 ignition[982]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Nov 5 23:55:55.655160 ignition[982]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Nov 5 23:55:55.655160 ignition[982]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Nov 5 23:55:55.655160 ignition[982]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Nov 5 23:55:55.655160 ignition[982]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Nov 5 23:55:55.671404 ignition[982]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Nov 5 23:55:55.675503 ignition[982]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Nov 5 23:55:55.678482 ignition[982]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Nov 5 23:55:55.678482 ignition[982]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Nov 5 23:55:55.678482 ignition[982]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Nov 5 23:55:55.678482 ignition[982]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Nov 5 23:55:55.678482 ignition[982]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Nov 5 23:55:55.678482 ignition[982]: INFO : files: files passed
Nov 5 23:55:55.678482 ignition[982]: INFO : Ignition finished successfully
Nov 5 23:55:55.679108 systemd[1]: Finished ignition-files.service - Ignition (files).
Nov 5 23:55:55.682031 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Nov 5 23:55:55.685485 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Nov 5 23:55:55.700739 systemd[1]: ignition-quench.service: Deactivated successfully.
Nov 5 23:55:55.700869 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Nov 5 23:55:55.704563 initrd-setup-root-after-ignition[1012]: grep: /sysroot/oem/oem-release: No such file or directory
Nov 5 23:55:55.706034 initrd-setup-root-after-ignition[1014]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Nov 5 23:55:55.706034 initrd-setup-root-after-ignition[1014]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Nov 5 23:55:55.709435 initrd-setup-root-after-ignition[1018]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Nov 5 23:55:55.709596 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Nov 5 23:55:55.712451 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Nov 5 23:55:55.714387 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Nov 5 23:55:55.752499 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Nov 5 23:55:55.753645 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Nov 5 23:55:55.755172 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Nov 5 23:55:55.757293 systemd[1]: Reached target initrd.target - Initrd Default Target.
Nov 5 23:55:55.759264 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Nov 5 23:55:55.760190 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Nov 5 23:55:55.775234 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Nov 5 23:55:55.778042 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Nov 5 23:55:55.796915 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Nov 5 23:55:55.798404 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 5 23:55:55.800636 systemd[1]: Stopped target timers.target - Timer Units.
Nov 5 23:55:55.802521 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Nov 5 23:55:55.802665 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Nov 5 23:55:55.805418 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Nov 5 23:55:55.807690 systemd[1]: Stopped target basic.target - Basic System.
Nov 5 23:55:55.809410 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Nov 5 23:55:55.811329 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Nov 5 23:55:55.813551 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Nov 5 23:55:55.815590 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Nov 5 23:55:55.817657 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Nov 5 23:55:55.819606 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Nov 5 23:55:55.821739 systemd[1]: Stopped target sysinit.target - System Initialization.
Nov 5 23:55:55.823842 systemd[1]: Stopped target local-fs.target - Local File Systems.
Nov 5 23:55:55.825780 systemd[1]: Stopped target swap.target - Swaps.
Nov 5 23:55:55.827412 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Nov 5 23:55:55.827557 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Nov 5 23:55:55.830176 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Nov 5 23:55:55.832325 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 5 23:55:55.834391 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Nov 5 23:55:55.835441 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 5 23:55:55.836811 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Nov 5 23:55:55.836954 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Nov 5 23:55:55.839959 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Nov 5 23:55:55.840099 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Nov 5 23:55:55.842217 systemd[1]: Stopped target paths.target - Path Units.
Nov 5 23:55:55.844084 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Nov 5 23:55:55.844224 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 5 23:55:55.846364 systemd[1]: Stopped target slices.target - Slice Units.
Nov 5 23:55:55.848421 systemd[1]: Stopped target sockets.target - Socket Units.
Nov 5 23:55:55.850152 systemd[1]: iscsid.socket: Deactivated successfully.
Nov 5 23:55:55.850247 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Nov 5 23:55:55.852114 systemd[1]: iscsiuio.socket: Deactivated successfully.
Nov 5 23:55:55.852208 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Nov 5 23:55:55.854558 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Nov 5 23:55:55.854690 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Nov 5 23:55:55.856594 systemd[1]: ignition-files.service: Deactivated successfully.
Nov 5 23:55:55.856711 systemd[1]: Stopped ignition-files.service - Ignition (files).
Nov 5 23:55:55.859311 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Nov 5 23:55:55.861102 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Nov 5 23:55:55.861258 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 5 23:55:55.886045 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Nov 5 23:55:55.887063 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Nov 5 23:55:55.887245 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 5 23:55:55.889426 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Nov 5 23:55:55.889546 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Nov 5 23:55:55.895964 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Nov 5 23:55:55.897377 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Nov 5 23:55:55.903876 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Nov 5 23:55:55.907370 ignition[1038]: INFO : Ignition 2.22.0
Nov 5 23:55:55.907370 ignition[1038]: INFO : Stage: umount
Nov 5 23:55:55.909342 ignition[1038]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 5 23:55:55.909342 ignition[1038]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 5 23:55:55.909342 ignition[1038]: INFO : umount: umount passed
Nov 5 23:55:55.909342 ignition[1038]: INFO : Ignition finished successfully
Nov 5 23:55:55.911087 systemd[1]: ignition-mount.service: Deactivated successfully.
Nov 5 23:55:55.911205 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Nov 5 23:55:55.913039 systemd[1]: Stopped target network.target - Network.
Nov 5 23:55:55.914914 systemd[1]: ignition-disks.service: Deactivated successfully.
Nov 5 23:55:55.914986 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Nov 5 23:55:55.920595 systemd[1]: ignition-kargs.service: Deactivated successfully.
Nov 5 23:55:55.920656 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Nov 5 23:55:55.922549 systemd[1]: ignition-setup.service: Deactivated successfully.
Nov 5 23:55:55.922612 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Nov 5 23:55:55.924556 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Nov 5 23:55:55.924607 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Nov 5 23:55:55.926597 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Nov 5 23:55:55.928525 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Nov 5 23:55:55.930736 systemd[1]: sysroot-boot.service: Deactivated successfully.
Nov 5 23:55:55.930862 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Nov 5 23:55:55.932878 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Nov 5 23:55:55.932978 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Nov 5 23:55:55.934885 systemd[1]: systemd-resolved.service: Deactivated successfully.
Nov 5 23:55:55.936462 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Nov 5 23:55:55.940867 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Nov 5 23:55:55.941567 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Nov 5 23:55:55.941653 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 5 23:55:55.945776 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Nov 5 23:55:55.946006 systemd[1]: systemd-networkd.service: Deactivated successfully.
Nov 5 23:55:55.948159 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Nov 5 23:55:55.952957 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Nov 5 23:55:55.953487 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Nov 5 23:55:55.955974 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Nov 5 23:55:55.956015 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Nov 5 23:55:55.959384 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Nov 5 23:55:55.960479 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Nov 5 23:55:55.960543 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Nov 5 23:55:55.963172 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Nov 5 23:55:55.963222 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Nov 5 23:55:55.966389 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Nov 5 23:55:55.966435 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Nov 5 23:55:55.968881 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 5 23:55:55.973080 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Nov 5 23:55:55.987268 systemd[1]: network-cleanup.service: Deactivated successfully.
Nov 5 23:55:55.987567 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Nov 5 23:55:55.989794 systemd[1]: systemd-udevd.service: Deactivated successfully.
Nov 5 23:55:55.989943 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 5 23:55:55.992299 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Nov 5 23:55:55.992394 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Nov 5 23:55:55.993874 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Nov 5 23:55:55.993907 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 5 23:55:55.996246 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Nov 5 23:55:55.996301 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Nov 5 23:55:55.999470 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Nov 5 23:55:55.999524 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Nov 5 23:55:56.002603 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Nov 5 23:55:56.002668 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 5 23:55:56.006691 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Nov 5 23:55:56.008049 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Nov 5 23:55:56.008114 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Nov 5 23:55:56.011392 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Nov 5 23:55:56.011439 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 5 23:55:56.015379 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 5 23:55:56.015432 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 5 23:55:56.019762 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Nov 5 23:55:56.019865 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Nov 5 23:55:56.022239 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Nov 5 23:55:56.025105 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Nov 5 23:55:56.039253 systemd[1]: Switching root.
Nov 5 23:55:56.070175 systemd-journald[245]: Journal stopped
Nov 5 23:55:56.897890 systemd-journald[245]: Received SIGTERM from PID 1 (systemd).
Nov 5 23:55:56.897942 kernel: SELinux: policy capability network_peer_controls=1
Nov 5 23:55:56.897955 kernel: SELinux: policy capability open_perms=1
Nov 5 23:55:56.897965 kernel: SELinux: policy capability extended_socket_class=1
Nov 5 23:55:56.897978 kernel: SELinux: policy capability always_check_network=0
Nov 5 23:55:56.897988 kernel: SELinux: policy capability cgroup_seclabel=1
Nov 5 23:55:56.897998 kernel: SELinux: policy capability nnp_nosuid_transition=1
Nov 5 23:55:56.898007 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Nov 5 23:55:56.898020 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Nov 5 23:55:56.898029 kernel: SELinux: policy capability userspace_initial_context=0
Nov 5 23:55:56.898038 kernel: audit: type=1403 audit(1762386956.259:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Nov 5 23:55:56.898052 systemd[1]: Successfully loaded SELinux policy in 66.305ms.
Nov 5 23:55:56.898071 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 5.630ms.
Nov 5 23:55:56.898082 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Nov 5 23:55:56.898093 systemd[1]: Detected virtualization kvm.
Nov 5 23:55:56.898104 systemd[1]: Detected architecture arm64.
Nov 5 23:55:56.898114 systemd[1]: Detected first boot.
Nov 5 23:55:56.898135 systemd[1]: Initializing machine ID from VM UUID.
Nov 5 23:55:56.898147 zram_generator::config[1084]: No configuration found.
Nov 5 23:55:56.898157 kernel: NET: Registered PF_VSOCK protocol family
Nov 5 23:55:56.898166 systemd[1]: Populated /etc with preset unit settings.
Nov 5 23:55:56.898178 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Nov 5 23:55:56.898188 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Nov 5 23:55:56.898200 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Nov 5 23:55:56.898210 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Nov 5 23:55:56.898220 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Nov 5 23:55:56.898230 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Nov 5 23:55:56.898240 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Nov 5 23:55:56.898249 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Nov 5 23:55:56.898259 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Nov 5 23:55:56.898269 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Nov 5 23:55:56.898280 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Nov 5 23:55:56.898290 systemd[1]: Created slice user.slice - User and Session Slice.
Nov 5 23:55:56.898300 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 5 23:55:56.898310 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 5 23:55:56.898320 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Nov 5 23:55:56.898329 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Nov 5 23:55:56.898339 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Nov 5 23:55:56.898363 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Nov 5 23:55:56.898374 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Nov 5 23:55:56.898390 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 5 23:55:56.898400 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Nov 5 23:55:56.898411 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Nov 5 23:55:56.898421 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Nov 5 23:55:56.898430 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Nov 5 23:55:56.898440 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Nov 5 23:55:56.898450 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 5 23:55:56.898459 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Nov 5 23:55:56.898471 systemd[1]: Reached target slices.target - Slice Units.
Nov 5 23:55:56.898481 systemd[1]: Reached target swap.target - Swaps.
Nov 5 23:55:56.898490 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Nov 5 23:55:56.898503 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Nov 5 23:55:56.898513 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Nov 5 23:55:56.898522 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Nov 5 23:55:56.898532 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Nov 5 23:55:56.898541 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 5 23:55:56.898551 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Nov 5 23:55:56.898562 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Nov 5 23:55:56.898572 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Nov 5 23:55:56.898581 systemd[1]: Mounting media.mount - External Media Directory...
Nov 5 23:55:56.898591 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Nov 5 23:55:56.898601 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Nov 5 23:55:56.898610 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Nov 5 23:55:56.898621 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Nov 5 23:55:56.898632 systemd[1]: Reached target machines.target - Containers.
Nov 5 23:55:56.898642 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Nov 5 23:55:56.898653 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 5 23:55:56.898663 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Nov 5 23:55:56.898672 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Nov 5 23:55:56.898682 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 5 23:55:56.898692 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Nov 5 23:55:56.898702 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 5 23:55:56.898711 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Nov 5 23:55:56.898721 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 5 23:55:56.898733 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Nov 5 23:55:56.898742 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Nov 5 23:55:56.898752 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Nov 5 23:55:56.898761 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Nov 5 23:55:56.898771 systemd[1]: Stopped systemd-fsck-usr.service.
Nov 5 23:55:56.898780 kernel: loop: module loaded
Nov 5 23:55:56.898790 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Nov 5 23:55:56.898800 systemd[1]: Starting systemd-journald.service - Journal Service...
Nov 5 23:55:56.898809 kernel: fuse: init (API version 7.41)
Nov 5 23:55:56.898820 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Nov 5 23:55:56.898831 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Nov 5 23:55:56.898840 kernel: ACPI: bus type drm_connector registered
Nov 5 23:55:56.898849 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Nov 5 23:55:56.898859 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Nov 5 23:55:56.898868 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Nov 5 23:55:56.898880 systemd[1]: verity-setup.service: Deactivated successfully.
Nov 5 23:55:56.898890 systemd[1]: Stopped verity-setup.service.
Nov 5 23:55:56.898899 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Nov 5 23:55:56.898909 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Nov 5 23:55:56.898940 systemd-journald[1159]: Collecting audit messages is disabled.
Nov 5 23:55:56.898962 systemd[1]: Mounted media.mount - External Media Directory.
Nov 5 23:55:56.898974 systemd-journald[1159]: Journal started
Nov 5 23:55:56.898994 systemd-journald[1159]: Runtime Journal (/run/log/journal/67fe830be471455197cac8de76d3cf5e) is 6M, max 48.5M, 42.4M free.
Nov 5 23:55:56.900471 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Nov 5 23:55:56.646597 systemd[1]: Queued start job for default target multi-user.target.
Nov 5 23:55:56.667538 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Nov 5 23:55:56.667952 systemd[1]: systemd-journald.service: Deactivated successfully.
Nov 5 23:55:56.904581 systemd[1]: Started systemd-journald.service - Journal Service.
Nov 5 23:55:56.905324 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Nov 5 23:55:56.906643 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Nov 5 23:55:56.909382 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Nov 5 23:55:56.910921 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 5 23:55:56.912636 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Nov 5 23:55:56.912823 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Nov 5 23:55:56.914403 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 5 23:55:56.914575 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 5 23:55:56.916102 systemd[1]: modprobe@drm.service: Deactivated successfully.
Nov 5 23:55:56.916304 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Nov 5 23:55:56.918797 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 5 23:55:56.918957 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 5 23:55:56.920514 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Nov 5 23:55:56.920674 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Nov 5 23:55:56.922227 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 5 23:55:56.922517 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Nov 5 23:55:56.924132 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Nov 5 23:55:56.925686 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Nov 5 23:55:56.927328 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Nov 5 23:55:56.929005 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Nov 5 23:55:56.941217 systemd[1]: Reached target network-pre.target - Preparation for Network.
Nov 5 23:55:56.943750 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Nov 5 23:55:56.946183 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Nov 5 23:55:56.947617 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Nov 5 23:55:56.947664 systemd[1]: Reached target local-fs.target - Local File Systems.
Nov 5 23:55:56.949738 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Nov 5 23:55:56.957236 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Nov 5 23:55:56.958813 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 5 23:55:56.960163 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Nov 5 23:55:56.962524 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Nov 5 23:55:56.963872 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 5 23:55:56.967511 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Nov 5 23:55:56.968881 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Nov 5 23:55:56.970387 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Nov 5 23:55:56.973585 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Nov 5 23:55:56.976404 systemd-journald[1159]: Time spent on flushing to /var/log/journal/67fe830be471455197cac8de76d3cf5e is 14.505ms for 882 entries.
Nov 5 23:55:56.976404 systemd-journald[1159]: System Journal (/var/log/journal/67fe830be471455197cac8de76d3cf5e) is 8M, max 195.6M, 187.6M free.
Nov 5 23:55:57.005861 systemd-journald[1159]: Received client request to flush runtime journal.
Nov 5 23:55:57.005919 kernel: loop0: detected capacity change from 0 to 100632
Nov 5 23:55:56.976013 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Nov 5 23:55:56.981384 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 5 23:55:56.984025 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Nov 5 23:55:56.985864 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Nov 5 23:55:56.989067 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Nov 5 23:55:56.991990 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Nov 5 23:55:56.994948 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Nov 5 23:55:57.007498 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Nov 5 23:55:57.010246 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Nov 5 23:55:57.021413 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Nov 5 23:55:57.025401 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Nov 5 23:55:57.030506 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Nov 5 23:55:57.033445 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Nov 5 23:55:57.051427 kernel: loop1: detected capacity change from 0 to 119368
Nov 5 23:55:57.065945 systemd-tmpfiles[1217]: ACLs are not supported, ignoring.
Nov 5 23:55:57.065967 systemd-tmpfiles[1217]: ACLs are not supported, ignoring.
Nov 5 23:55:57.069932 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 5 23:55:57.088457 kernel: loop2: detected capacity change from 0 to 200800
Nov 5 23:55:57.107388 kernel: loop3: detected capacity change from 0 to 100632
Nov 5 23:55:57.112364 kernel: loop4: detected capacity change from 0 to 119368
Nov 5 23:55:57.118361 kernel: loop5: detected capacity change from 0 to 200800
Nov 5 23:55:57.122859 (sd-merge)[1222]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Nov 5 23:55:57.123259 (sd-merge)[1222]: Merged extensions into '/usr'.
Nov 5 23:55:57.129979 systemd[1]: Reload requested from client PID 1200 ('systemd-sysext') (unit systemd-sysext.service)...
Nov 5 23:55:57.129999 systemd[1]: Reloading...
Nov 5 23:55:57.191369 zram_generator::config[1251]: No configuration found.
Nov 5 23:55:57.246372 ldconfig[1195]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Nov 5 23:55:57.344823 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Nov 5 23:55:57.345010 systemd[1]: Reloading finished in 214 ms.
Nov 5 23:55:57.376196 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Nov 5 23:55:57.377826 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Nov 5 23:55:57.389626 systemd[1]: Starting ensure-sysext.service...
Nov 5 23:55:57.391672 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Nov 5 23:55:57.403670 systemd[1]: Reload requested from client PID 1283 ('systemctl') (unit ensure-sysext.service)...
Nov 5 23:55:57.403689 systemd[1]: Reloading...
Nov 5 23:55:57.406728 systemd-tmpfiles[1284]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Nov 5 23:55:57.407079 systemd-tmpfiles[1284]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Nov 5 23:55:57.407521 systemd-tmpfiles[1284]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Nov 5 23:55:57.407812 systemd-tmpfiles[1284]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Nov 5 23:55:57.408545 systemd-tmpfiles[1284]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Nov 5 23:55:57.408841 systemd-tmpfiles[1284]: ACLs are not supported, ignoring.
Nov 5 23:55:57.408945 systemd-tmpfiles[1284]: ACLs are not supported, ignoring.
Nov 5 23:55:57.411802 systemd-tmpfiles[1284]: Detected autofs mount point /boot during canonicalization of boot.
Nov 5 23:55:57.411922 systemd-tmpfiles[1284]: Skipping /boot
Nov 5 23:55:57.418037 systemd-tmpfiles[1284]: Detected autofs mount point /boot during canonicalization of boot.
Nov 5 23:55:57.418179 systemd-tmpfiles[1284]: Skipping /boot
Nov 5 23:55:57.460384 zram_generator::config[1317]: No configuration found.
Nov 5 23:55:57.593418 systemd[1]: Reloading finished in 189 ms.
Nov 5 23:55:57.613131 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Nov 5 23:55:57.620262 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 5 23:55:57.628503 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Nov 5 23:55:57.631322 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Nov 5 23:55:57.633872 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Nov 5 23:55:57.637019 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Nov 5 23:55:57.639999 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 5 23:55:57.643544 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Nov 5 23:55:57.651952 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 5 23:55:57.653259 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 5 23:55:57.658723 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 5 23:55:57.663789 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 5 23:55:57.666637 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 5 23:55:57.666791 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Nov 5 23:55:57.667925 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Nov 5 23:55:57.672463 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 5 23:55:57.676158 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 5 23:55:57.679188 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 5 23:55:57.681409 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 5 23:55:57.684391 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 5 23:55:57.684577 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Nov 5 23:55:57.689440 systemd-udevd[1352]: Using default interface naming scheme 'v255'.
Nov 5 23:55:57.692334 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 5 23:55:57.694086 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 5 23:55:57.697651 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 5 23:55:57.699486 augenrules[1378]: No rules
Nov 5 23:55:57.707635 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 5 23:55:57.708920 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 5 23:55:57.709107 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Nov 5 23:55:57.711629 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Nov 5 23:55:57.715635 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Nov 5 23:55:57.718291 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 5 23:55:57.721523 systemd[1]: audit-rules.service: Deactivated successfully.
Nov 5 23:55:57.721722 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Nov 5 23:55:57.724099 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Nov 5 23:55:57.727045 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 5 23:55:57.731604 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 5 23:55:57.733434 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 5 23:55:57.733619 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 5 23:55:57.736552 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 5 23:55:57.736781 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Nov 5 23:55:57.742969 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Nov 5 23:55:57.747384 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Nov 5 23:55:57.760198 systemd[1]: Finished ensure-sysext.service.
Nov 5 23:55:57.763263 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Nov 5 23:55:57.765945 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Nov 5 23:55:57.768540 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 5 23:55:57.770595 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 5 23:55:57.772919 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Nov 5 23:55:57.785877 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 5 23:55:57.790451 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 5 23:55:57.792559 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 5 23:55:57.792601 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Nov 5 23:55:57.796235 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Nov 5 23:55:57.800567 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Nov 5 23:55:57.802424 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Nov 5 23:55:57.803028 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 5 23:55:57.803274 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 5 23:55:57.806840 systemd[1]: modprobe@drm.service: Deactivated successfully.
Nov 5 23:55:57.807258 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Nov 5 23:55:57.809264 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 5 23:55:57.809607 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 5 23:55:57.811777 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 5 23:55:57.811936 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Nov 5 23:55:57.820546 augenrules[1425]: /sbin/augenrules: No change
Nov 5 23:55:57.823275 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 5 23:55:57.823614 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Nov 5 23:55:57.829256 augenrules[1457]: No rules
Nov 5 23:55:57.830740 systemd[1]: audit-rules.service: Deactivated successfully.
Nov 5 23:55:57.831001 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Nov 5 23:55:57.857858 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Nov 5 23:55:57.860433 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Nov 5 23:55:57.877578 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Nov 5 23:55:57.887618 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Nov 5 23:55:57.941810 systemd-networkd[1433]: lo: Link UP
Nov 5 23:55:57.941822 systemd-networkd[1433]: lo: Gained carrier
Nov 5 23:55:57.942804 systemd-networkd[1433]: Enumeration completed
Nov 5 23:55:57.942922 systemd[1]: Started systemd-networkd.service - Network Configuration.
Nov 5 23:55:57.945911 systemd-networkd[1433]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 5 23:55:57.945925 systemd-networkd[1433]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Nov 5 23:55:57.946548 systemd-networkd[1433]: eth0: Link UP
Nov 5 23:55:57.946657 systemd-networkd[1433]: eth0: Gained carrier
Nov 5 23:55:57.946677 systemd-networkd[1433]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 5 23:55:57.946868 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Nov 5 23:55:57.950959 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Nov 5 23:55:57.951388 systemd-resolved[1350]: Positive Trust Anchors:
Nov 5 23:55:57.951400 systemd-resolved[1350]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Nov 5 23:55:57.951432 systemd-resolved[1350]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Nov 5 23:55:57.953545 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Nov 5 23:55:57.955021 systemd[1]: Reached target time-set.target - System Time Set.
Nov 5 23:55:57.960484 systemd-resolved[1350]: Defaulting to hostname 'linux'.
Nov 5 23:55:57.961980 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Nov 5 23:55:57.963387 systemd[1]: Reached target network.target - Network.
Nov 5 23:55:57.964412 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Nov 5 23:55:57.965745 systemd-networkd[1433]: eth0: DHCPv4 address 10.0.0.117/16, gateway 10.0.0.1 acquired from 10.0.0.1
Nov 5 23:55:57.966376 systemd-timesyncd[1435]: Network configuration changed, trying to establish connection.
Nov 5 23:55:57.966465 systemd[1]: Reached target sysinit.target - System Initialization.
Nov 5 23:55:57.967768 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Nov 5 23:55:57.485678 systemd-resolved[1350]: Clock change detected. Flushing caches.
Nov 5 23:55:57.500901 systemd-journald[1159]: Time jumped backwards, rotating.
Nov 5 23:55:57.485736 systemd-timesyncd[1435]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Nov 5 23:55:57.485797 systemd-timesyncd[1435]: Initial clock synchronization to Wed 2025-11-05 23:55:57.485634 UTC. Nov 5 23:55:57.487557 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Nov 5 23:55:57.490122 systemd[1]: Started logrotate.timer - Daily rotation of log files. Nov 5 23:55:57.491705 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Nov 5 23:55:57.494506 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Nov 5 23:55:57.496172 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Nov 5 23:55:57.496208 systemd[1]: Reached target paths.target - Path Units. Nov 5 23:55:57.497331 systemd[1]: Reached target timers.target - Timer Units. Nov 5 23:55:57.499180 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Nov 5 23:55:57.501870 systemd[1]: Starting docker.socket - Docker Socket for the API... Nov 5 23:55:57.505721 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Nov 5 23:55:57.508691 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Nov 5 23:55:57.510324 systemd[1]: Reached target ssh-access.target - SSH Access Available. Nov 5 23:55:57.521250 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Nov 5 23:55:57.523011 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Nov 5 23:55:57.527652 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Nov 5 23:55:57.529578 systemd[1]: Listening on docker.socket - Docker Socket for the API. Nov 5 23:55:57.538767 systemd[1]: Reached target sockets.target - Socket Units. 
Nov 5 23:55:57.540152 systemd[1]: Reached target basic.target - Basic System. Nov 5 23:55:57.541381 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Nov 5 23:55:57.541566 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Nov 5 23:55:57.543091 systemd[1]: Starting containerd.service - containerd container runtime... Nov 5 23:55:57.545592 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Nov 5 23:55:57.547885 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Nov 5 23:55:57.557550 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Nov 5 23:55:57.560071 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Nov 5 23:55:57.561297 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Nov 5 23:55:57.562681 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Nov 5 23:55:57.565388 jq[1500]: false Nov 5 23:55:57.565819 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Nov 5 23:55:57.570025 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Nov 5 23:55:57.573349 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Nov 5 23:55:57.579066 extend-filesystems[1501]: Found /dev/vda6 Nov 5 23:55:57.579782 systemd[1]: Starting systemd-logind.service - User Login Management... Nov 5 23:55:57.581839 extend-filesystems[1501]: Found /dev/vda9 Nov 5 23:55:57.583321 extend-filesystems[1501]: Checking size of /dev/vda9 Nov 5 23:55:57.587933 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Nov 5 23:55:57.590518 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Nov 5 23:55:57.591131 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Nov 5 23:55:57.592630 systemd[1]: Starting update-engine.service - Update Engine... Nov 5 23:55:57.595297 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Nov 5 23:55:57.603426 jq[1525]: true Nov 5 23:55:57.605651 extend-filesystems[1501]: Resized partition /dev/vda9 Nov 5 23:55:57.605975 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Nov 5 23:55:57.608572 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Nov 5 23:55:57.608882 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Nov 5 23:55:57.609326 systemd[1]: motdgen.service: Deactivated successfully. Nov 5 23:55:57.610358 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Nov 5 23:55:57.613816 extend-filesystems[1529]: resize2fs 1.47.3 (8-Jul-2025) Nov 5 23:55:57.614241 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Nov 5 23:55:57.614908 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Nov 5 23:55:57.624446 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Nov 5 23:55:57.631113 update_engine[1523]: I20251105 23:55:57.630333 1523 main.cc:92] Flatcar Update Engine starting Nov 5 23:55:57.649833 (ntainerd)[1536]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Nov 5 23:55:57.657965 tar[1531]: linux-arm64/LICENSE Nov 5 23:55:57.657965 tar[1531]: linux-arm64/helm Nov 5 23:55:57.659065 jq[1532]: true Nov 5 23:55:57.672151 systemd-logind[1514]: Watching system buttons on /dev/input/event0 (Power Button) Nov 5 23:55:57.673576 systemd-logind[1514]: New seat seat0. Nov 5 23:55:57.674695 systemd[1]: Started systemd-logind.service - User Login Management. Nov 5 23:55:57.685555 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Nov 5 23:55:57.688835 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 5 23:55:57.689662 dbus-daemon[1498]: [system] SELinux support is enabled Nov 5 23:55:57.696253 dbus-daemon[1498]: [system] Successfully activated service 'org.freedesktop.systemd1' Nov 5 23:55:57.709827 update_engine[1523]: I20251105 23:55:57.699576 1523 update_check_scheduler.cc:74] Next update check in 10m31s Nov 5 23:55:57.690873 systemd[1]: Started dbus.service - D-Bus System Message Bus. Nov 5 23:55:57.695262 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Nov 5 23:55:57.709939 extend-filesystems[1529]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Nov 5 23:55:57.709939 extend-filesystems[1529]: old_desc_blocks = 1, new_desc_blocks = 1 Nov 5 23:55:57.709939 extend-filesystems[1529]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. 
Nov 5 23:55:57.695289 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Nov 5 23:55:57.718662 extend-filesystems[1501]: Resized filesystem in /dev/vda9 Nov 5 23:55:57.697549 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Nov 5 23:55:57.697570 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Nov 5 23:55:57.699741 systemd[1]: Started update-engine.service - Update Engine. Nov 5 23:55:57.703267 systemd[1]: Started locksmithd.service - Cluster reboot manager. Nov 5 23:55:57.711392 systemd[1]: extend-filesystems.service: Deactivated successfully. Nov 5 23:55:57.712509 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Nov 5 23:55:57.731621 bash[1566]: Updated "/home/core/.ssh/authorized_keys" Nov 5 23:55:57.734071 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Nov 5 23:55:57.739888 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. 
Nov 5 23:55:57.763670 locksmithd[1554]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Nov 5 23:55:57.833358 containerd[1536]: time="2025-11-05T23:55:57Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Nov 5 23:55:57.834261 containerd[1536]: time="2025-11-05T23:55:57.834198241Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5 Nov 5 23:55:57.851738 containerd[1536]: time="2025-11-05T23:55:57.851624681Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="10.76µs" Nov 5 23:55:57.851738 containerd[1536]: time="2025-11-05T23:55:57.851668441Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Nov 5 23:55:57.851902 containerd[1536]: time="2025-11-05T23:55:57.851687361Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Nov 5 23:55:57.852311 containerd[1536]: time="2025-11-05T23:55:57.852284281Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Nov 5 23:55:57.852458 containerd[1536]: time="2025-11-05T23:55:57.852425721Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Nov 5 23:55:57.852590 containerd[1536]: time="2025-11-05T23:55:57.852571641Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Nov 5 23:55:57.852833 containerd[1536]: time="2025-11-05T23:55:57.852809281Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Nov 5 23:55:57.853025 containerd[1536]: time="2025-11-05T23:55:57.852896481Z" level=info 
msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Nov 5 23:55:57.853552 containerd[1536]: time="2025-11-05T23:55:57.853480081Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Nov 5 23:55:57.853692 containerd[1536]: time="2025-11-05T23:55:57.853623281Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Nov 5 23:55:57.853692 containerd[1536]: time="2025-11-05T23:55:57.853745241Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Nov 5 23:55:57.853692 containerd[1536]: time="2025-11-05T23:55:57.853760121Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Nov 5 23:55:57.854110 containerd[1536]: time="2025-11-05T23:55:57.854086881Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Nov 5 23:55:57.854668 containerd[1536]: time="2025-11-05T23:55:57.854597281Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Nov 5 23:55:57.854782 containerd[1536]: time="2025-11-05T23:55:57.854764241Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Nov 5 23:55:57.854894 containerd[1536]: time="2025-11-05T23:55:57.854877041Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Nov 5 23:55:57.855042 containerd[1536]: time="2025-11-05T23:55:57.855024801Z" level=info msg="loading plugin" 
id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Nov 5 23:55:57.855743 containerd[1536]: time="2025-11-05T23:55:57.855713401Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Nov 5 23:55:57.855877 containerd[1536]: time="2025-11-05T23:55:57.855859241Z" level=info msg="metadata content store policy set" policy=shared Nov 5 23:55:57.859687 containerd[1536]: time="2025-11-05T23:55:57.859654841Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Nov 5 23:55:57.859884 containerd[1536]: time="2025-11-05T23:55:57.859866161Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Nov 5 23:55:57.860030 containerd[1536]: time="2025-11-05T23:55:57.860010641Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Nov 5 23:55:57.860142 containerd[1536]: time="2025-11-05T23:55:57.860126161Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Nov 5 23:55:57.860234 containerd[1536]: time="2025-11-05T23:55:57.860213641Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Nov 5 23:55:57.860334 containerd[1536]: time="2025-11-05T23:55:57.860316081Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Nov 5 23:55:57.860407 containerd[1536]: time="2025-11-05T23:55:57.860394081Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Nov 5 23:55:57.861623 containerd[1536]: time="2025-11-05T23:55:57.860519721Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Nov 5 23:55:57.861623 containerd[1536]: time="2025-11-05T23:55:57.860545041Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service 
type=io.containerd.service.v1 Nov 5 23:55:57.861623 containerd[1536]: time="2025-11-05T23:55:57.860557641Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Nov 5 23:55:57.861623 containerd[1536]: time="2025-11-05T23:55:57.860575881Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Nov 5 23:55:57.861623 containerd[1536]: time="2025-11-05T23:55:57.860594041Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Nov 5 23:55:57.861623 containerd[1536]: time="2025-11-05T23:55:57.860722441Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Nov 5 23:55:57.861623 containerd[1536]: time="2025-11-05T23:55:57.860744321Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Nov 5 23:55:57.861623 containerd[1536]: time="2025-11-05T23:55:57.860760721Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Nov 5 23:55:57.861623 containerd[1536]: time="2025-11-05T23:55:57.860774241Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Nov 5 23:55:57.861623 containerd[1536]: time="2025-11-05T23:55:57.860786321Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Nov 5 23:55:57.861623 containerd[1536]: time="2025-11-05T23:55:57.860802521Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Nov 5 23:55:57.861623 containerd[1536]: time="2025-11-05T23:55:57.860814641Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Nov 5 23:55:57.861623 containerd[1536]: time="2025-11-05T23:55:57.860825121Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Nov 5 23:55:57.861623 containerd[1536]: 
time="2025-11-05T23:55:57.860845481Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Nov 5 23:55:57.861623 containerd[1536]: time="2025-11-05T23:55:57.860857841Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Nov 5 23:55:57.861910 containerd[1536]: time="2025-11-05T23:55:57.860867881Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Nov 5 23:55:57.861910 containerd[1536]: time="2025-11-05T23:55:57.861055921Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Nov 5 23:55:57.861910 containerd[1536]: time="2025-11-05T23:55:57.861181961Z" level=info msg="Start snapshots syncer" Nov 5 23:55:57.861910 containerd[1536]: time="2025-11-05T23:55:57.861228841Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Nov 5 23:55:57.862377 containerd[1536]: time="2025-11-05T23:55:57.862326601Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Nov 5 23:55:57.862730 containerd[1536]: time="2025-11-05T23:55:57.862687241Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Nov 5 23:55:57.862874 containerd[1536]: time="2025-11-05T23:55:57.862844481Z" level=info 
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Nov 5 23:55:57.863439 containerd[1536]: time="2025-11-05T23:55:57.863399121Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Nov 5 23:55:57.863478 containerd[1536]: time="2025-11-05T23:55:57.863457561Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Nov 5 23:55:57.863478 containerd[1536]: time="2025-11-05T23:55:57.863472321Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Nov 5 23:55:57.863534 containerd[1536]: time="2025-11-05T23:55:57.863484001Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Nov 5 23:55:57.863534 containerd[1536]: time="2025-11-05T23:55:57.863497361Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Nov 5 23:55:57.863534 containerd[1536]: time="2025-11-05T23:55:57.863508201Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Nov 5 23:55:57.863534 containerd[1536]: time="2025-11-05T23:55:57.863519481Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Nov 5 23:55:57.863594 containerd[1536]: time="2025-11-05T23:55:57.863547841Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Nov 5 23:55:57.863594 containerd[1536]: time="2025-11-05T23:55:57.863559521Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Nov 5 23:55:57.863594 containerd[1536]: time="2025-11-05T23:55:57.863570681Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Nov 5 23:55:57.863689 containerd[1536]: time="2025-11-05T23:55:57.863626121Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Nov 5 23:55:57.863689 containerd[1536]: time="2025-11-05T23:55:57.863643521Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Nov 5 23:55:57.863689 containerd[1536]: time="2025-11-05T23:55:57.863653001Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Nov 5 23:55:57.863689 containerd[1536]: time="2025-11-05T23:55:57.863663081Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Nov 5 23:55:57.863754 containerd[1536]: time="2025-11-05T23:55:57.863727401Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Nov 5 23:55:57.863754 containerd[1536]: time="2025-11-05T23:55:57.863741121Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Nov 5 23:55:57.863789 containerd[1536]: time="2025-11-05T23:55:57.863753281Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Nov 5 23:55:57.863924 containerd[1536]: time="2025-11-05T23:55:57.863831201Z" level=info msg="runtime interface created" Nov 5 23:55:57.863924 containerd[1536]: time="2025-11-05T23:55:57.863841321Z" level=info msg="created NRI interface" Nov 5 23:55:57.863924 containerd[1536]: time="2025-11-05T23:55:57.863855521Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Nov 5 23:55:57.863924 containerd[1536]: time="2025-11-05T23:55:57.863870521Z" level=info msg="Connect containerd service" Nov 5 23:55:57.863924 containerd[1536]: time="2025-11-05T23:55:57.863906201Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Nov 5 23:55:57.864768 containerd[1536]: 
time="2025-11-05T23:55:57.864738361Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 5 23:55:57.948651 containerd[1536]: time="2025-11-05T23:55:57.948476121Z" level=info msg="Start subscribing containerd event" Nov 5 23:55:57.949099 containerd[1536]: time="2025-11-05T23:55:57.949058561Z" level=info msg="Start recovering state" Nov 5 23:55:57.949500 containerd[1536]: time="2025-11-05T23:55:57.949381881Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Nov 5 23:55:57.949548 containerd[1536]: time="2025-11-05T23:55:57.949473481Z" level=info msg="Start event monitor" Nov 5 23:55:57.949573 containerd[1536]: time="2025-11-05T23:55:57.949563681Z" level=info msg="Start cni network conf syncer for default" Nov 5 23:55:57.949592 containerd[1536]: time="2025-11-05T23:55:57.949574801Z" level=info msg="Start streaming server" Nov 5 23:55:57.949592 containerd[1536]: time="2025-11-05T23:55:57.949584801Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Nov 5 23:55:57.949646 containerd[1536]: time="2025-11-05T23:55:57.949592081Z" level=info msg="runtime interface starting up..." Nov 5 23:55:57.949694 containerd[1536]: time="2025-11-05T23:55:57.949677561Z" level=info msg="starting plugins..." Nov 5 23:55:57.949725 containerd[1536]: time="2025-11-05T23:55:57.949709841Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Nov 5 23:55:57.950120 containerd[1536]: time="2025-11-05T23:55:57.949995121Z" level=info msg=serving... address=/run/containerd/containerd.sock Nov 5 23:55:57.950484 containerd[1536]: time="2025-11-05T23:55:57.950457761Z" level=info msg="containerd successfully booted in 0.117493s" Nov 5 23:55:57.950492 systemd[1]: Started containerd.service - containerd container runtime. 
Nov 5 23:55:57.977959 tar[1531]: linux-arm64/README.md Nov 5 23:55:58.000888 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Nov 5 23:55:58.343159 sshd_keygen[1526]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Nov 5 23:55:58.364524 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Nov 5 23:55:58.367376 systemd[1]: Starting issuegen.service - Generate /run/issue... Nov 5 23:55:58.386696 systemd[1]: issuegen.service: Deactivated successfully. Nov 5 23:55:58.386921 systemd[1]: Finished issuegen.service - Generate /run/issue. Nov 5 23:55:58.389839 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Nov 5 23:55:58.418538 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Nov 5 23:55:58.421576 systemd[1]: Started getty@tty1.service - Getty on tty1. Nov 5 23:55:58.423977 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Nov 5 23:55:58.425546 systemd[1]: Reached target getty.target - Login Prompts. Nov 5 23:55:59.181563 systemd-networkd[1433]: eth0: Gained IPv6LL Nov 5 23:55:59.184105 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Nov 5 23:55:59.186117 systemd[1]: Reached target network-online.target - Network is Online. Nov 5 23:55:59.188893 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Nov 5 23:55:59.191751 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 5 23:55:59.206373 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Nov 5 23:55:59.229505 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Nov 5 23:55:59.231256 systemd[1]: coreos-metadata.service: Deactivated successfully. Nov 5 23:55:59.231596 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Nov 5 23:55:59.234157 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. 
Nov 5 23:55:59.752219 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 5 23:55:59.754270 systemd[1]: Reached target multi-user.target - Multi-User System. Nov 5 23:55:59.755700 systemd[1]: Startup finished in 2.031s (kernel) + 4.639s (initrd) + 4.045s (userspace) = 10.717s. Nov 5 23:55:59.764087 (kubelet)[1636]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 5 23:56:00.075047 kubelet[1636]: E1105 23:56:00.074916 1636 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 5 23:56:00.077141 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 5 23:56:00.077296 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 5 23:56:00.077739 systemd[1]: kubelet.service: Consumed 702ms CPU time, 248.6M memory peak. Nov 5 23:56:03.539914 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Nov 5 23:56:03.540989 systemd[1]: Started sshd@0-10.0.0.117:22-10.0.0.1:58140.service - OpenSSH per-connection server daemon (10.0.0.1:58140). Nov 5 23:56:03.664467 sshd[1649]: Accepted publickey for core from 10.0.0.1 port 58140 ssh2: RSA SHA256:y8QDtx1I2NVYRtkqadojlmwp5Ggjvm91KVwbHRQlRRI Nov 5 23:56:03.666091 sshd-session[1649]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 23:56:03.677489 systemd-logind[1514]: New session 1 of user core. Nov 5 23:56:03.678524 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Nov 5 23:56:03.679991 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... 
Nov 5 23:56:03.708080 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Nov 5 23:56:03.710169 systemd[1]: Starting user@500.service - User Manager for UID 500... Nov 5 23:56:03.726117 (systemd)[1654]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Nov 5 23:56:03.728265 systemd-logind[1514]: New session c1 of user core. Nov 5 23:56:03.834203 systemd[1654]: Queued start job for default target default.target. Nov 5 23:56:03.855379 systemd[1654]: Created slice app.slice - User Application Slice. Nov 5 23:56:03.855409 systemd[1654]: Reached target paths.target - Paths. Nov 5 23:56:03.855468 systemd[1654]: Reached target timers.target - Timers. Nov 5 23:56:03.856592 systemd[1654]: Starting dbus.socket - D-Bus User Message Bus Socket... Nov 5 23:56:03.865892 systemd[1654]: Listening on dbus.socket - D-Bus User Message Bus Socket. Nov 5 23:56:03.865958 systemd[1654]: Reached target sockets.target - Sockets. Nov 5 23:56:03.865994 systemd[1654]: Reached target basic.target - Basic System. Nov 5 23:56:03.866022 systemd[1654]: Reached target default.target - Main User Target. Nov 5 23:56:03.866052 systemd[1654]: Startup finished in 133ms. Nov 5 23:56:03.866173 systemd[1]: Started user@500.service - User Manager for UID 500. Nov 5 23:56:03.867458 systemd[1]: Started session-1.scope - Session 1 of User core. Nov 5 23:56:03.928419 systemd[1]: Started sshd@1-10.0.0.117:22-10.0.0.1:58154.service - OpenSSH per-connection server daemon (10.0.0.1:58154). Nov 5 23:56:03.986948 sshd[1665]: Accepted publickey for core from 10.0.0.1 port 58154 ssh2: RSA SHA256:y8QDtx1I2NVYRtkqadojlmwp5Ggjvm91KVwbHRQlRRI Nov 5 23:56:03.988233 sshd-session[1665]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 23:56:03.992504 systemd-logind[1514]: New session 2 of user core. Nov 5 23:56:04.006601 systemd[1]: Started session-2.scope - Session 2 of User core. 
Nov 5 23:56:04.057484 sshd[1668]: Connection closed by 10.0.0.1 port 58154 Nov 5 23:56:04.057935 sshd-session[1665]: pam_unix(sshd:session): session closed for user core Nov 5 23:56:04.067479 systemd[1]: sshd@1-10.0.0.117:22-10.0.0.1:58154.service: Deactivated successfully. Nov 5 23:56:04.069798 systemd[1]: session-2.scope: Deactivated successfully. Nov 5 23:56:04.071088 systemd-logind[1514]: Session 2 logged out. Waiting for processes to exit. Nov 5 23:56:04.073571 systemd[1]: Started sshd@2-10.0.0.117:22-10.0.0.1:58156.service - OpenSSH per-connection server daemon (10.0.0.1:58156). Nov 5 23:56:04.074501 systemd-logind[1514]: Removed session 2. Nov 5 23:56:04.132580 sshd[1674]: Accepted publickey for core from 10.0.0.1 port 58156 ssh2: RSA SHA256:y8QDtx1I2NVYRtkqadojlmwp5Ggjvm91KVwbHRQlRRI Nov 5 23:56:04.133394 sshd-session[1674]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 23:56:04.137648 systemd-logind[1514]: New session 3 of user core. Nov 5 23:56:04.146596 systemd[1]: Started session-3.scope - Session 3 of User core. Nov 5 23:56:04.194951 sshd[1677]: Connection closed by 10.0.0.1 port 58156 Nov 5 23:56:04.195973 sshd-session[1674]: pam_unix(sshd:session): session closed for user core Nov 5 23:56:04.210659 systemd[1]: sshd@2-10.0.0.117:22-10.0.0.1:58156.service: Deactivated successfully. Nov 5 23:56:04.212156 systemd[1]: session-3.scope: Deactivated successfully. Nov 5 23:56:04.213994 systemd-logind[1514]: Session 3 logged out. Waiting for processes to exit. Nov 5 23:56:04.216014 systemd[1]: Started sshd@3-10.0.0.117:22-10.0.0.1:58170.service - OpenSSH per-connection server daemon (10.0.0.1:58170). Nov 5 23:56:04.217690 systemd-logind[1514]: Removed session 3. 
Nov 5 23:56:04.275698 sshd[1683]: Accepted publickey for core from 10.0.0.1 port 58170 ssh2: RSA SHA256:y8QDtx1I2NVYRtkqadojlmwp5Ggjvm91KVwbHRQlRRI Nov 5 23:56:04.277015 sshd-session[1683]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 23:56:04.281312 systemd-logind[1514]: New session 4 of user core. Nov 5 23:56:04.299621 systemd[1]: Started session-4.scope - Session 4 of User core. Nov 5 23:56:04.352131 sshd[1686]: Connection closed by 10.0.0.1 port 58170 Nov 5 23:56:04.351984 sshd-session[1683]: pam_unix(sshd:session): session closed for user core Nov 5 23:56:04.361471 systemd[1]: sshd@3-10.0.0.117:22-10.0.0.1:58170.service: Deactivated successfully. Nov 5 23:56:04.363052 systemd[1]: session-4.scope: Deactivated successfully. Nov 5 23:56:04.364990 systemd-logind[1514]: Session 4 logged out. Waiting for processes to exit. Nov 5 23:56:04.367445 systemd[1]: Started sshd@4-10.0.0.117:22-10.0.0.1:58174.service - OpenSSH per-connection server daemon (10.0.0.1:58174). Nov 5 23:56:04.367969 systemd-logind[1514]: Removed session 4. Nov 5 23:56:04.430907 sshd[1692]: Accepted publickey for core from 10.0.0.1 port 58174 ssh2: RSA SHA256:y8QDtx1I2NVYRtkqadojlmwp5Ggjvm91KVwbHRQlRRI Nov 5 23:56:04.432255 sshd-session[1692]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 23:56:04.436686 systemd-logind[1514]: New session 5 of user core. Nov 5 23:56:04.452623 systemd[1]: Started session-5.scope - Session 5 of User core. 
Nov 5 23:56:04.508488 sudo[1697]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Nov 5 23:56:04.508760 sudo[1697]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 5 23:56:04.526352 sudo[1697]: pam_unix(sudo:session): session closed for user root Nov 5 23:56:04.527946 sshd[1696]: Connection closed by 10.0.0.1 port 58174 Nov 5 23:56:04.528530 sshd-session[1692]: pam_unix(sshd:session): session closed for user core Nov 5 23:56:04.536637 systemd[1]: sshd@4-10.0.0.117:22-10.0.0.1:58174.service: Deactivated successfully. Nov 5 23:56:04.538148 systemd[1]: session-5.scope: Deactivated successfully. Nov 5 23:56:04.538956 systemd-logind[1514]: Session 5 logged out. Waiting for processes to exit. Nov 5 23:56:04.541655 systemd[1]: Started sshd@5-10.0.0.117:22-10.0.0.1:58176.service - OpenSSH per-connection server daemon (10.0.0.1:58176). Nov 5 23:56:04.542636 systemd-logind[1514]: Removed session 5. Nov 5 23:56:04.599031 sshd[1703]: Accepted publickey for core from 10.0.0.1 port 58176 ssh2: RSA SHA256:y8QDtx1I2NVYRtkqadojlmwp5Ggjvm91KVwbHRQlRRI Nov 5 23:56:04.600482 sshd-session[1703]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 23:56:04.609813 systemd-logind[1514]: New session 6 of user core. Nov 5 23:56:04.621673 systemd[1]: Started session-6.scope - Session 6 of User core. 
Nov 5 23:56:04.672964 sudo[1708]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Nov 5 23:56:04.673581 sudo[1708]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 5 23:56:04.745556 sudo[1708]: pam_unix(sudo:session): session closed for user root Nov 5 23:56:04.750794 sudo[1707]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Nov 5 23:56:04.751054 sudo[1707]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 5 23:56:04.760745 systemd[1]: Starting audit-rules.service - Load Audit Rules... Nov 5 23:56:04.810449 augenrules[1730]: No rules Nov 5 23:56:04.811667 systemd[1]: audit-rules.service: Deactivated successfully. Nov 5 23:56:04.813498 systemd[1]: Finished audit-rules.service - Load Audit Rules. Nov 5 23:56:04.814887 sudo[1707]: pam_unix(sudo:session): session closed for user root Nov 5 23:56:04.817706 sshd[1706]: Connection closed by 10.0.0.1 port 58176 Nov 5 23:56:04.817384 sshd-session[1703]: pam_unix(sshd:session): session closed for user core Nov 5 23:56:04.826813 systemd[1]: sshd@5-10.0.0.117:22-10.0.0.1:58176.service: Deactivated successfully. Nov 5 23:56:04.828482 systemd[1]: session-6.scope: Deactivated successfully. Nov 5 23:56:04.830500 systemd-logind[1514]: Session 6 logged out. Waiting for processes to exit. Nov 5 23:56:04.832201 systemd-logind[1514]: Removed session 6. Nov 5 23:56:04.833921 systemd[1]: Started sshd@6-10.0.0.117:22-10.0.0.1:58192.service - OpenSSH per-connection server daemon (10.0.0.1:58192). Nov 5 23:56:04.891276 sshd[1739]: Accepted publickey for core from 10.0.0.1 port 58192 ssh2: RSA SHA256:y8QDtx1I2NVYRtkqadojlmwp5Ggjvm91KVwbHRQlRRI Nov 5 23:56:04.892672 sshd-session[1739]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 23:56:04.897372 systemd-logind[1514]: New session 7 of user core. 
Nov 5 23:56:04.913620 systemd[1]: Started session-7.scope - Session 7 of User core. Nov 5 23:56:04.965583 sudo[1743]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Nov 5 23:56:04.966234 sudo[1743]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 5 23:56:05.240942 systemd[1]: Starting docker.service - Docker Application Container Engine... Nov 5 23:56:05.255731 (dockerd)[1764]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Nov 5 23:56:05.455218 dockerd[1764]: time="2025-11-05T23:56:05.455141481Z" level=info msg="Starting up" Nov 5 23:56:05.457596 dockerd[1764]: time="2025-11-05T23:56:05.456740361Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Nov 5 23:56:05.472003 dockerd[1764]: time="2025-11-05T23:56:05.471937681Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Nov 5 23:56:05.486017 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport2468153363-merged.mount: Deactivated successfully. Nov 5 23:56:05.508569 dockerd[1764]: time="2025-11-05T23:56:05.508467321Z" level=info msg="Loading containers: start." Nov 5 23:56:05.518529 kernel: Initializing XFRM netlink socket Nov 5 23:56:05.721065 systemd-networkd[1433]: docker0: Link UP Nov 5 23:56:05.724778 dockerd[1764]: time="2025-11-05T23:56:05.724739201Z" level=info msg="Loading containers: done." 
Nov 5 23:56:05.737686 dockerd[1764]: time="2025-11-05T23:56:05.737342841Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Nov 5 23:56:05.737686 dockerd[1764]: time="2025-11-05T23:56:05.737445081Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Nov 5 23:56:05.737686 dockerd[1764]: time="2025-11-05T23:56:05.737524001Z" level=info msg="Initializing buildkit" Nov 5 23:56:05.758971 dockerd[1764]: time="2025-11-05T23:56:05.758887161Z" level=info msg="Completed buildkit initialization" Nov 5 23:56:05.765693 dockerd[1764]: time="2025-11-05T23:56:05.765599401Z" level=info msg="Daemon has completed initialization" Nov 5 23:56:05.765787 dockerd[1764]: time="2025-11-05T23:56:05.765661041Z" level=info msg="API listen on /run/docker.sock" Nov 5 23:56:05.765977 systemd[1]: Started docker.service - Docker Application Container Engine. Nov 5 23:56:06.179757 containerd[1536]: time="2025-11-05T23:56:06.179717281Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.1\"" Nov 5 23:56:06.712395 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount842864071.mount: Deactivated successfully. 
Nov 5 23:56:07.677605 containerd[1536]: time="2025-11-05T23:56:07.677547841Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 23:56:07.678740 containerd[1536]: time="2025-11-05T23:56:07.678704921Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.1: active requests=0, bytes read=24574512" Nov 5 23:56:07.680465 containerd[1536]: time="2025-11-05T23:56:07.679895361Z" level=info msg="ImageCreate event name:\"sha256:43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 23:56:07.685218 containerd[1536]: time="2025-11-05T23:56:07.685173761Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 23:56:07.686774 containerd[1536]: time="2025-11-05T23:56:07.686738561Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.1\" with image id \"sha256:43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.1\", repo digest \"registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902\", size \"24571109\" in 1.5069856s" Nov 5 23:56:07.686840 containerd[1536]: time="2025-11-05T23:56:07.686784281Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.1\" returns image reference \"sha256:43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196\"" Nov 5 23:56:07.687322 containerd[1536]: time="2025-11-05T23:56:07.687298081Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.1\"" Nov 5 23:56:08.672073 containerd[1536]: time="2025-11-05T23:56:08.672001001Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.1\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 23:56:08.672863 containerd[1536]: time="2025-11-05T23:56:08.672625721Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.1: active requests=0, bytes read=19132145" Nov 5 23:56:08.673815 containerd[1536]: time="2025-11-05T23:56:08.673776081Z" level=info msg="ImageCreate event name:\"sha256:7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 23:56:08.676779 containerd[1536]: time="2025-11-05T23:56:08.676743201Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 23:56:08.678154 containerd[1536]: time="2025-11-05T23:56:08.678104241Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.1\" with image id \"sha256:7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.1\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89\", size \"20720058\" in 990.77124ms" Nov 5 23:56:08.678154 containerd[1536]: time="2025-11-05T23:56:08.678148641Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.1\" returns image reference \"sha256:7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a\"" Nov 5 23:56:08.678891 containerd[1536]: time="2025-11-05T23:56:08.678868161Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.1\"" Nov 5 23:56:09.560540 containerd[1536]: time="2025-11-05T23:56:09.560482681Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 23:56:09.560998 containerd[1536]: time="2025-11-05T23:56:09.560964121Z" level=info 
msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.1: active requests=0, bytes read=14191886" Nov 5 23:56:09.561837 containerd[1536]: time="2025-11-05T23:56:09.561790841Z" level=info msg="ImageCreate event name:\"sha256:b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 23:56:09.564346 containerd[1536]: time="2025-11-05T23:56:09.564317761Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 23:56:09.565545 containerd[1536]: time="2025-11-05T23:56:09.565401561Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.1\" with image id \"sha256:b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.1\", repo digest \"registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500\", size \"15779817\" in 886.38968ms" Nov 5 23:56:09.565545 containerd[1536]: time="2025-11-05T23:56:09.565451281Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.1\" returns image reference \"sha256:b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0\"" Nov 5 23:56:09.565918 containerd[1536]: time="2025-11-05T23:56:09.565895041Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.1\"" Nov 5 23:56:10.327191 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Nov 5 23:56:10.329587 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 5 23:56:10.476058 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Nov 5 23:56:10.479205 (kubelet)[2060]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 5 23:56:10.511719 kubelet[2060]: E1105 23:56:10.511679 2060 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 5 23:56:10.514989 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 5 23:56:10.515130 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 5 23:56:10.515807 systemd[1]: kubelet.service: Consumed 135ms CPU time, 109.5M memory peak. Nov 5 23:56:10.655978 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3081626815.mount: Deactivated successfully. Nov 5 23:56:10.886603 containerd[1536]: time="2025-11-05T23:56:10.886526721Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 23:56:10.887153 containerd[1536]: time="2025-11-05T23:56:10.887101121Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.1: active requests=0, bytes read=22789030" Nov 5 23:56:10.887906 containerd[1536]: time="2025-11-05T23:56:10.887872641Z" level=info msg="ImageCreate event name:\"sha256:05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 23:56:10.889508 containerd[1536]: time="2025-11-05T23:56:10.889483281Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 23:56:10.890043 containerd[1536]: time="2025-11-05T23:56:10.890010921Z" level=info msg="Pulled 
image \"registry.k8s.io/kube-proxy:v1.34.1\" with image id \"sha256:05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9\", repo tag \"registry.k8s.io/kube-proxy:v1.34.1\", repo digest \"registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a\", size \"22788047\" in 1.32408672s" Nov 5 23:56:10.890043 containerd[1536]: time="2025-11-05T23:56:10.890038961Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.1\" returns image reference \"sha256:05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9\"" Nov 5 23:56:10.890473 containerd[1536]: time="2025-11-05T23:56:10.890450921Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\"" Nov 5 23:56:11.413461 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2766340790.mount: Deactivated successfully. Nov 5 23:56:12.311643 containerd[1536]: time="2025-11-05T23:56:12.311592081Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 23:56:12.312543 containerd[1536]: time="2025-11-05T23:56:12.312520761Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=20395408" Nov 5 23:56:12.313186 containerd[1536]: time="2025-11-05T23:56:12.313089321Z" level=info msg="ImageCreate event name:\"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 23:56:12.315774 containerd[1536]: time="2025-11-05T23:56:12.315742921Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 23:56:12.316988 containerd[1536]: time="2025-11-05T23:56:12.316942401Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id 
\"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"20392204\" in 1.4264598s" Nov 5 23:56:12.316988 containerd[1536]: time="2025-11-05T23:56:12.316982241Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc\"" Nov 5 23:56:12.317411 containerd[1536]: time="2025-11-05T23:56:12.317376161Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Nov 5 23:56:12.778492 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2529575985.mount: Deactivated successfully. Nov 5 23:56:12.783374 containerd[1536]: time="2025-11-05T23:56:12.783324721Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 23:56:12.784474 containerd[1536]: time="2025-11-05T23:56:12.784443681Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=268711" Nov 5 23:56:12.785450 containerd[1536]: time="2025-11-05T23:56:12.785407961Z" level=info msg="ImageCreate event name:\"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 23:56:12.787523 containerd[1536]: time="2025-11-05T23:56:12.787488001Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 23:56:12.788088 containerd[1536]: time="2025-11-05T23:56:12.788052681Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\", repo tag \"registry.k8s.io/pause:3.10.1\", repo 
digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"267939\" in 470.56676ms" Nov 5 23:56:12.788088 containerd[1536]: time="2025-11-05T23:56:12.788083001Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\"" Nov 5 23:56:12.788552 containerd[1536]: time="2025-11-05T23:56:12.788486401Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\"" Nov 5 23:56:15.937237 containerd[1536]: time="2025-11-05T23:56:15.936252441Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.4-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 23:56:15.937594 containerd[1536]: time="2025-11-05T23:56:15.937568721Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.4-0: active requests=0, bytes read=97410768" Nov 5 23:56:15.938691 containerd[1536]: time="2025-11-05T23:56:15.938663241Z" level=info msg="ImageCreate event name:\"sha256:a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 23:56:15.942853 containerd[1536]: time="2025-11-05T23:56:15.942696401Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 23:56:15.943867 containerd[1536]: time="2025-11-05T23:56:15.943825761Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.4-0\" with image id \"sha256:a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e\", repo tag \"registry.k8s.io/etcd:3.6.4-0\", repo digest \"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\", size \"98207481\" in 3.15530936s" Nov 5 23:56:15.943867 containerd[1536]: time="2025-11-05T23:56:15.943861241Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\" returns 
image reference \"sha256:a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e\"" Nov 5 23:56:19.726928 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 5 23:56:19.727451 systemd[1]: kubelet.service: Consumed 135ms CPU time, 109.5M memory peak. Nov 5 23:56:19.729879 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 5 23:56:19.754500 systemd[1]: Reload requested from client PID 2199 ('systemctl') (unit session-7.scope)... Nov 5 23:56:19.754515 systemd[1]: Reloading... Nov 5 23:56:19.819459 zram_generator::config[2241]: No configuration found. Nov 5 23:56:20.021173 systemd[1]: Reloading finished in 266 ms. Nov 5 23:56:20.071930 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 5 23:56:20.074523 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Nov 5 23:56:20.075252 systemd[1]: kubelet.service: Deactivated successfully. Nov 5 23:56:20.075567 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 5 23:56:20.075651 systemd[1]: kubelet.service: Consumed 90ms CPU time, 95.2M memory peak. Nov 5 23:56:20.076990 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 5 23:56:20.182820 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 5 23:56:20.187546 (kubelet)[2288]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 5 23:56:20.221922 kubelet[2288]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 5 23:56:20.221922 kubelet[2288]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Nov 5 23:56:20.222176 kubelet[2288]: I1105 23:56:20.221989 2288 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 5 23:56:21.161302 kubelet[2288]: I1105 23:56:21.161251 2288 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Nov 5 23:56:21.161302 kubelet[2288]: I1105 23:56:21.161286 2288 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 5 23:56:21.161302 kubelet[2288]: I1105 23:56:21.161313 2288 watchdog_linux.go:95] "Systemd watchdog is not enabled" Nov 5 23:56:21.161478 kubelet[2288]: I1105 23:56:21.161319 2288 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Nov 5 23:56:21.161587 kubelet[2288]: I1105 23:56:21.161558 2288 server.go:956] "Client rotation is on, will bootstrap in background" Nov 5 23:56:21.168222 kubelet[2288]: E1105 23:56:21.168182 2288 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.117:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.117:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Nov 5 23:56:21.168358 kubelet[2288]: I1105 23:56:21.168338 2288 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 5 23:56:21.178437 kubelet[2288]: I1105 23:56:21.176619 2288 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Nov 5 23:56:21.179233 kubelet[2288]: I1105 23:56:21.179214 2288 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Nov 5 23:56:21.179493 kubelet[2288]: I1105 23:56:21.179468 2288 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 5 23:56:21.179633 kubelet[2288]: I1105 23:56:21.179493 2288 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 5 23:56:21.179710 kubelet[2288]: I1105 23:56:21.179634 2288 topology_manager.go:138] "Creating topology manager with none policy" Nov 5 23:56:21.179710 
kubelet[2288]: I1105 23:56:21.179642 2288 container_manager_linux.go:306] "Creating device plugin manager" Nov 5 23:56:21.179758 kubelet[2288]: I1105 23:56:21.179740 2288 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Nov 5 23:56:21.181825 kubelet[2288]: I1105 23:56:21.181795 2288 state_mem.go:36] "Initialized new in-memory state store" Nov 5 23:56:21.182937 kubelet[2288]: I1105 23:56:21.182904 2288 kubelet.go:475] "Attempting to sync node with API server" Nov 5 23:56:21.182937 kubelet[2288]: I1105 23:56:21.182932 2288 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 5 23:56:21.183749 kubelet[2288]: E1105 23:56:21.183705 2288 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.117:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.117:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Nov 5 23:56:21.186148 kubelet[2288]: I1105 23:56:21.184023 2288 kubelet.go:387] "Adding apiserver pod source" Nov 5 23:56:21.186148 kubelet[2288]: I1105 23:56:21.184251 2288 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 5 23:56:21.186148 kubelet[2288]: E1105 23:56:21.184754 2288 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.117:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.117:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Nov 5 23:56:21.186148 kubelet[2288]: I1105 23:56:21.186061 2288 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Nov 5 23:56:21.186791 kubelet[2288]: I1105 23:56:21.186743 2288 kubelet.go:940] "Not starting ClusterTrustBundle informer because we 
are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Nov 5 23:56:21.186791 kubelet[2288]: I1105 23:56:21.186784 2288 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Nov 5 23:56:21.186862 kubelet[2288]: W1105 23:56:21.186822 2288 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Nov 5 23:56:21.189070 kubelet[2288]: I1105 23:56:21.189037 2288 server.go:1262] "Started kubelet" Nov 5 23:56:21.189561 kubelet[2288]: I1105 23:56:21.189470 2288 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 5 23:56:21.189561 kubelet[2288]: I1105 23:56:21.189524 2288 server_v1.go:49] "podresources" method="list" useActivePods=true Nov 5 23:56:21.189678 kubelet[2288]: I1105 23:56:21.189655 2288 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Nov 5 23:56:21.189805 kubelet[2288]: I1105 23:56:21.189780 2288 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 5 23:56:21.190304 kubelet[2288]: I1105 23:56:21.190273 2288 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 5 23:56:21.190680 kubelet[2288]: I1105 23:56:21.190661 2288 server.go:310] "Adding debug handlers to kubelet server" Nov 5 23:56:21.191421 kubelet[2288]: I1105 23:56:21.191395 2288 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 5 23:56:21.193219 kubelet[2288]: E1105 23:56:21.193197 2288 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 5 23:56:21.193301 kubelet[2288]: I1105 23:56:21.193291 2288 volume_manager.go:313] "Starting Kubelet Volume Manager" Nov 5 23:56:21.193559 kubelet[2288]: I1105 
23:56:21.193541 2288 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Nov 5 23:56:21.193662 kubelet[2288]: I1105 23:56:21.193652 2288 reconciler.go:29] "Reconciler: start to sync state" Nov 5 23:56:21.194034 kubelet[2288]: E1105 23:56:21.194013 2288 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.117:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.117:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Nov 5 23:56:21.194623 kubelet[2288]: E1105 23:56:21.194589 2288 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.117:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.117:6443: connect: connection refused" interval="200ms" Nov 5 23:56:21.194900 kubelet[2288]: I1105 23:56:21.194883 2288 factory.go:223] Registration of the systemd container factory successfully Nov 5 23:56:21.195046 kubelet[2288]: I1105 23:56:21.195029 2288 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 5 23:56:21.196228 kubelet[2288]: E1105 23:56:21.196205 2288 kubelet.go:1615] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 5 23:56:21.196483 kubelet[2288]: I1105 23:56:21.196467 2288 factory.go:223] Registration of the containerd container factory successfully Nov 5 23:56:21.204907 kubelet[2288]: E1105 23:56:21.203788 2288 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.117:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.117:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1875419f328b33f1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-11-05 23:56:21.189006321 +0000 UTC m=+0.998902641,LastTimestamp:2025-11-05 23:56:21.189006321 +0000 UTC m=+0.998902641,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Nov 5 23:56:21.213907 kubelet[2288]: I1105 23:56:21.213873 2288 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Nov 5 23:56:21.214776 kubelet[2288]: I1105 23:56:21.214746 2288 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 5 23:56:21.214776 kubelet[2288]: I1105 23:56:21.214768 2288 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 5 23:56:21.214847 kubelet[2288]: I1105 23:56:21.214785 2288 state_mem.go:36] "Initialized new in-memory state store" Nov 5 23:56:21.216133 kubelet[2288]: I1105 23:56:21.216105 2288 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Nov 5 23:56:21.217248 kubelet[2288]: I1105 23:56:21.216181 2288 status_manager.go:244] "Starting to sync pod status with apiserver" Nov 5 23:56:21.217248 kubelet[2288]: I1105 23:56:21.216207 2288 kubelet.go:2427] "Starting kubelet main sync loop" Nov 5 23:56:21.217248 kubelet[2288]: E1105 23:56:21.216239 2288 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 5 23:56:21.217248 kubelet[2288]: E1105 23:56:21.216883 2288 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.117:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.117:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Nov 5 23:56:21.218178 kubelet[2288]: I1105 23:56:21.218160 2288 policy_none.go:49] "None policy: Start" Nov 5 23:56:21.218178 kubelet[2288]: I1105 23:56:21.218180 2288 memory_manager.go:187] "Starting memorymanager" policy="None" Nov 5 23:56:21.218286 kubelet[2288]: I1105 23:56:21.218191 2288 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Nov 5 23:56:21.219707 kubelet[2288]: I1105 23:56:21.219677 2288 policy_none.go:47] "Start" Nov 5 23:56:21.223376 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Nov 5 23:56:21.243265 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Nov 5 23:56:21.246490 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Nov 5 23:56:21.267524 kubelet[2288]: E1105 23:56:21.267489 2288 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Nov 5 23:56:21.267829 kubelet[2288]: I1105 23:56:21.267711 2288 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 5 23:56:21.267829 kubelet[2288]: I1105 23:56:21.267724 2288 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 5 23:56:21.268014 kubelet[2288]: I1105 23:56:21.267985 2288 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 5 23:56:21.269651 kubelet[2288]: E1105 23:56:21.269629 2288 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Nov 5 23:56:21.269718 kubelet[2288]: E1105 23:56:21.269685 2288 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Nov 5 23:56:21.333768 systemd[1]: Created slice kubepods-burstable-pod72ae43bf624d285361487631af8a6ba6.slice - libcontainer container kubepods-burstable-pod72ae43bf624d285361487631af8a6ba6.slice. Nov 5 23:56:21.344185 kubelet[2288]: E1105 23:56:21.344142 2288 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 5 23:56:21.346296 systemd[1]: Created slice kubepods-burstable-pod15a91e4eb9a14c34bb44125e403ef821.slice - libcontainer container kubepods-burstable-pod15a91e4eb9a14c34bb44125e403ef821.slice. 
Nov 5 23:56:21.357547 kubelet[2288]: E1105 23:56:21.357524 2288 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 5 23:56:21.360014 systemd[1]: Created slice kubepods-burstable-podce161b3b11c90b0b844f2e4f86b4e8cd.slice - libcontainer container kubepods-burstable-podce161b3b11c90b0b844f2e4f86b4e8cd.slice. Nov 5 23:56:21.361708 kubelet[2288]: E1105 23:56:21.361676 2288 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 5 23:56:21.369678 kubelet[2288]: I1105 23:56:21.369645 2288 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 5 23:56:21.370137 kubelet[2288]: E1105 23:56:21.370103 2288 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.117:6443/api/v1/nodes\": dial tcp 10.0.0.117:6443: connect: connection refused" node="localhost" Nov 5 23:56:21.395726 kubelet[2288]: E1105 23:56:21.395689 2288 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.117:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.117:6443: connect: connection refused" interval="400ms" Nov 5 23:56:21.495208 kubelet[2288]: I1105 23:56:21.495057 2288 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/72ae43bf624d285361487631af8a6ba6-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"72ae43bf624d285361487631af8a6ba6\") " pod="kube-system/kube-scheduler-localhost" Nov 5 23:56:21.495208 kubelet[2288]: I1105 23:56:21.495123 2288 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/15a91e4eb9a14c34bb44125e403ef821-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"15a91e4eb9a14c34bb44125e403ef821\") " pod="kube-system/kube-apiserver-localhost" Nov 5 23:56:21.495208 kubelet[2288]: I1105 23:56:21.495145 2288 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Nov 5 23:56:21.495208 kubelet[2288]: I1105 23:56:21.495158 2288 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/15a91e4eb9a14c34bb44125e403ef821-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"15a91e4eb9a14c34bb44125e403ef821\") " pod="kube-system/kube-apiserver-localhost" Nov 5 23:56:21.495208 kubelet[2288]: I1105 23:56:21.495172 2288 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/15a91e4eb9a14c34bb44125e403ef821-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"15a91e4eb9a14c34bb44125e403ef821\") " pod="kube-system/kube-apiserver-localhost" Nov 5 23:56:21.495415 kubelet[2288]: I1105 23:56:21.495190 2288 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Nov 5 23:56:21.495415 kubelet[2288]: I1105 23:56:21.495207 2288 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Nov 5 23:56:21.495415 kubelet[2288]: I1105 23:56:21.495220 2288 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Nov 5 23:56:21.495415 kubelet[2288]: I1105 23:56:21.495236 2288 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Nov 5 23:56:21.571581 kubelet[2288]: I1105 23:56:21.571545 2288 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 5 23:56:21.571888 kubelet[2288]: E1105 23:56:21.571867 2288 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.117:6443/api/v1/nodes\": dial tcp 10.0.0.117:6443: connect: connection refused" node="localhost" Nov 5 23:56:21.647367 containerd[1536]: time="2025-11-05T23:56:21.647328081Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:72ae43bf624d285361487631af8a6ba6,Namespace:kube-system,Attempt:0,}" Nov 5 23:56:21.664404 containerd[1536]: time="2025-11-05T23:56:21.664358881Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:15a91e4eb9a14c34bb44125e403ef821,Namespace:kube-system,Attempt:0,}" Nov 5 23:56:21.668235 containerd[1536]: time="2025-11-05T23:56:21.668198041Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:ce161b3b11c90b0b844f2e4f86b4e8cd,Namespace:kube-system,Attempt:0,}" Nov 5 23:56:21.796904 kubelet[2288]: E1105 23:56:21.796790 2288 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.117:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.117:6443: connect: connection refused" interval="800ms" Nov 5 23:56:21.973945 kubelet[2288]: I1105 23:56:21.973898 2288 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 5 23:56:21.974302 kubelet[2288]: E1105 23:56:21.974270 2288 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.117:6443/api/v1/nodes\": dial tcp 10.0.0.117:6443: connect: connection refused" node="localhost" Nov 5 23:56:22.169617 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2475517826.mount: Deactivated successfully. Nov 5 23:56:22.174038 containerd[1536]: time="2025-11-05T23:56:22.173986161Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 5 23:56:22.175493 containerd[1536]: time="2025-11-05T23:56:22.175459801Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705" Nov 5 23:56:22.176946 kubelet[2288]: E1105 23:56:22.176901 2288 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.117:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.117:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Nov 5 23:56:22.178044 containerd[1536]: time="2025-11-05T23:56:22.177990361Z" level=info msg="ImageCreate event 
name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 5 23:56:22.180275 containerd[1536]: time="2025-11-05T23:56:22.180227961Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 5 23:56:22.181463 containerd[1536]: time="2025-11-05T23:56:22.181361761Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 5 23:56:22.181775 containerd[1536]: time="2025-11-05T23:56:22.181729521Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Nov 5 23:56:22.182316 containerd[1536]: time="2025-11-05T23:56:22.182297281Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Nov 5 23:56:22.183005 containerd[1536]: time="2025-11-05T23:56:22.182981441Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 5 23:56:22.183679 containerd[1536]: time="2025-11-05T23:56:22.183653601Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 534.08276ms" Nov 5 23:56:22.187334 containerd[1536]: time="2025-11-05T23:56:22.187286121Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id 
\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 517.17316ms" Nov 5 23:56:22.188043 containerd[1536]: time="2025-11-05T23:56:22.188011881Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 519.20144ms" Nov 5 23:56:22.204103 containerd[1536]: time="2025-11-05T23:56:22.204029761Z" level=info msg="connecting to shim abe27e8a13e2d4b6ac409e9b0a34e69b635b6179cbe0c2d04b903e538049161c" address="unix:///run/containerd/s/90064db5a0963e318a99623cfaa8fefa6ebcf57caece759bab537cff3de6cdb5" namespace=k8s.io protocol=ttrpc version=3 Nov 5 23:56:22.216063 containerd[1536]: time="2025-11-05T23:56:22.216011161Z" level=info msg="connecting to shim 2c276705f5b696c8c21c1344225c30fe0a160462f955b3dc1a1864d51418cd9e" address="unix:///run/containerd/s/9333798d056553332328d3ad4b5b687e959ff4b082cd70f114a5cd306b756648" namespace=k8s.io protocol=ttrpc version=3 Nov 5 23:56:22.217960 containerd[1536]: time="2025-11-05T23:56:22.217586241Z" level=info msg="connecting to shim 2696ce4f765504716ba789cd8040e7173378e94aea71063be7fcf7dcb3579ab4" address="unix:///run/containerd/s/61d3cb9d9650527c30fb919ff156b1b240e5b726b22c974fe02fdbecb0c1381a" namespace=k8s.io protocol=ttrpc version=3 Nov 5 23:56:22.236632 systemd[1]: Started cri-containerd-abe27e8a13e2d4b6ac409e9b0a34e69b635b6179cbe0c2d04b903e538049161c.scope - libcontainer container abe27e8a13e2d4b6ac409e9b0a34e69b635b6179cbe0c2d04b903e538049161c. 
Nov 5 23:56:22.243030 systemd[1]: Started cri-containerd-2696ce4f765504716ba789cd8040e7173378e94aea71063be7fcf7dcb3579ab4.scope - libcontainer container 2696ce4f765504716ba789cd8040e7173378e94aea71063be7fcf7dcb3579ab4. Nov 5 23:56:22.244780 systemd[1]: Started cri-containerd-2c276705f5b696c8c21c1344225c30fe0a160462f955b3dc1a1864d51418cd9e.scope - libcontainer container 2c276705f5b696c8c21c1344225c30fe0a160462f955b3dc1a1864d51418cd9e. Nov 5 23:56:22.284974 containerd[1536]: time="2025-11-05T23:56:22.284931041Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:72ae43bf624d285361487631af8a6ba6,Namespace:kube-system,Attempt:0,} returns sandbox id \"abe27e8a13e2d4b6ac409e9b0a34e69b635b6179cbe0c2d04b903e538049161c\"" Nov 5 23:56:22.291583 containerd[1536]: time="2025-11-05T23:56:22.291539041Z" level=info msg="CreateContainer within sandbox \"abe27e8a13e2d4b6ac409e9b0a34e69b635b6179cbe0c2d04b903e538049161c\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Nov 5 23:56:22.291841 containerd[1536]: time="2025-11-05T23:56:22.291738921Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:ce161b3b11c90b0b844f2e4f86b4e8cd,Namespace:kube-system,Attempt:0,} returns sandbox id \"2696ce4f765504716ba789cd8040e7173378e94aea71063be7fcf7dcb3579ab4\"" Nov 5 23:56:22.293255 containerd[1536]: time="2025-11-05T23:56:22.293221761Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:15a91e4eb9a14c34bb44125e403ef821,Namespace:kube-system,Attempt:0,} returns sandbox id \"2c276705f5b696c8c21c1344225c30fe0a160462f955b3dc1a1864d51418cd9e\"" Nov 5 23:56:22.296096 containerd[1536]: time="2025-11-05T23:56:22.296062401Z" level=info msg="CreateContainer within sandbox \"2696ce4f765504716ba789cd8040e7173378e94aea71063be7fcf7dcb3579ab4\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Nov 5 23:56:22.297819 containerd[1536]: 
time="2025-11-05T23:56:22.297745561Z" level=info msg="CreateContainer within sandbox \"2c276705f5b696c8c21c1344225c30fe0a160462f955b3dc1a1864d51418cd9e\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Nov 5 23:56:22.302839 containerd[1536]: time="2025-11-05T23:56:22.302806481Z" level=info msg="Container d4a5e55a07f9c49014fa658eb6010020231f480c9a46361998e5b89d37173442: CDI devices from CRI Config.CDIDevices: []" Nov 5 23:56:22.306933 containerd[1536]: time="2025-11-05T23:56:22.306897521Z" level=info msg="Container ae101e224593a49478f7599e0d68aecba0f423cb12893c832dfb2fb60f833431: CDI devices from CRI Config.CDIDevices: []" Nov 5 23:56:22.309852 containerd[1536]: time="2025-11-05T23:56:22.309821561Z" level=info msg="Container 6e97860846a753b42dd58ebf20a87f073ccad2213f5acc229ae5b2be25a0a7a9: CDI devices from CRI Config.CDIDevices: []" Nov 5 23:56:22.312264 containerd[1536]: time="2025-11-05T23:56:22.312227961Z" level=info msg="CreateContainer within sandbox \"abe27e8a13e2d4b6ac409e9b0a34e69b635b6179cbe0c2d04b903e538049161c\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"d4a5e55a07f9c49014fa658eb6010020231f480c9a46361998e5b89d37173442\"" Nov 5 23:56:22.312850 containerd[1536]: time="2025-11-05T23:56:22.312822961Z" level=info msg="StartContainer for \"d4a5e55a07f9c49014fa658eb6010020231f480c9a46361998e5b89d37173442\"" Nov 5 23:56:22.314238 containerd[1536]: time="2025-11-05T23:56:22.314189441Z" level=info msg="connecting to shim d4a5e55a07f9c49014fa658eb6010020231f480c9a46361998e5b89d37173442" address="unix:///run/containerd/s/90064db5a0963e318a99623cfaa8fefa6ebcf57caece759bab537cff3de6cdb5" protocol=ttrpc version=3 Nov 5 23:56:22.317074 containerd[1536]: time="2025-11-05T23:56:22.317022441Z" level=info msg="CreateContainer within sandbox \"2696ce4f765504716ba789cd8040e7173378e94aea71063be7fcf7dcb3579ab4\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id 
\"ae101e224593a49478f7599e0d68aecba0f423cb12893c832dfb2fb60f833431\"" Nov 5 23:56:22.317524 containerd[1536]: time="2025-11-05T23:56:22.317492001Z" level=info msg="StartContainer for \"ae101e224593a49478f7599e0d68aecba0f423cb12893c832dfb2fb60f833431\"" Nov 5 23:56:22.318798 containerd[1536]: time="2025-11-05T23:56:22.318612041Z" level=info msg="connecting to shim ae101e224593a49478f7599e0d68aecba0f423cb12893c832dfb2fb60f833431" address="unix:///run/containerd/s/61d3cb9d9650527c30fb919ff156b1b240e5b726b22c974fe02fdbecb0c1381a" protocol=ttrpc version=3 Nov 5 23:56:22.324827 containerd[1536]: time="2025-11-05T23:56:22.324778601Z" level=info msg="CreateContainer within sandbox \"2c276705f5b696c8c21c1344225c30fe0a160462f955b3dc1a1864d51418cd9e\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"6e97860846a753b42dd58ebf20a87f073ccad2213f5acc229ae5b2be25a0a7a9\"" Nov 5 23:56:22.325517 containerd[1536]: time="2025-11-05T23:56:22.325493561Z" level=info msg="StartContainer for \"6e97860846a753b42dd58ebf20a87f073ccad2213f5acc229ae5b2be25a0a7a9\"" Nov 5 23:56:22.326664 containerd[1536]: time="2025-11-05T23:56:22.326633441Z" level=info msg="connecting to shim 6e97860846a753b42dd58ebf20a87f073ccad2213f5acc229ae5b2be25a0a7a9" address="unix:///run/containerd/s/9333798d056553332328d3ad4b5b687e959ff4b082cd70f114a5cd306b756648" protocol=ttrpc version=3 Nov 5 23:56:22.336969 kubelet[2288]: E1105 23:56:22.336935 2288 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.117:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.117:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Nov 5 23:56:22.340638 systemd[1]: Started cri-containerd-ae101e224593a49478f7599e0d68aecba0f423cb12893c832dfb2fb60f833431.scope - libcontainer container ae101e224593a49478f7599e0d68aecba0f423cb12893c832dfb2fb60f833431. 
Nov 5 23:56:22.341598 systemd[1]: Started cri-containerd-d4a5e55a07f9c49014fa658eb6010020231f480c9a46361998e5b89d37173442.scope - libcontainer container d4a5e55a07f9c49014fa658eb6010020231f480c9a46361998e5b89d37173442. Nov 5 23:56:22.346793 systemd[1]: Started cri-containerd-6e97860846a753b42dd58ebf20a87f073ccad2213f5acc229ae5b2be25a0a7a9.scope - libcontainer container 6e97860846a753b42dd58ebf20a87f073ccad2213f5acc229ae5b2be25a0a7a9. Nov 5 23:56:22.380654 kubelet[2288]: E1105 23:56:22.380615 2288 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.117:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.117:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Nov 5 23:56:22.398659 containerd[1536]: time="2025-11-05T23:56:22.398154761Z" level=info msg="StartContainer for \"d4a5e55a07f9c49014fa658eb6010020231f480c9a46361998e5b89d37173442\" returns successfully" Nov 5 23:56:22.399395 containerd[1536]: time="2025-11-05T23:56:22.399183801Z" level=info msg="StartContainer for \"6e97860846a753b42dd58ebf20a87f073ccad2213f5acc229ae5b2be25a0a7a9\" returns successfully" Nov 5 23:56:22.401028 containerd[1536]: time="2025-11-05T23:56:22.400910161Z" level=info msg="StartContainer for \"ae101e224593a49478f7599e0d68aecba0f423cb12893c832dfb2fb60f833431\" returns successfully" Nov 5 23:56:22.775954 kubelet[2288]: I1105 23:56:22.775914 2288 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 5 23:56:23.230305 kubelet[2288]: E1105 23:56:23.230271 2288 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 5 23:56:23.232096 kubelet[2288]: E1105 23:56:23.231935 2288 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" 
node="localhost" Nov 5 23:56:23.235049 kubelet[2288]: E1105 23:56:23.235023 2288 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 5 23:56:23.965644 kubelet[2288]: I1105 23:56:23.965542 2288 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Nov 5 23:56:23.965644 kubelet[2288]: E1105 23:56:23.965579 2288 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Nov 5 23:56:23.991035 kubelet[2288]: E1105 23:56:23.990855 2288 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 5 23:56:24.092157 kubelet[2288]: E1105 23:56:24.092109 2288 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 5 23:56:24.192921 kubelet[2288]: E1105 23:56:24.192882 2288 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 5 23:56:24.236937 kubelet[2288]: E1105 23:56:24.236831 2288 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 5 23:56:24.237759 kubelet[2288]: E1105 23:56:24.237410 2288 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 5 23:56:24.293389 kubelet[2288]: E1105 23:56:24.293355 2288 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 5 23:56:24.393994 kubelet[2288]: E1105 23:56:24.393955 2288 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 5 23:56:24.494987 kubelet[2288]: E1105 23:56:24.494749 2288 kubelet_node_status.go:404] "Error getting the current node from lister" 
err="node \"localhost\" not found" Nov 5 23:56:24.595504 kubelet[2288]: E1105 23:56:24.595466 2288 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 5 23:56:24.696542 kubelet[2288]: E1105 23:56:24.696508 2288 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 5 23:56:24.797420 kubelet[2288]: E1105 23:56:24.797320 2288 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 5 23:56:24.897454 kubelet[2288]: E1105 23:56:24.897393 2288 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 5 23:56:24.998016 kubelet[2288]: E1105 23:56:24.997973 2288 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 5 23:56:25.098812 kubelet[2288]: E1105 23:56:25.098758 2288 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 5 23:56:25.199672 kubelet[2288]: E1105 23:56:25.199636 2288 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 5 23:56:25.300250 kubelet[2288]: E1105 23:56:25.300206 2288 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 5 23:56:25.395608 kubelet[2288]: I1105 23:56:25.395352 2288 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Nov 5 23:56:25.403231 kubelet[2288]: I1105 23:56:25.403193 2288 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Nov 5 23:56:25.407208 kubelet[2288]: I1105 23:56:25.407178 2288 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Nov 5 23:56:25.558484 kubelet[2288]: I1105 23:56:25.558331 2288 kubelet.go:3219] "Creating 
a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Nov 5 23:56:25.563167 kubelet[2288]: E1105 23:56:25.563122 2288 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost"
Nov 5 23:56:26.132516 systemd[1]: Reload requested from client PID 2574 ('systemctl') (unit session-7.scope)...
Nov 5 23:56:26.132781 systemd[1]: Reloading...
Nov 5 23:56:26.186895 kubelet[2288]: I1105 23:56:26.186853 2288 apiserver.go:52] "Watching apiserver"
Nov 5 23:56:26.194453 kubelet[2288]: I1105 23:56:26.194392 2288 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Nov 5 23:56:26.207510 zram_generator::config[2617]: No configuration found.
Nov 5 23:56:26.376235 systemd[1]: Reloading finished in 243 ms.
Nov 5 23:56:26.403925 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 5 23:56:26.414726 systemd[1]: kubelet.service: Deactivated successfully.
Nov 5 23:56:26.414948 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 5 23:56:26.414996 systemd[1]: kubelet.service: Consumed 1.219s CPU time, 124.2M memory peak.
Nov 5 23:56:26.417109 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 5 23:56:26.562603 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 5 23:56:26.566555 (kubelet)[2659]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Nov 5 23:56:26.605664 kubelet[2659]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Nov 5 23:56:26.605664 kubelet[2659]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Nov 5 23:56:26.605664 kubelet[2659]: I1105 23:56:26.605482 2659 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Nov 5 23:56:26.616815 kubelet[2659]: I1105 23:56:26.616503 2659 server.go:529] "Kubelet version" kubeletVersion="v1.34.1"
Nov 5 23:56:26.617214 kubelet[2659]: I1105 23:56:26.617194 2659 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Nov 5 23:56:26.617576 kubelet[2659]: I1105 23:56:26.617474 2659 watchdog_linux.go:95] "Systemd watchdog is not enabled"
Nov 5 23:56:26.617840 kubelet[2659]: I1105 23:56:26.617739 2659 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Nov 5 23:56:26.618599 kubelet[2659]: I1105 23:56:26.618133 2659 server.go:956] "Client rotation is on, will bootstrap in background"
Nov 5 23:56:26.620335 kubelet[2659]: I1105 23:56:26.620263 2659 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem"
Nov 5 23:56:26.622895 kubelet[2659]: I1105 23:56:26.622860 2659 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Nov 5 23:56:26.629707 kubelet[2659]: I1105 23:56:26.628503 2659 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Nov 5 23:56:26.633585 kubelet[2659]: I1105 23:56:26.633555 2659 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /"
Nov 5 23:56:26.633870 kubelet[2659]: I1105 23:56:26.633826 2659 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Nov 5 23:56:26.634044 kubelet[2659]: I1105 23:56:26.633870 2659 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Nov 5 23:56:26.634122 kubelet[2659]: I1105 23:56:26.634048 2659 topology_manager.go:138] "Creating topology manager with none policy"
Nov 5 23:56:26.634122 kubelet[2659]: I1105 23:56:26.634056 2659 container_manager_linux.go:306] "Creating device plugin manager"
Nov 5 23:56:26.634122 kubelet[2659]: I1105 23:56:26.634105 2659 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager"
Nov 5 23:56:26.635050 kubelet[2659]: I1105 23:56:26.635029 2659 state_mem.go:36] "Initialized new in-memory state store"
Nov 5 23:56:26.635211 kubelet[2659]: I1105 23:56:26.635200 2659 kubelet.go:475] "Attempting to sync node with API server"
Nov 5 23:56:26.635243 kubelet[2659]: I1105 23:56:26.635220 2659 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests"
Nov 5 23:56:26.635275 kubelet[2659]: I1105 23:56:26.635248 2659 kubelet.go:387] "Adding apiserver pod source"
Nov 5 23:56:26.635275 kubelet[2659]: I1105 23:56:26.635259 2659 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Nov 5 23:56:26.638581 kubelet[2659]: I1105 23:56:26.637097 2659 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1"
Nov 5 23:56:26.638581 kubelet[2659]: I1105 23:56:26.637652 2659 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Nov 5 23:56:26.638581 kubelet[2659]: I1105 23:56:26.637679 2659 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled"
Nov 5 23:56:26.642834 kubelet[2659]: I1105 23:56:26.642814 2659 server.go:1262] "Started kubelet"
Nov 5 23:56:26.645208 kubelet[2659]: I1105 23:56:26.642984 2659 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Nov 5 23:56:26.645208 kubelet[2659]: I1105 23:56:26.643538 2659 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Nov 5 23:56:26.647222 kubelet[2659]: I1105 23:56:26.643345 2659 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Nov 5 23:56:26.647222 kubelet[2659]: I1105 23:56:26.645732 2659 server_v1.go:49] "podresources" method="list" useActivePods=true
Nov 5 23:56:26.647222 kubelet[2659]: I1105 23:56:26.645991 2659 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Nov 5 23:56:26.647222 kubelet[2659]: I1105 23:56:26.646605 2659 server.go:310] "Adding debug handlers to kubelet server"
Nov 5 23:56:26.650066 kubelet[2659]: I1105 23:56:26.649808 2659 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Nov 5 23:56:26.652626 kubelet[2659]: E1105 23:56:26.652595 2659 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Nov 5 23:56:26.653371 kubelet[2659]: E1105 23:56:26.652733 2659 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Nov 5 23:56:26.653371 kubelet[2659]: I1105 23:56:26.652759 2659 volume_manager.go:313] "Starting Kubelet Volume Manager"
Nov 5 23:56:26.653371 kubelet[2659]: I1105 23:56:26.652911 2659 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Nov 5 23:56:26.653371 kubelet[2659]: I1105 23:56:26.653016 2659 reconciler.go:29] "Reconciler: start to sync state"
Nov 5 23:56:26.654069 kubelet[2659]: I1105 23:56:26.653932 2659 factory.go:223] Registration of the systemd container factory successfully
Nov 5 23:56:26.654069 kubelet[2659]: I1105 23:56:26.654029 2659 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Nov 5 23:56:26.660732 kubelet[2659]: I1105 23:56:26.660658 2659 factory.go:223] Registration of the containerd container factory successfully
Nov 5 23:56:26.663076 kubelet[2659]: I1105 23:56:26.661954 2659 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4"
Nov 5 23:56:26.665288 kubelet[2659]: I1105 23:56:26.665252 2659 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6"
Nov 5 23:56:26.665541 kubelet[2659]: I1105 23:56:26.665520 2659 status_manager.go:244] "Starting to sync pod status with apiserver"
Nov 5 23:56:26.665582 kubelet[2659]: I1105 23:56:26.665557 2659 kubelet.go:2427] "Starting kubelet main sync loop"
Nov 5 23:56:26.665699 kubelet[2659]: E1105 23:56:26.665668 2659 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Nov 5 23:56:26.711847 kubelet[2659]: I1105 23:56:26.711819 2659 cpu_manager.go:221] "Starting CPU manager" policy="none"
Nov 5 23:56:26.711847 kubelet[2659]: I1105 23:56:26.711839 2659 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Nov 5 23:56:26.712004 kubelet[2659]: I1105 23:56:26.711874 2659 state_mem.go:36] "Initialized new in-memory state store"
Nov 5 23:56:26.712056 kubelet[2659]: I1105 23:56:26.712039 2659 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Nov 5 23:56:26.712085 kubelet[2659]: I1105 23:56:26.712055 2659 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Nov 5 23:56:26.712085 kubelet[2659]: I1105 23:56:26.712073 2659 policy_none.go:49] "None policy: Start"
Nov 5 23:56:26.712085 kubelet[2659]: I1105 23:56:26.712081 2659 memory_manager.go:187] "Starting memorymanager" policy="None"
Nov 5 23:56:26.712518 kubelet[2659]: I1105 23:56:26.712109 2659 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint"
Nov 5 23:56:26.712518 kubelet[2659]: I1105 23:56:26.712228 2659 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint"
Nov 5 23:56:26.712518 kubelet[2659]: I1105 23:56:26.712238 2659 policy_none.go:47] "Start"
Nov 5 23:56:26.717826 kubelet[2659]: E1105 23:56:26.717801 2659 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Nov 5 23:56:26.718246 kubelet[2659]: I1105 23:56:26.718228 2659 eviction_manager.go:189] "Eviction manager: starting control loop"
Nov 5 23:56:26.718300 kubelet[2659]: I1105 23:56:26.718248 2659 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Nov 5 23:56:26.718841 kubelet[2659]: I1105 23:56:26.718694 2659 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Nov 5 23:56:26.721757 kubelet[2659]: E1105 23:56:26.721727 2659 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Nov 5 23:56:26.767075 kubelet[2659]: I1105 23:56:26.767024 2659 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Nov 5 23:56:26.767179 kubelet[2659]: I1105 23:56:26.767083 2659 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Nov 5 23:56:26.767222 kubelet[2659]: I1105 23:56:26.767028 2659 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Nov 5 23:56:26.772396 kubelet[2659]: E1105 23:56:26.772345 2659 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost"
Nov 5 23:56:26.772975 kubelet[2659]: E1105 23:56:26.772937 2659 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost"
Nov 5 23:56:26.772975 kubelet[2659]: E1105 23:56:26.772941 2659 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Nov 5 23:56:26.820344 kubelet[2659]: I1105 23:56:26.820316 2659 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Nov 5 23:56:26.827448 kubelet[2659]: I1105 23:56:26.827394 2659 kubelet_node_status.go:124] "Node was previously registered" node="localhost"
Nov 5 23:56:26.827538 kubelet[2659]: I1105 23:56:26.827492 2659 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Nov 5 23:56:26.954043 kubelet[2659]: I1105 23:56:26.953726 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/15a91e4eb9a14c34bb44125e403ef821-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"15a91e4eb9a14c34bb44125e403ef821\") " pod="kube-system/kube-apiserver-localhost"
Nov 5 23:56:26.954043 kubelet[2659]: I1105 23:56:26.953771 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/15a91e4eb9a14c34bb44125e403ef821-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"15a91e4eb9a14c34bb44125e403ef821\") " pod="kube-system/kube-apiserver-localhost"
Nov 5 23:56:26.954043 kubelet[2659]: I1105 23:56:26.953787 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost"
Nov 5 23:56:26.954043 kubelet[2659]: I1105 23:56:26.953802 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost"
Nov 5 23:56:26.954043 kubelet[2659]: I1105 23:56:26.953832 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost"
Nov 5 23:56:26.954221 kubelet[2659]: I1105 23:56:26.953851 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/15a91e4eb9a14c34bb44125e403ef821-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"15a91e4eb9a14c34bb44125e403ef821\") " pod="kube-system/kube-apiserver-localhost"
Nov 5 23:56:26.954221 kubelet[2659]: I1105 23:56:26.953884 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost"
Nov 5 23:56:26.954221 kubelet[2659]: I1105 23:56:26.953935 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost"
Nov 5 23:56:26.954221 kubelet[2659]: I1105 23:56:26.953969 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/72ae43bf624d285361487631af8a6ba6-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"72ae43bf624d285361487631af8a6ba6\") " pod="kube-system/kube-scheduler-localhost"
Nov 5 23:56:27.636483 kubelet[2659]: I1105 23:56:27.636442 2659 apiserver.go:52] "Watching apiserver"
Nov 5 23:56:27.653550 kubelet[2659]: I1105 23:56:27.653509 2659 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Nov 5 23:56:27.697753 kubelet[2659]: I1105 23:56:27.697648 2659 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Nov 5 23:56:27.698266 kubelet[2659]: I1105 23:56:27.698189 2659 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Nov 5 23:56:27.704636 kubelet[2659]: E1105 23:56:27.704607 2659 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost"
Nov 5 23:56:27.705115 kubelet[2659]: E1105 23:56:27.705095 2659 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Nov 5 23:56:27.716537 kubelet[2659]: I1105 23:56:27.716488 2659 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=2.716476601 podStartE2EDuration="2.716476601s" podCreationTimestamp="2025-11-05 23:56:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-05 23:56:27.716220961 +0000 UTC m=+1.146113121" watchObservedRunningTime="2025-11-05 23:56:27.716476601 +0000 UTC m=+1.146368761"
Nov 5 23:56:27.729904 kubelet[2659]: I1105 23:56:27.729724 2659 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.729709641 podStartE2EDuration="2.729709641s" podCreationTimestamp="2025-11-05 23:56:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-05 23:56:27.728031121 +0000 UTC m=+1.157923281" watchObservedRunningTime="2025-11-05 23:56:27.729709641 +0000 UTC m=+1.159601801"
Nov 5 23:56:27.759025 kubelet[2659]: I1105 23:56:27.758972 2659 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=2.758955361 podStartE2EDuration="2.758955361s" podCreationTimestamp="2025-11-05 23:56:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-05 23:56:27.739378721 +0000 UTC m=+1.169270881" watchObservedRunningTime="2025-11-05 23:56:27.758955361 +0000 UTC m=+1.188847521"
Nov 5 23:56:31.748338 kubelet[2659]: I1105 23:56:31.748293 2659 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Nov 5 23:56:31.748691 containerd[1536]: time="2025-11-05T23:56:31.748631360Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Nov 5 23:56:31.748877 kubelet[2659]: I1105 23:56:31.748815 2659 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Nov 5 23:56:32.797763 systemd[1]: Created slice kubepods-besteffort-pod0e746f80_4848_49e6_b400_e03b87c9049f.slice - libcontainer container kubepods-besteffort-pod0e746f80_4848_49e6_b400_e03b87c9049f.slice.
Nov 5 23:56:32.890144 kubelet[2659]: I1105 23:56:32.890099 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/0e746f80-4848-49e6-b400-e03b87c9049f-kube-proxy\") pod \"kube-proxy-5sn9g\" (UID: \"0e746f80-4848-49e6-b400-e03b87c9049f\") " pod="kube-system/kube-proxy-5sn9g"
Nov 5 23:56:32.890144 kubelet[2659]: I1105 23:56:32.890149 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0e746f80-4848-49e6-b400-e03b87c9049f-xtables-lock\") pod \"kube-proxy-5sn9g\" (UID: \"0e746f80-4848-49e6-b400-e03b87c9049f\") " pod="kube-system/kube-proxy-5sn9g"
Nov 5 23:56:32.890536 kubelet[2659]: I1105 23:56:32.890167 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0e746f80-4848-49e6-b400-e03b87c9049f-lib-modules\") pod \"kube-proxy-5sn9g\" (UID: \"0e746f80-4848-49e6-b400-e03b87c9049f\") " pod="kube-system/kube-proxy-5sn9g"
Nov 5 23:56:32.890536 kubelet[2659]: I1105 23:56:32.890189 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7bkmq\" (UniqueName: \"kubernetes.io/projected/0e746f80-4848-49e6-b400-e03b87c9049f-kube-api-access-7bkmq\") pod \"kube-proxy-5sn9g\" (UID: \"0e746f80-4848-49e6-b400-e03b87c9049f\") " pod="kube-system/kube-proxy-5sn9g"
Nov 5 23:56:32.905395 systemd[1]: Created slice kubepods-besteffort-pod59203861_fe32_4cc0_9522_6e898d09a01b.slice - libcontainer container kubepods-besteffort-pod59203861_fe32_4cc0_9522_6e898d09a01b.slice.
Nov 5 23:56:32.990656 kubelet[2659]: I1105 23:56:32.990617 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/59203861-fe32-4cc0-9522-6e898d09a01b-var-lib-calico\") pod \"tigera-operator-65cdcdfd6d-8pjsm\" (UID: \"59203861-fe32-4cc0-9522-6e898d09a01b\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-8pjsm"
Nov 5 23:56:32.990656 kubelet[2659]: I1105 23:56:32.990656 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4wnwb\" (UniqueName: \"kubernetes.io/projected/59203861-fe32-4cc0-9522-6e898d09a01b-kube-api-access-4wnwb\") pod \"tigera-operator-65cdcdfd6d-8pjsm\" (UID: \"59203861-fe32-4cc0-9522-6e898d09a01b\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-8pjsm"
Nov 5 23:56:33.118267 containerd[1536]: time="2025-11-05T23:56:33.118227532Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-5sn9g,Uid:0e746f80-4848-49e6-b400-e03b87c9049f,Namespace:kube-system,Attempt:0,}"
Nov 5 23:56:33.133890 containerd[1536]: time="2025-11-05T23:56:33.133848750Z" level=info msg="connecting to shim 0c473a8e09e6ce0f47ede1187e5ebd543eb98227f3097063a40ff608f627cd86" address="unix:///run/containerd/s/054d742423d609a5fcdd54a6db7940a619d595d9080912c8a2c582aaa9c6746c" namespace=k8s.io protocol=ttrpc version=3
Nov 5 23:56:33.156639 systemd[1]: Started cri-containerd-0c473a8e09e6ce0f47ede1187e5ebd543eb98227f3097063a40ff608f627cd86.scope - libcontainer container 0c473a8e09e6ce0f47ede1187e5ebd543eb98227f3097063a40ff608f627cd86.
Nov 5 23:56:33.177293 containerd[1536]: time="2025-11-05T23:56:33.177241753Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-5sn9g,Uid:0e746f80-4848-49e6-b400-e03b87c9049f,Namespace:kube-system,Attempt:0,} returns sandbox id \"0c473a8e09e6ce0f47ede1187e5ebd543eb98227f3097063a40ff608f627cd86\""
Nov 5 23:56:33.183731 containerd[1536]: time="2025-11-05T23:56:33.183667626Z" level=info msg="CreateContainer within sandbox \"0c473a8e09e6ce0f47ede1187e5ebd543eb98227f3097063a40ff608f627cd86\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Nov 5 23:56:33.194295 containerd[1536]: time="2025-11-05T23:56:33.194245766Z" level=info msg="Container 8cf93e7147c0292240997695269eddbdcb8c1c66d3599efc04f2b3b472b2d663: CDI devices from CRI Config.CDIDevices: []"
Nov 5 23:56:33.201290 containerd[1536]: time="2025-11-05T23:56:33.201245221Z" level=info msg="CreateContainer within sandbox \"0c473a8e09e6ce0f47ede1187e5ebd543eb98227f3097063a40ff608f627cd86\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"8cf93e7147c0292240997695269eddbdcb8c1c66d3599efc04f2b3b472b2d663\""
Nov 5 23:56:33.201836 containerd[1536]: time="2025-11-05T23:56:33.201812642Z" level=info msg="StartContainer for \"8cf93e7147c0292240997695269eddbdcb8c1c66d3599efc04f2b3b472b2d663\""
Nov 5 23:56:33.203167 containerd[1536]: time="2025-11-05T23:56:33.203141520Z" level=info msg="connecting to shim 8cf93e7147c0292240997695269eddbdcb8c1c66d3599efc04f2b3b472b2d663" address="unix:///run/containerd/s/054d742423d609a5fcdd54a6db7940a619d595d9080912c8a2c582aaa9c6746c" protocol=ttrpc version=3
Nov 5 23:56:33.210930 containerd[1536]: time="2025-11-05T23:56:33.210894190Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-8pjsm,Uid:59203861-fe32-4cc0-9522-6e898d09a01b,Namespace:tigera-operator,Attempt:0,}"
Nov 5 23:56:33.224624 systemd[1]: Started cri-containerd-8cf93e7147c0292240997695269eddbdcb8c1c66d3599efc04f2b3b472b2d663.scope - libcontainer container 8cf93e7147c0292240997695269eddbdcb8c1c66d3599efc04f2b3b472b2d663.
Nov 5 23:56:33.231818 containerd[1536]: time="2025-11-05T23:56:33.231669402Z" level=info msg="connecting to shim 98ef87b722a5a59d52924362f90094fb0ec78aadd09df9ea0ab35a5e7aa4f818" address="unix:///run/containerd/s/375079efeb27420b0d5fae9e83bd74932f9f272c9cbffbf4178835b3e4ba8a80" namespace=k8s.io protocol=ttrpc version=3
Nov 5 23:56:33.255642 systemd[1]: Started cri-containerd-98ef87b722a5a59d52924362f90094fb0ec78aadd09df9ea0ab35a5e7aa4f818.scope - libcontainer container 98ef87b722a5a59d52924362f90094fb0ec78aadd09df9ea0ab35a5e7aa4f818.
Nov 5 23:56:33.276663 containerd[1536]: time="2025-11-05T23:56:33.276623475Z" level=info msg="StartContainer for \"8cf93e7147c0292240997695269eddbdcb8c1c66d3599efc04f2b3b472b2d663\" returns successfully"
Nov 5 23:56:33.298881 containerd[1536]: time="2025-11-05T23:56:33.298834480Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-8pjsm,Uid:59203861-fe32-4cc0-9522-6e898d09a01b,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"98ef87b722a5a59d52924362f90094fb0ec78aadd09df9ea0ab35a5e7aa4f818\""
Nov 5 23:56:33.302969 containerd[1536]: time="2025-11-05T23:56:33.302903909Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\""
Nov 5 23:56:34.872688 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3750454872.mount: Deactivated successfully.
Nov 5 23:56:35.355859 containerd[1536]: time="2025-11-05T23:56:35.355809277Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 23:56:35.356932 containerd[1536]: time="2025-11-05T23:56:35.356860608Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=22152004"
Nov 5 23:56:35.358266 containerd[1536]: time="2025-11-05T23:56:35.358228609Z" level=info msg="ImageCreate event name:\"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 23:56:35.361269 containerd[1536]: time="2025-11-05T23:56:35.361221604Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 23:56:35.362598 containerd[1536]: time="2025-11-05T23:56:35.362565446Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"22147999\" in 2.05957806s"
Nov 5 23:56:35.362650 containerd[1536]: time="2025-11-05T23:56:35.362612125Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\""
Nov 5 23:56:35.389765 containerd[1536]: time="2025-11-05T23:56:35.389722318Z" level=info msg="CreateContainer within sandbox \"98ef87b722a5a59d52924362f90094fb0ec78aadd09df9ea0ab35a5e7aa4f818\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Nov 5 23:56:35.405135 containerd[1536]: time="2025-11-05T23:56:35.402421239Z" level=info msg="Container 34a1b279374818fc9c131fd6563d975fbcc9effe0709cc87c9fcf3dbc83811cc: CDI devices from CRI Config.CDIDevices: []"
Nov 5 23:56:35.411186 containerd[1536]: time="2025-11-05T23:56:35.411106753Z" level=info msg="CreateContainer within sandbox \"98ef87b722a5a59d52924362f90094fb0ec78aadd09df9ea0ab35a5e7aa4f818\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"34a1b279374818fc9c131fd6563d975fbcc9effe0709cc87c9fcf3dbc83811cc\""
Nov 5 23:56:35.411670 containerd[1536]: time="2025-11-05T23:56:35.411641458Z" level=info msg="StartContainer for \"34a1b279374818fc9c131fd6563d975fbcc9effe0709cc87c9fcf3dbc83811cc\""
Nov 5 23:56:35.413972 containerd[1536]: time="2025-11-05T23:56:35.413944993Z" level=info msg="connecting to shim 34a1b279374818fc9c131fd6563d975fbcc9effe0709cc87c9fcf3dbc83811cc" address="unix:///run/containerd/s/375079efeb27420b0d5fae9e83bd74932f9f272c9cbffbf4178835b3e4ba8a80" protocol=ttrpc version=3
Nov 5 23:56:35.440925 systemd[1]: Started cri-containerd-34a1b279374818fc9c131fd6563d975fbcc9effe0709cc87c9fcf3dbc83811cc.scope - libcontainer container 34a1b279374818fc9c131fd6563d975fbcc9effe0709cc87c9fcf3dbc83811cc.
Nov 5 23:56:35.469052 containerd[1536]: time="2025-11-05T23:56:35.468834840Z" level=info msg="StartContainer for \"34a1b279374818fc9c131fd6563d975fbcc9effe0709cc87c9fcf3dbc83811cc\" returns successfully"
Nov 5 23:56:35.732871 kubelet[2659]: I1105 23:56:35.732729 2659 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-5sn9g" podStartSLOduration=3.726382036 podStartE2EDuration="3.726382036s" podCreationTimestamp="2025-11-05 23:56:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-05 23:56:33.723140225 +0000 UTC m=+7.153032385" watchObservedRunningTime="2025-11-05 23:56:35.726382036 +0000 UTC m=+9.156274156"
Nov 5 23:56:35.878382 kubelet[2659]: I1105 23:56:35.878315 2659 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-65cdcdfd6d-8pjsm" podStartSLOduration=1.81676154 podStartE2EDuration="3.878297979s" podCreationTimestamp="2025-11-05 23:56:32 +0000 UTC" firstStartedPulling="2025-11-05 23:56:33.301770626 +0000 UTC m=+6.731662746" lastFinishedPulling="2025-11-05 23:56:35.363307025 +0000 UTC m=+8.793199185" observedRunningTime="2025-11-05 23:56:35.726381556 +0000 UTC m=+9.156273716" watchObservedRunningTime="2025-11-05 23:56:35.878297979 +0000 UTC m=+9.308190139"
Nov 5 23:56:41.015053 sudo[1743]: pam_unix(sudo:session): session closed for user root
Nov 5 23:56:41.020239 sshd[1742]: Connection closed by 10.0.0.1 port 58192
Nov 5 23:56:41.020155 sshd-session[1739]: pam_unix(sshd:session): session closed for user core
Nov 5 23:56:41.026331 systemd[1]: sshd@6-10.0.0.117:22-10.0.0.1:58192.service: Deactivated successfully.
Nov 5 23:56:41.030635 systemd[1]: session-7.scope: Deactivated successfully.
Nov 5 23:56:41.031003 systemd[1]: session-7.scope: Consumed 5.671s CPU time, 224.7M memory peak.
Nov 5 23:56:41.033203 systemd-logind[1514]: Session 7 logged out. Waiting for processes to exit.
Nov 5 23:56:41.035388 systemd-logind[1514]: Removed session 7.
Nov 5 23:56:43.040532 update_engine[1523]: I20251105 23:56:43.040460 1523 update_attempter.cc:509] Updating boot flags...
Nov 5 23:56:49.640307 systemd[1]: Created slice kubepods-besteffort-pod80121a1d_7335_4869_83ce_e9674066dc75.slice - libcontainer container kubepods-besteffort-pod80121a1d_7335_4869_83ce_e9674066dc75.slice.
Nov 5 23:56:49.703812 kubelet[2659]: I1105 23:56:49.703761 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9lpp2\" (UniqueName: \"kubernetes.io/projected/80121a1d-7335-4869-83ce-e9674066dc75-kube-api-access-9lpp2\") pod \"calico-typha-579bfcb644-qtgn2\" (UID: \"80121a1d-7335-4869-83ce-e9674066dc75\") " pod="calico-system/calico-typha-579bfcb644-qtgn2"
Nov 5 23:56:49.703812 kubelet[2659]: I1105 23:56:49.703820 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/80121a1d-7335-4869-83ce-e9674066dc75-typha-certs\") pod \"calico-typha-579bfcb644-qtgn2\" (UID: \"80121a1d-7335-4869-83ce-e9674066dc75\") " pod="calico-system/calico-typha-579bfcb644-qtgn2"
Nov 5 23:56:49.704172 kubelet[2659]: I1105 23:56:49.703888 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/80121a1d-7335-4869-83ce-e9674066dc75-tigera-ca-bundle\") pod \"calico-typha-579bfcb644-qtgn2\" (UID: \"80121a1d-7335-4869-83ce-e9674066dc75\") " pod="calico-system/calico-typha-579bfcb644-qtgn2"
Nov 5 23:56:49.833472 systemd[1]: Created slice kubepods-besteffort-podd9e7772e_c0b3_4a81_8e4d_c63cb5700a44.slice - libcontainer container kubepods-besteffort-podd9e7772e_c0b3_4a81_8e4d_c63cb5700a44.slice.
Nov 5 23:56:49.906020 kubelet[2659]: I1105 23:56:49.905612 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/d9e7772e-c0b3-4a81-8e4d-c63cb5700a44-node-certs\") pod \"calico-node-w75xx\" (UID: \"d9e7772e-c0b3-4a81-8e4d-c63cb5700a44\") " pod="calico-system/calico-node-w75xx"
Nov 5 23:56:49.906020 kubelet[2659]: I1105 23:56:49.905674 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/d9e7772e-c0b3-4a81-8e4d-c63cb5700a44-policysync\") pod \"calico-node-w75xx\" (UID: \"d9e7772e-c0b3-4a81-8e4d-c63cb5700a44\") " pod="calico-system/calico-node-w75xx"
Nov 5 23:56:49.906020 kubelet[2659]: I1105 23:56:49.905693 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/d9e7772e-c0b3-4a81-8e4d-c63cb5700a44-var-lib-calico\") pod \"calico-node-w75xx\" (UID: \"d9e7772e-c0b3-4a81-8e4d-c63cb5700a44\") " pod="calico-system/calico-node-w75xx"
Nov 5 23:56:49.906020 kubelet[2659]: I1105 23:56:49.905707 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d9e7772e-c0b3-4a81-8e4d-c63cb5700a44-xtables-lock\") pod \"calico-node-w75xx\" (UID: \"d9e7772e-c0b3-4a81-8e4d-c63cb5700a44\") " pod="calico-system/calico-node-w75xx"
Nov 5 23:56:49.906020 kubelet[2659]: I1105 23:56:49.905722 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/d9e7772e-c0b3-4a81-8e4d-c63cb5700a44-flexvol-driver-host\") pod \"calico-node-w75xx\" (UID: \"d9e7772e-c0b3-4a81-8e4d-c63cb5700a44\") " pod="calico-system/calico-node-w75xx"
Nov 5 23:56:49.906325 kubelet[2659]: I1105 23:56:49.905735 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d9e7772e-c0b3-4a81-8e4d-c63cb5700a44-lib-modules\") pod \"calico-node-w75xx\" (UID: \"d9e7772e-c0b3-4a81-8e4d-c63cb5700a44\") " pod="calico-system/calico-node-w75xx"
Nov 5 23:56:49.906325 kubelet[2659]: I1105 23:56:49.905748 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/d9e7772e-c0b3-4a81-8e4d-c63cb5700a44-var-run-calico\") pod \"calico-node-w75xx\" (UID: \"d9e7772e-c0b3-4a81-8e4d-c63cb5700a44\") " pod="calico-system/calico-node-w75xx"
Nov 5 23:56:49.906325 kubelet[2659]: I1105 23:56:49.905762 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d9e7772e-c0b3-4a81-8e4d-c63cb5700a44-tigera-ca-bundle\") pod \"calico-node-w75xx\" (UID: \"d9e7772e-c0b3-4a81-8e4d-c63cb5700a44\") " pod="calico-system/calico-node-w75xx"
Nov 5 23:56:49.906325 kubelet[2659]: I1105 23:56:49.905778 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/d9e7772e-c0b3-4a81-8e4d-c63cb5700a44-cni-bin-dir\") pod \"calico-node-w75xx\" (UID: \"d9e7772e-c0b3-4a81-8e4d-c63cb5700a44\") " pod="calico-system/calico-node-w75xx"
Nov 5 23:56:49.906325 kubelet[2659]: I1105 23:56:49.905790 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/d9e7772e-c0b3-4a81-8e4d-c63cb5700a44-cni-net-dir\") pod \"calico-node-w75xx\" (UID: \"d9e7772e-c0b3-4a81-8e4d-c63cb5700a44\") " pod="calico-system/calico-node-w75xx"
Nov 5 23:56:49.906421 kubelet[2659]: I1105 23:56:49.905806 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started
for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/d9e7772e-c0b3-4a81-8e4d-c63cb5700a44-cni-log-dir\") pod \"calico-node-w75xx\" (UID: \"d9e7772e-c0b3-4a81-8e4d-c63cb5700a44\") " pod="calico-system/calico-node-w75xx" Nov 5 23:56:49.906421 kubelet[2659]: I1105 23:56:49.905819 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rhg9l\" (UniqueName: \"kubernetes.io/projected/d9e7772e-c0b3-4a81-8e4d-c63cb5700a44-kube-api-access-rhg9l\") pod \"calico-node-w75xx\" (UID: \"d9e7772e-c0b3-4a81-8e4d-c63cb5700a44\") " pod="calico-system/calico-node-w75xx" Nov 5 23:56:49.944688 containerd[1536]: time="2025-11-05T23:56:49.944617898Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-579bfcb644-qtgn2,Uid:80121a1d-7335-4869-83ce-e9674066dc75,Namespace:calico-system,Attempt:0,}" Nov 5 23:56:49.976236 containerd[1536]: time="2025-11-05T23:56:49.976166777Z" level=info msg="connecting to shim 32a4f8e92ddaba1f60779112b0025815121082bcfbf5607fd9a19834dd9305e4" address="unix:///run/containerd/s/ad4670879970c4488477cfc26e9ee6332ac4e3764eb550eb52bf01e6b2b3c32b" namespace=k8s.io protocol=ttrpc version=3 Nov 5 23:56:50.014025 kubelet[2659]: E1105 23:56:50.013714 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:56:50.014025 kubelet[2659]: W1105 23:56:50.013735 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:56:50.014025 kubelet[2659]: E1105 23:56:50.013757 2659 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 23:56:50.020113 kubelet[2659]: E1105 23:56:50.020075 2659 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lbl86" podUID="7e0e0ade-490b-4bff-b3bc-5b351134410a" Nov 5 23:56:50.028785 kubelet[2659]: E1105 23:56:50.028764 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:56:50.028947 kubelet[2659]: W1105 23:56:50.028894 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:56:50.028947 kubelet[2659]: E1105 23:56:50.028917 2659 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 23:56:50.049608 systemd[1]: Started cri-containerd-32a4f8e92ddaba1f60779112b0025815121082bcfbf5607fd9a19834dd9305e4.scope - libcontainer container 32a4f8e92ddaba1f60779112b0025815121082bcfbf5607fd9a19834dd9305e4. 
Nov 5 23:56:50.076754 containerd[1536]: time="2025-11-05T23:56:50.076721559Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-579bfcb644-qtgn2,Uid:80121a1d-7335-4869-83ce-e9674066dc75,Namespace:calico-system,Attempt:0,} returns sandbox id \"32a4f8e92ddaba1f60779112b0025815121082bcfbf5607fd9a19834dd9305e4\"" Nov 5 23:56:50.078809 containerd[1536]: time="2025-11-05T23:56:50.078781217Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Nov 5 23:56:50.101858 kubelet[2659]: E1105 23:56:50.101739 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:56:50.101858 kubelet[2659]: W1105 23:56:50.101797 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:56:50.101858 kubelet[2659]: E1105 23:56:50.101820 2659 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 23:56:50.102340 kubelet[2659]: E1105 23:56:50.102279 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:56:50.102401 kubelet[2659]: W1105 23:56:50.102294 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:56:50.102499 kubelet[2659]: E1105 23:56:50.102476 2659 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 23:56:50.103054 kubelet[2659]: E1105 23:56:50.103027 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:56:50.103139 kubelet[2659]: W1105 23:56:50.103113 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:56:50.103197 kubelet[2659]: E1105 23:56:50.103186 2659 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 23:56:50.103534 kubelet[2659]: E1105 23:56:50.103510 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:56:50.103626 kubelet[2659]: W1105 23:56:50.103613 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:56:50.103826 kubelet[2659]: E1105 23:56:50.103755 2659 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 23:56:50.104474 kubelet[2659]: E1105 23:56:50.104379 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:56:50.104562 kubelet[2659]: W1105 23:56:50.104535 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:56:50.104735 kubelet[2659]: E1105 23:56:50.104627 2659 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 23:56:50.105024 kubelet[2659]: E1105 23:56:50.105011 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:56:50.105093 kubelet[2659]: W1105 23:56:50.105082 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:56:50.105175 kubelet[2659]: E1105 23:56:50.105134 2659 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 23:56:50.106039 kubelet[2659]: E1105 23:56:50.105956 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:56:50.106039 kubelet[2659]: W1105 23:56:50.105972 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:56:50.106039 kubelet[2659]: E1105 23:56:50.105984 2659 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 23:56:50.106883 kubelet[2659]: E1105 23:56:50.106866 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:56:50.107654 kubelet[2659]: W1105 23:56:50.106957 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:56:50.107654 kubelet[2659]: E1105 23:56:50.106974 2659 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 23:56:50.108474 kubelet[2659]: E1105 23:56:50.107988 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:56:50.108474 kubelet[2659]: W1105 23:56:50.108091 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:56:50.108474 kubelet[2659]: E1105 23:56:50.108107 2659 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 23:56:50.108973 kubelet[2659]: E1105 23:56:50.108947 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:56:50.108973 kubelet[2659]: W1105 23:56:50.108962 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:56:50.108973 kubelet[2659]: E1105 23:56:50.108974 2659 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 23:56:50.109546 kubelet[2659]: E1105 23:56:50.109524 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:56:50.109585 kubelet[2659]: W1105 23:56:50.109539 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:56:50.109585 kubelet[2659]: E1105 23:56:50.109561 2659 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 23:56:50.109741 kubelet[2659]: E1105 23:56:50.109726 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:56:50.109741 kubelet[2659]: W1105 23:56:50.109740 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:56:50.109786 kubelet[2659]: E1105 23:56:50.109750 2659 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 23:56:50.110786 kubelet[2659]: E1105 23:56:50.110769 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:56:50.110835 kubelet[2659]: W1105 23:56:50.110784 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:56:50.110859 kubelet[2659]: E1105 23:56:50.110839 2659 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 23:56:50.111076 kubelet[2659]: E1105 23:56:50.111057 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:56:50.111076 kubelet[2659]: W1105 23:56:50.111074 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:56:50.111128 kubelet[2659]: E1105 23:56:50.111085 2659 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 23:56:50.111275 kubelet[2659]: E1105 23:56:50.111249 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:56:50.111275 kubelet[2659]: W1105 23:56:50.111273 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:56:50.111319 kubelet[2659]: E1105 23:56:50.111285 2659 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 23:56:50.111445 kubelet[2659]: E1105 23:56:50.111414 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:56:50.111475 kubelet[2659]: W1105 23:56:50.111446 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:56:50.111475 kubelet[2659]: E1105 23:56:50.111455 2659 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 23:56:50.111648 kubelet[2659]: E1105 23:56:50.111635 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:56:50.111648 kubelet[2659]: W1105 23:56:50.111646 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:56:50.111693 kubelet[2659]: E1105 23:56:50.111655 2659 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 23:56:50.111966 kubelet[2659]: E1105 23:56:50.111951 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:56:50.111966 kubelet[2659]: W1105 23:56:50.111965 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:56:50.112011 kubelet[2659]: E1105 23:56:50.111976 2659 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 23:56:50.112274 kubelet[2659]: E1105 23:56:50.112198 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:56:50.112274 kubelet[2659]: W1105 23:56:50.112211 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:56:50.112274 kubelet[2659]: E1105 23:56:50.112220 2659 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 23:56:50.112505 kubelet[2659]: E1105 23:56:50.112489 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:56:50.112505 kubelet[2659]: W1105 23:56:50.112505 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:56:50.112602 kubelet[2659]: E1105 23:56:50.112516 2659 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 23:56:50.112781 kubelet[2659]: E1105 23:56:50.112768 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:56:50.112781 kubelet[2659]: W1105 23:56:50.112780 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:56:50.112841 kubelet[2659]: E1105 23:56:50.112790 2659 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 23:56:50.112841 kubelet[2659]: I1105 23:56:50.112817 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7e0e0ade-490b-4bff-b3bc-5b351134410a-kubelet-dir\") pod \"csi-node-driver-lbl86\" (UID: \"7e0e0ade-490b-4bff-b3bc-5b351134410a\") " pod="calico-system/csi-node-driver-lbl86" Nov 5 23:56:50.112998 kubelet[2659]: E1105 23:56:50.112984 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:56:50.112998 kubelet[2659]: W1105 23:56:50.112995 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:56:50.113048 kubelet[2659]: E1105 23:56:50.113004 2659 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 23:56:50.113078 kubelet[2659]: I1105 23:56:50.113065 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/7e0e0ade-490b-4bff-b3bc-5b351134410a-varrun\") pod \"csi-node-driver-lbl86\" (UID: \"7e0e0ade-490b-4bff-b3bc-5b351134410a\") " pod="calico-system/csi-node-driver-lbl86" Nov 5 23:56:50.113256 kubelet[2659]: E1105 23:56:50.113242 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:56:50.113256 kubelet[2659]: W1105 23:56:50.113255 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:56:50.113307 kubelet[2659]: E1105 23:56:50.113264 2659 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 23:56:50.113307 kubelet[2659]: I1105 23:56:50.113286 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/7e0e0ade-490b-4bff-b3bc-5b351134410a-registration-dir\") pod \"csi-node-driver-lbl86\" (UID: \"7e0e0ade-490b-4bff-b3bc-5b351134410a\") " pod="calico-system/csi-node-driver-lbl86" Nov 5 23:56:50.113579 kubelet[2659]: E1105 23:56:50.113564 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:56:50.113579 kubelet[2659]: W1105 23:56:50.113576 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:56:50.113653 kubelet[2659]: E1105 23:56:50.113585 2659 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 23:56:50.113653 kubelet[2659]: I1105 23:56:50.113602 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/7e0e0ade-490b-4bff-b3bc-5b351134410a-socket-dir\") pod \"csi-node-driver-lbl86\" (UID: \"7e0e0ade-490b-4bff-b3bc-5b351134410a\") " pod="calico-system/csi-node-driver-lbl86" Nov 5 23:56:50.113802 kubelet[2659]: E1105 23:56:50.113771 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:56:50.113802 kubelet[2659]: W1105 23:56:50.113799 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:56:50.113855 kubelet[2659]: E1105 23:56:50.113809 2659 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 23:56:50.113855 kubelet[2659]: I1105 23:56:50.113832 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xbpg7\" (UniqueName: \"kubernetes.io/projected/7e0e0ade-490b-4bff-b3bc-5b351134410a-kube-api-access-xbpg7\") pod \"csi-node-driver-lbl86\" (UID: \"7e0e0ade-490b-4bff-b3bc-5b351134410a\") " pod="calico-system/csi-node-driver-lbl86" Nov 5 23:56:50.114027 kubelet[2659]: E1105 23:56:50.114014 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:56:50.114027 kubelet[2659]: W1105 23:56:50.114026 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:56:50.114087 kubelet[2659]: E1105 23:56:50.114035 2659 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 23:56:50.114179 kubelet[2659]: E1105 23:56:50.114167 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:56:50.114179 kubelet[2659]: W1105 23:56:50.114177 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:56:50.114246 kubelet[2659]: E1105 23:56:50.114186 2659 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 23:56:50.114560 kubelet[2659]: E1105 23:56:50.114506 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:56:50.114560 kubelet[2659]: W1105 23:56:50.114516 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:56:50.114560 kubelet[2659]: E1105 23:56:50.114524 2659 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 23:56:50.114786 kubelet[2659]: E1105 23:56:50.114674 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:56:50.114786 kubelet[2659]: W1105 23:56:50.114707 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:56:50.114786 kubelet[2659]: E1105 23:56:50.114718 2659 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 23:56:50.114998 kubelet[2659]: E1105 23:56:50.114929 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:56:50.114998 kubelet[2659]: W1105 23:56:50.114942 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:56:50.114998 kubelet[2659]: E1105 23:56:50.114952 2659 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 23:56:50.115628 kubelet[2659]: E1105 23:56:50.115593 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:56:50.115628 kubelet[2659]: W1105 23:56:50.115610 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:56:50.115763 kubelet[2659]: E1105 23:56:50.115642 2659 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 23:56:50.116015 kubelet[2659]: E1105 23:56:50.116001 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:56:50.116015 kubelet[2659]: W1105 23:56:50.116014 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:56:50.116105 kubelet[2659]: E1105 23:56:50.116026 2659 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 23:56:50.116463 kubelet[2659]: E1105 23:56:50.116426 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:56:50.116463 kubelet[2659]: W1105 23:56:50.116449 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:56:50.116463 kubelet[2659]: E1105 23:56:50.116460 2659 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 23:56:50.116672 kubelet[2659]: E1105 23:56:50.116656 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:56:50.116672 kubelet[2659]: W1105 23:56:50.116667 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:56:50.116862 kubelet[2659]: E1105 23:56:50.116805 2659 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 23:56:50.117003 kubelet[2659]: E1105 23:56:50.116990 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:56:50.117003 kubelet[2659]: W1105 23:56:50.117002 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:56:50.117058 kubelet[2659]: E1105 23:56:50.117012 2659 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 23:56:50.139567 containerd[1536]: time="2025-11-05T23:56:50.139425725Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-w75xx,Uid:d9e7772e-c0b3-4a81-8e4d-c63cb5700a44,Namespace:calico-system,Attempt:0,}" Nov 5 23:56:50.158622 containerd[1536]: time="2025-11-05T23:56:50.158508560Z" level=info msg="connecting to shim ec0e41def03564f417c09bda37096cac8a45a89e0a6cefe404ce0c9716ee0309" address="unix:///run/containerd/s/de0b555b07bb367cf479cd9b992d77752bee4b3294b16effd2f9e6b715e8bb7d" namespace=k8s.io protocol=ttrpc version=3 Nov 5 23:56:50.188674 systemd[1]: Started cri-containerd-ec0e41def03564f417c09bda37096cac8a45a89e0a6cefe404ce0c9716ee0309.scope - libcontainer container ec0e41def03564f417c09bda37096cac8a45a89e0a6cefe404ce0c9716ee0309. Nov 5 23:56:50.214777 kubelet[2659]: E1105 23:56:50.214747 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:56:50.214777 kubelet[2659]: W1105 23:56:50.214770 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:56:50.214966 kubelet[2659]: E1105 23:56:50.214789 2659 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 23:56:50.215498 kubelet[2659]: E1105 23:56:50.215484 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:56:50.215544 kubelet[2659]: W1105 23:56:50.215501 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:56:50.215544 kubelet[2659]: E1105 23:56:50.215513 2659 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 23:56:50.215709 kubelet[2659]: E1105 23:56:50.215697 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:56:50.215736 kubelet[2659]: W1105 23:56:50.215709 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:56:50.215736 kubelet[2659]: E1105 23:56:50.215719 2659 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 23:56:50.215915 kubelet[2659]: E1105 23:56:50.215896 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:56:50.215963 kubelet[2659]: W1105 23:56:50.215914 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:56:50.215963 kubelet[2659]: E1105 23:56:50.215931 2659 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 23:56:50.216084 kubelet[2659]: E1105 23:56:50.216074 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:56:50.216084 kubelet[2659]: W1105 23:56:50.216084 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:56:50.216151 kubelet[2659]: E1105 23:56:50.216092 2659 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 23:56:50.216252 kubelet[2659]: E1105 23:56:50.216239 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:56:50.216252 kubelet[2659]: W1105 23:56:50.216250 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:56:50.216316 kubelet[2659]: E1105 23:56:50.216258 2659 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 23:56:50.216454 kubelet[2659]: E1105 23:56:50.216440 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:56:50.216454 kubelet[2659]: W1105 23:56:50.216453 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:56:50.216504 kubelet[2659]: E1105 23:56:50.216463 2659 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 23:56:50.216735 kubelet[2659]: E1105 23:56:50.216721 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:56:50.216771 kubelet[2659]: W1105 23:56:50.216735 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:56:50.216771 kubelet[2659]: E1105 23:56:50.216747 2659 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 23:56:50.216932 kubelet[2659]: E1105 23:56:50.216919 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:56:50.216932 kubelet[2659]: W1105 23:56:50.216930 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:56:50.217013 kubelet[2659]: E1105 23:56:50.216940 2659 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 23:56:50.217103 kubelet[2659]: E1105 23:56:50.217091 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:56:50.217103 kubelet[2659]: W1105 23:56:50.217101 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:56:50.217103 kubelet[2659]: E1105 23:56:50.217109 2659 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 23:56:50.217416 kubelet[2659]: E1105 23:56:50.217396 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:56:50.217416 kubelet[2659]: W1105 23:56:50.217412 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:56:50.217480 kubelet[2659]: E1105 23:56:50.217423 2659 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 23:56:50.218639 kubelet[2659]: E1105 23:56:50.218534 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:56:50.218639 kubelet[2659]: W1105 23:56:50.218573 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:56:50.218639 kubelet[2659]: E1105 23:56:50.218588 2659 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 23:56:50.219009 kubelet[2659]: E1105 23:56:50.218991 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:56:50.219009 kubelet[2659]: W1105 23:56:50.219008 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:56:50.219063 kubelet[2659]: E1105 23:56:50.219022 2659 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 23:56:50.219278 kubelet[2659]: E1105 23:56:50.219264 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:56:50.219278 kubelet[2659]: W1105 23:56:50.219278 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:56:50.219353 kubelet[2659]: E1105 23:56:50.219331 2659 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 23:56:50.219819 kubelet[2659]: E1105 23:56:50.219802 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:56:50.219858 kubelet[2659]: W1105 23:56:50.219818 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:56:50.219858 kubelet[2659]: E1105 23:56:50.219838 2659 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 23:56:50.220569 kubelet[2659]: E1105 23:56:50.220555 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:56:50.220569 kubelet[2659]: W1105 23:56:50.220568 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:56:50.220625 kubelet[2659]: E1105 23:56:50.220578 2659 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 23:56:50.220822 kubelet[2659]: E1105 23:56:50.220809 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:56:50.220822 kubelet[2659]: W1105 23:56:50.220821 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:56:50.220905 kubelet[2659]: E1105 23:56:50.220830 2659 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 23:56:50.221261 kubelet[2659]: E1105 23:56:50.221245 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:56:50.221300 kubelet[2659]: W1105 23:56:50.221261 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:56:50.221300 kubelet[2659]: E1105 23:56:50.221275 2659 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 23:56:50.221499 kubelet[2659]: E1105 23:56:50.221487 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:56:50.221499 kubelet[2659]: W1105 23:56:50.221499 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:56:50.221582 kubelet[2659]: E1105 23:56:50.221508 2659 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 23:56:50.221718 kubelet[2659]: E1105 23:56:50.221707 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:56:50.221718 kubelet[2659]: W1105 23:56:50.221717 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:56:50.221783 kubelet[2659]: E1105 23:56:50.221726 2659 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 23:56:50.221977 kubelet[2659]: E1105 23:56:50.221950 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:56:50.221977 kubelet[2659]: W1105 23:56:50.221962 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:56:50.221977 kubelet[2659]: E1105 23:56:50.221972 2659 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 23:56:50.222226 kubelet[2659]: E1105 23:56:50.222205 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:56:50.222226 kubelet[2659]: W1105 23:56:50.222218 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:56:50.222284 kubelet[2659]: E1105 23:56:50.222229 2659 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 23:56:50.222577 kubelet[2659]: E1105 23:56:50.222563 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:56:50.222676 kubelet[2659]: W1105 23:56:50.222662 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:56:50.222745 kubelet[2659]: E1105 23:56:50.222734 2659 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 23:56:50.223197 kubelet[2659]: E1105 23:56:50.223136 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:56:50.223197 kubelet[2659]: W1105 23:56:50.223151 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:56:50.223197 kubelet[2659]: E1105 23:56:50.223164 2659 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 23:56:50.223501 kubelet[2659]: E1105 23:56:50.223484 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:56:50.223571 kubelet[2659]: W1105 23:56:50.223504 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:56:50.223571 kubelet[2659]: E1105 23:56:50.223519 2659 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 23:56:50.229329 kubelet[2659]: E1105 23:56:50.229311 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:56:50.229329 kubelet[2659]: W1105 23:56:50.229326 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:56:50.229411 kubelet[2659]: E1105 23:56:50.229339 2659 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 23:56:50.254872 containerd[1536]: time="2025-11-05T23:56:50.254813366Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-w75xx,Uid:d9e7772e-c0b3-4a81-8e4d-c63cb5700a44,Namespace:calico-system,Attempt:0,} returns sandbox id \"ec0e41def03564f417c09bda37096cac8a45a89e0a6cefe404ce0c9716ee0309\"" Nov 5 23:56:50.956760 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount753580585.mount: Deactivated successfully. 
Nov 5 23:56:51.666129 kubelet[2659]: E1105 23:56:51.666093 2659 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lbl86" podUID="7e0e0ade-490b-4bff-b3bc-5b351134410a" Nov 5 23:56:52.606284 containerd[1536]: time="2025-11-05T23:56:52.606224084Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 23:56:52.606784 containerd[1536]: time="2025-11-05T23:56:52.606737999Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=33090687" Nov 5 23:56:52.607658 containerd[1536]: time="2025-11-05T23:56:52.607618591Z" level=info msg="ImageCreate event name:\"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 23:56:52.609415 containerd[1536]: time="2025-11-05T23:56:52.609376535Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 23:56:52.610265 containerd[1536]: time="2025-11-05T23:56:52.609853210Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"33090541\" in 2.531037753s" Nov 5 23:56:52.610265 containerd[1536]: time="2025-11-05T23:56:52.609885330Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference 
\"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\"" Nov 5 23:56:52.611214 containerd[1536]: time="2025-11-05T23:56:52.611184637Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Nov 5 23:56:52.634966 containerd[1536]: time="2025-11-05T23:56:52.634901973Z" level=info msg="CreateContainer within sandbox \"32a4f8e92ddaba1f60779112b0025815121082bcfbf5607fd9a19834dd9305e4\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Nov 5 23:56:52.646796 containerd[1536]: time="2025-11-05T23:56:52.645921509Z" level=info msg="Container 0a4869c80beada60ba2a8964734ad94dcf4104f82694a5cb0cd39f041b44ae45: CDI devices from CRI Config.CDIDevices: []" Nov 5 23:56:52.653804 containerd[1536]: time="2025-11-05T23:56:52.653753276Z" level=info msg="CreateContainer within sandbox \"32a4f8e92ddaba1f60779112b0025815121082bcfbf5607fd9a19834dd9305e4\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"0a4869c80beada60ba2a8964734ad94dcf4104f82694a5cb0cd39f041b44ae45\"" Nov 5 23:56:52.654421 containerd[1536]: time="2025-11-05T23:56:52.654353390Z" level=info msg="StartContainer for \"0a4869c80beada60ba2a8964734ad94dcf4104f82694a5cb0cd39f041b44ae45\"" Nov 5 23:56:52.655380 containerd[1536]: time="2025-11-05T23:56:52.655355300Z" level=info msg="connecting to shim 0a4869c80beada60ba2a8964734ad94dcf4104f82694a5cb0cd39f041b44ae45" address="unix:///run/containerd/s/ad4670879970c4488477cfc26e9ee6332ac4e3764eb550eb52bf01e6b2b3c32b" protocol=ttrpc version=3 Nov 5 23:56:52.678615 systemd[1]: Started cri-containerd-0a4869c80beada60ba2a8964734ad94dcf4104f82694a5cb0cd39f041b44ae45.scope - libcontainer container 0a4869c80beada60ba2a8964734ad94dcf4104f82694a5cb0cd39f041b44ae45. 
Nov 5 23:56:52.714572 containerd[1536]: time="2025-11-05T23:56:52.714514422Z" level=info msg="StartContainer for \"0a4869c80beada60ba2a8964734ad94dcf4104f82694a5cb0cd39f041b44ae45\" returns successfully" Nov 5 23:56:52.775258 kubelet[2659]: I1105 23:56:52.775199 2659 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-579bfcb644-qtgn2" podStartSLOduration=1.242323834 podStartE2EDuration="3.775183009s" podCreationTimestamp="2025-11-05 23:56:49 +0000 UTC" firstStartedPulling="2025-11-05 23:56:50.078144104 +0000 UTC m=+23.508036264" lastFinishedPulling="2025-11-05 23:56:52.611003279 +0000 UTC m=+26.040895439" observedRunningTime="2025-11-05 23:56:52.774721213 +0000 UTC m=+26.204613373" watchObservedRunningTime="2025-11-05 23:56:52.775183009 +0000 UTC m=+26.205075169" Nov 5 23:56:52.831564 kubelet[2659]: E1105 23:56:52.831529 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:56:52.831564 kubelet[2659]: W1105 23:56:52.831554 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:56:52.831723 kubelet[2659]: E1105 23:56:52.831578 2659 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 23:56:52.831723 kubelet[2659]: E1105 23:56:52.831775 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:56:52.831723 kubelet[2659]: W1105 23:56:52.831785 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:56:52.831723 kubelet[2659]: E1105 23:56:52.831829 2659 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 23:56:52.831723 kubelet[2659]: E1105 23:56:52.831982 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:56:52.831723 kubelet[2659]: W1105 23:56:52.831991 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:56:52.831723 kubelet[2659]: E1105 23:56:52.832002 2659 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 23:56:52.831723 kubelet[2659]: E1105 23:56:52.832140 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:56:52.831723 kubelet[2659]: W1105 23:56:52.832149 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:56:52.831723 kubelet[2659]: E1105 23:56:52.832157 2659 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 23:56:52.832523 kubelet[2659]: E1105 23:56:52.832491 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:56:52.832523 kubelet[2659]: W1105 23:56:52.832513 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:56:52.832523 kubelet[2659]: E1105 23:56:52.832525 2659 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 23:56:52.833544 kubelet[2659]: E1105 23:56:52.833515 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:56:52.833544 kubelet[2659]: W1105 23:56:52.833533 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:56:52.833544 kubelet[2659]: E1105 23:56:52.833546 2659 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 23:56:52.833734 kubelet[2659]: E1105 23:56:52.833716 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:56:52.833734 kubelet[2659]: W1105 23:56:52.833728 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:56:52.833734 kubelet[2659]: E1105 23:56:52.833737 2659 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 23:56:52.833908 kubelet[2659]: E1105 23:56:52.833880 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:56:52.833908 kubelet[2659]: W1105 23:56:52.833893 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:56:52.833908 kubelet[2659]: E1105 23:56:52.833902 2659 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 23:56:52.834075 kubelet[2659]: E1105 23:56:52.834054 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:56:52.834075 kubelet[2659]: W1105 23:56:52.834066 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:56:52.834075 kubelet[2659]: E1105 23:56:52.834075 2659 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 23:56:52.834788 kubelet[2659]: E1105 23:56:52.834229 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:56:52.834788 kubelet[2659]: W1105 23:56:52.834243 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:56:52.834788 kubelet[2659]: E1105 23:56:52.834252 2659 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 23:56:52.834788 kubelet[2659]: E1105 23:56:52.834421 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:56:52.834788 kubelet[2659]: W1105 23:56:52.834442 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:56:52.834788 kubelet[2659]: E1105 23:56:52.834451 2659 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 23:56:52.835310 kubelet[2659]: E1105 23:56:52.835287 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:56:52.835310 kubelet[2659]: W1105 23:56:52.835303 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:56:52.835392 kubelet[2659]: E1105 23:56:52.835316 2659 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 23:56:52.835576 kubelet[2659]: E1105 23:56:52.835552 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:56:52.835576 kubelet[2659]: W1105 23:56:52.835568 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:56:52.835576 kubelet[2659]: E1105 23:56:52.835579 2659 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 23:56:52.836019 kubelet[2659]: E1105 23:56:52.835993 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:56:52.836019 kubelet[2659]: W1105 23:56:52.836008 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:56:52.836019 kubelet[2659]: E1105 23:56:52.836020 2659 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 23:56:52.836226 kubelet[2659]: E1105 23:56:52.836209 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:56:52.836226 kubelet[2659]: W1105 23:56:52.836220 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:56:52.836990 kubelet[2659]: E1105 23:56:52.836230 2659 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 23:56:52.837572 kubelet[2659]: E1105 23:56:52.837549 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:56:52.837572 kubelet[2659]: W1105 23:56:52.837565 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:56:52.837572 kubelet[2659]: E1105 23:56:52.837576 2659 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 23:56:52.837811 kubelet[2659]: E1105 23:56:52.837791 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:56:52.837811 kubelet[2659]: W1105 23:56:52.837804 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:56:52.837886 kubelet[2659]: E1105 23:56:52.837814 2659 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 23:56:52.838022 kubelet[2659]: E1105 23:56:52.838008 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:56:52.838022 kubelet[2659]: W1105 23:56:52.838020 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:56:52.838078 kubelet[2659]: E1105 23:56:52.838031 2659 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 23:56:52.838310 kubelet[2659]: E1105 23:56:52.838287 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:56:52.838310 kubelet[2659]: W1105 23:56:52.838307 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:56:52.838375 kubelet[2659]: E1105 23:56:52.838321 2659 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 23:56:52.838878 kubelet[2659]: E1105 23:56:52.838852 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:56:52.838878 kubelet[2659]: W1105 23:56:52.838872 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:56:52.838970 kubelet[2659]: E1105 23:56:52.838885 2659 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 23:56:52.839170 kubelet[2659]: E1105 23:56:52.839140 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:56:52.839170 kubelet[2659]: W1105 23:56:52.839157 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:56:52.839170 kubelet[2659]: E1105 23:56:52.839169 2659 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 23:56:52.839911 kubelet[2659]: E1105 23:56:52.839892 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:56:52.839911 kubelet[2659]: W1105 23:56:52.839908 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:56:52.839999 kubelet[2659]: E1105 23:56:52.839921 2659 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 23:56:52.840155 kubelet[2659]: E1105 23:56:52.840126 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:56:52.840155 kubelet[2659]: W1105 23:56:52.840141 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:56:52.840155 kubelet[2659]: E1105 23:56:52.840154 2659 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 23:56:52.840695 kubelet[2659]: E1105 23:56:52.840674 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:56:52.840695 kubelet[2659]: W1105 23:56:52.840692 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:56:52.840772 kubelet[2659]: E1105 23:56:52.840706 2659 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 23:56:52.841523 kubelet[2659]: E1105 23:56:52.841491 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:56:52.841523 kubelet[2659]: W1105 23:56:52.841517 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:56:52.841620 kubelet[2659]: E1105 23:56:52.841530 2659 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 23:56:52.841806 kubelet[2659]: E1105 23:56:52.841787 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:56:52.841806 kubelet[2659]: W1105 23:56:52.841801 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:56:52.841872 kubelet[2659]: E1105 23:56:52.841813 2659 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 23:56:52.842245 kubelet[2659]: E1105 23:56:52.842226 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:56:52.842245 kubelet[2659]: W1105 23:56:52.842240 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:56:52.842339 kubelet[2659]: E1105 23:56:52.842252 2659 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 23:56:52.842538 kubelet[2659]: E1105 23:56:52.842500 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:56:52.842538 kubelet[2659]: W1105 23:56:52.842524 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:56:52.842538 kubelet[2659]: E1105 23:56:52.842536 2659 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 23:56:52.842819 kubelet[2659]: E1105 23:56:52.842786 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:56:52.842819 kubelet[2659]: W1105 23:56:52.842796 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:56:52.842918 kubelet[2659]: E1105 23:56:52.842861 2659 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 23:56:52.843608 kubelet[2659]: E1105 23:56:52.843578 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:56:52.843608 kubelet[2659]: W1105 23:56:52.843597 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:56:52.843608 kubelet[2659]: E1105 23:56:52.843610 2659 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 23:56:52.843804 kubelet[2659]: E1105 23:56:52.843774 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:56:52.843804 kubelet[2659]: W1105 23:56:52.843785 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:56:52.843804 kubelet[2659]: E1105 23:56:52.843793 2659 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 23:56:52.844023 kubelet[2659]: E1105 23:56:52.844007 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:56:52.844023 kubelet[2659]: W1105 23:56:52.844024 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:56:52.844076 kubelet[2659]: E1105 23:56:52.844036 2659 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 23:56:52.844243 kubelet[2659]: E1105 23:56:52.844228 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:56:52.844330 kubelet[2659]: W1105 23:56:52.844310 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:56:52.844363 kubelet[2659]: E1105 23:56:52.844331 2659 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 23:56:53.668718 kubelet[2659]: E1105 23:56:53.668677 2659 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lbl86" podUID="7e0e0ade-490b-4bff-b3bc-5b351134410a" Nov 5 23:56:53.712289 containerd[1536]: time="2025-11-05T23:56:53.712248421Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 23:56:53.713191 containerd[1536]: time="2025-11-05T23:56:53.713160453Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4266741" Nov 5 23:56:53.714454 containerd[1536]: time="2025-11-05T23:56:53.714212764Z" level=info msg="ImageCreate event name:\"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 23:56:53.716457 containerd[1536]: time="2025-11-05T23:56:53.716418424Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 23:56:53.716950 containerd[1536]: time="2025-11-05T23:56:53.716921820Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5636392\" in 1.105708183s" Nov 5 23:56:53.717010 containerd[1536]: time="2025-11-05T23:56:53.716951860Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\"" Nov 5 23:56:53.721909 containerd[1536]: time="2025-11-05T23:56:53.721866696Z" level=info msg="CreateContainer within sandbox \"ec0e41def03564f417c09bda37096cac8a45a89e0a6cefe404ce0c9716ee0309\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Nov 5 23:56:53.733097 containerd[1536]: time="2025-11-05T23:56:53.732025486Z" level=info msg="Container 007dd642b4ded0b54df0116fdbf3946b9387b603e4e522f75ccc9708648f79e0: CDI devices from CRI Config.CDIDevices: []" Nov 5 23:56:53.739532 containerd[1536]: time="2025-11-05T23:56:53.739473220Z" level=info msg="CreateContainer within sandbox \"ec0e41def03564f417c09bda37096cac8a45a89e0a6cefe404ce0c9716ee0309\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"007dd642b4ded0b54df0116fdbf3946b9387b603e4e522f75ccc9708648f79e0\"" Nov 5 23:56:53.742096 containerd[1536]: time="2025-11-05T23:56:53.740131615Z" level=info msg="StartContainer for \"007dd642b4ded0b54df0116fdbf3946b9387b603e4e522f75ccc9708648f79e0\"" Nov 5 23:56:53.742096 containerd[1536]: time="2025-11-05T23:56:53.741546682Z" level=info msg="connecting to shim 007dd642b4ded0b54df0116fdbf3946b9387b603e4e522f75ccc9708648f79e0" address="unix:///run/containerd/s/de0b555b07bb367cf479cd9b992d77752bee4b3294b16effd2f9e6b715e8bb7d" protocol=ttrpc version=3 Nov 5 23:56:53.769552 systemd[1]: Started cri-containerd-007dd642b4ded0b54df0116fdbf3946b9387b603e4e522f75ccc9708648f79e0.scope - libcontainer container 007dd642b4ded0b54df0116fdbf3946b9387b603e4e522f75ccc9708648f79e0. 
Nov 5 23:56:53.770609 kubelet[2659]: I1105 23:56:53.769797 2659 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 5 23:56:53.811912 containerd[1536]: time="2025-11-05T23:56:53.811570742Z" level=info msg="StartContainer for \"007dd642b4ded0b54df0116fdbf3946b9387b603e4e522f75ccc9708648f79e0\" returns successfully" Nov 5 23:56:53.817475 systemd[1]: cri-containerd-007dd642b4ded0b54df0116fdbf3946b9387b603e4e522f75ccc9708648f79e0.scope: Deactivated successfully. Nov 5 23:56:53.836971 containerd[1536]: time="2025-11-05T23:56:53.836856278Z" level=info msg="received exit event container_id:\"007dd642b4ded0b54df0116fdbf3946b9387b603e4e522f75ccc9708648f79e0\" id:\"007dd642b4ded0b54df0116fdbf3946b9387b603e4e522f75ccc9708648f79e0\" pid:3371 exited_at:{seconds:1762387013 nanos:830285937}" Nov 5 23:56:53.837066 containerd[1536]: time="2025-11-05T23:56:53.836928958Z" level=info msg="TaskExit event in podsandbox handler container_id:\"007dd642b4ded0b54df0116fdbf3946b9387b603e4e522f75ccc9708648f79e0\" id:\"007dd642b4ded0b54df0116fdbf3946b9387b603e4e522f75ccc9708648f79e0\" pid:3371 exited_at:{seconds:1762387013 nanos:830285937}" Nov 5 23:56:53.862972 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-007dd642b4ded0b54df0116fdbf3946b9387b603e4e522f75ccc9708648f79e0-rootfs.mount: Deactivated successfully. 
Nov 5 23:56:54.779285 containerd[1536]: time="2025-11-05T23:56:54.779244287Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Nov 5 23:56:55.667316 kubelet[2659]: E1105 23:56:55.666449 2659 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lbl86" podUID="7e0e0ade-490b-4bff-b3bc-5b351134410a" Nov 5 23:56:56.459238 containerd[1536]: time="2025-11-05T23:56:56.459194966Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 23:56:56.460194 containerd[1536]: time="2025-11-05T23:56:56.459594403Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=65925816" Nov 5 23:56:56.460452 containerd[1536]: time="2025-11-05T23:56:56.460426277Z" level=info msg="ImageCreate event name:\"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 23:56:56.462233 containerd[1536]: time="2025-11-05T23:56:56.462204384Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 23:56:56.462755 containerd[1536]: time="2025-11-05T23:56:56.462728460Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"67295507\" in 1.683443133s" Nov 5 23:56:56.462792 containerd[1536]: time="2025-11-05T23:56:56.462758300Z" level=info 
msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\"" Nov 5 23:56:56.468875 containerd[1536]: time="2025-11-05T23:56:56.468843975Z" level=info msg="CreateContainer within sandbox \"ec0e41def03564f417c09bda37096cac8a45a89e0a6cefe404ce0c9716ee0309\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Nov 5 23:56:56.477814 containerd[1536]: time="2025-11-05T23:56:56.477425033Z" level=info msg="Container 765e7660e4d3b182b99d9ddcda703b60ae9c4422c184f7619cda3c7bdbffac6d: CDI devices from CRI Config.CDIDevices: []" Nov 5 23:56:56.484419 containerd[1536]: time="2025-11-05T23:56:56.484368622Z" level=info msg="CreateContainer within sandbox \"ec0e41def03564f417c09bda37096cac8a45a89e0a6cefe404ce0c9716ee0309\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"765e7660e4d3b182b99d9ddcda703b60ae9c4422c184f7619cda3c7bdbffac6d\"" Nov 5 23:56:56.484962 containerd[1536]: time="2025-11-05T23:56:56.484880498Z" level=info msg="StartContainer for \"765e7660e4d3b182b99d9ddcda703b60ae9c4422c184f7619cda3c7bdbffac6d\"" Nov 5 23:56:56.486364 containerd[1536]: time="2025-11-05T23:56:56.486335648Z" level=info msg="connecting to shim 765e7660e4d3b182b99d9ddcda703b60ae9c4422c184f7619cda3c7bdbffac6d" address="unix:///run/containerd/s/de0b555b07bb367cf479cd9b992d77752bee4b3294b16effd2f9e6b715e8bb7d" protocol=ttrpc version=3 Nov 5 23:56:56.517587 systemd[1]: Started cri-containerd-765e7660e4d3b182b99d9ddcda703b60ae9c4422c184f7619cda3c7bdbffac6d.scope - libcontainer container 765e7660e4d3b182b99d9ddcda703b60ae9c4422c184f7619cda3c7bdbffac6d. 
Nov 5 23:56:56.551199 containerd[1536]: time="2025-11-05T23:56:56.551135815Z" level=info msg="StartContainer for \"765e7660e4d3b182b99d9ddcda703b60ae9c4422c184f7619cda3c7bdbffac6d\" returns successfully" Nov 5 23:56:57.061815 systemd[1]: cri-containerd-765e7660e4d3b182b99d9ddcda703b60ae9c4422c184f7619cda3c7bdbffac6d.scope: Deactivated successfully. Nov 5 23:56:57.062175 systemd[1]: cri-containerd-765e7660e4d3b182b99d9ddcda703b60ae9c4422c184f7619cda3c7bdbffac6d.scope: Consumed 429ms CPU time, 178.2M memory peak, 3M read from disk, 165.9M written to disk. Nov 5 23:56:57.077446 containerd[1536]: time="2025-11-05T23:56:57.077376172Z" level=info msg="received exit event container_id:\"765e7660e4d3b182b99d9ddcda703b60ae9c4422c184f7619cda3c7bdbffac6d\" id:\"765e7660e4d3b182b99d9ddcda703b60ae9c4422c184f7619cda3c7bdbffac6d\" pid:3434 exited_at:{seconds:1762387017 nanos:77114454}" Nov 5 23:56:57.078233 containerd[1536]: time="2025-11-05T23:56:57.077482451Z" level=info msg="TaskExit event in podsandbox handler container_id:\"765e7660e4d3b182b99d9ddcda703b60ae9c4422c184f7619cda3c7bdbffac6d\" id:\"765e7660e4d3b182b99d9ddcda703b60ae9c4422c184f7619cda3c7bdbffac6d\" pid:3434 exited_at:{seconds:1762387017 nanos:77114454}" Nov 5 23:56:57.098427 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-765e7660e4d3b182b99d9ddcda703b60ae9c4422c184f7619cda3c7bdbffac6d-rootfs.mount: Deactivated successfully. Nov 5 23:56:57.106331 kubelet[2659]: I1105 23:56:57.106281 2659 kubelet_node_status.go:439] "Fast updating node status as it just became ready" Nov 5 23:56:57.248829 systemd[1]: Created slice kubepods-besteffort-poddcc72545_d360_40c8_82d4_47a2a498215e.slice - libcontainer container kubepods-besteffort-poddcc72545_d360_40c8_82d4_47a2a498215e.slice. Nov 5 23:56:57.260199 systemd[1]: Created slice kubepods-burstable-pod8f12ad9a_8286_4755_9ff3_621ebf3e8e4c.slice - libcontainer container kubepods-burstable-pod8f12ad9a_8286_4755_9ff3_621ebf3e8e4c.slice. 
Nov 5 23:56:57.268139 systemd[1]: Created slice kubepods-burstable-pod1ef36e51_b849_4bd7_bfa2_ec554b62d4fd.slice - libcontainer container kubepods-burstable-pod1ef36e51_b849_4bd7_bfa2_ec554b62d4fd.slice. Nov 5 23:56:57.269254 kubelet[2659]: I1105 23:56:57.269198 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dcc72545-d360-40c8-82d4-47a2a498215e-tigera-ca-bundle\") pod \"calico-kube-controllers-5f4d5b4c8b-52fcx\" (UID: \"dcc72545-d360-40c8-82d4-47a2a498215e\") " pod="calico-system/calico-kube-controllers-5f4d5b4c8b-52fcx" Nov 5 23:56:57.269677 kubelet[2659]: I1105 23:56:57.269512 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/af9940bc-b869-4f48-b46d-3bd6b6993532-config\") pod \"goldmane-7c778bb748-zjm2m\" (UID: \"af9940bc-b869-4f48-b46d-3bd6b6993532\") " pod="calico-system/goldmane-7c778bb748-zjm2m" Nov 5 23:56:57.269806 kubelet[2659]: I1105 23:56:57.269788 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/af9940bc-b869-4f48-b46d-3bd6b6993532-goldmane-key-pair\") pod \"goldmane-7c778bb748-zjm2m\" (UID: \"af9940bc-b869-4f48-b46d-3bd6b6993532\") " pod="calico-system/goldmane-7c778bb748-zjm2m" Nov 5 23:56:57.270106 kubelet[2659]: I1105 23:56:57.270030 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zg6m8\" (UniqueName: \"kubernetes.io/projected/dcc72545-d360-40c8-82d4-47a2a498215e-kube-api-access-zg6m8\") pod \"calico-kube-controllers-5f4d5b4c8b-52fcx\" (UID: \"dcc72545-d360-40c8-82d4-47a2a498215e\") " pod="calico-system/calico-kube-controllers-5f4d5b4c8b-52fcx" Nov 5 23:56:57.270246 kubelet[2659]: I1105 23:56:57.270231 2659 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x5l2k\" (UniqueName: \"kubernetes.io/projected/36e52d26-98b7-4979-90a9-0d6b22e8f358-kube-api-access-x5l2k\") pod \"calico-apiserver-76787f47fb-22hgp\" (UID: \"36e52d26-98b7-4979-90a9-0d6b22e8f358\") " pod="calico-apiserver/calico-apiserver-76787f47fb-22hgp" Nov 5 23:56:57.270623 kubelet[2659]: I1105 23:56:57.270533 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/36e52d26-98b7-4979-90a9-0d6b22e8f358-calico-apiserver-certs\") pod \"calico-apiserver-76787f47fb-22hgp\" (UID: \"36e52d26-98b7-4979-90a9-0d6b22e8f358\") " pod="calico-apiserver/calico-apiserver-76787f47fb-22hgp" Nov 5 23:56:57.270863 kubelet[2659]: I1105 23:56:57.270828 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/6609a361-d398-4c22-8390-90d26c743fbb-whisker-backend-key-pair\") pod \"whisker-6959974c57-2rkrp\" (UID: \"6609a361-d398-4c22-8390-90d26c743fbb\") " pod="calico-system/whisker-6959974c57-2rkrp" Nov 5 23:56:57.272459 kubelet[2659]: I1105 23:56:57.271036 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/af9940bc-b869-4f48-b46d-3bd6b6993532-goldmane-ca-bundle\") pod \"goldmane-7c778bb748-zjm2m\" (UID: \"af9940bc-b869-4f48-b46d-3bd6b6993532\") " pod="calico-system/goldmane-7c778bb748-zjm2m" Nov 5 23:56:57.272596 kubelet[2659]: I1105 23:56:57.272579 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h92bm\" (UniqueName: \"kubernetes.io/projected/af9940bc-b869-4f48-b46d-3bd6b6993532-kube-api-access-h92bm\") pod \"goldmane-7c778bb748-zjm2m\" (UID: \"af9940bc-b869-4f48-b46d-3bd6b6993532\") " 
pod="calico-system/goldmane-7c778bb748-zjm2m" Nov 5 23:56:57.272728 kubelet[2659]: I1105 23:56:57.272657 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8f12ad9a-8286-4755-9ff3-621ebf3e8e4c-config-volume\") pod \"coredns-66bc5c9577-fk6hd\" (UID: \"8f12ad9a-8286-4755-9ff3-621ebf3e8e4c\") " pod="kube-system/coredns-66bc5c9577-fk6hd" Nov 5 23:56:57.272728 kubelet[2659]: I1105 23:56:57.272676 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xk8cd\" (UniqueName: \"kubernetes.io/projected/8f12ad9a-8286-4755-9ff3-621ebf3e8e4c-kube-api-access-xk8cd\") pod \"coredns-66bc5c9577-fk6hd\" (UID: \"8f12ad9a-8286-4755-9ff3-621ebf3e8e4c\") " pod="kube-system/coredns-66bc5c9577-fk6hd" Nov 5 23:56:57.272813 kubelet[2659]: I1105 23:56:57.272802 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z6l79\" (UniqueName: \"kubernetes.io/projected/6609a361-d398-4c22-8390-90d26c743fbb-kube-api-access-z6l79\") pod \"whisker-6959974c57-2rkrp\" (UID: \"6609a361-d398-4c22-8390-90d26c743fbb\") " pod="calico-system/whisker-6959974c57-2rkrp" Nov 5 23:56:57.272955 kubelet[2659]: I1105 23:56:57.272880 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bh7hj\" (UniqueName: \"kubernetes.io/projected/1ef36e51-b849-4bd7-bfa2-ec554b62d4fd-kube-api-access-bh7hj\") pod \"coredns-66bc5c9577-qwpwp\" (UID: \"1ef36e51-b849-4bd7-bfa2-ec554b62d4fd\") " pod="kube-system/coredns-66bc5c9577-qwpwp" Nov 5 23:56:57.272955 kubelet[2659]: I1105 23:56:57.272927 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/91cfdc17-cd5c-4271-8452-f8f6eced611b-calico-apiserver-certs\") pod 
\"calico-apiserver-76787f47fb-qqx58\" (UID: \"91cfdc17-cd5c-4271-8452-f8f6eced611b\") " pod="calico-apiserver/calico-apiserver-76787f47fb-qqx58" Nov 5 23:56:57.273015 kubelet[2659]: I1105 23:56:57.272961 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6609a361-d398-4c22-8390-90d26c743fbb-whisker-ca-bundle\") pod \"whisker-6959974c57-2rkrp\" (UID: \"6609a361-d398-4c22-8390-90d26c743fbb\") " pod="calico-system/whisker-6959974c57-2rkrp" Nov 5 23:56:57.273015 kubelet[2659]: I1105 23:56:57.272999 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1ef36e51-b849-4bd7-bfa2-ec554b62d4fd-config-volume\") pod \"coredns-66bc5c9577-qwpwp\" (UID: \"1ef36e51-b849-4bd7-bfa2-ec554b62d4fd\") " pod="kube-system/coredns-66bc5c9577-qwpwp" Nov 5 23:56:57.273063 kubelet[2659]: I1105 23:56:57.273034 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lmdgx\" (UniqueName: \"kubernetes.io/projected/91cfdc17-cd5c-4271-8452-f8f6eced611b-kube-api-access-lmdgx\") pod \"calico-apiserver-76787f47fb-qqx58\" (UID: \"91cfdc17-cd5c-4271-8452-f8f6eced611b\") " pod="calico-apiserver/calico-apiserver-76787f47fb-qqx58" Nov 5 23:56:57.278882 systemd[1]: Created slice kubepods-besteffort-pod91cfdc17_cd5c_4271_8452_f8f6eced611b.slice - libcontainer container kubepods-besteffort-pod91cfdc17_cd5c_4271_8452_f8f6eced611b.slice. Nov 5 23:56:57.290948 systemd[1]: Created slice kubepods-besteffort-pod36e52d26_98b7_4979_90a9_0d6b22e8f358.slice - libcontainer container kubepods-besteffort-pod36e52d26_98b7_4979_90a9_0d6b22e8f358.slice. Nov 5 23:56:57.305669 systemd[1]: Created slice kubepods-besteffort-pod6609a361_d398_4c22_8390_90d26c743fbb.slice - libcontainer container kubepods-besteffort-pod6609a361_d398_4c22_8390_90d26c743fbb.slice. 
Nov 5 23:56:57.312212 systemd[1]: Created slice kubepods-besteffort-podaf9940bc_b869_4f48_b46d_3bd6b6993532.slice - libcontainer container kubepods-besteffort-podaf9940bc_b869_4f48_b46d_3bd6b6993532.slice. Nov 5 23:56:57.560557 containerd[1536]: time="2025-11-05T23:56:57.560517108Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5f4d5b4c8b-52fcx,Uid:dcc72545-d360-40c8-82d4-47a2a498215e,Namespace:calico-system,Attempt:0,}" Nov 5 23:56:57.569590 containerd[1536]: time="2025-11-05T23:56:57.569160529Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-fk6hd,Uid:8f12ad9a-8286-4755-9ff3-621ebf3e8e4c,Namespace:kube-system,Attempt:0,}" Nov 5 23:56:57.576281 containerd[1536]: time="2025-11-05T23:56:57.576246641Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-qwpwp,Uid:1ef36e51-b849-4bd7-bfa2-ec554b62d4fd,Namespace:kube-system,Attempt:0,}" Nov 5 23:56:57.587017 containerd[1536]: time="2025-11-05T23:56:57.586973527Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-76787f47fb-qqx58,Uid:91cfdc17-cd5c-4271-8452-f8f6eced611b,Namespace:calico-apiserver,Attempt:0,}" Nov 5 23:56:57.600029 containerd[1536]: time="2025-11-05T23:56:57.599986199Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-76787f47fb-22hgp,Uid:36e52d26-98b7-4979-90a9-0d6b22e8f358,Namespace:calico-apiserver,Attempt:0,}" Nov 5 23:56:57.610792 containerd[1536]: time="2025-11-05T23:56:57.610649246Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6959974c57-2rkrp,Uid:6609a361-d398-4c22-8390-90d26c743fbb,Namespace:calico-system,Attempt:0,}" Nov 5 23:56:57.623200 containerd[1536]: time="2025-11-05T23:56:57.623155920Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-zjm2m,Uid:af9940bc-b869-4f48-b46d-3bd6b6993532,Namespace:calico-system,Attempt:0,}" Nov 5 23:56:57.673380 systemd[1]: Created slice 
kubepods-besteffort-pod7e0e0ade_490b_4bff_b3bc_5b351134410a.slice - libcontainer container kubepods-besteffort-pod7e0e0ade_490b_4bff_b3bc_5b351134410a.slice. Nov 5 23:56:57.678624 containerd[1536]: time="2025-11-05T23:56:57.678583581Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-lbl86,Uid:7e0e0ade-490b-4bff-b3bc-5b351134410a,Namespace:calico-system,Attempt:0,}" Nov 5 23:56:57.700127 containerd[1536]: time="2025-11-05T23:56:57.693291041Z" level=error msg="Failed to destroy network for sandbox \"48658a6084e19f0ff321edd05f8d07087d596e069c321fee4ed94ef0b530f79e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 23:56:57.701220 containerd[1536]: time="2025-11-05T23:56:57.699892795Z" level=error msg="Failed to destroy network for sandbox \"7033b1ab1e25202b888fd79ec627fbc4546c809452790a189159a993a34b2a45\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 23:56:57.702879 containerd[1536]: time="2025-11-05T23:56:57.702825735Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-76787f47fb-qqx58,Uid:91cfdc17-cd5c-4271-8452-f8f6eced611b,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"48658a6084e19f0ff321edd05f8d07087d596e069c321fee4ed94ef0b530f79e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 23:56:57.703221 kubelet[2659]: E1105 23:56:57.703177 2659 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"48658a6084e19f0ff321edd05f8d07087d596e069c321fee4ed94ef0b530f79e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 23:56:57.703295 kubelet[2659]: E1105 23:56:57.703243 2659 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"48658a6084e19f0ff321edd05f8d07087d596e069c321fee4ed94ef0b530f79e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-76787f47fb-qqx58" Nov 5 23:56:57.703295 kubelet[2659]: E1105 23:56:57.703268 2659 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"48658a6084e19f0ff321edd05f8d07087d596e069c321fee4ed94ef0b530f79e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-76787f47fb-qqx58" Nov 5 23:56:57.703341 kubelet[2659]: E1105 23:56:57.703317 2659 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-76787f47fb-qqx58_calico-apiserver(91cfdc17-cd5c-4271-8452-f8f6eced611b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-76787f47fb-qqx58_calico-apiserver(91cfdc17-cd5c-4271-8452-f8f6eced611b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"48658a6084e19f0ff321edd05f8d07087d596e069c321fee4ed94ef0b530f79e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-apiserver/calico-apiserver-76787f47fb-qqx58" podUID="91cfdc17-cd5c-4271-8452-f8f6eced611b" Nov 5 23:56:57.703818 containerd[1536]: time="2025-11-05T23:56:57.703776209Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-76787f47fb-22hgp,Uid:36e52d26-98b7-4979-90a9-0d6b22e8f358,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"7033b1ab1e25202b888fd79ec627fbc4546c809452790a189159a993a34b2a45\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 23:56:57.704057 kubelet[2659]: E1105 23:56:57.704023 2659 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7033b1ab1e25202b888fd79ec627fbc4546c809452790a189159a993a34b2a45\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 23:56:57.704112 kubelet[2659]: E1105 23:56:57.704063 2659 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7033b1ab1e25202b888fd79ec627fbc4546c809452790a189159a993a34b2a45\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-76787f47fb-22hgp" Nov 5 23:56:57.704112 kubelet[2659]: E1105 23:56:57.704079 2659 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7033b1ab1e25202b888fd79ec627fbc4546c809452790a189159a993a34b2a45\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-76787f47fb-22hgp" Nov 5 23:56:57.704166 kubelet[2659]: E1105 23:56:57.704114 2659 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-76787f47fb-22hgp_calico-apiserver(36e52d26-98b7-4979-90a9-0d6b22e8f358)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-76787f47fb-22hgp_calico-apiserver(36e52d26-98b7-4979-90a9-0d6b22e8f358)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7033b1ab1e25202b888fd79ec627fbc4546c809452790a189159a993a34b2a45\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-76787f47fb-22hgp" podUID="36e52d26-98b7-4979-90a9-0d6b22e8f358" Nov 5 23:56:57.712357 containerd[1536]: time="2025-11-05T23:56:57.712306590Z" level=error msg="Failed to destroy network for sandbox \"a8fd5ab9f71317b61ba9aa00bd37581477b24fb41e689e0e1b6cee4cb6a98966\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 23:56:57.713325 containerd[1536]: time="2025-11-05T23:56:57.713274424Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6959974c57-2rkrp,Uid:6609a361-d398-4c22-8390-90d26c743fbb,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a8fd5ab9f71317b61ba9aa00bd37581477b24fb41e689e0e1b6cee4cb6a98966\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 23:56:57.713586 kubelet[2659]: E1105 
23:56:57.713542 2659 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a8fd5ab9f71317b61ba9aa00bd37581477b24fb41e689e0e1b6cee4cb6a98966\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 23:56:57.713646 kubelet[2659]: E1105 23:56:57.713600 2659 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a8fd5ab9f71317b61ba9aa00bd37581477b24fb41e689e0e1b6cee4cb6a98966\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6959974c57-2rkrp" Nov 5 23:56:57.713646 kubelet[2659]: E1105 23:56:57.713619 2659 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a8fd5ab9f71317b61ba9aa00bd37581477b24fb41e689e0e1b6cee4cb6a98966\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6959974c57-2rkrp" Nov 5 23:56:57.713693 kubelet[2659]: E1105 23:56:57.713668 2659 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-6959974c57-2rkrp_calico-system(6609a361-d398-4c22-8390-90d26c743fbb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-6959974c57-2rkrp_calico-system(6609a361-d398-4c22-8390-90d26c743fbb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a8fd5ab9f71317b61ba9aa00bd37581477b24fb41e689e0e1b6cee4cb6a98966\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6959974c57-2rkrp" podUID="6609a361-d398-4c22-8390-90d26c743fbb" Nov 5 23:56:57.715580 containerd[1536]: time="2025-11-05T23:56:57.715538888Z" level=error msg="Failed to destroy network for sandbox \"5c8ba4b4cf9720900fde7d8712693391e12411a6dd46faff4612c9695801879b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 23:56:57.716352 containerd[1536]: time="2025-11-05T23:56:57.716289243Z" level=error msg="Failed to destroy network for sandbox \"af1c39af7659c0de655f9393e329d4e6ed00771b84a652fd71988949cc9bf01e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 23:56:57.716546 containerd[1536]: time="2025-11-05T23:56:57.716517762Z" level=error msg="Failed to destroy network for sandbox \"8ffe430614d3467b2e2a5920101f1d9ff926c18739d6edd1fc8af1b16d153690\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 23:56:57.716826 containerd[1536]: time="2025-11-05T23:56:57.716782920Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-zjm2m,Uid:af9940bc-b869-4f48-b46d-3bd6b6993532,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"5c8ba4b4cf9720900fde7d8712693391e12411a6dd46faff4612c9695801879b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 23:56:57.717002 kubelet[2659]: E1105 23:56:57.716968 2659 
log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5c8ba4b4cf9720900fde7d8712693391e12411a6dd46faff4612c9695801879b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 23:56:57.717056 kubelet[2659]: E1105 23:56:57.717011 2659 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5c8ba4b4cf9720900fde7d8712693391e12411a6dd46faff4612c9695801879b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-zjm2m" Nov 5 23:56:57.717056 kubelet[2659]: E1105 23:56:57.717031 2659 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5c8ba4b4cf9720900fde7d8712693391e12411a6dd46faff4612c9695801879b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-zjm2m" Nov 5 23:56:57.717097 kubelet[2659]: E1105 23:56:57.717077 2659 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7c778bb748-zjm2m_calico-system(af9940bc-b869-4f48-b46d-3bd6b6993532)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7c778bb748-zjm2m_calico-system(af9940bc-b869-4f48-b46d-3bd6b6993532)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5c8ba4b4cf9720900fde7d8712693391e12411a6dd46faff4612c9695801879b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7c778bb748-zjm2m" podUID="af9940bc-b869-4f48-b46d-3bd6b6993532" Nov 5 23:56:57.718483 containerd[1536]: time="2025-11-05T23:56:57.717954632Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-fk6hd,Uid:8f12ad9a-8286-4755-9ff3-621ebf3e8e4c,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"af1c39af7659c0de655f9393e329d4e6ed00771b84a652fd71988949cc9bf01e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 23:56:57.718611 kubelet[2659]: E1105 23:56:57.718360 2659 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"af1c39af7659c0de655f9393e329d4e6ed00771b84a652fd71988949cc9bf01e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 23:56:57.718611 kubelet[2659]: E1105 23:56:57.718397 2659 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"af1c39af7659c0de655f9393e329d4e6ed00771b84a652fd71988949cc9bf01e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-fk6hd" Nov 5 23:56:57.718611 kubelet[2659]: E1105 23:56:57.718411 2659 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"af1c39af7659c0de655f9393e329d4e6ed00771b84a652fd71988949cc9bf01e\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-fk6hd" Nov 5 23:56:57.718683 kubelet[2659]: E1105 23:56:57.718525 2659 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-fk6hd_kube-system(8f12ad9a-8286-4755-9ff3-621ebf3e8e4c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-fk6hd_kube-system(8f12ad9a-8286-4755-9ff3-621ebf3e8e4c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"af1c39af7659c0de655f9393e329d4e6ed00771b84a652fd71988949cc9bf01e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-fk6hd" podUID="8f12ad9a-8286-4755-9ff3-621ebf3e8e4c" Nov 5 23:56:57.719100 containerd[1536]: time="2025-11-05T23:56:57.719058584Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5f4d5b4c8b-52fcx,Uid:dcc72545-d360-40c8-82d4-47a2a498215e,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8ffe430614d3467b2e2a5920101f1d9ff926c18739d6edd1fc8af1b16d153690\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 23:56:57.719273 kubelet[2659]: E1105 23:56:57.719241 2659 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8ffe430614d3467b2e2a5920101f1d9ff926c18739d6edd1fc8af1b16d153690\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Nov 5 23:56:57.719315 kubelet[2659]: E1105 23:56:57.719281 2659 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8ffe430614d3467b2e2a5920101f1d9ff926c18739d6edd1fc8af1b16d153690\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5f4d5b4c8b-52fcx" Nov 5 23:56:57.719315 kubelet[2659]: E1105 23:56:57.719296 2659 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8ffe430614d3467b2e2a5920101f1d9ff926c18739d6edd1fc8af1b16d153690\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5f4d5b4c8b-52fcx" Nov 5 23:56:57.719366 kubelet[2659]: E1105 23:56:57.719332 2659 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5f4d5b4c8b-52fcx_calico-system(dcc72545-d360-40c8-82d4-47a2a498215e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5f4d5b4c8b-52fcx_calico-system(dcc72545-d360-40c8-82d4-47a2a498215e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8ffe430614d3467b2e2a5920101f1d9ff926c18739d6edd1fc8af1b16d153690\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5f4d5b4c8b-52fcx" podUID="dcc72545-d360-40c8-82d4-47a2a498215e" Nov 5 23:56:57.721323 containerd[1536]: time="2025-11-05T23:56:57.721188610Z" level=error msg="Failed to destroy 
network for sandbox \"03e160bb9209f76a1374e96d47a5436508e309aa51798afffda48d99fc2480c7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 23:56:57.723801 containerd[1536]: time="2025-11-05T23:56:57.723728432Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-qwpwp,Uid:1ef36e51-b849-4bd7-bfa2-ec554b62d4fd,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"03e160bb9209f76a1374e96d47a5436508e309aa51798afffda48d99fc2480c7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 23:56:57.724219 kubelet[2659]: E1105 23:56:57.724136 2659 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"03e160bb9209f76a1374e96d47a5436508e309aa51798afffda48d99fc2480c7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 23:56:57.724289 kubelet[2659]: E1105 23:56:57.724266 2659 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"03e160bb9209f76a1374e96d47a5436508e309aa51798afffda48d99fc2480c7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-qwpwp" Nov 5 23:56:57.724328 kubelet[2659]: E1105 23:56:57.724286 2659 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"03e160bb9209f76a1374e96d47a5436508e309aa51798afffda48d99fc2480c7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-qwpwp" Nov 5 23:56:57.724350 kubelet[2659]: E1105 23:56:57.724323 2659 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-qwpwp_kube-system(1ef36e51-b849-4bd7-bfa2-ec554b62d4fd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-qwpwp_kube-system(1ef36e51-b849-4bd7-bfa2-ec554b62d4fd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"03e160bb9209f76a1374e96d47a5436508e309aa51798afffda48d99fc2480c7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-qwpwp" podUID="1ef36e51-b849-4bd7-bfa2-ec554b62d4fd" Nov 5 23:56:57.757112 containerd[1536]: time="2025-11-05T23:56:57.757045645Z" level=error msg="Failed to destroy network for sandbox \"80bf762d99281e2809489a1bb6fc75cc5e6dbd3d43c2ac9a450510590e80b439\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 23:56:57.758113 containerd[1536]: time="2025-11-05T23:56:57.758066918Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-lbl86,Uid:7e0e0ade-490b-4bff-b3bc-5b351134410a,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"80bf762d99281e2809489a1bb6fc75cc5e6dbd3d43c2ac9a450510590e80b439\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" Nov 5 23:56:57.758308 kubelet[2659]: E1105 23:56:57.758271 2659 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"80bf762d99281e2809489a1bb6fc75cc5e6dbd3d43c2ac9a450510590e80b439\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 23:56:57.758368 kubelet[2659]: E1105 23:56:57.758329 2659 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"80bf762d99281e2809489a1bb6fc75cc5e6dbd3d43c2ac9a450510590e80b439\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-lbl86" Nov 5 23:56:57.758368 kubelet[2659]: E1105 23:56:57.758353 2659 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"80bf762d99281e2809489a1bb6fc75cc5e6dbd3d43c2ac9a450510590e80b439\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-lbl86" Nov 5 23:56:57.758435 kubelet[2659]: E1105 23:56:57.758401 2659 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-lbl86_calico-system(7e0e0ade-490b-4bff-b3bc-5b351134410a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-lbl86_calico-system(7e0e0ade-490b-4bff-b3bc-5b351134410a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"80bf762d99281e2809489a1bb6fc75cc5e6dbd3d43c2ac9a450510590e80b439\\\": 
plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-lbl86" podUID="7e0e0ade-490b-4bff-b3bc-5b351134410a" Nov 5 23:56:57.791511 containerd[1536]: time="2025-11-05T23:56:57.791459529Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Nov 5 23:56:58.476399 systemd[1]: run-netns-cni\x2d9d3fe918\x2d29bc\x2ddf8a\x2d16f2\x2d153925987459.mount: Deactivated successfully. Nov 5 23:56:58.476516 systemd[1]: run-netns-cni\x2dd8930363\x2da556\x2d80e3\x2d9ae5\x2d0f8271515014.mount: Deactivated successfully. Nov 5 23:56:58.476563 systemd[1]: run-netns-cni\x2d06c36481\x2d6bb3\x2d8efe\x2d5d43\x2ddb0cb8253d72.mount: Deactivated successfully. Nov 5 23:56:58.476606 systemd[1]: run-netns-cni\x2d19af45b6\x2d970c\x2d7c1a\x2dfacc\x2dc955bb335fee.mount: Deactivated successfully. Nov 5 23:57:00.787156 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3424555110.mount: Deactivated successfully. 
Nov 5 23:57:00.963013 containerd[1536]: time="2025-11-05T23:57:00.962963857Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 23:57:00.963910 containerd[1536]: time="2025-11-05T23:57:00.963756133Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=150934562" Nov 5 23:57:00.964702 containerd[1536]: time="2025-11-05T23:57:00.964664168Z" level=info msg="ImageCreate event name:\"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 23:57:00.969524 containerd[1536]: time="2025-11-05T23:57:00.969162062Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 23:57:00.970153 containerd[1536]: time="2025-11-05T23:57:00.969763019Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"150934424\" in 3.17826661s" Nov 5 23:57:00.970153 containerd[1536]: time="2025-11-05T23:57:00.969795339Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\"" Nov 5 23:57:00.989351 containerd[1536]: time="2025-11-05T23:57:00.989299909Z" level=info msg="CreateContainer within sandbox \"ec0e41def03564f417c09bda37096cac8a45a89e0a6cefe404ce0c9716ee0309\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Nov 5 23:57:00.996338 containerd[1536]: time="2025-11-05T23:57:00.996294469Z" level=info msg="Container 
b0a7a692cee7b671aeb693d792b306006f3e7b53d3e6ed3260f97491a44a8153: CDI devices from CRI Config.CDIDevices: []" Nov 5 23:57:01.005675 containerd[1536]: time="2025-11-05T23:57:01.005625379Z" level=info msg="CreateContainer within sandbox \"ec0e41def03564f417c09bda37096cac8a45a89e0a6cefe404ce0c9716ee0309\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"b0a7a692cee7b671aeb693d792b306006f3e7b53d3e6ed3260f97491a44a8153\"" Nov 5 23:57:01.014671 containerd[1536]: time="2025-11-05T23:57:01.014601971Z" level=info msg="StartContainer for \"b0a7a692cee7b671aeb693d792b306006f3e7b53d3e6ed3260f97491a44a8153\"" Nov 5 23:57:01.016048 containerd[1536]: time="2025-11-05T23:57:01.016021484Z" level=info msg="connecting to shim b0a7a692cee7b671aeb693d792b306006f3e7b53d3e6ed3260f97491a44a8153" address="unix:///run/containerd/s/de0b555b07bb367cf479cd9b992d77752bee4b3294b16effd2f9e6b715e8bb7d" protocol=ttrpc version=3 Nov 5 23:57:01.038609 systemd[1]: Started cri-containerd-b0a7a692cee7b671aeb693d792b306006f3e7b53d3e6ed3260f97491a44a8153.scope - libcontainer container b0a7a692cee7b671aeb693d792b306006f3e7b53d3e6ed3260f97491a44a8153. Nov 5 23:57:01.073683 containerd[1536]: time="2025-11-05T23:57:01.073644899Z" level=info msg="StartContainer for \"b0a7a692cee7b671aeb693d792b306006f3e7b53d3e6ed3260f97491a44a8153\" returns successfully" Nov 5 23:57:01.203638 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Nov 5 23:57:01.203762 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Nov 5 23:57:01.403264 kubelet[2659]: I1105 23:57:01.403208 2659 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z6l79\" (UniqueName: \"kubernetes.io/projected/6609a361-d398-4c22-8390-90d26c743fbb-kube-api-access-z6l79\") pod \"6609a361-d398-4c22-8390-90d26c743fbb\" (UID: \"6609a361-d398-4c22-8390-90d26c743fbb\") " Nov 5 23:57:01.404103 kubelet[2659]: I1105 23:57:01.403355 2659 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/6609a361-d398-4c22-8390-90d26c743fbb-whisker-backend-key-pair\") pod \"6609a361-d398-4c22-8390-90d26c743fbb\" (UID: \"6609a361-d398-4c22-8390-90d26c743fbb\") " Nov 5 23:57:01.404103 kubelet[2659]: I1105 23:57:01.403650 2659 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6609a361-d398-4c22-8390-90d26c743fbb-whisker-ca-bundle\") pod \"6609a361-d398-4c22-8390-90d26c743fbb\" (UID: \"6609a361-d398-4c22-8390-90d26c743fbb\") " Nov 5 23:57:01.415445 kubelet[2659]: I1105 23:57:01.414723 2659 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6609a361-d398-4c22-8390-90d26c743fbb-kube-api-access-z6l79" (OuterVolumeSpecName: "kube-api-access-z6l79") pod "6609a361-d398-4c22-8390-90d26c743fbb" (UID: "6609a361-d398-4c22-8390-90d26c743fbb"). InnerVolumeSpecName "kube-api-access-z6l79". PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 5 23:57:01.416061 kubelet[2659]: I1105 23:57:01.416030 2659 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6609a361-d398-4c22-8390-90d26c743fbb-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "6609a361-d398-4c22-8390-90d26c743fbb" (UID: "6609a361-d398-4c22-8390-90d26c743fbb"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Nov 5 23:57:01.418965 kubelet[2659]: I1105 23:57:01.418935 2659 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6609a361-d398-4c22-8390-90d26c743fbb-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "6609a361-d398-4c22-8390-90d26c743fbb" (UID: "6609a361-d398-4c22-8390-90d26c743fbb"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Nov 5 23:57:01.504293 kubelet[2659]: I1105 23:57:01.504260 2659 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6609a361-d398-4c22-8390-90d26c743fbb-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Nov 5 23:57:01.504454 kubelet[2659]: I1105 23:57:01.504442 2659 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-z6l79\" (UniqueName: \"kubernetes.io/projected/6609a361-d398-4c22-8390-90d26c743fbb-kube-api-access-z6l79\") on node \"localhost\" DevicePath \"\"" Nov 5 23:57:01.504514 kubelet[2659]: I1105 23:57:01.504506 2659 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/6609a361-d398-4c22-8390-90d26c743fbb-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Nov 5 23:57:01.787998 systemd[1]: var-lib-kubelet-pods-6609a361\x2dd398\x2d4c22\x2d8390\x2d90d26c743fbb-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dz6l79.mount: Deactivated successfully. Nov 5 23:57:01.788094 systemd[1]: var-lib-kubelet-pods-6609a361\x2dd398\x2d4c22\x2d8390\x2d90d26c743fbb-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Nov 5 23:57:01.805033 systemd[1]: Removed slice kubepods-besteffort-pod6609a361_d398_4c22_8390_90d26c743fbb.slice - libcontainer container kubepods-besteffort-pod6609a361_d398_4c22_8390_90d26c743fbb.slice. 
Nov 5 23:57:01.831828 kubelet[2659]: I1105 23:57:01.831712 2659 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-w75xx" podStartSLOduration=2.111581503 podStartE2EDuration="12.825841166s" podCreationTimestamp="2025-11-05 23:56:49 +0000 UTC" firstStartedPulling="2025-11-05 23:56:50.256109632 +0000 UTC m=+23.686001792" lastFinishedPulling="2025-11-05 23:57:00.970369335 +0000 UTC m=+34.400261455" observedRunningTime="2025-11-05 23:57:01.824823932 +0000 UTC m=+35.254716092" watchObservedRunningTime="2025-11-05 23:57:01.825841166 +0000 UTC m=+35.255733286" Nov 5 23:57:01.874277 systemd[1]: Created slice kubepods-besteffort-podd80b8472_e237_414c_aab6_2c809f90c36e.slice - libcontainer container kubepods-besteffort-podd80b8472_e237_414c_aab6_2c809f90c36e.slice. Nov 5 23:57:01.907695 kubelet[2659]: I1105 23:57:01.907646 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/d80b8472-e237-414c-aab6-2c809f90c36e-whisker-backend-key-pair\") pod \"whisker-5b85fb9cd9-k57j5\" (UID: \"d80b8472-e237-414c-aab6-2c809f90c36e\") " pod="calico-system/whisker-5b85fb9cd9-k57j5" Nov 5 23:57:01.907695 kubelet[2659]: I1105 23:57:01.907691 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zg4bj\" (UniqueName: \"kubernetes.io/projected/d80b8472-e237-414c-aab6-2c809f90c36e-kube-api-access-zg4bj\") pod \"whisker-5b85fb9cd9-k57j5\" (UID: \"d80b8472-e237-414c-aab6-2c809f90c36e\") " pod="calico-system/whisker-5b85fb9cd9-k57j5" Nov 5 23:57:01.907832 kubelet[2659]: I1105 23:57:01.907771 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d80b8472-e237-414c-aab6-2c809f90c36e-whisker-ca-bundle\") pod \"whisker-5b85fb9cd9-k57j5\" (UID: 
\"d80b8472-e237-414c-aab6-2c809f90c36e\") " pod="calico-system/whisker-5b85fb9cd9-k57j5" Nov 5 23:57:02.179454 containerd[1536]: time="2025-11-05T23:57:02.179392078Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5b85fb9cd9-k57j5,Uid:d80b8472-e237-414c-aab6-2c809f90c36e,Namespace:calico-system,Attempt:0,}" Nov 5 23:57:02.329423 systemd-networkd[1433]: cali0292f8080ee: Link UP Nov 5 23:57:02.329918 systemd-networkd[1433]: cali0292f8080ee: Gained carrier Nov 5 23:57:02.347305 containerd[1536]: 2025-11-05 23:57:02.202 [INFO][3816] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 5 23:57:02.347305 containerd[1536]: 2025-11-05 23:57:02.235 [INFO][3816] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--5b85fb9cd9--k57j5-eth0 whisker-5b85fb9cd9- calico-system d80b8472-e237-414c-aab6-2c809f90c36e 863 0 2025-11-05 23:57:01 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:5b85fb9cd9 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-5b85fb9cd9-k57j5 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali0292f8080ee [] [] }} ContainerID="4f4fec946c884472d689b823b4c2490cb72e1bd924b67b0247ab4d00b509ff40" Namespace="calico-system" Pod="whisker-5b85fb9cd9-k57j5" WorkloadEndpoint="localhost-k8s-whisker--5b85fb9cd9--k57j5-" Nov 5 23:57:02.347305 containerd[1536]: 2025-11-05 23:57:02.235 [INFO][3816] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4f4fec946c884472d689b823b4c2490cb72e1bd924b67b0247ab4d00b509ff40" Namespace="calico-system" Pod="whisker-5b85fb9cd9-k57j5" WorkloadEndpoint="localhost-k8s-whisker--5b85fb9cd9--k57j5-eth0" Nov 5 23:57:02.347305 containerd[1536]: 2025-11-05 23:57:02.289 [INFO][3829] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="4f4fec946c884472d689b823b4c2490cb72e1bd924b67b0247ab4d00b509ff40" HandleID="k8s-pod-network.4f4fec946c884472d689b823b4c2490cb72e1bd924b67b0247ab4d00b509ff40" Workload="localhost-k8s-whisker--5b85fb9cd9--k57j5-eth0" Nov 5 23:57:02.347546 containerd[1536]: 2025-11-05 23:57:02.289 [INFO][3829] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="4f4fec946c884472d689b823b4c2490cb72e1bd924b67b0247ab4d00b509ff40" HandleID="k8s-pod-network.4f4fec946c884472d689b823b4c2490cb72e1bd924b67b0247ab4d00b509ff40" Workload="localhost-k8s-whisker--5b85fb9cd9--k57j5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40001b1b10), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-5b85fb9cd9-k57j5", "timestamp":"2025-11-05 23:57:02.289642412 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 5 23:57:02.347546 containerd[1536]: 2025-11-05 23:57:02.289 [INFO][3829] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 5 23:57:02.347546 containerd[1536]: 2025-11-05 23:57:02.289 [INFO][3829] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 5 23:57:02.347546 containerd[1536]: 2025-11-05 23:57:02.290 [INFO][3829] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 5 23:57:02.347546 containerd[1536]: 2025-11-05 23:57:02.300 [INFO][3829] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4f4fec946c884472d689b823b4c2490cb72e1bd924b67b0247ab4d00b509ff40" host="localhost" Nov 5 23:57:02.347546 containerd[1536]: 2025-11-05 23:57:02.304 [INFO][3829] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 5 23:57:02.347546 containerd[1536]: 2025-11-05 23:57:02.308 [INFO][3829] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 5 23:57:02.347546 containerd[1536]: 2025-11-05 23:57:02.310 [INFO][3829] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 5 23:57:02.347546 containerd[1536]: 2025-11-05 23:57:02.312 [INFO][3829] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 5 23:57:02.347546 containerd[1536]: 2025-11-05 23:57:02.312 [INFO][3829] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.4f4fec946c884472d689b823b4c2490cb72e1bd924b67b0247ab4d00b509ff40" host="localhost" Nov 5 23:57:02.347747 containerd[1536]: 2025-11-05 23:57:02.313 [INFO][3829] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.4f4fec946c884472d689b823b4c2490cb72e1bd924b67b0247ab4d00b509ff40 Nov 5 23:57:02.347747 containerd[1536]: 2025-11-05 23:57:02.316 [INFO][3829] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.4f4fec946c884472d689b823b4c2490cb72e1bd924b67b0247ab4d00b509ff40" host="localhost" Nov 5 23:57:02.347747 containerd[1536]: 2025-11-05 23:57:02.320 [INFO][3829] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 
handle="k8s-pod-network.4f4fec946c884472d689b823b4c2490cb72e1bd924b67b0247ab4d00b509ff40" host="localhost" Nov 5 23:57:02.347747 containerd[1536]: 2025-11-05 23:57:02.320 [INFO][3829] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.4f4fec946c884472d689b823b4c2490cb72e1bd924b67b0247ab4d00b509ff40" host="localhost" Nov 5 23:57:02.347747 containerd[1536]: 2025-11-05 23:57:02.320 [INFO][3829] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 5 23:57:02.347747 containerd[1536]: 2025-11-05 23:57:02.320 [INFO][3829] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="4f4fec946c884472d689b823b4c2490cb72e1bd924b67b0247ab4d00b509ff40" HandleID="k8s-pod-network.4f4fec946c884472d689b823b4c2490cb72e1bd924b67b0247ab4d00b509ff40" Workload="localhost-k8s-whisker--5b85fb9cd9--k57j5-eth0" Nov 5 23:57:02.347851 containerd[1536]: 2025-11-05 23:57:02.323 [INFO][3816] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4f4fec946c884472d689b823b4c2490cb72e1bd924b67b0247ab4d00b509ff40" Namespace="calico-system" Pod="whisker-5b85fb9cd9-k57j5" WorkloadEndpoint="localhost-k8s-whisker--5b85fb9cd9--k57j5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--5b85fb9cd9--k57j5-eth0", GenerateName:"whisker-5b85fb9cd9-", Namespace:"calico-system", SelfLink:"", UID:"d80b8472-e237-414c-aab6-2c809f90c36e", ResourceVersion:"863", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 23, 57, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5b85fb9cd9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-5b85fb9cd9-k57j5", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali0292f8080ee", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 23:57:02.347851 containerd[1536]: 2025-11-05 23:57:02.323 [INFO][3816] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="4f4fec946c884472d689b823b4c2490cb72e1bd924b67b0247ab4d00b509ff40" Namespace="calico-system" Pod="whisker-5b85fb9cd9-k57j5" WorkloadEndpoint="localhost-k8s-whisker--5b85fb9cd9--k57j5-eth0" Nov 5 23:57:02.347915 containerd[1536]: 2025-11-05 23:57:02.323 [INFO][3816] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0292f8080ee ContainerID="4f4fec946c884472d689b823b4c2490cb72e1bd924b67b0247ab4d00b509ff40" Namespace="calico-system" Pod="whisker-5b85fb9cd9-k57j5" WorkloadEndpoint="localhost-k8s-whisker--5b85fb9cd9--k57j5-eth0" Nov 5 23:57:02.347915 containerd[1536]: 2025-11-05 23:57:02.330 [INFO][3816] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4f4fec946c884472d689b823b4c2490cb72e1bd924b67b0247ab4d00b509ff40" Namespace="calico-system" Pod="whisker-5b85fb9cd9-k57j5" WorkloadEndpoint="localhost-k8s-whisker--5b85fb9cd9--k57j5-eth0" Nov 5 23:57:02.347953 containerd[1536]: 2025-11-05 23:57:02.330 [INFO][3816] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4f4fec946c884472d689b823b4c2490cb72e1bd924b67b0247ab4d00b509ff40" Namespace="calico-system" Pod="whisker-5b85fb9cd9-k57j5" 
WorkloadEndpoint="localhost-k8s-whisker--5b85fb9cd9--k57j5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--5b85fb9cd9--k57j5-eth0", GenerateName:"whisker-5b85fb9cd9-", Namespace:"calico-system", SelfLink:"", UID:"d80b8472-e237-414c-aab6-2c809f90c36e", ResourceVersion:"863", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 23, 57, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5b85fb9cd9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4f4fec946c884472d689b823b4c2490cb72e1bd924b67b0247ab4d00b509ff40", Pod:"whisker-5b85fb9cd9-k57j5", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali0292f8080ee", MAC:"a6:31:71:d4:98:ae", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 23:57:02.347996 containerd[1536]: 2025-11-05 23:57:02.340 [INFO][3816] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4f4fec946c884472d689b823b4c2490cb72e1bd924b67b0247ab4d00b509ff40" Namespace="calico-system" Pod="whisker-5b85fb9cd9-k57j5" WorkloadEndpoint="localhost-k8s-whisker--5b85fb9cd9--k57j5-eth0" Nov 5 23:57:02.385996 containerd[1536]: time="2025-11-05T23:57:02.385945175Z" level=info msg="connecting to shim 
4f4fec946c884472d689b823b4c2490cb72e1bd924b67b0247ab4d00b509ff40" address="unix:///run/containerd/s/245e4b2b72fc635d9da1146f888a215f228155cd55f2c1bccd173c6ef0e03d75" namespace=k8s.io protocol=ttrpc version=3 Nov 5 23:57:02.407648 systemd[1]: Started cri-containerd-4f4fec946c884472d689b823b4c2490cb72e1bd924b67b0247ab4d00b509ff40.scope - libcontainer container 4f4fec946c884472d689b823b4c2490cb72e1bd924b67b0247ab4d00b509ff40. Nov 5 23:57:02.419154 systemd-resolved[1350]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 5 23:57:02.437863 containerd[1536]: time="2025-11-05T23:57:02.437758198Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5b85fb9cd9-k57j5,Uid:d80b8472-e237-414c-aab6-2c809f90c36e,Namespace:calico-system,Attempt:0,} returns sandbox id \"4f4fec946c884472d689b823b4c2490cb72e1bd924b67b0247ab4d00b509ff40\"" Nov 5 23:57:02.439903 containerd[1536]: time="2025-11-05T23:57:02.439863468Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 5 23:57:02.672866 kubelet[2659]: I1105 23:57:02.672822 2659 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6609a361-d398-4c22-8390-90d26c743fbb" path="/var/lib/kubelet/pods/6609a361-d398-4c22-8390-90d26c743fbb/volumes" Nov 5 23:57:02.742331 containerd[1536]: time="2025-11-05T23:57:02.742202411Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 23:57:02.743406 containerd[1536]: time="2025-11-05T23:57:02.743360565Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 5 23:57:02.743543 containerd[1536]: time="2025-11-05T23:57:02.743471844Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active 
requests=0, bytes read=73" Nov 5 23:57:02.753658 kubelet[2659]: E1105 23:57:02.753609 2659 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 5 23:57:02.754026 kubelet[2659]: E1105 23:57:02.753992 2659 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 5 23:57:02.757687 kubelet[2659]: E1105 23:57:02.757634 2659 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-5b85fb9cd9-k57j5_calico-system(d80b8472-e237-414c-aab6-2c809f90c36e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 5 23:57:02.758914 containerd[1536]: time="2025-11-05T23:57:02.758594130Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 5 23:57:02.936817 containerd[1536]: time="2025-11-05T23:57:02.936770727Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b0a7a692cee7b671aeb693d792b306006f3e7b53d3e6ed3260f97491a44a8153\" id:\"486a9cf32877994505ad37cb5051c4a309471970e7a05b3429eab44fdedccba6\" pid:4009 exit_status:1 exited_at:{seconds:1762387022 nanos:936494729}" Nov 5 23:57:02.961645 containerd[1536]: time="2025-11-05T23:57:02.961543085Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 
23:57:02.962392 containerd[1536]: time="2025-11-05T23:57:02.962359881Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 5 23:57:02.962480 containerd[1536]: time="2025-11-05T23:57:02.962459880Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 5 23:57:02.962665 kubelet[2659]: E1105 23:57:02.962622 2659 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 5 23:57:02.962719 kubelet[2659]: E1105 23:57:02.962670 2659 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 5 23:57:02.962973 kubelet[2659]: E1105 23:57:02.962740 2659 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-5b85fb9cd9-k57j5_calico-system(d80b8472-e237-414c-aab6-2c809f90c36e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" 
logger="UnhandledError" Nov 5 23:57:02.962973 kubelet[2659]: E1105 23:57:02.962782 2659 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5b85fb9cd9-k57j5" podUID="d80b8472-e237-414c-aab6-2c809f90c36e" Nov 5 23:57:03.807094 kubelet[2659]: E1105 23:57:03.807040 2659 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5b85fb9cd9-k57j5" podUID="d80b8472-e237-414c-aab6-2c809f90c36e" Nov 5 23:57:03.880527 containerd[1536]: 
time="2025-11-05T23:57:03.880477607Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b0a7a692cee7b671aeb693d792b306006f3e7b53d3e6ed3260f97491a44a8153\" id:\"9d9f60cad8e547668cc896938f779bbb25ff89ab3c8fe491488d533eb9fdf630\" pid:4060 exit_status:1 exited_at:{seconds:1762387023 nanos:880189288}" Nov 5 23:57:04.205651 systemd-networkd[1433]: cali0292f8080ee: Gained IPv6LL Nov 5 23:57:06.574520 systemd[1]: Started sshd@7-10.0.0.117:22-10.0.0.1:49738.service - OpenSSH per-connection server daemon (10.0.0.1:49738). Nov 5 23:57:06.642772 sshd[4127]: Accepted publickey for core from 10.0.0.1 port 49738 ssh2: RSA SHA256:y8QDtx1I2NVYRtkqadojlmwp5Ggjvm91KVwbHRQlRRI Nov 5 23:57:06.644084 sshd-session[4127]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 23:57:06.648905 systemd-logind[1514]: New session 8 of user core. Nov 5 23:57:06.657624 systemd[1]: Started session-8.scope - Session 8 of User core. Nov 5 23:57:06.813419 sshd[4131]: Connection closed by 10.0.0.1 port 49738 Nov 5 23:57:06.814052 sshd-session[4127]: pam_unix(sshd:session): session closed for user core Nov 5 23:57:06.818572 systemd[1]: sshd@7-10.0.0.117:22-10.0.0.1:49738.service: Deactivated successfully. Nov 5 23:57:06.820177 systemd[1]: session-8.scope: Deactivated successfully. Nov 5 23:57:06.821405 systemd-logind[1514]: Session 8 logged out. Waiting for processes to exit. Nov 5 23:57:06.823025 systemd-logind[1514]: Removed session 8. 
Nov 5 23:57:08.669290 containerd[1536]: time="2025-11-05T23:57:08.669236717Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-fk6hd,Uid:8f12ad9a-8286-4755-9ff3-621ebf3e8e4c,Namespace:kube-system,Attempt:0,}" Nov 5 23:57:08.672015 containerd[1536]: time="2025-11-05T23:57:08.671782349Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5f4d5b4c8b-52fcx,Uid:dcc72545-d360-40c8-82d4-47a2a498215e,Namespace:calico-system,Attempt:0,}" Nov 5 23:57:08.672870 containerd[1536]: time="2025-11-05T23:57:08.672822345Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-lbl86,Uid:7e0e0ade-490b-4bff-b3bc-5b351134410a,Namespace:calico-system,Attempt:0,}" Nov 5 23:57:08.674964 containerd[1536]: time="2025-11-05T23:57:08.674657499Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-76787f47fb-22hgp,Uid:36e52d26-98b7-4979-90a9-0d6b22e8f358,Namespace:calico-apiserver,Attempt:0,}" Nov 5 23:57:08.787918 kubelet[2659]: I1105 23:57:08.787875 2659 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 5 23:57:08.859771 systemd-networkd[1433]: cali9888e42a21e: Link UP Nov 5 23:57:08.860204 systemd-networkd[1433]: cali9888e42a21e: Gained carrier Nov 5 23:57:08.875020 containerd[1536]: 2025-11-05 23:57:08.696 [INFO][4195] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 5 23:57:08.875020 containerd[1536]: 2025-11-05 23:57:08.714 [INFO][4195] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--66bc5c9577--fk6hd-eth0 coredns-66bc5c9577- kube-system 8f12ad9a-8286-4755-9ff3-621ebf3e8e4c 799 0 2025-11-05 23:56:32 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-66bc5c9577-fk6hd eth0 coredns [] [] 
[kns.kube-system ksa.kube-system.coredns] cali9888e42a21e [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="5e10ced43910865f8245f32212e396dcdc86b761b24dc6faaac7e149189276c6" Namespace="kube-system" Pod="coredns-66bc5c9577-fk6hd" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--fk6hd-" Nov 5 23:57:08.875020 containerd[1536]: 2025-11-05 23:57:08.714 [INFO][4195] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5e10ced43910865f8245f32212e396dcdc86b761b24dc6faaac7e149189276c6" Namespace="kube-system" Pod="coredns-66bc5c9577-fk6hd" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--fk6hd-eth0" Nov 5 23:57:08.875020 containerd[1536]: 2025-11-05 23:57:08.771 [INFO][4252] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5e10ced43910865f8245f32212e396dcdc86b761b24dc6faaac7e149189276c6" HandleID="k8s-pod-network.5e10ced43910865f8245f32212e396dcdc86b761b24dc6faaac7e149189276c6" Workload="localhost-k8s-coredns--66bc5c9577--fk6hd-eth0" Nov 5 23:57:08.875268 containerd[1536]: 2025-11-05 23:57:08.771 [INFO][4252] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="5e10ced43910865f8245f32212e396dcdc86b761b24dc6faaac7e149189276c6" HandleID="k8s-pod-network.5e10ced43910865f8245f32212e396dcdc86b761b24dc6faaac7e149189276c6" Workload="localhost-k8s-coredns--66bc5c9577--fk6hd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000137500), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-66bc5c9577-fk6hd", "timestamp":"2025-11-05 23:57:08.771401814 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 5 23:57:08.875268 containerd[1536]: 2025-11-05 23:57:08.771 [INFO][4252] ipam/ipam_plugin.go 377: 
About to acquire host-wide IPAM lock. Nov 5 23:57:08.875268 containerd[1536]: 2025-11-05 23:57:08.772 [INFO][4252] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 5 23:57:08.875268 containerd[1536]: 2025-11-05 23:57:08.772 [INFO][4252] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 5 23:57:08.875268 containerd[1536]: 2025-11-05 23:57:08.787 [INFO][4252] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5e10ced43910865f8245f32212e396dcdc86b761b24dc6faaac7e149189276c6" host="localhost" Nov 5 23:57:08.875268 containerd[1536]: 2025-11-05 23:57:08.802 [INFO][4252] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 5 23:57:08.875268 containerd[1536]: 2025-11-05 23:57:08.813 [INFO][4252] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 5 23:57:08.875268 containerd[1536]: 2025-11-05 23:57:08.817 [INFO][4252] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 5 23:57:08.875268 containerd[1536]: 2025-11-05 23:57:08.826 [INFO][4252] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 5 23:57:08.875268 containerd[1536]: 2025-11-05 23:57:08.826 [INFO][4252] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.5e10ced43910865f8245f32212e396dcdc86b761b24dc6faaac7e149189276c6" host="localhost" Nov 5 23:57:08.875523 containerd[1536]: 2025-11-05 23:57:08.830 [INFO][4252] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.5e10ced43910865f8245f32212e396dcdc86b761b24dc6faaac7e149189276c6 Nov 5 23:57:08.875523 containerd[1536]: 2025-11-05 23:57:08.840 [INFO][4252] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.5e10ced43910865f8245f32212e396dcdc86b761b24dc6faaac7e149189276c6" host="localhost" Nov 5 23:57:08.875523 containerd[1536]: 2025-11-05 
23:57:08.848 [INFO][4252] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.5e10ced43910865f8245f32212e396dcdc86b761b24dc6faaac7e149189276c6" host="localhost" Nov 5 23:57:08.875523 containerd[1536]: 2025-11-05 23:57:08.848 [INFO][4252] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.5e10ced43910865f8245f32212e396dcdc86b761b24dc6faaac7e149189276c6" host="localhost" Nov 5 23:57:08.875523 containerd[1536]: 2025-11-05 23:57:08.848 [INFO][4252] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 5 23:57:08.875523 containerd[1536]: 2025-11-05 23:57:08.848 [INFO][4252] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="5e10ced43910865f8245f32212e396dcdc86b761b24dc6faaac7e149189276c6" HandleID="k8s-pod-network.5e10ced43910865f8245f32212e396dcdc86b761b24dc6faaac7e149189276c6" Workload="localhost-k8s-coredns--66bc5c9577--fk6hd-eth0" Nov 5 23:57:08.875632 containerd[1536]: 2025-11-05 23:57:08.855 [INFO][4195] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5e10ced43910865f8245f32212e396dcdc86b761b24dc6faaac7e149189276c6" Namespace="kube-system" Pod="coredns-66bc5c9577-fk6hd" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--fk6hd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--fk6hd-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"8f12ad9a-8286-4755-9ff3-621ebf3e8e4c", ResourceVersion:"799", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 23, 56, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-66bc5c9577-fk6hd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9888e42a21e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 23:57:08.875632 containerd[1536]: 2025-11-05 23:57:08.856 [INFO][4195] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="5e10ced43910865f8245f32212e396dcdc86b761b24dc6faaac7e149189276c6" Namespace="kube-system" Pod="coredns-66bc5c9577-fk6hd" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--fk6hd-eth0" Nov 5 23:57:08.875632 containerd[1536]: 2025-11-05 23:57:08.856 [INFO][4195] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9888e42a21e ContainerID="5e10ced43910865f8245f32212e396dcdc86b761b24dc6faaac7e149189276c6" 
Namespace="kube-system" Pod="coredns-66bc5c9577-fk6hd" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--fk6hd-eth0" Nov 5 23:57:08.875632 containerd[1536]: 2025-11-05 23:57:08.860 [INFO][4195] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5e10ced43910865f8245f32212e396dcdc86b761b24dc6faaac7e149189276c6" Namespace="kube-system" Pod="coredns-66bc5c9577-fk6hd" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--fk6hd-eth0" Nov 5 23:57:08.875632 containerd[1536]: 2025-11-05 23:57:08.860 [INFO][4195] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5e10ced43910865f8245f32212e396dcdc86b761b24dc6faaac7e149189276c6" Namespace="kube-system" Pod="coredns-66bc5c9577-fk6hd" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--fk6hd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--fk6hd-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"8f12ad9a-8286-4755-9ff3-621ebf3e8e4c", ResourceVersion:"799", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 23, 56, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5e10ced43910865f8245f32212e396dcdc86b761b24dc6faaac7e149189276c6", Pod:"coredns-66bc5c9577-fk6hd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9888e42a21e", MAC:"f6:40:d2:5b:e2:c5", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 23:57:08.875632 containerd[1536]: 2025-11-05 23:57:08.873 [INFO][4195] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5e10ced43910865f8245f32212e396dcdc86b761b24dc6faaac7e149189276c6" Namespace="kube-system" Pod="coredns-66bc5c9577-fk6hd" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--fk6hd-eth0" Nov 5 23:57:08.895072 containerd[1536]: time="2025-11-05T23:57:08.894604640Z" level=info msg="connecting to shim 5e10ced43910865f8245f32212e396dcdc86b761b24dc6faaac7e149189276c6" address="unix:///run/containerd/s/6b8ce3e0be103c36dcc11c9c0a7dba437c0ea0247dbf601e166eade960293a63" namespace=k8s.io protocol=ttrpc version=3 Nov 5 23:57:08.920141 systemd[1]: Started cri-containerd-5e10ced43910865f8245f32212e396dcdc86b761b24dc6faaac7e149189276c6.scope - libcontainer container 5e10ced43910865f8245f32212e396dcdc86b761b24dc6faaac7e149189276c6. 
Nov 5 23:57:08.933773 systemd-networkd[1433]: calic503a421c0e: Link UP Nov 5 23:57:08.933911 systemd-networkd[1433]: calic503a421c0e: Gained carrier Nov 5 23:57:08.937803 systemd-resolved[1350]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 5 23:57:08.953740 containerd[1536]: 2025-11-05 23:57:08.717 [INFO][4225] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 5 23:57:08.953740 containerd[1536]: 2025-11-05 23:57:08.749 [INFO][4225] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--76787f47fb--22hgp-eth0 calico-apiserver-76787f47fb- calico-apiserver 36e52d26-98b7-4979-90a9-0d6b22e8f358 806 0 2025-11-05 23:56:41 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:76787f47fb projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-76787f47fb-22hgp eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calic503a421c0e [] [] }} ContainerID="f60e663a12c9ea7ed0f86b2b59c878f89d0a09b142f62890c1b403bccedc91c8" Namespace="calico-apiserver" Pod="calico-apiserver-76787f47fb-22hgp" WorkloadEndpoint="localhost-k8s-calico--apiserver--76787f47fb--22hgp-" Nov 5 23:57:08.953740 containerd[1536]: 2025-11-05 23:57:08.749 [INFO][4225] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f60e663a12c9ea7ed0f86b2b59c878f89d0a09b142f62890c1b403bccedc91c8" Namespace="calico-apiserver" Pod="calico-apiserver-76787f47fb-22hgp" WorkloadEndpoint="localhost-k8s-calico--apiserver--76787f47fb--22hgp-eth0" Nov 5 23:57:08.953740 containerd[1536]: 2025-11-05 23:57:08.810 [INFO][4262] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="f60e663a12c9ea7ed0f86b2b59c878f89d0a09b142f62890c1b403bccedc91c8" HandleID="k8s-pod-network.f60e663a12c9ea7ed0f86b2b59c878f89d0a09b142f62890c1b403bccedc91c8" Workload="localhost-k8s-calico--apiserver--76787f47fb--22hgp-eth0" Nov 5 23:57:08.953740 containerd[1536]: 2025-11-05 23:57:08.811 [INFO][4262] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="f60e663a12c9ea7ed0f86b2b59c878f89d0a09b142f62890c1b403bccedc91c8" HandleID="k8s-pod-network.f60e663a12c9ea7ed0f86b2b59c878f89d0a09b142f62890c1b403bccedc91c8" Workload="localhost-k8s-calico--apiserver--76787f47fb--22hgp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400034ba70), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-76787f47fb-22hgp", "timestamp":"2025-11-05 23:57:08.810237003 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 5 23:57:08.953740 containerd[1536]: 2025-11-05 23:57:08.811 [INFO][4262] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 5 23:57:08.953740 containerd[1536]: 2025-11-05 23:57:08.849 [INFO][4262] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 5 23:57:08.953740 containerd[1536]: 2025-11-05 23:57:08.849 [INFO][4262] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 5 23:57:08.953740 containerd[1536]: 2025-11-05 23:57:08.889 [INFO][4262] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f60e663a12c9ea7ed0f86b2b59c878f89d0a09b142f62890c1b403bccedc91c8" host="localhost" Nov 5 23:57:08.953740 containerd[1536]: 2025-11-05 23:57:08.900 [INFO][4262] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 5 23:57:08.953740 containerd[1536]: 2025-11-05 23:57:08.911 [INFO][4262] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 5 23:57:08.953740 containerd[1536]: 2025-11-05 23:57:08.913 [INFO][4262] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 5 23:57:08.953740 containerd[1536]: 2025-11-05 23:57:08.916 [INFO][4262] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 5 23:57:08.953740 containerd[1536]: 2025-11-05 23:57:08.916 [INFO][4262] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.f60e663a12c9ea7ed0f86b2b59c878f89d0a09b142f62890c1b403bccedc91c8" host="localhost" Nov 5 23:57:08.953740 containerd[1536]: 2025-11-05 23:57:08.918 [INFO][4262] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.f60e663a12c9ea7ed0f86b2b59c878f89d0a09b142f62890c1b403bccedc91c8 Nov 5 23:57:08.953740 containerd[1536]: 2025-11-05 23:57:08.922 [INFO][4262] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.f60e663a12c9ea7ed0f86b2b59c878f89d0a09b142f62890c1b403bccedc91c8" host="localhost" Nov 5 23:57:08.953740 containerd[1536]: 2025-11-05 23:57:08.927 [INFO][4262] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 
handle="k8s-pod-network.f60e663a12c9ea7ed0f86b2b59c878f89d0a09b142f62890c1b403bccedc91c8" host="localhost" Nov 5 23:57:08.953740 containerd[1536]: 2025-11-05 23:57:08.927 [INFO][4262] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.f60e663a12c9ea7ed0f86b2b59c878f89d0a09b142f62890c1b403bccedc91c8" host="localhost" Nov 5 23:57:08.953740 containerd[1536]: 2025-11-05 23:57:08.927 [INFO][4262] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 5 23:57:08.953740 containerd[1536]: 2025-11-05 23:57:08.927 [INFO][4262] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="f60e663a12c9ea7ed0f86b2b59c878f89d0a09b142f62890c1b403bccedc91c8" HandleID="k8s-pod-network.f60e663a12c9ea7ed0f86b2b59c878f89d0a09b142f62890c1b403bccedc91c8" Workload="localhost-k8s-calico--apiserver--76787f47fb--22hgp-eth0" Nov 5 23:57:08.954241 containerd[1536]: 2025-11-05 23:57:08.932 [INFO][4225] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f60e663a12c9ea7ed0f86b2b59c878f89d0a09b142f62890c1b403bccedc91c8" Namespace="calico-apiserver" Pod="calico-apiserver-76787f47fb-22hgp" WorkloadEndpoint="localhost-k8s-calico--apiserver--76787f47fb--22hgp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--76787f47fb--22hgp-eth0", GenerateName:"calico-apiserver-76787f47fb-", Namespace:"calico-apiserver", SelfLink:"", UID:"36e52d26-98b7-4979-90a9-0d6b22e8f358", ResourceVersion:"806", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 23, 56, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"76787f47fb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-76787f47fb-22hgp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic503a421c0e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 23:57:08.954241 containerd[1536]: 2025-11-05 23:57:08.932 [INFO][4225] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="f60e663a12c9ea7ed0f86b2b59c878f89d0a09b142f62890c1b403bccedc91c8" Namespace="calico-apiserver" Pod="calico-apiserver-76787f47fb-22hgp" WorkloadEndpoint="localhost-k8s-calico--apiserver--76787f47fb--22hgp-eth0" Nov 5 23:57:08.954241 containerd[1536]: 2025-11-05 23:57:08.932 [INFO][4225] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic503a421c0e ContainerID="f60e663a12c9ea7ed0f86b2b59c878f89d0a09b142f62890c1b403bccedc91c8" Namespace="calico-apiserver" Pod="calico-apiserver-76787f47fb-22hgp" WorkloadEndpoint="localhost-k8s-calico--apiserver--76787f47fb--22hgp-eth0" Nov 5 23:57:08.954241 containerd[1536]: 2025-11-05 23:57:08.934 [INFO][4225] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f60e663a12c9ea7ed0f86b2b59c878f89d0a09b142f62890c1b403bccedc91c8" Namespace="calico-apiserver" Pod="calico-apiserver-76787f47fb-22hgp" WorkloadEndpoint="localhost-k8s-calico--apiserver--76787f47fb--22hgp-eth0" Nov 5 23:57:08.954241 containerd[1536]: 2025-11-05 23:57:08.934 [INFO][4225] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID 
to endpoint ContainerID="f60e663a12c9ea7ed0f86b2b59c878f89d0a09b142f62890c1b403bccedc91c8" Namespace="calico-apiserver" Pod="calico-apiserver-76787f47fb-22hgp" WorkloadEndpoint="localhost-k8s-calico--apiserver--76787f47fb--22hgp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--76787f47fb--22hgp-eth0", GenerateName:"calico-apiserver-76787f47fb-", Namespace:"calico-apiserver", SelfLink:"", UID:"36e52d26-98b7-4979-90a9-0d6b22e8f358", ResourceVersion:"806", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 23, 56, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"76787f47fb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f60e663a12c9ea7ed0f86b2b59c878f89d0a09b142f62890c1b403bccedc91c8", Pod:"calico-apiserver-76787f47fb-22hgp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic503a421c0e", MAC:"36:a0:34:9d:51:89", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 23:57:08.954241 containerd[1536]: 2025-11-05 23:57:08.950 [INFO][4225] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="f60e663a12c9ea7ed0f86b2b59c878f89d0a09b142f62890c1b403bccedc91c8" Namespace="calico-apiserver" Pod="calico-apiserver-76787f47fb-22hgp" WorkloadEndpoint="localhost-k8s-calico--apiserver--76787f47fb--22hgp-eth0" Nov 5 23:57:08.971417 containerd[1536]: time="2025-11-05T23:57:08.971367182Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-fk6hd,Uid:8f12ad9a-8286-4755-9ff3-621ebf3e8e4c,Namespace:kube-system,Attempt:0,} returns sandbox id \"5e10ced43910865f8245f32212e396dcdc86b761b24dc6faaac7e149189276c6\"" Nov 5 23:57:08.981743 containerd[1536]: time="2025-11-05T23:57:08.981494388Z" level=info msg="CreateContainer within sandbox \"5e10ced43910865f8245f32212e396dcdc86b761b24dc6faaac7e149189276c6\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 5 23:57:08.997879 containerd[1536]: time="2025-11-05T23:57:08.997833693Z" level=info msg="Container 9411dc398e349b79cab5ea54559b62e64ba3571cefa1ed254e272c36441a62f2: CDI devices from CRI Config.CDIDevices: []" Nov 5 23:57:09.005239 containerd[1536]: time="2025-11-05T23:57:09.005182229Z" level=info msg="connecting to shim f60e663a12c9ea7ed0f86b2b59c878f89d0a09b142f62890c1b403bccedc91c8" address="unix:///run/containerd/s/1f5988b0932b4be1e3ac63400e9753432e005cd26b4a2a188363cb33e307817a" namespace=k8s.io protocol=ttrpc version=3 Nov 5 23:57:09.007088 containerd[1536]: time="2025-11-05T23:57:09.007013823Z" level=info msg="CreateContainer within sandbox \"5e10ced43910865f8245f32212e396dcdc86b761b24dc6faaac7e149189276c6\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"9411dc398e349b79cab5ea54559b62e64ba3571cefa1ed254e272c36441a62f2\"" Nov 5 23:57:09.007885 containerd[1536]: time="2025-11-05T23:57:09.007854780Z" level=info msg="StartContainer for \"9411dc398e349b79cab5ea54559b62e64ba3571cefa1ed254e272c36441a62f2\"" Nov 5 23:57:09.013252 containerd[1536]: time="2025-11-05T23:57:09.013215044Z" level=info msg="connecting to shim 
9411dc398e349b79cab5ea54559b62e64ba3571cefa1ed254e272c36441a62f2" address="unix:///run/containerd/s/6b8ce3e0be103c36dcc11c9c0a7dba437c0ea0247dbf601e166eade960293a63" protocol=ttrpc version=3 Nov 5 23:57:09.045653 systemd-networkd[1433]: cali535f0f23a35: Link UP Nov 5 23:57:09.047535 systemd-networkd[1433]: cali535f0f23a35: Gained carrier Nov 5 23:57:09.064551 systemd[1]: Started cri-containerd-9411dc398e349b79cab5ea54559b62e64ba3571cefa1ed254e272c36441a62f2.scope - libcontainer container 9411dc398e349b79cab5ea54559b62e64ba3571cefa1ed254e272c36441a62f2. Nov 5 23:57:09.075781 systemd[1]: Started cri-containerd-f60e663a12c9ea7ed0f86b2b59c878f89d0a09b142f62890c1b403bccedc91c8.scope - libcontainer container f60e663a12c9ea7ed0f86b2b59c878f89d0a09b142f62890c1b403bccedc91c8. Nov 5 23:57:09.093678 containerd[1536]: 2025-11-05 23:57:08.738 [INFO][4208] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 5 23:57:09.093678 containerd[1536]: 2025-11-05 23:57:08.771 [INFO][4208] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--5f4d5b4c8b--52fcx-eth0 calico-kube-controllers-5f4d5b4c8b- calico-system dcc72545-d360-40c8-82d4-47a2a498215e 796 0 2025-11-05 23:56:50 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:5f4d5b4c8b projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-5f4d5b4c8b-52fcx eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali535f0f23a35 [] [] }} ContainerID="e0436ed9a6f1e4105e57d6f62f27e2678dbc6a11db94f611589dd89515a21460" Namespace="calico-system" Pod="calico-kube-controllers-5f4d5b4c8b-52fcx" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5f4d5b4c8b--52fcx-" Nov 5 23:57:09.093678 
containerd[1536]: 2025-11-05 23:57:08.771 [INFO][4208] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e0436ed9a6f1e4105e57d6f62f27e2678dbc6a11db94f611589dd89515a21460" Namespace="calico-system" Pod="calico-kube-controllers-5f4d5b4c8b-52fcx" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5f4d5b4c8b--52fcx-eth0" Nov 5 23:57:09.093678 containerd[1536]: 2025-11-05 23:57:08.851 [INFO][4271] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e0436ed9a6f1e4105e57d6f62f27e2678dbc6a11db94f611589dd89515a21460" HandleID="k8s-pod-network.e0436ed9a6f1e4105e57d6f62f27e2678dbc6a11db94f611589dd89515a21460" Workload="localhost-k8s-calico--kube--controllers--5f4d5b4c8b--52fcx-eth0" Nov 5 23:57:09.093678 containerd[1536]: 2025-11-05 23:57:08.852 [INFO][4271] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="e0436ed9a6f1e4105e57d6f62f27e2678dbc6a11db94f611589dd89515a21460" HandleID="k8s-pod-network.e0436ed9a6f1e4105e57d6f62f27e2678dbc6a11db94f611589dd89515a21460" Workload="localhost-k8s-calico--kube--controllers--5f4d5b4c8b--52fcx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000314760), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-5f4d5b4c8b-52fcx", "timestamp":"2025-11-05 23:57:08.851286265 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 5 23:57:09.093678 containerd[1536]: 2025-11-05 23:57:08.854 [INFO][4271] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 5 23:57:09.093678 containerd[1536]: 2025-11-05 23:57:08.927 [INFO][4271] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 5 23:57:09.093678 containerd[1536]: 2025-11-05 23:57:08.930 [INFO][4271] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 5 23:57:09.093678 containerd[1536]: 2025-11-05 23:57:08.995 [INFO][4271] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e0436ed9a6f1e4105e57d6f62f27e2678dbc6a11db94f611589dd89515a21460" host="localhost" Nov 5 23:57:09.093678 containerd[1536]: 2025-11-05 23:57:09.005 [INFO][4271] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 5 23:57:09.093678 containerd[1536]: 2025-11-05 23:57:09.017 [INFO][4271] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 5 23:57:09.093678 containerd[1536]: 2025-11-05 23:57:09.019 [INFO][4271] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 5 23:57:09.093678 containerd[1536]: 2025-11-05 23:57:09.022 [INFO][4271] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 5 23:57:09.093678 containerd[1536]: 2025-11-05 23:57:09.022 [INFO][4271] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.e0436ed9a6f1e4105e57d6f62f27e2678dbc6a11db94f611589dd89515a21460" host="localhost" Nov 5 23:57:09.093678 containerd[1536]: 2025-11-05 23:57:09.026 [INFO][4271] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.e0436ed9a6f1e4105e57d6f62f27e2678dbc6a11db94f611589dd89515a21460 Nov 5 23:57:09.093678 containerd[1536]: 2025-11-05 23:57:09.030 [INFO][4271] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.e0436ed9a6f1e4105e57d6f62f27e2678dbc6a11db94f611589dd89515a21460" host="localhost" Nov 5 23:57:09.093678 containerd[1536]: 2025-11-05 23:57:09.037 [INFO][4271] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 
handle="k8s-pod-network.e0436ed9a6f1e4105e57d6f62f27e2678dbc6a11db94f611589dd89515a21460" host="localhost" Nov 5 23:57:09.093678 containerd[1536]: 2025-11-05 23:57:09.038 [INFO][4271] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.e0436ed9a6f1e4105e57d6f62f27e2678dbc6a11db94f611589dd89515a21460" host="localhost" Nov 5 23:57:09.093678 containerd[1536]: 2025-11-05 23:57:09.038 [INFO][4271] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 5 23:57:09.093678 containerd[1536]: 2025-11-05 23:57:09.038 [INFO][4271] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="e0436ed9a6f1e4105e57d6f62f27e2678dbc6a11db94f611589dd89515a21460" HandleID="k8s-pod-network.e0436ed9a6f1e4105e57d6f62f27e2678dbc6a11db94f611589dd89515a21460" Workload="localhost-k8s-calico--kube--controllers--5f4d5b4c8b--52fcx-eth0" Nov 5 23:57:09.095665 containerd[1536]: 2025-11-05 23:57:09.042 [INFO][4208] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e0436ed9a6f1e4105e57d6f62f27e2678dbc6a11db94f611589dd89515a21460" Namespace="calico-system" Pod="calico-kube-controllers-5f4d5b4c8b-52fcx" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5f4d5b4c8b--52fcx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5f4d5b4c8b--52fcx-eth0", GenerateName:"calico-kube-controllers-5f4d5b4c8b-", Namespace:"calico-system", SelfLink:"", UID:"dcc72545-d360-40c8-82d4-47a2a498215e", ResourceVersion:"796", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 23, 56, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5f4d5b4c8b", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-5f4d5b4c8b-52fcx", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali535f0f23a35", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 23:57:09.095665 containerd[1536]: 2025-11-05 23:57:09.042 [INFO][4208] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="e0436ed9a6f1e4105e57d6f62f27e2678dbc6a11db94f611589dd89515a21460" Namespace="calico-system" Pod="calico-kube-controllers-5f4d5b4c8b-52fcx" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5f4d5b4c8b--52fcx-eth0" Nov 5 23:57:09.095665 containerd[1536]: 2025-11-05 23:57:09.042 [INFO][4208] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali535f0f23a35 ContainerID="e0436ed9a6f1e4105e57d6f62f27e2678dbc6a11db94f611589dd89515a21460" Namespace="calico-system" Pod="calico-kube-controllers-5f4d5b4c8b-52fcx" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5f4d5b4c8b--52fcx-eth0" Nov 5 23:57:09.095665 containerd[1536]: 2025-11-05 23:57:09.051 [INFO][4208] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e0436ed9a6f1e4105e57d6f62f27e2678dbc6a11db94f611589dd89515a21460" Namespace="calico-system" Pod="calico-kube-controllers-5f4d5b4c8b-52fcx" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5f4d5b4c8b--52fcx-eth0" Nov 5 23:57:09.095665 containerd[1536]: 2025-11-05 
23:57:09.054 [INFO][4208] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e0436ed9a6f1e4105e57d6f62f27e2678dbc6a11db94f611589dd89515a21460" Namespace="calico-system" Pod="calico-kube-controllers-5f4d5b4c8b-52fcx" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5f4d5b4c8b--52fcx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5f4d5b4c8b--52fcx-eth0", GenerateName:"calico-kube-controllers-5f4d5b4c8b-", Namespace:"calico-system", SelfLink:"", UID:"dcc72545-d360-40c8-82d4-47a2a498215e", ResourceVersion:"796", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 23, 56, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5f4d5b4c8b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e0436ed9a6f1e4105e57d6f62f27e2678dbc6a11db94f611589dd89515a21460", Pod:"calico-kube-controllers-5f4d5b4c8b-52fcx", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali535f0f23a35", MAC:"ae:11:df:d9:62:6a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 23:57:09.095665 containerd[1536]: 2025-11-05 
23:57:09.075 [INFO][4208] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e0436ed9a6f1e4105e57d6f62f27e2678dbc6a11db94f611589dd89515a21460" Namespace="calico-system" Pod="calico-kube-controllers-5f4d5b4c8b-52fcx" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5f4d5b4c8b--52fcx-eth0" Nov 5 23:57:09.163532 containerd[1536]: time="2025-11-05T23:57:09.163171091Z" level=info msg="StartContainer for \"9411dc398e349b79cab5ea54559b62e64ba3571cefa1ed254e272c36441a62f2\" returns successfully" Nov 5 23:57:09.169983 systemd-networkd[1433]: cali994a708bda4: Link UP Nov 5 23:57:09.170194 systemd-networkd[1433]: cali994a708bda4: Gained carrier Nov 5 23:57:09.172116 systemd-resolved[1350]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 5 23:57:09.174354 containerd[1536]: time="2025-11-05T23:57:09.174047857Z" level=info msg="connecting to shim e0436ed9a6f1e4105e57d6f62f27e2678dbc6a11db94f611589dd89515a21460" address="unix:///run/containerd/s/fc14ca7bdbe66732e719e2b0a087bf9a3d36d313eb01b1083ce34f6108f0f36d" namespace=k8s.io protocol=ttrpc version=3 Nov 5 23:57:09.194573 containerd[1536]: 2025-11-05 23:57:08.768 [INFO][4214] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 5 23:57:09.194573 containerd[1536]: 2025-11-05 23:57:08.807 [INFO][4214] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--lbl86-eth0 csi-node-driver- calico-system 7e0e0ade-490b-4bff-b3bc-5b351134410a 711 0 2025-11-05 23:56:50 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:9d99788f7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-lbl86 eth0 csi-node-driver [] [] [kns.calico-system 
ksa.calico-system.csi-node-driver] cali994a708bda4 [] [] }} ContainerID="b97f0948ef0dc1c777ad469ccefc9b973626396e42bc1c6f4ec2287cd02a3ebf" Namespace="calico-system" Pod="csi-node-driver-lbl86" WorkloadEndpoint="localhost-k8s-csi--node--driver--lbl86-" Nov 5 23:57:09.194573 containerd[1536]: 2025-11-05 23:57:08.807 [INFO][4214] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b97f0948ef0dc1c777ad469ccefc9b973626396e42bc1c6f4ec2287cd02a3ebf" Namespace="calico-system" Pod="csi-node-driver-lbl86" WorkloadEndpoint="localhost-k8s-csi--node--driver--lbl86-eth0" Nov 5 23:57:09.194573 containerd[1536]: 2025-11-05 23:57:08.862 [INFO][4278] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b97f0948ef0dc1c777ad469ccefc9b973626396e42bc1c6f4ec2287cd02a3ebf" HandleID="k8s-pod-network.b97f0948ef0dc1c777ad469ccefc9b973626396e42bc1c6f4ec2287cd02a3ebf" Workload="localhost-k8s-csi--node--driver--lbl86-eth0" Nov 5 23:57:09.194573 containerd[1536]: 2025-11-05 23:57:08.862 [INFO][4278] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="b97f0948ef0dc1c777ad469ccefc9b973626396e42bc1c6f4ec2287cd02a3ebf" HandleID="k8s-pod-network.b97f0948ef0dc1c777ad469ccefc9b973626396e42bc1c6f4ec2287cd02a3ebf" Workload="localhost-k8s-csi--node--driver--lbl86-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004d9b0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-lbl86", "timestamp":"2025-11-05 23:57:08.862524347 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 5 23:57:09.194573 containerd[1536]: 2025-11-05 23:57:08.862 [INFO][4278] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Nov 5 23:57:09.194573 containerd[1536]: 2025-11-05 23:57:09.038 [INFO][4278] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 5 23:57:09.194573 containerd[1536]: 2025-11-05 23:57:09.039 [INFO][4278] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 5 23:57:09.194573 containerd[1536]: 2025-11-05 23:57:09.098 [INFO][4278] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b97f0948ef0dc1c777ad469ccefc9b973626396e42bc1c6f4ec2287cd02a3ebf" host="localhost" Nov 5 23:57:09.194573 containerd[1536]: 2025-11-05 23:57:09.106 [INFO][4278] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 5 23:57:09.194573 containerd[1536]: 2025-11-05 23:57:09.117 [INFO][4278] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 5 23:57:09.194573 containerd[1536]: 2025-11-05 23:57:09.122 [INFO][4278] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 5 23:57:09.194573 containerd[1536]: 2025-11-05 23:57:09.126 [INFO][4278] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 5 23:57:09.194573 containerd[1536]: 2025-11-05 23:57:09.126 [INFO][4278] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.b97f0948ef0dc1c777ad469ccefc9b973626396e42bc1c6f4ec2287cd02a3ebf" host="localhost" Nov 5 23:57:09.194573 containerd[1536]: 2025-11-05 23:57:09.130 [INFO][4278] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.b97f0948ef0dc1c777ad469ccefc9b973626396e42bc1c6f4ec2287cd02a3ebf Nov 5 23:57:09.194573 containerd[1536]: 2025-11-05 23:57:09.138 [INFO][4278] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.b97f0948ef0dc1c777ad469ccefc9b973626396e42bc1c6f4ec2287cd02a3ebf" host="localhost" Nov 5 23:57:09.194573 containerd[1536]: 2025-11-05 23:57:09.146 [INFO][4278] ipam/ipam.go 1262: 
Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.b97f0948ef0dc1c777ad469ccefc9b973626396e42bc1c6f4ec2287cd02a3ebf" host="localhost" Nov 5 23:57:09.194573 containerd[1536]: 2025-11-05 23:57:09.147 [INFO][4278] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.b97f0948ef0dc1c777ad469ccefc9b973626396e42bc1c6f4ec2287cd02a3ebf" host="localhost" Nov 5 23:57:09.194573 containerd[1536]: 2025-11-05 23:57:09.147 [INFO][4278] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 5 23:57:09.194573 containerd[1536]: 2025-11-05 23:57:09.148 [INFO][4278] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="b97f0948ef0dc1c777ad469ccefc9b973626396e42bc1c6f4ec2287cd02a3ebf" HandleID="k8s-pod-network.b97f0948ef0dc1c777ad469ccefc9b973626396e42bc1c6f4ec2287cd02a3ebf" Workload="localhost-k8s-csi--node--driver--lbl86-eth0" Nov 5 23:57:09.195240 containerd[1536]: 2025-11-05 23:57:09.159 [INFO][4214] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b97f0948ef0dc1c777ad469ccefc9b973626396e42bc1c6f4ec2287cd02a3ebf" Namespace="calico-system" Pod="csi-node-driver-lbl86" WorkloadEndpoint="localhost-k8s-csi--node--driver--lbl86-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--lbl86-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"7e0e0ade-490b-4bff-b3bc-5b351134410a", ResourceVersion:"711", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 23, 56, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-lbl86", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali994a708bda4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 23:57:09.195240 containerd[1536]: 2025-11-05 23:57:09.160 [INFO][4214] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="b97f0948ef0dc1c777ad469ccefc9b973626396e42bc1c6f4ec2287cd02a3ebf" Namespace="calico-system" Pod="csi-node-driver-lbl86" WorkloadEndpoint="localhost-k8s-csi--node--driver--lbl86-eth0" Nov 5 23:57:09.195240 containerd[1536]: 2025-11-05 23:57:09.160 [INFO][4214] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali994a708bda4 ContainerID="b97f0948ef0dc1c777ad469ccefc9b973626396e42bc1c6f4ec2287cd02a3ebf" Namespace="calico-system" Pod="csi-node-driver-lbl86" WorkloadEndpoint="localhost-k8s-csi--node--driver--lbl86-eth0" Nov 5 23:57:09.195240 containerd[1536]: 2025-11-05 23:57:09.171 [INFO][4214] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b97f0948ef0dc1c777ad469ccefc9b973626396e42bc1c6f4ec2287cd02a3ebf" Namespace="calico-system" Pod="csi-node-driver-lbl86" WorkloadEndpoint="localhost-k8s-csi--node--driver--lbl86-eth0" Nov 5 23:57:09.195240 containerd[1536]: 2025-11-05 23:57:09.172 [INFO][4214] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="b97f0948ef0dc1c777ad469ccefc9b973626396e42bc1c6f4ec2287cd02a3ebf" Namespace="calico-system" Pod="csi-node-driver-lbl86" WorkloadEndpoint="localhost-k8s-csi--node--driver--lbl86-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--lbl86-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"7e0e0ade-490b-4bff-b3bc-5b351134410a", ResourceVersion:"711", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 23, 56, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b97f0948ef0dc1c777ad469ccefc9b973626396e42bc1c6f4ec2287cd02a3ebf", Pod:"csi-node-driver-lbl86", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali994a708bda4", MAC:"2e:ba:51:cf:9a:82", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 23:57:09.195240 containerd[1536]: 2025-11-05 23:57:09.189 [INFO][4214] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b97f0948ef0dc1c777ad469ccefc9b973626396e42bc1c6f4ec2287cd02a3ebf" 
Namespace="calico-system" Pod="csi-node-driver-lbl86" WorkloadEndpoint="localhost-k8s-csi--node--driver--lbl86-eth0" Nov 5 23:57:09.210632 systemd[1]: Started cri-containerd-e0436ed9a6f1e4105e57d6f62f27e2678dbc6a11db94f611589dd89515a21460.scope - libcontainer container e0436ed9a6f1e4105e57d6f62f27e2678dbc6a11db94f611589dd89515a21460. Nov 5 23:57:09.236239 containerd[1536]: time="2025-11-05T23:57:09.235939582Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-76787f47fb-22hgp,Uid:36e52d26-98b7-4979-90a9-0d6b22e8f358,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"f60e663a12c9ea7ed0f86b2b59c878f89d0a09b142f62890c1b403bccedc91c8\"" Nov 5 23:57:09.243539 containerd[1536]: time="2025-11-05T23:57:09.243261238Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 5 23:57:09.249625 containerd[1536]: time="2025-11-05T23:57:09.249234220Z" level=info msg="connecting to shim b97f0948ef0dc1c777ad469ccefc9b973626396e42bc1c6f4ec2287cd02a3ebf" address="unix:///run/containerd/s/ce5fc2ac8ff190c990e03c656c2d78becca47ade6aed7811b1438cf3258c582d" namespace=k8s.io protocol=ttrpc version=3 Nov 5 23:57:09.250766 systemd-resolved[1350]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 5 23:57:09.293648 systemd[1]: Started cri-containerd-b97f0948ef0dc1c777ad469ccefc9b973626396e42bc1c6f4ec2287cd02a3ebf.scope - libcontainer container b97f0948ef0dc1c777ad469ccefc9b973626396e42bc1c6f4ec2287cd02a3ebf. 
Nov 5 23:57:09.296952 containerd[1536]: time="2025-11-05T23:57:09.296774350Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5f4d5b4c8b-52fcx,Uid:dcc72545-d360-40c8-82d4-47a2a498215e,Namespace:calico-system,Attempt:0,} returns sandbox id \"e0436ed9a6f1e4105e57d6f62f27e2678dbc6a11db94f611589dd89515a21460\"" Nov 5 23:57:09.307032 systemd-resolved[1350]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 5 23:57:09.337468 containerd[1536]: time="2025-11-05T23:57:09.337388542Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-lbl86,Uid:7e0e0ade-490b-4bff-b3bc-5b351134410a,Namespace:calico-system,Attempt:0,} returns sandbox id \"b97f0948ef0dc1c777ad469ccefc9b973626396e42bc1c6f4ec2287cd02a3ebf\"" Nov 5 23:57:09.452938 systemd-networkd[1433]: vxlan.calico: Link UP Nov 5 23:57:09.452943 systemd-networkd[1433]: vxlan.calico: Gained carrier Nov 5 23:57:09.481308 containerd[1536]: time="2025-11-05T23:57:09.480836730Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 23:57:09.481810 containerd[1536]: time="2025-11-05T23:57:09.481765447Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 5 23:57:09.482072 containerd[1536]: time="2025-11-05T23:57:09.481848766Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 5 23:57:09.483371 kubelet[2659]: E1105 23:57:09.482197 2659 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 23:57:09.483371 kubelet[2659]: E1105 23:57:09.482243 2659 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 23:57:09.483371 kubelet[2659]: E1105 23:57:09.482425 2659 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-76787f47fb-22hgp_calico-apiserver(36e52d26-98b7-4979-90a9-0d6b22e8f358): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 5 23:57:09.483371 kubelet[2659]: E1105 23:57:09.482498 2659 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-76787f47fb-22hgp" podUID="36e52d26-98b7-4979-90a9-0d6b22e8f358" Nov 5 23:57:09.483592 containerd[1536]: time="2025-11-05T23:57:09.482851283Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 5 23:57:09.668457 containerd[1536]: time="2025-11-05T23:57:09.668405298Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:goldmane-7c778bb748-zjm2m,Uid:af9940bc-b869-4f48-b46d-3bd6b6993532,Namespace:calico-system,Attempt:0,}" Nov 5 23:57:09.669943 containerd[1536]: time="2025-11-05T23:57:09.669912174Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-76787f47fb-qqx58,Uid:91cfdc17-cd5c-4271-8452-f8f6eced611b,Namespace:calico-apiserver,Attempt:0,}" Nov 5 23:57:09.791787 systemd-networkd[1433]: caliec7857e7e87: Link UP Nov 5 23:57:09.792932 systemd-networkd[1433]: caliec7857e7e87: Gained carrier Nov 5 23:57:09.807822 containerd[1536]: 2025-11-05 23:57:09.723 [INFO][4681] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--76787f47fb--qqx58-eth0 calico-apiserver-76787f47fb- calico-apiserver 91cfdc17-cd5c-4271-8452-f8f6eced611b 805 0 2025-11-05 23:56:41 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:76787f47fb projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-76787f47fb-qqx58 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] caliec7857e7e87 [] [] }} ContainerID="0bc2447bb82a96dedee839288482efb1fadd44af90cd07db97d751fc32d2b2f8" Namespace="calico-apiserver" Pod="calico-apiserver-76787f47fb-qqx58" WorkloadEndpoint="localhost-k8s-calico--apiserver--76787f47fb--qqx58-" Nov 5 23:57:09.807822 containerd[1536]: 2025-11-05 23:57:09.724 [INFO][4681] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0bc2447bb82a96dedee839288482efb1fadd44af90cd07db97d751fc32d2b2f8" Namespace="calico-apiserver" Pod="calico-apiserver-76787f47fb-qqx58" WorkloadEndpoint="localhost-k8s-calico--apiserver--76787f47fb--qqx58-eth0" Nov 5 23:57:09.807822 containerd[1536]: 2025-11-05 23:57:09.751 [INFO][4704] ipam/ipam_plugin.go 227: Calico CNI 
IPAM request count IPv4=1 IPv6=0 ContainerID="0bc2447bb82a96dedee839288482efb1fadd44af90cd07db97d751fc32d2b2f8" HandleID="k8s-pod-network.0bc2447bb82a96dedee839288482efb1fadd44af90cd07db97d751fc32d2b2f8" Workload="localhost-k8s-calico--apiserver--76787f47fb--qqx58-eth0" Nov 5 23:57:09.807822 containerd[1536]: 2025-11-05 23:57:09.751 [INFO][4704] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="0bc2447bb82a96dedee839288482efb1fadd44af90cd07db97d751fc32d2b2f8" HandleID="k8s-pod-network.0bc2447bb82a96dedee839288482efb1fadd44af90cd07db97d751fc32d2b2f8" Workload="localhost-k8s-calico--apiserver--76787f47fb--qqx58-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004c9d0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-76787f47fb-qqx58", "timestamp":"2025-11-05 23:57:09.751090758 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 5 23:57:09.807822 containerd[1536]: 2025-11-05 23:57:09.751 [INFO][4704] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 5 23:57:09.807822 containerd[1536]: 2025-11-05 23:57:09.751 [INFO][4704] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 5 23:57:09.807822 containerd[1536]: 2025-11-05 23:57:09.751 [INFO][4704] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 5 23:57:09.807822 containerd[1536]: 2025-11-05 23:57:09.762 [INFO][4704] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0bc2447bb82a96dedee839288482efb1fadd44af90cd07db97d751fc32d2b2f8" host="localhost" Nov 5 23:57:09.807822 containerd[1536]: 2025-11-05 23:57:09.767 [INFO][4704] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 5 23:57:09.807822 containerd[1536]: 2025-11-05 23:57:09.771 [INFO][4704] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 5 23:57:09.807822 containerd[1536]: 2025-11-05 23:57:09.773 [INFO][4704] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 5 23:57:09.807822 containerd[1536]: 2025-11-05 23:57:09.776 [INFO][4704] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 5 23:57:09.807822 containerd[1536]: 2025-11-05 23:57:09.776 [INFO][4704] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.0bc2447bb82a96dedee839288482efb1fadd44af90cd07db97d751fc32d2b2f8" host="localhost" Nov 5 23:57:09.807822 containerd[1536]: 2025-11-05 23:57:09.777 [INFO][4704] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.0bc2447bb82a96dedee839288482efb1fadd44af90cd07db97d751fc32d2b2f8 Nov 5 23:57:09.807822 containerd[1536]: 2025-11-05 23:57:09.781 [INFO][4704] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.0bc2447bb82a96dedee839288482efb1fadd44af90cd07db97d751fc32d2b2f8" host="localhost" Nov 5 23:57:09.807822 containerd[1536]: 2025-11-05 23:57:09.786 [INFO][4704] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 
handle="k8s-pod-network.0bc2447bb82a96dedee839288482efb1fadd44af90cd07db97d751fc32d2b2f8" host="localhost" Nov 5 23:57:09.807822 containerd[1536]: 2025-11-05 23:57:09.786 [INFO][4704] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.0bc2447bb82a96dedee839288482efb1fadd44af90cd07db97d751fc32d2b2f8" host="localhost" Nov 5 23:57:09.807822 containerd[1536]: 2025-11-05 23:57:09.786 [INFO][4704] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 5 23:57:09.807822 containerd[1536]: 2025-11-05 23:57:09.786 [INFO][4704] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="0bc2447bb82a96dedee839288482efb1fadd44af90cd07db97d751fc32d2b2f8" HandleID="k8s-pod-network.0bc2447bb82a96dedee839288482efb1fadd44af90cd07db97d751fc32d2b2f8" Workload="localhost-k8s-calico--apiserver--76787f47fb--qqx58-eth0" Nov 5 23:57:09.808313 containerd[1536]: 2025-11-05 23:57:09.789 [INFO][4681] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0bc2447bb82a96dedee839288482efb1fadd44af90cd07db97d751fc32d2b2f8" Namespace="calico-apiserver" Pod="calico-apiserver-76787f47fb-qqx58" WorkloadEndpoint="localhost-k8s-calico--apiserver--76787f47fb--qqx58-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--76787f47fb--qqx58-eth0", GenerateName:"calico-apiserver-76787f47fb-", Namespace:"calico-apiserver", SelfLink:"", UID:"91cfdc17-cd5c-4271-8452-f8f6eced611b", ResourceVersion:"805", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 23, 56, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"76787f47fb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-76787f47fb-qqx58", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliec7857e7e87", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 23:57:09.808313 containerd[1536]: 2025-11-05 23:57:09.789 [INFO][4681] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="0bc2447bb82a96dedee839288482efb1fadd44af90cd07db97d751fc32d2b2f8" Namespace="calico-apiserver" Pod="calico-apiserver-76787f47fb-qqx58" WorkloadEndpoint="localhost-k8s-calico--apiserver--76787f47fb--qqx58-eth0" Nov 5 23:57:09.808313 containerd[1536]: 2025-11-05 23:57:09.789 [INFO][4681] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliec7857e7e87 ContainerID="0bc2447bb82a96dedee839288482efb1fadd44af90cd07db97d751fc32d2b2f8" Namespace="calico-apiserver" Pod="calico-apiserver-76787f47fb-qqx58" WorkloadEndpoint="localhost-k8s-calico--apiserver--76787f47fb--qqx58-eth0" Nov 5 23:57:09.808313 containerd[1536]: 2025-11-05 23:57:09.793 [INFO][4681] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0bc2447bb82a96dedee839288482efb1fadd44af90cd07db97d751fc32d2b2f8" Namespace="calico-apiserver" Pod="calico-apiserver-76787f47fb-qqx58" WorkloadEndpoint="localhost-k8s-calico--apiserver--76787f47fb--qqx58-eth0" Nov 5 23:57:09.808313 containerd[1536]: 2025-11-05 23:57:09.793 [INFO][4681] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID 
to endpoint ContainerID="0bc2447bb82a96dedee839288482efb1fadd44af90cd07db97d751fc32d2b2f8" Namespace="calico-apiserver" Pod="calico-apiserver-76787f47fb-qqx58" WorkloadEndpoint="localhost-k8s-calico--apiserver--76787f47fb--qqx58-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--76787f47fb--qqx58-eth0", GenerateName:"calico-apiserver-76787f47fb-", Namespace:"calico-apiserver", SelfLink:"", UID:"91cfdc17-cd5c-4271-8452-f8f6eced611b", ResourceVersion:"805", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 23, 56, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"76787f47fb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0bc2447bb82a96dedee839288482efb1fadd44af90cd07db97d751fc32d2b2f8", Pod:"calico-apiserver-76787f47fb-qqx58", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliec7857e7e87", MAC:"66:b5:4d:0e:52:84", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 23:57:09.808313 containerd[1536]: 2025-11-05 23:57:09.805 [INFO][4681] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="0bc2447bb82a96dedee839288482efb1fadd44af90cd07db97d751fc32d2b2f8" Namespace="calico-apiserver" Pod="calico-apiserver-76787f47fb-qqx58" WorkloadEndpoint="localhost-k8s-calico--apiserver--76787f47fb--qqx58-eth0" Nov 5 23:57:09.829537 containerd[1536]: time="2025-11-05T23:57:09.829464111Z" level=info msg="connecting to shim 0bc2447bb82a96dedee839288482efb1fadd44af90cd07db97d751fc32d2b2f8" address="unix:///run/containerd/s/8b092da8788776067fa13bb80a2bce6e77b52dd08db7912fdae078a18725de23" namespace=k8s.io protocol=ttrpc version=3 Nov 5 23:57:09.830978 kubelet[2659]: E1105 23:57:09.830528 2659 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-76787f47fb-22hgp" podUID="36e52d26-98b7-4979-90a9-0d6b22e8f358" Nov 5 23:57:09.859835 kubelet[2659]: I1105 23:57:09.859766 2659 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-fk6hd" podStartSLOduration=37.859729815 podStartE2EDuration="37.859729815s" podCreationTimestamp="2025-11-05 23:56:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-05 23:57:09.85827398 +0000 UTC m=+43.288166140" watchObservedRunningTime="2025-11-05 23:57:09.859729815 +0000 UTC m=+43.289621975" Nov 5 23:57:09.860616 systemd[1]: Started cri-containerd-0bc2447bb82a96dedee839288482efb1fadd44af90cd07db97d751fc32d2b2f8.scope - libcontainer container 0bc2447bb82a96dedee839288482efb1fadd44af90cd07db97d751fc32d2b2f8. 
Nov 5 23:57:09.880048 systemd-resolved[1350]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 5 23:57:09.902255 systemd-networkd[1433]: caliabe8aaa7d60: Link UP Nov 5 23:57:09.903728 systemd-networkd[1433]: caliabe8aaa7d60: Gained carrier Nov 5 23:57:09.905273 containerd[1536]: time="2025-11-05T23:57:09.905235632Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-76787f47fb-qqx58,Uid:91cfdc17-cd5c-4271-8452-f8f6eced611b,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"0bc2447bb82a96dedee839288482efb1fadd44af90cd07db97d751fc32d2b2f8\"" Nov 5 23:57:09.918869 containerd[1536]: 2025-11-05 23:57:09.723 [INFO][4668] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--7c778bb748--zjm2m-eth0 goldmane-7c778bb748- calico-system af9940bc-b869-4f48-b46d-3bd6b6993532 802 0 2025-11-05 23:56:47 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:7c778bb748 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-7c778bb748-zjm2m eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] caliabe8aaa7d60 [] [] }} ContainerID="54720f6a38485cd589036fae5e73ca65e12bc8a360ec7c78395fd9195f2762ae" Namespace="calico-system" Pod="goldmane-7c778bb748-zjm2m" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--zjm2m-" Nov 5 23:57:09.918869 containerd[1536]: 2025-11-05 23:57:09.724 [INFO][4668] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="54720f6a38485cd589036fae5e73ca65e12bc8a360ec7c78395fd9195f2762ae" Namespace="calico-system" Pod="goldmane-7c778bb748-zjm2m" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--zjm2m-eth0" Nov 5 23:57:09.918869 containerd[1536]: 2025-11-05 23:57:09.752 [INFO][4705] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 
IPv6=0 ContainerID="54720f6a38485cd589036fae5e73ca65e12bc8a360ec7c78395fd9195f2762ae" HandleID="k8s-pod-network.54720f6a38485cd589036fae5e73ca65e12bc8a360ec7c78395fd9195f2762ae" Workload="localhost-k8s-goldmane--7c778bb748--zjm2m-eth0" Nov 5 23:57:09.918869 containerd[1536]: 2025-11-05 23:57:09.752 [INFO][4705] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="54720f6a38485cd589036fae5e73ca65e12bc8a360ec7c78395fd9195f2762ae" HandleID="k8s-pod-network.54720f6a38485cd589036fae5e73ca65e12bc8a360ec7c78395fd9195f2762ae" Workload="localhost-k8s-goldmane--7c778bb748--zjm2m-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000129e10), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-7c778bb748-zjm2m", "timestamp":"2025-11-05 23:57:09.752590913 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 5 23:57:09.918869 containerd[1536]: 2025-11-05 23:57:09.753 [INFO][4705] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 5 23:57:09.918869 containerd[1536]: 2025-11-05 23:57:09.786 [INFO][4705] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 5 23:57:09.918869 containerd[1536]: 2025-11-05 23:57:09.787 [INFO][4705] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 5 23:57:09.918869 containerd[1536]: 2025-11-05 23:57:09.864 [INFO][4705] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.54720f6a38485cd589036fae5e73ca65e12bc8a360ec7c78395fd9195f2762ae" host="localhost" Nov 5 23:57:09.918869 containerd[1536]: 2025-11-05 23:57:09.873 [INFO][4705] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 5 23:57:09.918869 containerd[1536]: 2025-11-05 23:57:09.879 [INFO][4705] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 5 23:57:09.918869 containerd[1536]: 2025-11-05 23:57:09.881 [INFO][4705] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 5 23:57:09.918869 containerd[1536]: 2025-11-05 23:57:09.884 [INFO][4705] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 5 23:57:09.918869 containerd[1536]: 2025-11-05 23:57:09.884 [INFO][4705] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.54720f6a38485cd589036fae5e73ca65e12bc8a360ec7c78395fd9195f2762ae" host="localhost" Nov 5 23:57:09.918869 containerd[1536]: 2025-11-05 23:57:09.885 [INFO][4705] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.54720f6a38485cd589036fae5e73ca65e12bc8a360ec7c78395fd9195f2762ae Nov 5 23:57:09.918869 containerd[1536]: 2025-11-05 23:57:09.889 [INFO][4705] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.54720f6a38485cd589036fae5e73ca65e12bc8a360ec7c78395fd9195f2762ae" host="localhost" Nov 5 23:57:09.918869 containerd[1536]: 2025-11-05 23:57:09.896 [INFO][4705] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 
handle="k8s-pod-network.54720f6a38485cd589036fae5e73ca65e12bc8a360ec7c78395fd9195f2762ae" host="localhost" Nov 5 23:57:09.918869 containerd[1536]: 2025-11-05 23:57:09.896 [INFO][4705] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.54720f6a38485cd589036fae5e73ca65e12bc8a360ec7c78395fd9195f2762ae" host="localhost" Nov 5 23:57:09.918869 containerd[1536]: 2025-11-05 23:57:09.896 [INFO][4705] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 5 23:57:09.918869 containerd[1536]: 2025-11-05 23:57:09.896 [INFO][4705] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="54720f6a38485cd589036fae5e73ca65e12bc8a360ec7c78395fd9195f2762ae" HandleID="k8s-pod-network.54720f6a38485cd589036fae5e73ca65e12bc8a360ec7c78395fd9195f2762ae" Workload="localhost-k8s-goldmane--7c778bb748--zjm2m-eth0" Nov 5 23:57:09.919457 containerd[1536]: 2025-11-05 23:57:09.898 [INFO][4668] cni-plugin/k8s.go 418: Populated endpoint ContainerID="54720f6a38485cd589036fae5e73ca65e12bc8a360ec7c78395fd9195f2762ae" Namespace="calico-system" Pod="goldmane-7c778bb748-zjm2m" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--zjm2m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7c778bb748--zjm2m-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"af9940bc-b869-4f48-b46d-3bd6b6993532", ResourceVersion:"802", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 23, 56, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-7c778bb748-zjm2m", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"caliabe8aaa7d60", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 23:57:09.919457 containerd[1536]: 2025-11-05 23:57:09.899 [INFO][4668] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="54720f6a38485cd589036fae5e73ca65e12bc8a360ec7c78395fd9195f2762ae" Namespace="calico-system" Pod="goldmane-7c778bb748-zjm2m" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--zjm2m-eth0" Nov 5 23:57:09.919457 containerd[1536]: 2025-11-05 23:57:09.899 [INFO][4668] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliabe8aaa7d60 ContainerID="54720f6a38485cd589036fae5e73ca65e12bc8a360ec7c78395fd9195f2762ae" Namespace="calico-system" Pod="goldmane-7c778bb748-zjm2m" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--zjm2m-eth0" Nov 5 23:57:09.919457 containerd[1536]: 2025-11-05 23:57:09.904 [INFO][4668] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="54720f6a38485cd589036fae5e73ca65e12bc8a360ec7c78395fd9195f2762ae" Namespace="calico-system" Pod="goldmane-7c778bb748-zjm2m" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--zjm2m-eth0" Nov 5 23:57:09.919457 containerd[1536]: 2025-11-05 23:57:09.904 [INFO][4668] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="54720f6a38485cd589036fae5e73ca65e12bc8a360ec7c78395fd9195f2762ae" Namespace="calico-system" Pod="goldmane-7c778bb748-zjm2m" 
WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--zjm2m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7c778bb748--zjm2m-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"af9940bc-b869-4f48-b46d-3bd6b6993532", ResourceVersion:"802", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 23, 56, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"54720f6a38485cd589036fae5e73ca65e12bc8a360ec7c78395fd9195f2762ae", Pod:"goldmane-7c778bb748-zjm2m", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"caliabe8aaa7d60", MAC:"3a:90:0c:cf:e5:e0", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 23:57:09.919457 containerd[1536]: 2025-11-05 23:57:09.915 [INFO][4668] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="54720f6a38485cd589036fae5e73ca65e12bc8a360ec7c78395fd9195f2762ae" Namespace="calico-system" Pod="goldmane-7c778bb748-zjm2m" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--zjm2m-eth0" Nov 5 23:57:09.935096 containerd[1536]: time="2025-11-05T23:57:09.934599699Z" level=info msg="connecting to shim 
54720f6a38485cd589036fae5e73ca65e12bc8a360ec7c78395fd9195f2762ae" address="unix:///run/containerd/s/78cb395a1e3cfefd7d9f3575c43aa3376de973ed2f63064b1b9c4b4262636355" namespace=k8s.io protocol=ttrpc version=3 Nov 5 23:57:09.962645 systemd[1]: Started cri-containerd-54720f6a38485cd589036fae5e73ca65e12bc8a360ec7c78395fd9195f2762ae.scope - libcontainer container 54720f6a38485cd589036fae5e73ca65e12bc8a360ec7c78395fd9195f2762ae. Nov 5 23:57:09.973506 systemd-resolved[1350]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 5 23:57:09.992815 containerd[1536]: time="2025-11-05T23:57:09.992774236Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-zjm2m,Uid:af9940bc-b869-4f48-b46d-3bd6b6993532,Namespace:calico-system,Attempt:0,} returns sandbox id \"54720f6a38485cd589036fae5e73ca65e12bc8a360ec7c78395fd9195f2762ae\"" Nov 5 23:57:10.157631 systemd-networkd[1433]: calic503a421c0e: Gained IPv6LL Nov 5 23:57:10.181396 containerd[1536]: time="2025-11-05T23:57:10.181325597Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 23:57:10.182331 containerd[1536]: time="2025-11-05T23:57:10.182299755Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 5 23:57:10.182481 containerd[1536]: time="2025-11-05T23:57:10.182376434Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 5 23:57:10.182587 kubelet[2659]: E1105 23:57:10.182546 2659 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to 
resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 5 23:57:10.182634 kubelet[2659]: E1105 23:57:10.182594 2659 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 5 23:57:10.182795 kubelet[2659]: E1105 23:57:10.182765 2659 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-5f4d5b4c8b-52fcx_calico-system(dcc72545-d360-40c8-82d4-47a2a498215e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 5 23:57:10.182870 kubelet[2659]: E1105 23:57:10.182810 2659 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5f4d5b4c8b-52fcx" podUID="dcc72545-d360-40c8-82d4-47a2a498215e" Nov 5 23:57:10.182990 containerd[1536]: time="2025-11-05T23:57:10.182956953Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 5 23:57:10.221828 systemd-networkd[1433]: cali535f0f23a35: Gained IPv6LL Nov 5 
23:57:10.285583 systemd-networkd[1433]: cali9888e42a21e: Gained IPv6LL Nov 5 23:57:10.285876 systemd-networkd[1433]: cali994a708bda4: Gained IPv6LL Nov 5 23:57:10.388238 containerd[1536]: time="2025-11-05T23:57:10.388181826Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 23:57:10.389076 containerd[1536]: time="2025-11-05T23:57:10.389042784Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 5 23:57:10.389127 containerd[1536]: time="2025-11-05T23:57:10.389093304Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 5 23:57:10.389518 kubelet[2659]: E1105 23:57:10.389284 2659 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 5 23:57:10.389518 kubelet[2659]: E1105 23:57:10.389345 2659 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 5 23:57:10.389628 kubelet[2659]: E1105 23:57:10.389533 2659 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-lbl86_calico-system(7e0e0ade-490b-4bff-b3bc-5b351134410a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve 
reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 5 23:57:10.389869 containerd[1536]: time="2025-11-05T23:57:10.389832141Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 5 23:57:10.605706 systemd-networkd[1433]: vxlan.calico: Gained IPv6LL Nov 5 23:57:10.611104 containerd[1536]: time="2025-11-05T23:57:10.611026968Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 23:57:10.623504 containerd[1536]: time="2025-11-05T23:57:10.623455611Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 5 23:57:10.623610 containerd[1536]: time="2025-11-05T23:57:10.623524971Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 5 23:57:10.623735 kubelet[2659]: E1105 23:57:10.623692 2659 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 23:57:10.623783 kubelet[2659]: E1105 23:57:10.623745 2659 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 23:57:10.624242 kubelet[2659]: E1105 23:57:10.623937 2659 kuberuntime_manager.go:1449] 
"Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-76787f47fb-qqx58_calico-apiserver(91cfdc17-cd5c-4271-8452-f8f6eced611b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 5 23:57:10.624242 kubelet[2659]: E1105 23:57:10.623988 2659 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-76787f47fb-qqx58" podUID="91cfdc17-cd5c-4271-8452-f8f6eced611b" Nov 5 23:57:10.624407 containerd[1536]: time="2025-11-05T23:57:10.624048169Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 5 23:57:10.840646 kubelet[2659]: E1105 23:57:10.840601 2659 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-76787f47fb-qqx58" podUID="91cfdc17-cd5c-4271-8452-f8f6eced611b" Nov 5 23:57:10.841575 containerd[1536]: time="2025-11-05T23:57:10.841480367Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 23:57:10.842765 kubelet[2659]: E1105 23:57:10.842692 2659 
pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5f4d5b4c8b-52fcx" podUID="dcc72545-d360-40c8-82d4-47a2a498215e" Nov 5 23:57:10.843262 kubelet[2659]: E1105 23:57:10.843224 2659 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-76787f47fb-22hgp" podUID="36e52d26-98b7-4979-90a9-0d6b22e8f358" Nov 5 23:57:10.843527 containerd[1536]: time="2025-11-05T23:57:10.843415761Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 5 23:57:10.843527 containerd[1536]: time="2025-11-05T23:57:10.843444761Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 5 23:57:10.843950 kubelet[2659]: E1105 23:57:10.843630 2659 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed 
to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 5 23:57:10.843950 kubelet[2659]: E1105 23:57:10.843817 2659 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 5 23:57:10.844023 kubelet[2659]: E1105 23:57:10.844005 2659 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-zjm2m_calico-system(af9940bc-b869-4f48-b46d-3bd6b6993532): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 5 23:57:10.844048 kubelet[2659]: E1105 23:57:10.844032 2659 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-zjm2m" podUID="af9940bc-b869-4f48-b46d-3bd6b6993532" Nov 5 23:57:10.844573 containerd[1536]: time="2025-11-05T23:57:10.844299238Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 5 23:57:11.033630 containerd[1536]: time="2025-11-05T23:57:11.033499565Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 
23:57:11.034749 containerd[1536]: time="2025-11-05T23:57:11.034652922Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 5 23:57:11.034749 containerd[1536]: time="2025-11-05T23:57:11.034714682Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 5 23:57:11.034901 kubelet[2659]: E1105 23:57:11.034863 2659 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 5 23:57:11.034956 kubelet[2659]: E1105 23:57:11.034909 2659 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 5 23:57:11.035024 kubelet[2659]: E1105 23:57:11.034991 2659 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-lbl86_calico-system(7e0e0ade-490b-4bff-b3bc-5b351134410a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 5 23:57:11.035085 kubelet[2659]: E1105 23:57:11.035037 2659 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-lbl86" podUID="7e0e0ade-490b-4bff-b3bc-5b351134410a" Nov 5 23:57:11.117675 systemd-networkd[1433]: caliabe8aaa7d60: Gained IPv6LL Nov 5 23:57:11.757613 systemd-networkd[1433]: caliec7857e7e87: Gained IPv6LL Nov 5 23:57:11.827672 systemd[1]: Started sshd@8-10.0.0.117:22-10.0.0.1:42396.service - OpenSSH per-connection server daemon (10.0.0.1:42396). 
Nov 5 23:57:11.843219 kubelet[2659]: E1105 23:57:11.841948 2659 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-zjm2m" podUID="af9940bc-b869-4f48-b46d-3bd6b6993532" Nov 5 23:57:11.843219 kubelet[2659]: E1105 23:57:11.843024 2659 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-lbl86" podUID="7e0e0ade-490b-4bff-b3bc-5b351134410a" Nov 5 23:57:11.844250 kubelet[2659]: E1105 23:57:11.844198 2659 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc 
= failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-76787f47fb-qqx58" podUID="91cfdc17-cd5c-4271-8452-f8f6eced611b" Nov 5 23:57:11.910561 sshd[4836]: Accepted publickey for core from 10.0.0.1 port 42396 ssh2: RSA SHA256:y8QDtx1I2NVYRtkqadojlmwp5Ggjvm91KVwbHRQlRRI Nov 5 23:57:11.911380 sshd-session[4836]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 23:57:11.915988 systemd-logind[1514]: New session 9 of user core. Nov 5 23:57:11.922592 systemd[1]: Started session-9.scope - Session 9 of User core. Nov 5 23:57:12.106316 sshd[4839]: Connection closed by 10.0.0.1 port 42396 Nov 5 23:57:12.106718 sshd-session[4836]: pam_unix(sshd:session): session closed for user core Nov 5 23:57:12.110522 systemd[1]: sshd@8-10.0.0.117:22-10.0.0.1:42396.service: Deactivated successfully. Nov 5 23:57:12.114547 systemd[1]: session-9.scope: Deactivated successfully. Nov 5 23:57:12.116610 systemd-logind[1514]: Session 9 logged out. Waiting for processes to exit. Nov 5 23:57:12.118685 systemd-logind[1514]: Removed session 9. 
Nov 5 23:57:12.669098 containerd[1536]: time="2025-11-05T23:57:12.669057910Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-qwpwp,Uid:1ef36e51-b849-4bd7-bfa2-ec554b62d4fd,Namespace:kube-system,Attempt:0,}" Nov 5 23:57:12.794338 systemd-networkd[1433]: cali7eedc2af595: Link UP Nov 5 23:57:12.794614 systemd-networkd[1433]: cali7eedc2af595: Gained carrier Nov 5 23:57:12.811486 containerd[1536]: 2025-11-05 23:57:12.719 [INFO][4856] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--66bc5c9577--qwpwp-eth0 coredns-66bc5c9577- kube-system 1ef36e51-b849-4bd7-bfa2-ec554b62d4fd 803 0 2025-11-05 23:56:32 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-66bc5c9577-qwpwp eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali7eedc2af595 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="321e08c3515e264cefe6aee1e2efd42bbd8d29b3b8721976719f8bad69f11cd2" Namespace="kube-system" Pod="coredns-66bc5c9577-qwpwp" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--qwpwp-" Nov 5 23:57:12.811486 containerd[1536]: 2025-11-05 23:57:12.719 [INFO][4856] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="321e08c3515e264cefe6aee1e2efd42bbd8d29b3b8721976719f8bad69f11cd2" Namespace="kube-system" Pod="coredns-66bc5c9577-qwpwp" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--qwpwp-eth0" Nov 5 23:57:12.811486 containerd[1536]: 2025-11-05 23:57:12.750 [INFO][4870] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="321e08c3515e264cefe6aee1e2efd42bbd8d29b3b8721976719f8bad69f11cd2" HandleID="k8s-pod-network.321e08c3515e264cefe6aee1e2efd42bbd8d29b3b8721976719f8bad69f11cd2" 
Workload="localhost-k8s-coredns--66bc5c9577--qwpwp-eth0" Nov 5 23:57:12.811486 containerd[1536]: 2025-11-05 23:57:12.751 [INFO][4870] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="321e08c3515e264cefe6aee1e2efd42bbd8d29b3b8721976719f8bad69f11cd2" HandleID="k8s-pod-network.321e08c3515e264cefe6aee1e2efd42bbd8d29b3b8721976719f8bad69f11cd2" Workload="localhost-k8s-coredns--66bc5c9577--qwpwp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004d520), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-66bc5c9577-qwpwp", "timestamp":"2025-11-05 23:57:12.750751138 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 5 23:57:12.811486 containerd[1536]: 2025-11-05 23:57:12.751 [INFO][4870] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 5 23:57:12.811486 containerd[1536]: 2025-11-05 23:57:12.751 [INFO][4870] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 5 23:57:12.811486 containerd[1536]: 2025-11-05 23:57:12.751 [INFO][4870] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 5 23:57:12.811486 containerd[1536]: 2025-11-05 23:57:12.761 [INFO][4870] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.321e08c3515e264cefe6aee1e2efd42bbd8d29b3b8721976719f8bad69f11cd2" host="localhost" Nov 5 23:57:12.811486 containerd[1536]: 2025-11-05 23:57:12.766 [INFO][4870] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 5 23:57:12.811486 containerd[1536]: 2025-11-05 23:57:12.771 [INFO][4870] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 5 23:57:12.811486 containerd[1536]: 2025-11-05 23:57:12.773 [INFO][4870] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 5 23:57:12.811486 containerd[1536]: 2025-11-05 23:57:12.777 [INFO][4870] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 5 23:57:12.811486 containerd[1536]: 2025-11-05 23:57:12.777 [INFO][4870] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.321e08c3515e264cefe6aee1e2efd42bbd8d29b3b8721976719f8bad69f11cd2" host="localhost" Nov 5 23:57:12.811486 containerd[1536]: 2025-11-05 23:57:12.778 [INFO][4870] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.321e08c3515e264cefe6aee1e2efd42bbd8d29b3b8721976719f8bad69f11cd2 Nov 5 23:57:12.811486 containerd[1536]: 2025-11-05 23:57:12.782 [INFO][4870] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.321e08c3515e264cefe6aee1e2efd42bbd8d29b3b8721976719f8bad69f11cd2" host="localhost" Nov 5 23:57:12.811486 containerd[1536]: 2025-11-05 23:57:12.790 [INFO][4870] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 
handle="k8s-pod-network.321e08c3515e264cefe6aee1e2efd42bbd8d29b3b8721976719f8bad69f11cd2" host="localhost" Nov 5 23:57:12.811486 containerd[1536]: 2025-11-05 23:57:12.790 [INFO][4870] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.321e08c3515e264cefe6aee1e2efd42bbd8d29b3b8721976719f8bad69f11cd2" host="localhost" Nov 5 23:57:12.811486 containerd[1536]: 2025-11-05 23:57:12.790 [INFO][4870] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 5 23:57:12.811486 containerd[1536]: 2025-11-05 23:57:12.790 [INFO][4870] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="321e08c3515e264cefe6aee1e2efd42bbd8d29b3b8721976719f8bad69f11cd2" HandleID="k8s-pod-network.321e08c3515e264cefe6aee1e2efd42bbd8d29b3b8721976719f8bad69f11cd2" Workload="localhost-k8s-coredns--66bc5c9577--qwpwp-eth0" Nov 5 23:57:12.812142 containerd[1536]: 2025-11-05 23:57:12.792 [INFO][4856] cni-plugin/k8s.go 418: Populated endpoint ContainerID="321e08c3515e264cefe6aee1e2efd42bbd8d29b3b8721976719f8bad69f11cd2" Namespace="kube-system" Pod="coredns-66bc5c9577-qwpwp" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--qwpwp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--qwpwp-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"1ef36e51-b849-4bd7-bfa2-ec554b62d4fd", ResourceVersion:"803", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 23, 56, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-66bc5c9577-qwpwp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7eedc2af595", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 23:57:12.812142 containerd[1536]: 2025-11-05 23:57:12.792 [INFO][4856] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="321e08c3515e264cefe6aee1e2efd42bbd8d29b3b8721976719f8bad69f11cd2" Namespace="kube-system" Pod="coredns-66bc5c9577-qwpwp" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--qwpwp-eth0" Nov 5 23:57:12.812142 containerd[1536]: 2025-11-05 23:57:12.792 [INFO][4856] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7eedc2af595 ContainerID="321e08c3515e264cefe6aee1e2efd42bbd8d29b3b8721976719f8bad69f11cd2" Namespace="kube-system" Pod="coredns-66bc5c9577-qwpwp" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--qwpwp-eth0" Nov 5 
23:57:12.812142 containerd[1536]: 2025-11-05 23:57:12.796 [INFO][4856] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="321e08c3515e264cefe6aee1e2efd42bbd8d29b3b8721976719f8bad69f11cd2" Namespace="kube-system" Pod="coredns-66bc5c9577-qwpwp" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--qwpwp-eth0" Nov 5 23:57:12.812142 containerd[1536]: 2025-11-05 23:57:12.796 [INFO][4856] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="321e08c3515e264cefe6aee1e2efd42bbd8d29b3b8721976719f8bad69f11cd2" Namespace="kube-system" Pod="coredns-66bc5c9577-qwpwp" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--qwpwp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--qwpwp-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"1ef36e51-b849-4bd7-bfa2-ec554b62d4fd", ResourceVersion:"803", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 23, 56, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"321e08c3515e264cefe6aee1e2efd42bbd8d29b3b8721976719f8bad69f11cd2", Pod:"coredns-66bc5c9577-qwpwp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7eedc2af595", 
MAC:"16:7f:69:9b:aa:d1", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 23:57:12.812142 containerd[1536]: 2025-11-05 23:57:12.808 [INFO][4856] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="321e08c3515e264cefe6aee1e2efd42bbd8d29b3b8721976719f8bad69f11cd2" Namespace="kube-system" Pod="coredns-66bc5c9577-qwpwp" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--qwpwp-eth0" Nov 5 23:57:12.845075 containerd[1536]: time="2025-11-05T23:57:12.844738974Z" level=info msg="connecting to shim 321e08c3515e264cefe6aee1e2efd42bbd8d29b3b8721976719f8bad69f11cd2" address="unix:///run/containerd/s/942c39a9d4ecb78c7c89d9c69fdfcd0b83181899c107615766dac14c7e0c8d9b" namespace=k8s.io protocol=ttrpc version=3 Nov 5 23:57:12.871712 systemd[1]: Started cri-containerd-321e08c3515e264cefe6aee1e2efd42bbd8d29b3b8721976719f8bad69f11cd2.scope - libcontainer container 321e08c3515e264cefe6aee1e2efd42bbd8d29b3b8721976719f8bad69f11cd2. 
Nov 5 23:57:12.883425 systemd-resolved[1350]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 5 23:57:12.921475 containerd[1536]: time="2025-11-05T23:57:12.921363655Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-qwpwp,Uid:1ef36e51-b849-4bd7-bfa2-ec554b62d4fd,Namespace:kube-system,Attempt:0,} returns sandbox id \"321e08c3515e264cefe6aee1e2efd42bbd8d29b3b8721976719f8bad69f11cd2\"" Nov 5 23:57:12.926929 containerd[1536]: time="2025-11-05T23:57:12.926889961Z" level=info msg="CreateContainer within sandbox \"321e08c3515e264cefe6aee1e2efd42bbd8d29b3b8721976719f8bad69f11cd2\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 5 23:57:12.934883 containerd[1536]: time="2025-11-05T23:57:12.934844300Z" level=info msg="Container ae5d809dc37a586608cf3b4ab655c6886364856790d49e2ad75c843e3d521863: CDI devices from CRI Config.CDIDevices: []" Nov 5 23:57:12.940662 containerd[1536]: time="2025-11-05T23:57:12.940619325Z" level=info msg="CreateContainer within sandbox \"321e08c3515e264cefe6aee1e2efd42bbd8d29b3b8721976719f8bad69f11cd2\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ae5d809dc37a586608cf3b4ab655c6886364856790d49e2ad75c843e3d521863\"" Nov 5 23:57:12.941157 containerd[1536]: time="2025-11-05T23:57:12.941120164Z" level=info msg="StartContainer for \"ae5d809dc37a586608cf3b4ab655c6886364856790d49e2ad75c843e3d521863\"" Nov 5 23:57:12.942280 containerd[1536]: time="2025-11-05T23:57:12.942214641Z" level=info msg="connecting to shim ae5d809dc37a586608cf3b4ab655c6886364856790d49e2ad75c843e3d521863" address="unix:///run/containerd/s/942c39a9d4ecb78c7c89d9c69fdfcd0b83181899c107615766dac14c7e0c8d9b" protocol=ttrpc version=3 Nov 5 23:57:12.965612 systemd[1]: Started cri-containerd-ae5d809dc37a586608cf3b4ab655c6886364856790d49e2ad75c843e3d521863.scope - libcontainer container ae5d809dc37a586608cf3b4ab655c6886364856790d49e2ad75c843e3d521863. 
Nov 5 23:57:12.991737 containerd[1536]: time="2025-11-05T23:57:12.991695512Z" level=info msg="StartContainer for \"ae5d809dc37a586608cf3b4ab655c6886364856790d49e2ad75c843e3d521863\" returns successfully" Nov 5 23:57:13.869654 systemd-networkd[1433]: cali7eedc2af595: Gained IPv6LL Nov 5 23:57:13.884407 kubelet[2659]: I1105 23:57:13.882969 2659 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-qwpwp" podStartSLOduration=41.882953341 podStartE2EDuration="41.882953341s" podCreationTimestamp="2025-11-05 23:56:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-05 23:57:13.864220107 +0000 UTC m=+47.294112267" watchObservedRunningTime="2025-11-05 23:57:13.882953341 +0000 UTC m=+47.312845501" Nov 5 23:57:17.124940 systemd[1]: Started sshd@9-10.0.0.117:22-10.0.0.1:42408.service - OpenSSH per-connection server daemon (10.0.0.1:42408). Nov 5 23:57:17.186011 sshd[4983]: Accepted publickey for core from 10.0.0.1 port 42408 ssh2: RSA SHA256:y8QDtx1I2NVYRtkqadojlmwp5Ggjvm91KVwbHRQlRRI Nov 5 23:57:17.187741 sshd-session[4983]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 23:57:17.192397 systemd-logind[1514]: New session 10 of user core. Nov 5 23:57:17.206664 systemd[1]: Started session-10.scope - Session 10 of User core. Nov 5 23:57:17.359162 sshd[4986]: Connection closed by 10.0.0.1 port 42408 Nov 5 23:57:17.360481 sshd-session[4983]: pam_unix(sshd:session): session closed for user core Nov 5 23:57:17.370708 systemd[1]: sshd@9-10.0.0.117:22-10.0.0.1:42408.service: Deactivated successfully. Nov 5 23:57:17.372663 systemd[1]: session-10.scope: Deactivated successfully. Nov 5 23:57:17.373614 systemd-logind[1514]: Session 10 logged out. Waiting for processes to exit. 
Nov 5 23:57:17.375975 systemd[1]: Started sshd@10-10.0.0.117:22-10.0.0.1:42422.service - OpenSSH per-connection server daemon (10.0.0.1:42422). Nov 5 23:57:17.376961 systemd-logind[1514]: Removed session 10. Nov 5 23:57:17.441114 sshd[5000]: Accepted publickey for core from 10.0.0.1 port 42422 ssh2: RSA SHA256:y8QDtx1I2NVYRtkqadojlmwp5Ggjvm91KVwbHRQlRRI Nov 5 23:57:17.442995 sshd-session[5000]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 23:57:17.447979 systemd-logind[1514]: New session 11 of user core. Nov 5 23:57:17.454595 systemd[1]: Started session-11.scope - Session 11 of User core. Nov 5 23:57:17.618000 sshd[5003]: Connection closed by 10.0.0.1 port 42422 Nov 5 23:57:17.618735 sshd-session[5000]: pam_unix(sshd:session): session closed for user core Nov 5 23:57:17.627121 systemd[1]: sshd@10-10.0.0.117:22-10.0.0.1:42422.service: Deactivated successfully. Nov 5 23:57:17.630280 systemd[1]: session-11.scope: Deactivated successfully. Nov 5 23:57:17.634118 systemd-logind[1514]: Session 11 logged out. Waiting for processes to exit. Nov 5 23:57:17.637380 systemd[1]: Started sshd@11-10.0.0.117:22-10.0.0.1:42438.service - OpenSSH per-connection server daemon (10.0.0.1:42438). Nov 5 23:57:17.641483 systemd-logind[1514]: Removed session 11. Nov 5 23:57:17.692244 sshd[5017]: Accepted publickey for core from 10.0.0.1 port 42438 ssh2: RSA SHA256:y8QDtx1I2NVYRtkqadojlmwp5Ggjvm91KVwbHRQlRRI Nov 5 23:57:17.693725 sshd-session[5017]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 23:57:17.698590 systemd-logind[1514]: New session 12 of user core. Nov 5 23:57:17.713679 systemd[1]: Started session-12.scope - Session 12 of User core. Nov 5 23:57:17.867483 sshd[5020]: Connection closed by 10.0.0.1 port 42438 Nov 5 23:57:17.867038 sshd-session[5017]: pam_unix(sshd:session): session closed for user core Nov 5 23:57:17.870610 systemd[1]: sshd@11-10.0.0.117:22-10.0.0.1:42438.service: Deactivated successfully. 
Nov 5 23:57:17.872385 systemd[1]: session-12.scope: Deactivated successfully. Nov 5 23:57:17.873139 systemd-logind[1514]: Session 12 logged out. Waiting for processes to exit. Nov 5 23:57:17.874199 systemd-logind[1514]: Removed session 12. Nov 5 23:57:18.668170 containerd[1536]: time="2025-11-05T23:57:18.668087928Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 5 23:57:18.876700 containerd[1536]: time="2025-11-05T23:57:18.876608841Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 23:57:18.943554 containerd[1536]: time="2025-11-05T23:57:18.943410483Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 5 23:57:18.943554 containerd[1536]: time="2025-11-05T23:57:18.943509963Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 5 23:57:18.943958 kubelet[2659]: E1105 23:57:18.943798 2659 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 5 23:57:18.943958 kubelet[2659]: E1105 23:57:18.943842 2659 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 5 23:57:18.943958 kubelet[2659]: E1105 23:57:18.943907 2659 
kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-5b85fb9cd9-k57j5_calico-system(d80b8472-e237-414c-aab6-2c809f90c36e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 5 23:57:18.945224 containerd[1536]: time="2025-11-05T23:57:18.944970360Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 5 23:57:19.174222 containerd[1536]: time="2025-11-05T23:57:19.173055977Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 23:57:19.218840 containerd[1536]: time="2025-11-05T23:57:19.218526862Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 5 23:57:19.218840 containerd[1536]: time="2025-11-05T23:57:19.218711301Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 5 23:57:19.218978 kubelet[2659]: E1105 23:57:19.218795 2659 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 5 23:57:19.218978 kubelet[2659]: E1105 23:57:19.218841 2659 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 5 23:57:19.218978 kubelet[2659]: E1105 23:57:19.218909 2659 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-5b85fb9cd9-k57j5_calico-system(d80b8472-e237-414c-aab6-2c809f90c36e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 5 23:57:19.219055 kubelet[2659]: E1105 23:57:19.218947 2659 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5b85fb9cd9-k57j5" podUID="d80b8472-e237-414c-aab6-2c809f90c36e" Nov 5 23:57:22.668967 containerd[1536]: time="2025-11-05T23:57:22.668900576Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 5 23:57:22.885443 containerd[1536]: time="2025-11-05T23:57:22.885332721Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 23:57:22.886210 systemd[1]: Started 
sshd@12-10.0.0.117:22-10.0.0.1:48344.service - OpenSSH per-connection server daemon (10.0.0.1:48344). Nov 5 23:57:22.887256 containerd[1536]: time="2025-11-05T23:57:22.887201679Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 5 23:57:22.887323 containerd[1536]: time="2025-11-05T23:57:22.887297319Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 5 23:57:22.889075 kubelet[2659]: E1105 23:57:22.887615 2659 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 23:57:22.889075 kubelet[2659]: E1105 23:57:22.887675 2659 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 23:57:22.889075 kubelet[2659]: E1105 23:57:22.887826 2659 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-76787f47fb-qqx58_calico-apiserver(91cfdc17-cd5c-4271-8452-f8f6eced611b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 5 23:57:22.889075 kubelet[2659]: E1105 23:57:22.887859 2659 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-76787f47fb-qqx58" podUID="91cfdc17-cd5c-4271-8452-f8f6eced611b" Nov 5 23:57:22.889473 containerd[1536]: time="2025-11-05T23:57:22.888335157Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 5 23:57:22.940026 sshd[5047]: Accepted publickey for core from 10.0.0.1 port 48344 ssh2: RSA SHA256:y8QDtx1I2NVYRtkqadojlmwp5Ggjvm91KVwbHRQlRRI Nov 5 23:57:22.941966 sshd-session[5047]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 23:57:22.946337 systemd-logind[1514]: New session 13 of user core. Nov 5 23:57:22.960633 systemd[1]: Started session-13.scope - Session 13 of User core. 
Nov 5 23:57:23.094049 containerd[1536]: time="2025-11-05T23:57:23.093136566Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 23:57:23.094919 containerd[1536]: time="2025-11-05T23:57:23.094857044Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 5 23:57:23.095210 containerd[1536]: time="2025-11-05T23:57:23.094936684Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 5 23:57:23.095385 kubelet[2659]: E1105 23:57:23.095335 2659 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 5 23:57:23.095466 kubelet[2659]: E1105 23:57:23.095393 2659 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 5 23:57:23.095510 kubelet[2659]: E1105 23:57:23.095477 2659 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-5f4d5b4c8b-52fcx_calico-system(dcc72545-d360-40c8-82d4-47a2a498215e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 5 23:57:23.095568 kubelet[2659]: E1105 23:57:23.095515 2659 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5f4d5b4c8b-52fcx" podUID="dcc72545-d360-40c8-82d4-47a2a498215e" Nov 5 23:57:23.111911 sshd[5050]: Connection closed by 10.0.0.1 port 48344 Nov 5 23:57:23.112425 sshd-session[5047]: pam_unix(sshd:session): session closed for user core Nov 5 23:57:23.123256 systemd[1]: sshd@12-10.0.0.117:22-10.0.0.1:48344.service: Deactivated successfully. Nov 5 23:57:23.125074 systemd[1]: session-13.scope: Deactivated successfully. Nov 5 23:57:23.126014 systemd-logind[1514]: Session 13 logged out. Waiting for processes to exit. Nov 5 23:57:23.129344 systemd[1]: Started sshd@13-10.0.0.117:22-10.0.0.1:48348.service - OpenSSH per-connection server daemon (10.0.0.1:48348). Nov 5 23:57:23.129935 systemd-logind[1514]: Removed session 13. Nov 5 23:57:23.181712 sshd[5064]: Accepted publickey for core from 10.0.0.1 port 48348 ssh2: RSA SHA256:y8QDtx1I2NVYRtkqadojlmwp5Ggjvm91KVwbHRQlRRI Nov 5 23:57:23.183069 sshd-session[5064]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 23:57:23.187843 systemd-logind[1514]: New session 14 of user core. Nov 5 23:57:23.196670 systemd[1]: Started session-14.scope - Session 14 of User core. 
Nov 5 23:57:23.412071 sshd[5067]: Connection closed by 10.0.0.1 port 48348 Nov 5 23:57:23.411920 sshd-session[5064]: pam_unix(sshd:session): session closed for user core Nov 5 23:57:23.425160 systemd[1]: sshd@13-10.0.0.117:22-10.0.0.1:48348.service: Deactivated successfully. Nov 5 23:57:23.426967 systemd[1]: session-14.scope: Deactivated successfully. Nov 5 23:57:23.427774 systemd-logind[1514]: Session 14 logged out. Waiting for processes to exit. Nov 5 23:57:23.430531 systemd[1]: Started sshd@14-10.0.0.117:22-10.0.0.1:48364.service - OpenSSH per-connection server daemon (10.0.0.1:48364). Nov 5 23:57:23.431241 systemd-logind[1514]: Removed session 14. Nov 5 23:57:23.493058 sshd[5079]: Accepted publickey for core from 10.0.0.1 port 48364 ssh2: RSA SHA256:y8QDtx1I2NVYRtkqadojlmwp5Ggjvm91KVwbHRQlRRI Nov 5 23:57:23.494512 sshd-session[5079]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 23:57:23.499162 systemd-logind[1514]: New session 15 of user core. Nov 5 23:57:23.515645 systemd[1]: Started session-15.scope - Session 15 of User core. Nov 5 23:57:24.346811 sshd[5082]: Connection closed by 10.0.0.1 port 48364 Nov 5 23:57:24.347765 sshd-session[5079]: pam_unix(sshd:session): session closed for user core Nov 5 23:57:24.360071 systemd[1]: sshd@14-10.0.0.117:22-10.0.0.1:48364.service: Deactivated successfully. Nov 5 23:57:24.365377 systemd[1]: session-15.scope: Deactivated successfully. Nov 5 23:57:24.366845 systemd-logind[1514]: Session 15 logged out. Waiting for processes to exit. Nov 5 23:57:24.373304 systemd[1]: Started sshd@15-10.0.0.117:22-10.0.0.1:48380.service - OpenSSH per-connection server daemon (10.0.0.1:48380). Nov 5 23:57:24.374978 systemd-logind[1514]: Removed session 15. 
Nov 5 23:57:24.439658 sshd[5100]: Accepted publickey for core from 10.0.0.1 port 48380 ssh2: RSA SHA256:y8QDtx1I2NVYRtkqadojlmwp5Ggjvm91KVwbHRQlRRI Nov 5 23:57:24.441048 sshd-session[5100]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 23:57:24.445492 systemd-logind[1514]: New session 16 of user core. Nov 5 23:57:24.452620 systemd[1]: Started session-16.scope - Session 16 of User core. Nov 5 23:57:24.668310 containerd[1536]: time="2025-11-05T23:57:24.668125928Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 5 23:57:24.745456 sshd[5103]: Connection closed by 10.0.0.1 port 48380 Nov 5 23:57:24.745324 sshd-session[5100]: pam_unix(sshd:session): session closed for user core Nov 5 23:57:24.752872 systemd[1]: sshd@15-10.0.0.117:22-10.0.0.1:48380.service: Deactivated successfully. Nov 5 23:57:24.756066 systemd[1]: session-16.scope: Deactivated successfully. Nov 5 23:57:24.757861 systemd-logind[1514]: Session 16 logged out. Waiting for processes to exit. Nov 5 23:57:24.760550 systemd[1]: Started sshd@16-10.0.0.117:22-10.0.0.1:48390.service - OpenSSH per-connection server daemon (10.0.0.1:48390). Nov 5 23:57:24.764107 systemd-logind[1514]: Removed session 16. Nov 5 23:57:24.822263 sshd[5114]: Accepted publickey for core from 10.0.0.1 port 48390 ssh2: RSA SHA256:y8QDtx1I2NVYRtkqadojlmwp5Ggjvm91KVwbHRQlRRI Nov 5 23:57:24.823887 sshd-session[5114]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 23:57:24.827781 systemd-logind[1514]: New session 17 of user core. Nov 5 23:57:24.845666 systemd[1]: Started session-17.scope - Session 17 of User core. 
Nov 5 23:57:24.870272 containerd[1536]: time="2025-11-05T23:57:24.870090886Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 23:57:24.871198 containerd[1536]: time="2025-11-05T23:57:24.871094445Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 5 23:57:24.871198 containerd[1536]: time="2025-11-05T23:57:24.871130525Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 5 23:57:24.871371 kubelet[2659]: E1105 23:57:24.871312 2659 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 23:57:24.871371 kubelet[2659]: E1105 23:57:24.871354 2659 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 23:57:24.871699 kubelet[2659]: E1105 23:57:24.871509 2659 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-76787f47fb-22hgp_calico-apiserver(36e52d26-98b7-4979-90a9-0d6b22e8f358): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 5 23:57:24.871699 kubelet[2659]: E1105 23:57:24.871554 2659 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-76787f47fb-22hgp" podUID="36e52d26-98b7-4979-90a9-0d6b22e8f358" Nov 5 23:57:24.872186 containerd[1536]: time="2025-11-05T23:57:24.871946244Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 5 23:57:24.981303 sshd[5117]: Connection closed by 10.0.0.1 port 48390 Nov 5 23:57:24.982086 sshd-session[5114]: pam_unix(sshd:session): session closed for user core Nov 5 23:57:24.985768 systemd[1]: sshd@16-10.0.0.117:22-10.0.0.1:48390.service: Deactivated successfully. Nov 5 23:57:24.987696 systemd[1]: session-17.scope: Deactivated successfully. Nov 5 23:57:24.988547 systemd-logind[1514]: Session 17 logged out. Waiting for processes to exit. Nov 5 23:57:24.989998 systemd-logind[1514]: Removed session 17. 
Nov 5 23:57:25.080622 containerd[1536]: time="2025-11-05T23:57:25.080571201Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 23:57:25.081511 containerd[1536]: time="2025-11-05T23:57:25.081458720Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 5 23:57:25.081572 containerd[1536]: time="2025-11-05T23:57:25.081519319Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 5 23:57:25.081754 kubelet[2659]: E1105 23:57:25.081716 2659 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 5 23:57:25.081806 kubelet[2659]: E1105 23:57:25.081763 2659 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 5 23:57:25.081861 kubelet[2659]: E1105 23:57:25.081840 2659 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-zjm2m_calico-system(af9940bc-b869-4f48-b46d-3bd6b6993532): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": 
ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 5 23:57:25.081902 kubelet[2659]: E1105 23:57:25.081880 2659 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-zjm2m" podUID="af9940bc-b869-4f48-b46d-3bd6b6993532" Nov 5 23:57:26.669226 containerd[1536]: time="2025-11-05T23:57:26.669083705Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 5 23:57:26.885619 containerd[1536]: time="2025-11-05T23:57:26.885505997Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 23:57:26.886552 containerd[1536]: time="2025-11-05T23:57:26.886521636Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 5 23:57:26.886607 containerd[1536]: time="2025-11-05T23:57:26.886549516Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 5 23:57:26.886744 kubelet[2659]: E1105 23:57:26.886705 2659 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 5 23:57:26.887102 kubelet[2659]: E1105 23:57:26.886754 2659 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to 
pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 5 23:57:26.887102 kubelet[2659]: E1105 23:57:26.886819 2659 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-lbl86_calico-system(7e0e0ade-490b-4bff-b3bc-5b351134410a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 5 23:57:26.890442 containerd[1536]: time="2025-11-05T23:57:26.890388032Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 5 23:57:27.110990 containerd[1536]: time="2025-11-05T23:57:27.110937607Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 23:57:27.111929 containerd[1536]: time="2025-11-05T23:57:27.111891686Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 5 23:57:27.112074 containerd[1536]: time="2025-11-05T23:57:27.111971966Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 5 23:57:27.112171 kubelet[2659]: E1105 23:57:27.112132 2659 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 5 23:57:27.112247 kubelet[2659]: E1105 23:57:27.112196 2659 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 5 23:57:27.112313 kubelet[2659]: E1105 23:57:27.112293 2659 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-lbl86_calico-system(7e0e0ade-490b-4bff-b3bc-5b351134410a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 5 23:57:27.112422 kubelet[2659]: E1105 23:57:27.112337 2659 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" 
pod="calico-system/csi-node-driver-lbl86" podUID="7e0e0ade-490b-4bff-b3bc-5b351134410a" Nov 5 23:57:29.995913 systemd[1]: Started sshd@17-10.0.0.117:22-10.0.0.1:36238.service - OpenSSH per-connection server daemon (10.0.0.1:36238). Nov 5 23:57:30.055922 sshd[5143]: Accepted publickey for core from 10.0.0.1 port 36238 ssh2: RSA SHA256:y8QDtx1I2NVYRtkqadojlmwp5Ggjvm91KVwbHRQlRRI Nov 5 23:57:30.057305 sshd-session[5143]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 23:57:30.063722 systemd-logind[1514]: New session 18 of user core. Nov 5 23:57:30.074748 systemd[1]: Started session-18.scope - Session 18 of User core. Nov 5 23:57:30.212870 sshd[5146]: Connection closed by 10.0.0.1 port 36238 Nov 5 23:57:30.213186 sshd-session[5143]: pam_unix(sshd:session): session closed for user core Nov 5 23:57:30.217154 systemd[1]: sshd@17-10.0.0.117:22-10.0.0.1:36238.service: Deactivated successfully. Nov 5 23:57:30.221015 systemd[1]: session-18.scope: Deactivated successfully. Nov 5 23:57:30.227927 systemd-logind[1514]: Session 18 logged out. Waiting for processes to exit. Nov 5 23:57:30.228860 systemd-logind[1514]: Removed session 18. 
Nov 5 23:57:32.672785 kubelet[2659]: E1105 23:57:32.671509 2659 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5b85fb9cd9-k57j5" podUID="d80b8472-e237-414c-aab6-2c809f90c36e" Nov 5 23:57:33.904392 containerd[1536]: time="2025-11-05T23:57:33.902976202Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b0a7a692cee7b671aeb693d792b306006f3e7b53d3e6ed3260f97491a44a8153\" id:\"336ec44a366f161effdae0819ef783afd237d370c0c0f9a9ffe564011fb57b07\" pid:5174 exited_at:{seconds:1762387053 nanos:901341241}" Nov 5 23:57:35.226316 systemd[1]: Started sshd@18-10.0.0.117:22-10.0.0.1:36242.service - OpenSSH per-connection server daemon (10.0.0.1:36242). Nov 5 23:57:35.286933 sshd[5187]: Accepted publickey for core from 10.0.0.1 port 36242 ssh2: RSA SHA256:y8QDtx1I2NVYRtkqadojlmwp5Ggjvm91KVwbHRQlRRI Nov 5 23:57:35.288310 sshd-session[5187]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 23:57:35.292510 systemd-logind[1514]: New session 19 of user core. Nov 5 23:57:35.298607 systemd[1]: Started session-19.scope - Session 19 of User core. 
Nov 5 23:57:35.453556 sshd[5190]: Connection closed by 10.0.0.1 port 36242 Nov 5 23:57:35.453383 sshd-session[5187]: pam_unix(sshd:session): session closed for user core Nov 5 23:57:35.457460 systemd-logind[1514]: Session 19 logged out. Waiting for processes to exit. Nov 5 23:57:35.457835 systemd[1]: sshd@18-10.0.0.117:22-10.0.0.1:36242.service: Deactivated successfully. Nov 5 23:57:35.460596 systemd[1]: session-19.scope: Deactivated successfully. Nov 5 23:57:35.462115 systemd-logind[1514]: Removed session 19. Nov 5 23:57:35.667390 kubelet[2659]: E1105 23:57:35.667317 2659 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5f4d5b4c8b-52fcx" podUID="dcc72545-d360-40c8-82d4-47a2a498215e" Nov 5 23:57:36.670635 kubelet[2659]: E1105 23:57:36.670591 2659 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-76787f47fb-qqx58" podUID="91cfdc17-cd5c-4271-8452-f8f6eced611b" Nov 5 23:57:36.671177 kubelet[2659]: E1105 23:57:36.670680 2659 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" 
with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-zjm2m" podUID="af9940bc-b869-4f48-b46d-3bd6b6993532" Nov 5 23:57:37.668540 kubelet[2659]: E1105 23:57:37.668446 2659 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-76787f47fb-22hgp" podUID="36e52d26-98b7-4979-90a9-0d6b22e8f358" Nov 5 23:57:38.671909 kubelet[2659]: E1105 23:57:38.671781 2659 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-lbl86" podUID="7e0e0ade-490b-4bff-b3bc-5b351134410a" Nov 5 23:57:40.468925 systemd[1]: Started sshd@19-10.0.0.117:22-10.0.0.1:33424.service - OpenSSH per-connection server daemon (10.0.0.1:33424). Nov 5 23:57:40.524321 sshd[5206]: Accepted publickey for core from 10.0.0.1 port 33424 ssh2: RSA SHA256:y8QDtx1I2NVYRtkqadojlmwp5Ggjvm91KVwbHRQlRRI Nov 5 23:57:40.525468 sshd-session[5206]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 23:57:40.529144 systemd-logind[1514]: New session 20 of user core. Nov 5 23:57:40.535568 systemd[1]: Started session-20.scope - Session 20 of User core. Nov 5 23:57:40.674830 sshd[5209]: Connection closed by 10.0.0.1 port 33424 Nov 5 23:57:40.675170 sshd-session[5206]: pam_unix(sshd:session): session closed for user core Nov 5 23:57:40.678622 systemd[1]: sshd@19-10.0.0.117:22-10.0.0.1:33424.service: Deactivated successfully. Nov 5 23:57:40.680536 systemd[1]: session-20.scope: Deactivated successfully. Nov 5 23:57:40.681155 systemd-logind[1514]: Session 20 logged out. Waiting for processes to exit. Nov 5 23:57:40.682098 systemd-logind[1514]: Removed session 20.