Nov 8 00:04:36.280518 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083] Nov 8 00:04:36.280569 kernel: Linux version 6.6.113-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Fri Nov 7 22:41:39 -00 2025 Nov 8 00:04:36.280596 kernel: KASLR disabled due to lack of seed Nov 8 00:04:36.280613 kernel: efi: EFI v2.7 by EDK II Nov 8 00:04:36.280631 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7affea98 MEMRESERVE=0x7852ee18 Nov 8 00:04:36.280649 kernel: ACPI: Early table checksum verification disabled Nov 8 00:04:36.280667 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON) Nov 8 00:04:36.280683 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013) Nov 8 00:04:36.280700 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001) Nov 8 00:04:36.280715 kernel: ACPI: DSDT 0x0000000078640000 00159D (v02 AMAZON AMZNDSDT 00000001 INTL 20160527) Nov 8 00:04:36.280737 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001) Nov 8 00:04:36.280754 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001) Nov 8 00:04:36.280770 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001) Nov 8 00:04:36.280786 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001) Nov 8 00:04:36.280806 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001) Nov 8 00:04:36.280829 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001) Nov 8 00:04:36.280848 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001) Nov 8 00:04:36.280865 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200 Nov 8 00:04:36.280883 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200') Nov 8 00:04:36.280900 kernel: printk: bootconsole [uart0] enabled Nov 8 00:04:36.280918 kernel: NUMA: Failed to initialise from firmware Nov 8 00:04:36.280937 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff] Nov 8 00:04:36.280955 kernel: NUMA: NODE_DATA [mem 0x4b583f800-0x4b5844fff] Nov 8 00:04:36.280973 kernel: Zone ranges: Nov 8 00:04:36.280991 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff] Nov 8 00:04:36.281009 kernel: DMA32 empty Nov 8 00:04:36.281033 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff] Nov 8 00:04:36.281052 kernel: Movable zone start for each node Nov 8 00:04:36.281069 kernel: Early memory node ranges Nov 8 00:04:36.281086 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff] Nov 8 00:04:36.281104 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff] Nov 8 00:04:36.282550 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff] Nov 8 00:04:36.282572 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff] Nov 8 00:04:36.282591 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff] Nov 8 00:04:36.282609 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff] Nov 8 00:04:36.282628 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff] Nov 8 00:04:36.282647 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff] Nov 8 00:04:36.282665 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff] Nov 8 00:04:36.282696 kernel: On node 0, zone Normal: 8192 pages in 
unavailable ranges Nov 8 00:04:36.282715 kernel: psci: probing for conduit method from ACPI. Nov 8 00:04:36.282742 kernel: psci: PSCIv1.0 detected in firmware. Nov 8 00:04:36.282762 kernel: psci: Using standard PSCI v0.2 function IDs Nov 8 00:04:36.282780 kernel: psci: Trusted OS migration not required Nov 8 00:04:36.282804 kernel: psci: SMC Calling Convention v1.1 Nov 8 00:04:36.282823 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000001) Nov 8 00:04:36.282842 kernel: percpu: Embedded 31 pages/cpu s86120 r8192 d32664 u126976 Nov 8 00:04:36.282861 kernel: pcpu-alloc: s86120 r8192 d32664 u126976 alloc=31*4096 Nov 8 00:04:36.282879 kernel: pcpu-alloc: [0] 0 [0] 1 Nov 8 00:04:36.282898 kernel: Detected PIPT I-cache on CPU0 Nov 8 00:04:36.282916 kernel: CPU features: detected: GIC system register CPU interface Nov 8 00:04:36.282934 kernel: CPU features: detected: Spectre-v2 Nov 8 00:04:36.282953 kernel: CPU features: detected: Spectre-v3a Nov 8 00:04:36.282971 kernel: CPU features: detected: Spectre-BHB Nov 8 00:04:36.282989 kernel: CPU features: detected: ARM erratum 1742098 Nov 8 00:04:36.283014 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923 Nov 8 00:04:36.283033 kernel: alternatives: applying boot alternatives Nov 8 00:04:36.283053 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=653fdcb8a67e255793a721f32d76976d3ed6223b235b7c618cf75e5edffbdb68 Nov 8 00:04:36.283073 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Nov 8 00:04:36.283091 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Nov 8 00:04:36.283155 kernel: Fallback order for Node 0: 0 Nov 8 00:04:36.283184 kernel: Built 1 zonelists, mobility grouping on. Total pages: 991872 Nov 8 00:04:36.283202 kernel: Policy zone: Normal Nov 8 00:04:36.283220 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Nov 8 00:04:36.283238 kernel: software IO TLB: area num 2. Nov 8 00:04:36.283256 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB) Nov 8 00:04:36.283282 kernel: Memory: 3820088K/4030464K available (10304K kernel code, 2180K rwdata, 8112K rodata, 39424K init, 897K bss, 210376K reserved, 0K cma-reserved) Nov 8 00:04:36.283301 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Nov 8 00:04:36.283319 kernel: rcu: Preemptible hierarchical RCU implementation. Nov 8 00:04:36.283338 kernel: rcu: RCU event tracing is enabled. Nov 8 00:04:36.283356 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Nov 8 00:04:36.283374 kernel: Trampoline variant of Tasks RCU enabled. Nov 8 00:04:36.283404 kernel: Tracing variant of Tasks RCU enabled. Nov 8 00:04:36.283430 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Nov 8 00:04:36.283449 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Nov 8 00:04:36.283468 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Nov 8 00:04:36.283487 kernel: GICv3: 96 SPIs implemented Nov 8 00:04:36.283514 kernel: GICv3: 0 Extended SPIs implemented Nov 8 00:04:36.283535 kernel: Root IRQ handler: gic_handle_irq Nov 8 00:04:36.283554 kernel: GICv3: GICv3 features: 16 PPIs Nov 8 00:04:36.283573 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000 Nov 8 00:04:36.283591 kernel: ITS [mem 0x10080000-0x1009ffff] Nov 8 00:04:36.283612 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000b0000 (indirect, esz 8, psz 64K, shr 1) Nov 8 00:04:36.283632 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000c0000 (flat, esz 8, psz 64K, shr 1) Nov 8 00:04:36.283651 kernel: GICv3: using LPI property table @0x00000004000d0000 Nov 8 00:04:36.283670 kernel: ITS: Using hypervisor restricted LPI range [128] Nov 8 00:04:36.283689 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000e0000 Nov 8 00:04:36.283708 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Nov 8 00:04:36.283725 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt). Nov 8 00:04:36.283749 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns Nov 8 00:04:36.283768 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns Nov 8 00:04:36.283788 kernel: Console: colour dummy device 80x25 Nov 8 00:04:36.283807 kernel: printk: console [tty1] enabled Nov 8 00:04:36.283826 kernel: ACPI: Core revision 20230628 Nov 8 00:04:36.283846 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333) Nov 8 00:04:36.283865 kernel: pid_max: default: 32768 minimum: 301 Nov 8 00:04:36.283885 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Nov 8 00:04:36.283906 kernel: landlock: Up and running. Nov 8 00:04:36.283932 kernel: SELinux: Initializing. Nov 8 00:04:36.283952 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Nov 8 00:04:36.283971 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Nov 8 00:04:36.283990 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Nov 8 00:04:36.284009 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Nov 8 00:04:36.284027 kernel: rcu: Hierarchical SRCU implementation. Nov 8 00:04:36.284046 kernel: rcu: Max phase no-delay instances is 400. Nov 8 00:04:36.284065 kernel: Platform MSI: ITS@0x10080000 domain created Nov 8 00:04:36.284083 kernel: PCI/MSI: ITS@0x10080000 domain created Nov 8 00:04:36.289145 kernel: Remapping and enabling EFI services. Nov 8 00:04:36.289212 kernel: smp: Bringing up secondary CPUs ... Nov 8 00:04:36.289236 kernel: Detected PIPT I-cache on CPU1 Nov 8 00:04:36.289260 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000 Nov 8 00:04:36.289282 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000004000f0000 Nov 8 00:04:36.289303 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083] Nov 8 00:04:36.289322 kernel: smp: Brought up 1 node, 2 CPUs Nov 8 00:04:36.289346 kernel: SMP: Total of 2 processors activated. 
Nov 8 00:04:36.289364 kernel: CPU features: detected: 32-bit EL0 Support Nov 8 00:04:36.289402 kernel: CPU features: detected: 32-bit EL1 Support Nov 8 00:04:36.289424 kernel: CPU features: detected: CRC32 instructions Nov 8 00:04:36.289443 kernel: CPU: All CPU(s) started at EL1 Nov 8 00:04:36.289478 kernel: alternatives: applying system-wide alternatives Nov 8 00:04:36.289504 kernel: devtmpfs: initialized Nov 8 00:04:36.289524 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Nov 8 00:04:36.289543 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Nov 8 00:04:36.289563 kernel: pinctrl core: initialized pinctrl subsystem Nov 8 00:04:36.289583 kernel: SMBIOS 3.0.0 present. Nov 8 00:04:36.289610 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018 Nov 8 00:04:36.289629 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Nov 8 00:04:36.289648 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Nov 8 00:04:36.289668 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Nov 8 00:04:36.289688 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Nov 8 00:04:36.289708 kernel: audit: initializing netlink subsys (disabled) Nov 8 00:04:36.289727 kernel: audit: type=2000 audit(0.300:1): state=initialized audit_enabled=0 res=1 Nov 8 00:04:36.289746 kernel: thermal_sys: Registered thermal governor 'step_wise' Nov 8 00:04:36.289772 kernel: cpuidle: using governor menu Nov 8 00:04:36.289791 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. Nov 8 00:04:36.289811 kernel: ASID allocator initialised with 65536 entries Nov 8 00:04:36.289831 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Nov 8 00:04:36.289851 kernel: Serial: AMBA PL011 UART driver Nov 8 00:04:36.289871 kernel: Modules: 17488 pages in range for non-PLT usage Nov 8 00:04:36.289890 kernel: Modules: 509008 pages in range for PLT usage Nov 8 00:04:36.289909 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Nov 8 00:04:36.289928 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Nov 8 00:04:36.289953 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Nov 8 00:04:36.289974 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Nov 8 00:04:36.289994 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Nov 8 00:04:36.290014 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Nov 8 00:04:36.290033 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Nov 8 00:04:36.290053 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Nov 8 00:04:36.290075 kernel: ACPI: Added _OSI(Module Device) Nov 8 00:04:36.290094 kernel: ACPI: Added _OSI(Processor Device) Nov 8 00:04:36.290162 kernel: ACPI: Added _OSI(Processor Aggregator Device) Nov 8 00:04:36.290197 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Nov 8 00:04:36.290217 kernel: ACPI: Interpreter enabled Nov 8 00:04:36.290237 kernel: ACPI: Using GIC for interrupt routing Nov 8 00:04:36.290256 kernel: ACPI: MCFG table detected, 1 entries Nov 8 00:04:36.290276 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-0f]) Nov 8 00:04:36.290608 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Nov 8 00:04:36.290854 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Nov 8 00:04:36.291087 kernel: acpi PNP0A08:00: _OSC: OS now 
controls [PCIeHotplug PME AER PCIeCapability] Nov 8 00:04:36.293480 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x20ffffff] reserved by PNP0C02:00 Nov 8 00:04:36.293755 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x20ffffff] for [bus 00-0f] Nov 8 00:04:36.293786 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window] Nov 8 00:04:36.293807 kernel: acpiphp: Slot [1] registered Nov 8 00:04:36.293833 kernel: acpiphp: Slot [2] registered Nov 8 00:04:36.293855 kernel: acpiphp: Slot [3] registered Nov 8 00:04:36.293875 kernel: acpiphp: Slot [4] registered Nov 8 00:04:36.293895 kernel: acpiphp: Slot [5] registered Nov 8 00:04:36.293929 kernel: acpiphp: Slot [6] registered Nov 8 00:04:36.293951 kernel: acpiphp: Slot [7] registered Nov 8 00:04:36.293971 kernel: acpiphp: Slot [8] registered Nov 8 00:04:36.293990 kernel: acpiphp: Slot [9] registered Nov 8 00:04:36.294009 kernel: acpiphp: Slot [10] registered Nov 8 00:04:36.294030 kernel: acpiphp: Slot [11] registered Nov 8 00:04:36.294050 kernel: acpiphp: Slot [12] registered Nov 8 00:04:36.294070 kernel: acpiphp: Slot [13] registered Nov 8 00:04:36.294090 kernel: acpiphp: Slot [14] registered Nov 8 00:04:36.294158 kernel: acpiphp: Slot [15] registered Nov 8 00:04:36.294196 kernel: acpiphp: Slot [16] registered Nov 8 00:04:36.294218 kernel: acpiphp: Slot [17] registered Nov 8 00:04:36.294240 kernel: acpiphp: Slot [18] registered Nov 8 00:04:36.294260 kernel: acpiphp: Slot [19] registered Nov 8 00:04:36.294279 kernel: acpiphp: Slot [20] registered Nov 8 00:04:36.294298 kernel: acpiphp: Slot [21] registered Nov 8 00:04:36.294317 kernel: acpiphp: Slot [22] registered Nov 8 00:04:36.294337 kernel: acpiphp: Slot [23] registered Nov 8 00:04:36.294355 kernel: acpiphp: Slot [24] registered Nov 8 00:04:36.294380 kernel: acpiphp: Slot [25] registered Nov 8 00:04:36.294399 kernel: acpiphp: Slot [26] registered Nov 8 00:04:36.294419 kernel: acpiphp: Slot [27] registered Nov 8 00:04:36.294438 kernel: acpiphp: Slot [28] registered Nov 8 00:04:36.294457 kernel: acpiphp: Slot [29] registered Nov 8 00:04:36.294476 kernel: acpiphp: Slot [30] registered Nov 8 00:04:36.294495 kernel: acpiphp: Slot [31] registered Nov 8 00:04:36.294514 kernel: PCI host bridge to bus 0000:00 Nov 8 00:04:36.294810 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window] Nov 8 00:04:36.295057 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Nov 8 00:04:36.297768 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window] Nov 8 00:04:36.298004 kernel: pci_bus 0000:00: root bus resource [bus 00-0f] Nov 8 00:04:36.298331 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000 Nov 8 00:04:36.298610 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003 Nov 8 00:04:36.298862 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80118000-0x80118fff] Nov 8 00:04:36.299233 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 Nov 8 00:04:36.299486 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80114000-0x80117fff] Nov 8 00:04:36.299738 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold Nov 8 00:04:36.300022 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 Nov 8 00:04:36.300339 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80110000-0x80113fff] Nov 8 00:04:36.315663 kernel: pci 0000:00:05.0: reg 0x18: [mem 0x80000000-0x800fffff pref] Nov 8 00:04:36.315881 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x80100000-0x8010ffff] Nov 8 00:04:36.316157 kernel: pci 0000:00:05.0: PME# 
supported from D0 D1 D2 D3hot D3cold Nov 8 00:04:36.316394 kernel: pci 0000:00:05.0: BAR 2: assigned [mem 0x80000000-0x800fffff pref] Nov 8 00:04:36.316639 kernel: pci 0000:00:05.0: BAR 4: assigned [mem 0x80100000-0x8010ffff] Nov 8 00:04:36.316880 kernel: pci 0000:00:04.0: BAR 0: assigned [mem 0x80110000-0x80113fff] Nov 8 00:04:36.319167 kernel: pci 0000:00:05.0: BAR 0: assigned [mem 0x80114000-0x80117fff] Nov 8 00:04:36.319484 kernel: pci 0000:00:01.0: BAR 0: assigned [mem 0x80118000-0x80118fff] Nov 8 00:04:36.319683 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window] Nov 8 00:04:36.319879 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Nov 8 00:04:36.320071 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window] Nov 8 00:04:36.320098 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Nov 8 00:04:36.320140 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Nov 8 00:04:36.320163 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Nov 8 00:04:36.320183 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Nov 8 00:04:36.320202 kernel: iommu: Default domain type: Translated Nov 8 00:04:36.320222 kernel: iommu: DMA domain TLB invalidation policy: strict mode Nov 8 00:04:36.320247 kernel: efivars: Registered efivars operations Nov 8 00:04:36.320267 kernel: vgaarb: loaded Nov 8 00:04:36.320286 kernel: clocksource: Switched to clocksource arch_sys_counter Nov 8 00:04:36.320304 kernel: VFS: Disk quotas dquot_6.6.0 Nov 8 00:04:36.320324 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Nov 8 00:04:36.320343 kernel: pnp: PnP ACPI init Nov 8 00:04:36.320589 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved Nov 8 00:04:36.320618 kernel: pnp: PnP ACPI: found 1 devices Nov 8 00:04:36.320638 kernel: NET: Registered PF_INET protocol family Nov 8 00:04:36.320664 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Nov 8 00:04:36.320683 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Nov 8 00:04:36.320703 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Nov 8 00:04:36.320722 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Nov 8 00:04:36.320740 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Nov 8 00:04:36.320760 kernel: TCP: Hash tables configured (established 32768 bind 32768) Nov 8 00:04:36.320779 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Nov 8 00:04:36.320798 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Nov 8 00:04:36.320828 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Nov 8 00:04:36.320857 kernel: PCI: CLS 0 bytes, default 64 Nov 8 00:04:36.320877 kernel: kvm [1]: HYP mode not available Nov 8 00:04:36.320896 kernel: Initialise system trusted keyrings Nov 8 00:04:36.320915 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Nov 8 00:04:36.320934 kernel: Key type asymmetric registered Nov 8 00:04:36.320953 kernel: Asymmetric key parser 'x509' registered Nov 8 00:04:36.320972 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Nov 8 00:04:36.320991 kernel: io scheduler mq-deadline registered Nov 8 00:04:36.321010 kernel: io scheduler kyber registered Nov 8 00:04:36.321034 kernel: io scheduler bfq registered Nov 8 00:04:36.323052 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered Nov 8 00:04:36.323100 
kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Nov 8 00:04:36.323194 kernel: ACPI: button: Power Button [PWRB] Nov 8 00:04:36.323217 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1 Nov 8 00:04:36.323238 kernel: ACPI: button: Sleep Button [SLPB] Nov 8 00:04:36.323259 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Nov 8 00:04:36.323279 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37 Nov 8 00:04:36.323569 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012) Nov 8 00:04:36.323608 kernel: printk: console [ttyS0] disabled Nov 8 00:04:36.323630 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A Nov 8 00:04:36.323650 kernel: printk: console [ttyS0] enabled Nov 8 00:04:36.323670 kernel: printk: bootconsole [uart0] disabled Nov 8 00:04:36.323691 kernel: thunder_xcv, ver 1.0 Nov 8 00:04:36.323711 kernel: thunder_bgx, ver 1.0 Nov 8 00:04:36.323730 kernel: nicpf, ver 1.0 Nov 8 00:04:36.323749 kernel: nicvf, ver 1.0 Nov 8 00:04:36.324035 kernel: rtc-efi rtc-efi.0: registered as rtc0 Nov 8 00:04:36.324331 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-11-08T00:04:35 UTC (1762560275) Nov 8 00:04:36.324367 kernel: hid: raw HID events driver (C) Jiri Kosina Nov 8 00:04:36.324389 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 counters available Nov 8 00:04:36.324409 kernel: watchdog: Delayed init of the lockup detector failed: -19 Nov 8 00:04:36.324450 kernel: watchdog: Hard watchdog permanently disabled Nov 8 00:04:36.324477 kernel: NET: Registered PF_INET6 protocol family Nov 8 00:04:36.324499 kernel: Segment Routing with IPv6 Nov 8 00:04:36.324530 kernel: In-situ OAM (IOAM) with IPv6 Nov 8 00:04:36.324551 kernel: NET: Registered PF_PACKET protocol family Nov 8 00:04:36.324571 kernel: Key type dns_resolver registered Nov 8 00:04:36.324591 kernel: registered taskstats version 1 Nov 8 00:04:36.324611 kernel: Loading compiled-in X.509 certificates Nov 8 00:04:36.324631 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.113-flatcar: e35af6a719ba4c60f9d6788b11f5e5836ebf73b5' Nov 8 00:04:36.324650 kernel: Key type .fscrypt registered Nov 8 00:04:36.324669 kernel: Key type fscrypt-provisioning registered Nov 8 00:04:36.324687 kernel: ima: No TPM chip found, activating TPM-bypass! Nov 8 00:04:36.324712 kernel: ima: Allocated hash algorithm: sha1 Nov 8 00:04:36.324732 kernel: ima: No architecture policies found Nov 8 00:04:36.324751 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Nov 8 00:04:36.324770 kernel: clk: Disabling unused clocks Nov 8 00:04:36.324789 kernel: Freeing unused kernel memory: 39424K Nov 8 00:04:36.324808 kernel: Run /init as init process Nov 8 00:04:36.324828 kernel: with arguments: Nov 8 00:04:36.324847 kernel: /init Nov 8 00:04:36.324866 kernel: with environment: Nov 8 00:04:36.324886 kernel: HOME=/ Nov 8 00:04:36.324912 kernel: TERM=linux Nov 8 00:04:36.324939 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Nov 8 00:04:36.324964 systemd[1]: Detected virtualization amazon. Nov 8 00:04:36.324985 systemd[1]: Detected architecture arm64. Nov 8 00:04:36.325005 systemd[1]: Running in initrd. 
Nov 8 00:04:36.325026 systemd[1]: No hostname configured, using default hostname. Nov 8 00:04:36.325047 systemd[1]: Hostname set to . Nov 8 00:04:36.325081 systemd[1]: Initializing machine ID from VM UUID. Nov 8 00:04:36.325103 systemd[1]: Queued start job for default target initrd.target. Nov 8 00:04:36.328734 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 8 00:04:36.328760 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 8 00:04:36.328784 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Nov 8 00:04:36.328806 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Nov 8 00:04:36.328842 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Nov 8 00:04:36.328865 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Nov 8 00:04:36.328900 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Nov 8 00:04:36.328922 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Nov 8 00:04:36.328943 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 8 00:04:36.328964 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 8 00:04:36.328985 systemd[1]: Reached target paths.target - Path Units. Nov 8 00:04:36.329006 systemd[1]: Reached target slices.target - Slice Units. Nov 8 00:04:36.329026 systemd[1]: Reached target swap.target - Swaps. Nov 8 00:04:36.329053 systemd[1]: Reached target timers.target - Timer Units. Nov 8 00:04:36.329074 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Nov 8 00:04:36.329095 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 8 00:04:36.329157 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Nov 8 00:04:36.329182 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Nov 8 00:04:36.329205 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Nov 8 00:04:36.329226 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 8 00:04:36.329248 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Nov 8 00:04:36.329276 systemd[1]: Reached target sockets.target - Socket Units. Nov 8 00:04:36.329299 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Nov 8 00:04:36.329320 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Nov 8 00:04:36.329342 systemd[1]: Finished network-cleanup.service - Network Cleanup. Nov 8 00:04:36.329364 systemd[1]: Starting systemd-fsck-usr.service... Nov 8 00:04:36.329387 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 8 00:04:36.329408 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Nov 8 00:04:36.329431 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 8 00:04:36.329453 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Nov 8 00:04:36.329482 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Nov 8 00:04:36.329505 systemd[1]: Finished systemd-fsck-usr.service. 
Nov 8 00:04:36.329528 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Nov 8 00:04:36.329609 systemd-journald[250]: Collecting audit messages is disabled. Nov 8 00:04:36.329662 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 8 00:04:36.329684 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Nov 8 00:04:36.329706 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 8 00:04:36.329727 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Nov 8 00:04:36.329753 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 8 00:04:36.329773 kernel: Bridge firewalling registered Nov 8 00:04:36.329794 systemd-journald[250]: Journal started Nov 8 00:04:36.329832 systemd-journald[250]: Runtime Journal (/run/log/journal/ec248ab318588033392ba4d8bd334629) is 8.0M, max 75.3M, 67.3M free. Nov 8 00:04:36.266289 systemd-modules-load[251]: Inserted module 'overlay' Nov 8 00:04:36.327710 systemd-modules-load[251]: Inserted module 'br_netfilter' Nov 8 00:04:36.340350 systemd[1]: Started systemd-journald.service - Journal Service. Nov 8 00:04:36.342376 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 8 00:04:36.361375 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 8 00:04:36.377686 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 8 00:04:36.385503 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 8 00:04:36.396332 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 8 00:04:36.409522 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Nov 8 00:04:36.428334 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 8 00:04:36.437176 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 8 00:04:36.455580 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Nov 8 00:04:36.464506 dracut-cmdline[283]: dracut-dracut-053 Nov 8 00:04:36.469286 dracut-cmdline[283]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=653fdcb8a67e255793a721f32d76976d3ed6223b235b7c618cf75e5edffbdb68 Nov 8 00:04:36.549668 systemd-resolved[292]: Positive Trust Anchors: Nov 8 00:04:36.549709 systemd-resolved[292]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 8 00:04:36.549775 systemd-resolved[292]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 8 00:04:36.637160 kernel: SCSI subsystem initialized Nov 8 00:04:36.644163 kernel: Loading iSCSI transport class v2.0-870. Nov 8 00:04:36.658168 kernel: iscsi: registered transport (tcp) Nov 8 00:04:36.681810 kernel: iscsi: registered transport (qla4xxx) Nov 8 00:04:36.681889 kernel: QLogic iSCSI HBA Driver Nov 8 00:04:36.773229 kernel: random: crng init done Nov 8 00:04:36.773551 systemd-resolved[292]: Defaulting to hostname 'linux'. Nov 8 00:04:36.778090 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 8 00:04:36.784854 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Nov 8 00:04:36.810505 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Nov 8 00:04:36.819447 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Nov 8 00:04:36.868413 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Nov 8 00:04:36.868508 kernel: device-mapper: uevent: version 1.0.3 Nov 8 00:04:36.868552 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Nov 8 00:04:36.940184 kernel: raid6: neonx8 gen() 6685 MB/s Nov 8 00:04:36.957169 kernel: raid6: neonx4 gen() 6369 MB/s Nov 8 00:04:36.975172 kernel: raid6: neonx2 gen() 5373 MB/s Nov 8 00:04:36.993166 kernel: raid6: neonx1 gen() 3886 MB/s Nov 8 00:04:37.010173 kernel: raid6: int64x8 gen() 3810 MB/s Nov 8 00:04:37.027174 kernel: raid6: int64x4 gen() 3694 MB/s Nov 8 00:04:37.044166 kernel: raid6: int64x2 gen() 3565 MB/s Nov 8 00:04:37.062464 kernel: raid6: int64x1 gen() 2745 MB/s Nov 8 00:04:37.062539 kernel: raid6: using algorithm neonx8 gen() 6685 MB/s Nov 8 00:04:37.081348 kernel: raid6: .... xor() 4801 MB/s, rmw enabled Nov 8 00:04:37.081431 kernel: raid6: using neon recovery algorithm Nov 8 00:04:37.091008 kernel: xor: measuring software checksum speed Nov 8 00:04:37.091086 kernel: 8regs : 10993 MB/sec Nov 8 00:04:37.093613 kernel: 32regs : 11071 MB/sec Nov 8 00:04:37.093679 kernel: arm64_neon : 9195 MB/sec Nov 8 00:04:37.093705 kernel: xor: using function: 32regs (11071 MB/sec) Nov 8 00:04:37.181172 kernel: Btrfs loaded, zoned=no, fsverity=no Nov 8 00:04:37.203619 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Nov 8 00:04:37.217494 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 8 00:04:37.263407 systemd-udevd[470]: Using default interface naming scheme 'v255'. Nov 8 00:04:37.271874 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 8 00:04:37.299472 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... 
Nov 8 00:04:37.334213 dracut-pre-trigger[485]: rd.md=0: removing MD RAID activation Nov 8 00:04:37.396380 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Nov 8 00:04:37.409439 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Nov 8 00:04:37.537692 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 8 00:04:37.561226 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Nov 8 00:04:37.612463 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Nov 8 00:04:37.621089 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Nov 8 00:04:37.627157 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 8 00:04:37.630087 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 8 00:04:37.647441 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Nov 8 00:04:37.693981 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Nov 8 00:04:37.771911 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Nov 8 00:04:37.771984 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012) Nov 8 00:04:37.786171 kernel: ena 0000:00:05.0: ENA device version: 0.10 Nov 8 00:04:37.786556 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1 Nov 8 00:04:37.787034 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Nov 8 00:04:37.787368 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 8 00:04:37.797070 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Nov 8 00:04:37.799728 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 8 00:04:37.805223 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 8 00:04:37.812747 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Nov 8 00:04:37.823174 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80114000, mac addr 06:a4:09:39:6a:d1 Nov 8 00:04:37.824712 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 8 00:04:37.828716 (udev-worker)[535]: Network interface NamePolicy= disabled on kernel command line. Nov 8 00:04:37.851783 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Nov 8 00:04:37.851865 kernel: nvme nvme0: pci function 0000:00:04.0 Nov 8 00:04:37.866151 kernel: nvme nvme0: 2/0/0 default/read/poll queues Nov 8 00:04:37.873301 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Nov 8 00:04:37.873387 kernel: GPT:9289727 != 33554431 Nov 8 00:04:37.873416 kernel: GPT:Alternate GPT header not at the end of the disk. Nov 8 00:04:37.874484 kernel: GPT:9289727 != 33554431 Nov 8 00:04:37.874548 kernel: GPT: Use GNU Parted to correct GPT errors. Nov 8 00:04:37.874575 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Nov 8 00:04:37.880330 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 8 00:04:37.892450 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Nov 8 00:04:37.932613 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Nov 8 00:04:38.002192 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/nvme0n1p6 scanned by (udev-worker) (537) Nov 8 00:04:38.020165 kernel: BTRFS: device fsid 55a292e1-3824-4229-a9ae-952140d2698c devid 1 transid 37 /dev/nvme0n1p3 scanned by (udev-worker) (523) Nov 8 00:04:38.130614 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM. Nov 8 00:04:38.148751 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT. Nov 8 00:04:38.164858 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A. Nov 8 00:04:38.168576 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A. Nov 8 00:04:38.190464 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Nov 8 00:04:38.202521 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Nov 8 00:04:38.217460 disk-uuid[663]: Primary Header is updated. Nov 8 00:04:38.217460 disk-uuid[663]: Secondary Entries is updated. Nov 8 00:04:38.217460 disk-uuid[663]: Secondary Header is updated. Nov 8 00:04:38.230228 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Nov 8 00:04:38.240191 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Nov 8 00:04:38.247146 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Nov 8 00:04:39.247273 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Nov 8 00:04:39.249759 disk-uuid[664]: The operation has completed successfully. Nov 8 00:04:39.464453 systemd[1]: disk-uuid.service: Deactivated successfully. Nov 8 00:04:39.467202 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Nov 8 00:04:39.515454 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Nov 8 00:04:39.540324 sh[1008]: Success Nov 8 00:04:39.567157 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Nov 8 00:04:39.669360 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Nov 8 00:04:39.678338 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Nov 8 00:04:39.686729 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Nov 8 00:04:39.726686 kernel: BTRFS info (device dm-0): first mount of filesystem 55a292e1-3824-4229-a9ae-952140d2698c Nov 8 00:04:39.726772 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Nov 8 00:04:39.726800 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Nov 8 00:04:39.728622 kernel: BTRFS info (device dm-0): disabling log replay at mount time Nov 8 00:04:39.730042 kernel: BTRFS info (device dm-0): using free space tree Nov 8 00:04:39.893170 kernel: BTRFS info (device dm-0): enabling ssd optimizations Nov 8 00:04:39.921260 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Nov 8 00:04:39.926067 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Nov 8 00:04:39.940359 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Nov 8 00:04:39.953541 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
Nov 8 00:04:39.973141 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 7afafbf9-edbd-49b5-ac90-6fc331f667e9 Nov 8 00:04:39.973219 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Nov 8 00:04:39.973248 kernel: BTRFS info (device nvme0n1p6): using free space tree Nov 8 00:04:39.983166 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Nov 8 00:04:40.002433 systemd[1]: mnt-oem.mount: Deactivated successfully. Nov 8 00:04:40.006341 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 7afafbf9-edbd-49b5-ac90-6fc331f667e9 Nov 8 00:04:40.022293 systemd[1]: Finished ignition-setup.service - Ignition (setup). Nov 8 00:04:40.040567 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Nov 8 00:04:40.162569 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 8 00:04:40.184543 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 8 00:04:40.241460 systemd-networkd[1200]: lo: Link UP Nov 8 00:04:40.241949 systemd-networkd[1200]: lo: Gained carrier Nov 8 00:04:40.245374 systemd-networkd[1200]: Enumeration completed Nov 8 00:04:40.245693 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 8 00:04:40.246747 systemd-networkd[1200]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 8 00:04:40.246754 systemd-networkd[1200]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 8 00:04:40.253468 systemd[1]: Reached target network.target - Network. Nov 8 00:04:40.261036 systemd-networkd[1200]: eth0: Link UP Nov 8 00:04:40.261044 systemd-networkd[1200]: eth0: Gained carrier Nov 8 00:04:40.261064 systemd-networkd[1200]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 8 00:04:40.286241 systemd-networkd[1200]: eth0: DHCPv4 address 172.31.28.187/20, gateway 172.31.16.1 acquired from 172.31.16.1 Nov 8 00:04:40.495193 ignition[1111]: Ignition 2.19.0 Nov 8 00:04:40.495223 ignition[1111]: Stage: fetch-offline Nov 8 00:04:40.499464 ignition[1111]: no configs at "/usr/lib/ignition/base.d" Nov 8 00:04:40.499505 ignition[1111]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Nov 8 00:04:40.501948 ignition[1111]: Ignition finished successfully Nov 8 00:04:40.508340 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Nov 8 00:04:40.520597 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Nov 8 00:04:40.548693 ignition[1211]: Ignition 2.19.0 Nov 8 00:04:40.548722 ignition[1211]: Stage: fetch Nov 8 00:04:40.550565 ignition[1211]: no configs at "/usr/lib/ignition/base.d" Nov 8 00:04:40.550594 ignition[1211]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Nov 8 00:04:40.551368 ignition[1211]: PUT http://169.254.169.254/latest/api/token: attempt #1 Nov 8 00:04:40.567081 ignition[1211]: PUT result: OK Nov 8 00:04:40.574098 ignition[1211]: parsed url from cmdline: "" Nov 8 00:04:40.574150 ignition[1211]: no config URL provided Nov 8 00:04:40.574170 ignition[1211]: reading system config file "/usr/lib/ignition/user.ign" Nov 8 00:04:40.574199 ignition[1211]: no config at "/usr/lib/ignition/user.ign" Nov 8 00:04:40.574235 ignition[1211]: PUT http://169.254.169.254/latest/api/token: attempt #1 Nov 8 00:04:40.583590 ignition[1211]: PUT result: OK Nov 8 00:04:40.585131 ignition[1211]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1 Nov 8 00:04:40.588063 ignition[1211]: GET result: OK Nov 8 00:04:40.589908 ignition[1211]: parsing config with SHA512: 5477b4adef0a0e21cc04797458991cda7da7fce4e9a48699e08b0e829b09e7593b3d4c377c697f8d768ee5379f0a40d2dda4265d302bd5387c901980ba21b578 Nov 8 00:04:40.600789 unknown[1211]: fetched base config from "system" Nov 8 00:04:40.601420 unknown[1211]: fetched base config from "system" Nov 8 00:04:40.602340 ignition[1211]: fetch: fetch complete Nov 8 00:04:40.601436 unknown[1211]: fetched user config from "aws" Nov 8 00:04:40.602352 ignition[1211]: fetch: fetch passed Nov 8 00:04:40.602456 ignition[1211]: Ignition finished successfully Nov 8 00:04:40.615446 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Nov 8 00:04:40.627459 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Nov 8 00:04:40.656401 ignition[1218]: Ignition 2.19.0 Nov 8 00:04:40.656954 ignition[1218]: Stage: kargs Nov 8 00:04:40.657721 ignition[1218]: no configs at "/usr/lib/ignition/base.d" Nov 8 00:04:40.657749 ignition[1218]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Nov 8 00:04:40.657910 ignition[1218]: PUT http://169.254.169.254/latest/api/token: attempt #1 Nov 8 00:04:40.666946 ignition[1218]: PUT result: OK Nov 8 00:04:40.671958 ignition[1218]: kargs: kargs passed Nov 8 00:04:40.672083 ignition[1218]: Ignition finished successfully Nov 8 00:04:40.676834 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Nov 8 00:04:40.688577 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Nov 8 00:04:40.721468 ignition[1224]: Ignition 2.19.0 Nov 8 00:04:40.721490 ignition[1224]: Stage: disks Nov 8 00:04:40.722706 ignition[1224]: no configs at "/usr/lib/ignition/base.d" Nov 8 00:04:40.722739 ignition[1224]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Nov 8 00:04:40.722917 ignition[1224]: PUT http://169.254.169.254/latest/api/token: attempt #1 Nov 8 00:04:40.733780 ignition[1224]: PUT result: OK Nov 8 00:04:40.739027 ignition[1224]: disks: disks passed Nov 8 00:04:40.739176 ignition[1224]: Ignition finished successfully Nov 8 00:04:40.746178 systemd[1]: Finished ignition-disks.service - Ignition (disks). Nov 8 00:04:40.746744 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Nov 8 00:04:40.755218 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Nov 8 00:04:40.758047 systemd[1]: Reached target local-fs.target - Local File Systems. Nov 8 00:04:40.760444 systemd[1]: Reached target sysinit.target - System Initialization. 
Nov 8 00:04:40.762857 systemd[1]: Reached target basic.target - Basic System. Nov 8 00:04:40.775542 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Nov 8 00:04:40.827706 systemd-fsck[1232]: ROOT: clean, 14/553520 files, 52654/553472 blocks Nov 8 00:04:40.831495 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Nov 8 00:04:40.848293 systemd[1]: Mounting sysroot.mount - /sysroot... Nov 8 00:04:40.940186 kernel: EXT4-fs (nvme0n1p9): mounted filesystem ba97f76e-2e9b-450a-8320-3c4b94a19632 r/w with ordered data mode. Quota mode: none. Nov 8 00:04:40.942367 systemd[1]: Mounted sysroot.mount - /sysroot. Nov 8 00:04:40.946521 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Nov 8 00:04:40.963475 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Nov 8 00:04:40.972797 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Nov 8 00:04:40.981600 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Nov 8 00:04:40.981727 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Nov 8 00:04:40.981784 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Nov 8 00:04:41.009413 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/nvme0n1p6 scanned by mount (1251) Nov 8 00:04:41.009463 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 7afafbf9-edbd-49b5-ac90-6fc331f667e9 Nov 8 00:04:41.009493 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Nov 8 00:04:41.009520 kernel: BTRFS info (device nvme0n1p6): using free space tree Nov 8 00:04:41.016017 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Nov 8 00:04:41.024528 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Nov 8 00:04:41.033732 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Nov 8 00:04:41.038100 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Nov 8 00:04:41.447259 initrd-setup-root[1275]: cut: /sysroot/etc/passwd: No such file or directory Nov 8 00:04:41.469225 initrd-setup-root[1282]: cut: /sysroot/etc/group: No such file or directory Nov 8 00:04:41.490812 initrd-setup-root[1289]: cut: /sysroot/etc/shadow: No such file or directory Nov 8 00:04:41.500336 systemd-networkd[1200]: eth0: Gained IPv6LL Nov 8 00:04:41.503785 initrd-setup-root[1296]: cut: /sysroot/etc/gshadow: No such file or directory Nov 8 00:04:41.818399 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Nov 8 00:04:41.828359 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Nov 8 00:04:41.838430 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Nov 8 00:04:41.857732 systemd[1]: sysroot-oem.mount: Deactivated successfully. Nov 8 00:04:41.862154 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 7afafbf9-edbd-49b5-ac90-6fc331f667e9 Nov 8 00:04:41.902677 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Nov 8 00:04:41.921355 ignition[1364]: INFO : Ignition 2.19.0
Nov 8 00:04:41.925050 ignition[1364]: INFO : Stage: mount
Nov 8 00:04:41.925050 ignition[1364]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 8 00:04:41.925050 ignition[1364]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Nov 8 00:04:41.925050 ignition[1364]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Nov 8 00:04:41.934913 ignition[1364]: INFO : PUT result: OK
Nov 8 00:04:41.943190 ignition[1364]: INFO : mount: mount passed
Nov 8 00:04:41.946380 ignition[1364]: INFO : Ignition finished successfully
Nov 8 00:04:41.946229 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Nov 8 00:04:41.964520 systemd[1]: Starting ignition-files.service - Ignition (files)...
Nov 8 00:04:41.983299 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Nov 8 00:04:42.013177 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 scanned by mount (1375)
Nov 8 00:04:42.018188 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 7afafbf9-edbd-49b5-ac90-6fc331f667e9
Nov 8 00:04:42.018273 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Nov 8 00:04:42.018301 kernel: BTRFS info (device nvme0n1p6): using free space tree
Nov 8 00:04:42.025170 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Nov 8 00:04:42.028707 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Nov 8 00:04:42.068972 ignition[1392]: INFO : Ignition 2.19.0
Nov 8 00:04:42.068972 ignition[1392]: INFO : Stage: files
Nov 8 00:04:42.073221 ignition[1392]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 8 00:04:42.073221 ignition[1392]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Nov 8 00:04:42.073221 ignition[1392]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Nov 8 00:04:42.083845 ignition[1392]: INFO : PUT result: OK
Nov 8 00:04:42.088538 ignition[1392]: DEBUG : files: compiled without relabeling support, skipping
Nov 8 00:04:42.100183 ignition[1392]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Nov 8 00:04:42.100183 ignition[1392]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Nov 8 00:04:42.156142 ignition[1392]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Nov 8 00:04:42.159467 ignition[1392]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Nov 8 00:04:42.159467 ignition[1392]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Nov 8 00:04:42.157957 unknown[1392]: wrote ssh authorized keys file for user: core
Nov 8 00:04:42.172795 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Nov 8 00:04:42.172795 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Nov 8 00:04:42.172795 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Nov 8 00:04:42.172795 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1
Nov 8 00:04:42.261498 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Nov 8 00:04:42.451371 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Nov 8 00:04:42.451371 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Nov 8 00:04:42.464311 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Nov 8 00:04:42.464311 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Nov 8 00:04:42.464311 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Nov 8 00:04:42.464311 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Nov 8 00:04:42.464311 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Nov 8 00:04:42.464311 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Nov 8 00:04:42.464311 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Nov 8 00:04:42.464311 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Nov 8 00:04:42.464311 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Nov 8 00:04:42.464311 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Nov 8 00:04:42.464311 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Nov 8 00:04:42.464311 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Nov 8 00:04:42.464311 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw: attempt #1
Nov 8 00:04:42.940665 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Nov 8 00:04:43.341033 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Nov 8 00:04:43.341033 ignition[1392]: INFO : files: op(c): [started] processing unit "containerd.service"
Nov 8 00:04:43.349414 ignition[1392]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Nov 8 00:04:43.349414 ignition[1392]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Nov 8 00:04:43.349414 ignition[1392]: INFO : files: op(c): [finished] processing unit "containerd.service"
Nov 8 00:04:43.349414 ignition[1392]: INFO : files: op(e): [started] processing unit "prepare-helm.service"
Nov 8 00:04:43.349414 ignition[1392]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Nov 8 00:04:43.349414 ignition[1392]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Nov 8 00:04:43.349414 ignition[1392]: INFO : files: op(e): [finished] processing unit "prepare-helm.service"
Nov 8 00:04:43.349414 ignition[1392]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service"
Nov 8 00:04:43.349414 ignition[1392]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service"
Nov 8 00:04:43.349414 ignition[1392]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json"
Nov 8 00:04:43.349414 ignition[1392]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json"
Nov 8 00:04:43.349414 ignition[1392]: INFO : files: files passed
Nov 8 00:04:43.421571 ignition[1392]: INFO : Ignition finished successfully
Nov 8 00:04:43.364867 systemd[1]: Finished ignition-files.service - Ignition (files).
Nov 8 00:04:43.391839 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Nov 8 00:04:43.400480 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Nov 8 00:04:43.423395 systemd[1]: ignition-quench.service: Deactivated successfully.
Nov 8 00:04:43.424259 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Nov 8 00:04:43.457361 initrd-setup-root-after-ignition[1420]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Nov 8 00:04:43.457361 initrd-setup-root-after-ignition[1420]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Nov 8 00:04:43.465861 initrd-setup-root-after-ignition[1424]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Nov 8 00:04:43.472725 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Nov 8 00:04:43.476321 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Nov 8 00:04:43.497480 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Nov 8 00:04:43.559190 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Nov 8 00:04:43.560010 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Nov 8 00:04:43.570078 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Nov 8 00:04:43.572654 systemd[1]: Reached target initrd.target - Initrd Default Target.
Nov 8 00:04:43.577756 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Nov 8 00:04:43.588489 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Nov 8 00:04:43.626792 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Nov 8 00:04:43.639511 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Nov 8 00:04:43.682204 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Nov 8 00:04:43.682915 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Nov 8 00:04:43.691375 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Nov 8 00:04:43.692008 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 8 00:04:43.692891 systemd[1]: Stopped target timers.target - Timer Units.
Nov 8 00:04:43.693680 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Nov 8 00:04:43.693830 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Nov 8 00:04:43.694842 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Nov 8 00:04:43.695175 systemd[1]: Stopped target basic.target - Basic System.
Nov 8 00:04:43.695520 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Nov 8 00:04:43.695889 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Nov 8 00:04:43.698583 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Nov 8 00:04:43.700331 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Nov 8 00:04:43.701583 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Nov 8 00:04:43.702980 systemd[1]: Stopped target sysinit.target - System Initialization.
Nov 8 00:04:43.703597 systemd[1]: Stopped target local-fs.target - Local File Systems.
Nov 8 00:04:43.705928 systemd[1]: Stopped target swap.target - Swaps.
Nov 8 00:04:43.707155 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Nov 8 00:04:43.707305 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Nov 8 00:04:43.707940 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Nov 8 00:04:43.708628 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 8 00:04:43.709001 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Nov 8 00:04:43.730886 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 8 00:04:43.739804 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Nov 8 00:04:43.740034 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Nov 8 00:04:43.744709 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Nov 8 00:04:43.744839 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Nov 8 00:04:43.748444 systemd[1]: ignition-files.service: Deactivated successfully.
Nov 8 00:04:43.748569 systemd[1]: Stopped ignition-files.service - Ignition (files).
Nov 8 00:04:43.771793 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Nov 8 00:04:43.820011 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Nov 8 00:04:43.820854 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 8 00:04:43.838434 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Nov 8 00:04:43.845194 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Nov 8 00:04:43.845993 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 8 00:04:43.853783 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Nov 8 00:04:43.853929 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Nov 8 00:04:43.871157 ignition[1445]: INFO : Ignition 2.19.0
Nov 8 00:04:43.875954 ignition[1445]: INFO : Stage: umount
Nov 8 00:04:43.875954 ignition[1445]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 8 00:04:43.875954 ignition[1445]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Nov 8 00:04:43.875954 ignition[1445]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Nov 8 00:04:43.875954 ignition[1445]: INFO : PUT result: OK
Nov 8 00:04:43.890044 ignition[1445]: INFO : umount: umount passed
Nov 8 00:04:43.890044 ignition[1445]: INFO : Ignition finished successfully
Nov 8 00:04:43.897423 systemd[1]: ignition-mount.service: Deactivated successfully.
Nov 8 00:04:43.900521 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Nov 8 00:04:43.905706 systemd[1]: ignition-disks.service: Deactivated successfully.
Nov 8 00:04:43.905852 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Nov 8 00:04:43.918442 systemd[1]: ignition-kargs.service: Deactivated successfully.
Nov 8 00:04:43.918592 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Nov 8 00:04:43.927349 systemd[1]: ignition-fetch.service: Deactivated successfully.
Nov 8 00:04:43.927477 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Nov 8 00:04:43.932156 systemd[1]: Stopped target network.target - Network.
Nov 8 00:04:43.938800 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Nov 8 00:04:43.938952 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Nov 8 00:04:43.943958 systemd[1]: Stopped target paths.target - Path Units.
Nov 8 00:04:43.946207 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Nov 8 00:04:43.946333 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 8 00:04:43.950316 systemd[1]: Stopped target slices.target - Slice Units.
Nov 8 00:04:43.953262 systemd[1]: Stopped target sockets.target - Socket Units.
Nov 8 00:04:43.960233 systemd[1]: iscsid.socket: Deactivated successfully.
Nov 8 00:04:43.968543 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Nov 8 00:04:43.976789 systemd[1]: iscsiuio.socket: Deactivated successfully.
Nov 8 00:04:43.976891 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Nov 8 00:04:43.979350 systemd[1]: ignition-setup.service: Deactivated successfully.
Nov 8 00:04:43.979467 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Nov 8 00:04:43.981850 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Nov 8 00:04:43.981958 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Nov 8 00:04:43.986710 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Nov 8 00:04:43.989431 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Nov 8 00:04:43.995774 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Nov 8 00:04:43.999012 systemd[1]: sysroot-boot.service: Deactivated successfully.
Nov 8 00:04:43.999610 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Nov 8 00:04:44.004286 systemd-networkd[1200]: eth0: DHCPv6 lease lost
Nov 8 00:04:44.021058 systemd[1]: systemd-resolved.service: Deactivated successfully.
Nov 8 00:04:44.022472 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Nov 8 00:04:44.027941 systemd[1]: systemd-networkd.service: Deactivated successfully.
Nov 8 00:04:44.033542 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Nov 8 00:04:44.043260 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Nov 8 00:04:44.045701 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Nov 8 00:04:44.051252 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Nov 8 00:04:44.051389 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Nov 8 00:04:44.069491 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Nov 8 00:04:44.074789 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Nov 8 00:04:44.075876 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Nov 8 00:04:44.083483 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Nov 8 00:04:44.083605 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Nov 8 00:04:44.088720 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Nov 8 00:04:44.088844 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Nov 8 00:04:44.097866 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Nov 8 00:04:44.097991 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 8 00:04:44.101278 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 8 00:04:44.131605 systemd[1]: systemd-udevd.service: Deactivated successfully.
Nov 8 00:04:44.134618 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 8 00:04:44.142332 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Nov 8 00:04:44.142445 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Nov 8 00:04:44.149353 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Nov 8 00:04:44.149446 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 8 00:04:44.152060 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Nov 8 00:04:44.152325 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Nov 8 00:04:44.157282 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Nov 8 00:04:44.157407 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Nov 8 00:04:44.161925 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Nov 8 00:04:44.162036 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 8 00:04:44.183368 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Nov 8 00:04:44.189053 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Nov 8 00:04:44.191103 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 8 00:04:44.200285 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 8 00:04:44.200808 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 8 00:04:44.216079 systemd[1]: network-cleanup.service: Deactivated successfully.
Nov 8 00:04:44.217685 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Nov 8 00:04:44.238729 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Nov 8 00:04:44.240262 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Nov 8 00:04:44.244792 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Nov 8 00:04:44.262422 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Nov 8 00:04:44.290200 systemd[1]: Switching root.
Nov 8 00:04:44.334798 systemd-journald[250]: Journal stopped
Nov 8 00:04:47.112586 systemd-journald[250]: Received SIGTERM from PID 1 (systemd).
Nov 8 00:04:47.112732 kernel: SELinux: policy capability network_peer_controls=1
Nov 8 00:04:47.112779 kernel: SELinux: policy capability open_perms=1
Nov 8 00:04:47.112814 kernel: SELinux: policy capability extended_socket_class=1
Nov 8 00:04:47.112853 kernel: SELinux: policy capability always_check_network=0
Nov 8 00:04:47.112884 kernel: SELinux: policy capability cgroup_seclabel=1
Nov 8 00:04:47.112914 kernel: SELinux: policy capability nnp_nosuid_transition=1
Nov 8 00:04:47.112945 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Nov 8 00:04:47.112978 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Nov 8 00:04:47.113009 kernel: audit: type=1403 audit(1762560285.096:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Nov 8 00:04:47.113048 systemd[1]: Successfully loaded SELinux policy in 76.542ms.
Nov 8 00:04:47.113093 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 27.187ms.
Nov 8 00:04:47.115226 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Nov 8 00:04:47.115296 systemd[1]: Detected virtualization amazon.
Nov 8 00:04:47.115330 systemd[1]: Detected architecture arm64.
Nov 8 00:04:47.115364 systemd[1]: Detected first boot.
Nov 8 00:04:47.115398 systemd[1]: Initializing machine ID from VM UUID.
Nov 8 00:04:47.115432 zram_generator::config[1504]: No configuration found.
Nov 8 00:04:47.115469 systemd[1]: Populated /etc with preset unit settings.
Nov 8 00:04:47.115501 systemd[1]: Queued start job for default target multi-user.target.
Nov 8 00:04:47.115533 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
Nov 8 00:04:47.115575 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Nov 8 00:04:47.115610 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Nov 8 00:04:47.115643 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Nov 8 00:04:47.115676 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Nov 8 00:04:47.115710 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Nov 8 00:04:47.115744 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Nov 8 00:04:47.115777 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Nov 8 00:04:47.115810 systemd[1]: Created slice user.slice - User and Session Slice.
Nov 8 00:04:47.115847 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 8 00:04:47.115881 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 8 00:04:47.115913 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Nov 8 00:04:47.115947 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Nov 8 00:04:47.115988 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Nov 8 00:04:47.116024 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Nov 8 00:04:47.116054 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Nov 8 00:04:47.116087 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 8 00:04:47.116144 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Nov 8 00:04:47.116188 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 8 00:04:47.116227 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Nov 8 00:04:47.116263 systemd[1]: Reached target slices.target - Slice Units.
Nov 8 00:04:47.116295 systemd[1]: Reached target swap.target - Swaps.
Nov 8 00:04:47.116327 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Nov 8 00:04:47.116361 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Nov 8 00:04:47.116410 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Nov 8 00:04:47.116450 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Nov 8 00:04:47.116481 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Nov 8 00:04:47.116531 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Nov 8 00:04:47.116561 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 8 00:04:47.116592 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Nov 8 00:04:47.116625 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Nov 8 00:04:47.116656 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Nov 8 00:04:47.116688 systemd[1]: Mounting media.mount - External Media Directory...
Nov 8 00:04:47.116718 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Nov 8 00:04:47.116753 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Nov 8 00:04:47.116783 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Nov 8 00:04:47.116818 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Nov 8 00:04:47.116851 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 8 00:04:47.116881 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Nov 8 00:04:47.116914 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Nov 8 00:04:47.116946 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 8 00:04:47.116979 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Nov 8 00:04:47.117009 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 8 00:04:47.117041 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Nov 8 00:04:47.117076 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 8 00:04:47.117106 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Nov 8 00:04:47.119250 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
Nov 8 00:04:47.119289 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.)
Nov 8 00:04:47.119323 systemd[1]: Starting systemd-journald.service - Journal Service...
Nov 8 00:04:47.119354 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Nov 8 00:04:47.119386 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Nov 8 00:04:47.119417 kernel: loop: module loaded
Nov 8 00:04:47.119448 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Nov 8 00:04:47.119491 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Nov 8 00:04:47.119524 kernel: ACPI: bus type drm_connector registered
Nov 8 00:04:47.119560 kernel: fuse: init (API version 7.39)
Nov 8 00:04:47.119593 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Nov 8 00:04:47.119626 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Nov 8 00:04:47.119657 systemd[1]: Mounted media.mount - External Media Directory.
Nov 8 00:04:47.119688 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Nov 8 00:04:47.119722 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Nov 8 00:04:47.119755 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Nov 8 00:04:47.119792 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Nov 8 00:04:47.119823 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 8 00:04:47.119855 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Nov 8 00:04:47.119943 systemd-journald[1615]: Collecting audit messages is disabled.
Nov 8 00:04:47.120010 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Nov 8 00:04:47.120041 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 8 00:04:47.120076 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 8 00:04:47.120130 systemd-journald[1615]: Journal started
Nov 8 00:04:47.120185 systemd-journald[1615]: Runtime Journal (/run/log/journal/ec248ab318588033392ba4d8bd334629) is 8.0M, max 75.3M, 67.3M free.
Nov 8 00:04:47.127248 systemd[1]: Started systemd-journald.service - Journal Service.
Nov 8 00:04:47.129761 systemd[1]: modprobe@drm.service: Deactivated successfully.
Nov 8 00:04:47.130161 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Nov 8 00:04:47.137685 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 8 00:04:47.138054 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 8 00:04:47.144970 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Nov 8 00:04:47.145389 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Nov 8 00:04:47.153580 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 8 00:04:47.154018 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Nov 8 00:04:47.163358 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Nov 8 00:04:47.170939 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Nov 8 00:04:47.179540 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Nov 8 00:04:47.211826 systemd[1]: Reached target network-pre.target - Preparation for Network.
Nov 8 00:04:47.224437 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Nov 8 00:04:47.238269 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Nov 8 00:04:47.243660 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Nov 8 00:04:47.262476 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Nov 8 00:04:47.275620 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Nov 8 00:04:47.280933 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 8 00:04:47.299577 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Nov 8 00:04:47.307706 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Nov 8 00:04:47.320949 systemd-journald[1615]: Time spent on flushing to /var/log/journal/ec248ab318588033392ba4d8bd334629 is 59.743ms for 888 entries.
Nov 8 00:04:47.320949 systemd-journald[1615]: System Journal (/var/log/journal/ec248ab318588033392ba4d8bd334629) is 8.0M, max 195.6M, 187.6M free.
Nov 8 00:04:47.394361 systemd-journald[1615]: Received client request to flush runtime journal.
Nov 8 00:04:47.318442 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Nov 8 00:04:47.336676 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Nov 8 00:04:47.353811 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 8 00:04:47.366751 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Nov 8 00:04:47.378382 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Nov 8 00:04:47.396817 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Nov 8 00:04:47.406039 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Nov 8 00:04:47.425598 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Nov 8 00:04:47.443466 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Nov 8 00:04:47.478774 udevadm[1667]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Nov 8 00:04:47.539261 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Nov 8 00:04:47.558976 systemd-tmpfiles[1657]: ACLs are not supported, ignoring.
Nov 8 00:04:47.559021 systemd-tmpfiles[1657]: ACLs are not supported, ignoring.
Nov 8 00:04:47.573011 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Nov 8 00:04:47.593649 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Nov 8 00:04:47.671919 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Nov 8 00:04:47.693717 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Nov 8 00:04:47.736445 systemd-tmpfiles[1679]: ACLs are not supported, ignoring.
Nov 8 00:04:47.736485 systemd-tmpfiles[1679]: ACLs are not supported, ignoring.
Nov 8 00:04:47.744853 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 8 00:04:48.290394 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Nov 8 00:04:48.302512 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 8 00:04:48.369685 systemd-udevd[1685]: Using default interface naming scheme 'v255'.
Nov 8 00:04:48.412370 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 8 00:04:48.425456 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Nov 8 00:04:48.455432 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Nov 8 00:04:48.588229 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Nov 8 00:04:48.603956 (udev-worker)[1698]: Network interface NamePolicy= disabled on kernel command line.
Nov 8 00:04:48.625898 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0.
Nov 8 00:04:48.768135 systemd-networkd[1688]: lo: Link UP
Nov 8 00:04:48.768158 systemd-networkd[1688]: lo: Gained carrier
Nov 8 00:04:48.772932 systemd-networkd[1688]: Enumeration completed
Nov 8 00:04:48.773237 systemd[1]: Started systemd-networkd.service - Network Configuration.
Nov 8 00:04:48.775819 systemd-networkd[1688]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 8 00:04:48.775828 systemd-networkd[1688]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Nov 8 00:04:48.779446 systemd-networkd[1688]: eth0: Link UP
Nov 8 00:04:48.779794 systemd-networkd[1688]: eth0: Gained carrier
Nov 8 00:04:48.779826 systemd-networkd[1688]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 8 00:04:48.789965 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Nov 8 00:04:48.815377 systemd-networkd[1688]: eth0: DHCPv4 address 172.31.28.187/20, gateway 172.31.16.1 acquired from 172.31.16.1
Nov 8 00:04:48.822472 systemd-networkd[1688]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 8 00:04:48.896632 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 8 00:04:48.964219 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 37 scanned by (udev-worker) (1699)
Nov 8 00:04:49.115034 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 8 00:04:49.189963 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Nov 8 00:04:49.222839 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Nov 8 00:04:49.240410 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Nov 8 00:04:49.275167 lvm[1814]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Nov 8 00:04:49.313868 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Nov 8 00:04:49.317215 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Nov 8 00:04:49.327467 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Nov 8 00:04:49.344066 lvm[1817]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Nov 8 00:04:49.386789 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Nov 8 00:04:49.390455 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
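The DHCPv4 lease above is internally consistent: 172.31.28.187/20 falls in the 172.31.16.0/20 block, so the offered gateway 172.31.16.1 is the first host of the same subnet. A quick check, as a sketch using Python's standard ipaddress module (the addresses are taken from the log line; nothing else is assumed):

import ipaddress

iface = ipaddress.ip_interface("172.31.28.187/20")           # address from the lease
print(iface.network)                                         # 172.31.16.0/20
print(ipaddress.ip_address("172.31.16.1") in iface.network)  # True: gateway is in-subnet
print(iface.network.num_addresses)                           # 4096 addresses in a /20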
Nov 8 00:04:49.393737 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Nov 8 00:04:49.394054 systemd[1]: Reached target local-fs.target - Local File Systems.
Nov 8 00:04:49.397354 systemd[1]: Reached target machines.target - Containers.
Nov 8 00:04:49.401891 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Nov 8 00:04:49.416368 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Nov 8 00:04:49.424521 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Nov 8 00:04:49.429894 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 8 00:04:49.432447 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Nov 8 00:04:49.444419 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Nov 8 00:04:49.466498 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Nov 8 00:04:49.476660 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Nov 8 00:04:49.497524 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Nov 8 00:04:49.509145 kernel: loop0: detected capacity change from 0 to 114328
Nov 8 00:04:49.510860 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Nov 8 00:04:49.514577 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Nov 8 00:04:49.623171 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Nov 8 00:04:49.646151 kernel: loop1: detected capacity change from 0 to 52536
Nov 8 00:04:49.687157 kernel: loop2: detected capacity change from 0 to 207008
Nov 8 00:04:49.801145 kernel: loop3: detected capacity change from 0 to 114432
Nov 8 00:04:49.820352 systemd-networkd[1688]: eth0: Gained IPv6LL
Nov 8 00:04:49.825586 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Nov 8 00:04:49.931191 kernel: loop4: detected capacity change from 0 to 114328
Nov 8 00:04:49.947261 kernel: loop5: detected capacity change from 0 to 52536
Nov 8 00:04:49.966152 kernel: loop6: detected capacity change from 0 to 207008
Nov 8 00:04:49.998228 kernel: loop7: detected capacity change from 0 to 114432
Nov 8 00:04:50.007662 (sd-merge)[1840]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'.
Nov 8 00:04:50.008931 (sd-merge)[1840]: Merged extensions into '/usr'.
Nov 8 00:04:50.017772 systemd[1]: Reloading requested from client PID 1825 ('systemd-sysext') (unit systemd-sysext.service)...
Nov 8 00:04:50.017800 systemd[1]: Reloading...
Nov 8 00:04:50.168152 zram_generator::config[1870]: No configuration found.
Nov 8 00:04:50.479166 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Nov 8 00:04:50.648441 systemd[1]: Reloading finished in 629 ms.
Nov 8 00:04:50.684762 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Nov 8 00:04:50.700691 systemd[1]: Starting ensure-sysext.service...
Nov 8 00:04:50.708527 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Nov 8 00:04:50.726331 systemd[1]: Reloading requested from client PID 1925 ('systemctl') (unit ensure-sysext.service)...
Nov 8 00:04:50.726371 systemd[1]: Reloading...
Nov 8 00:04:50.747043 ldconfig[1822]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Nov 8 00:04:50.784232 systemd-tmpfiles[1926]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Nov 8 00:04:50.785011 systemd-tmpfiles[1926]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Nov 8 00:04:50.787292 systemd-tmpfiles[1926]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Nov 8 00:04:50.787916 systemd-tmpfiles[1926]: ACLs are not supported, ignoring.
Nov 8 00:04:50.788098 systemd-tmpfiles[1926]: ACLs are not supported, ignoring.
Nov 8 00:04:50.798734 systemd-tmpfiles[1926]: Detected autofs mount point /boot during canonicalization of boot.
Nov 8 00:04:50.798759 systemd-tmpfiles[1926]: Skipping /boot
Nov 8 00:04:50.837820 systemd-tmpfiles[1926]: Detected autofs mount point /boot during canonicalization of boot.
Nov 8 00:04:50.838586 systemd-tmpfiles[1926]: Skipping /boot
Nov 8 00:04:50.897247 zram_generator::config[1955]: No configuration found.
Nov 8 00:04:51.189901 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Nov 8 00:04:51.354925 systemd[1]: Reloading finished in 627 ms.
Nov 8 00:04:51.385738 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Nov 8 00:04:51.395271 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 8 00:04:51.416752 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Nov 8 00:04:51.427465 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Nov 8 00:04:51.440649 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Nov 8 00:04:51.456477 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Nov 8 00:04:51.473454 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Nov 8 00:04:51.508001 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 8 00:04:51.516140 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 8 00:04:51.530660 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 8 00:04:51.562962 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 8 00:04:51.568246 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 8 00:04:51.581360 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Nov 8 00:04:51.617165 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Nov 8 00:04:51.632031 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 8 00:04:51.633971 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 8 00:04:51.644732 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Nov 8 00:04:51.655842 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 8 00:04:51.668095 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Nov 8 00:04:51.679232 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 8 00:04:51.679709 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 8 00:04:51.695158 augenrules[2048]: No rules
Nov 8 00:04:51.713220 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Nov 8 00:04:51.723948 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 8 00:04:51.734691 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 8 00:04:51.753868 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Nov 8 00:04:51.772610 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 8 00:04:51.784724 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 8 00:04:51.794476 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 8 00:04:51.794924 systemd[1]: Reached target time-set.target - System Time Set.
Nov 8 00:04:51.810828 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Nov 8 00:04:51.828761 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Nov 8 00:04:51.835746 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 8 00:04:51.836375 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 8 00:04:51.842567 systemd[1]: modprobe@drm.service: Deactivated successfully.
Nov 8 00:04:51.842966 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Nov 8 00:04:51.852911 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 8 00:04:51.853611 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 8 00:04:51.860269 systemd-resolved[2026]: Positive Trust Anchors:
Nov 8 00:04:51.860304 systemd-resolved[2026]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Nov 8 00:04:51.860368 systemd-resolved[2026]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Nov 8 00:04:51.871668 systemd[1]: Finished ensure-sysext.service.
Nov 8 00:04:51.877131 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 8 00:04:51.881541 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Nov 8 00:04:51.896368 systemd-resolved[2026]: Defaulting to hostname 'linux'.
Nov 8 00:04:51.900044 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 8 00:04:51.900319 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Nov 8 00:04:51.900414 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Nov 8 00:04:51.901457 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Nov 8 00:04:51.906967 systemd[1]: Reached target network.target - Network.
Nov 8 00:04:51.909293 systemd[1]: Reached target network-online.target - Network is Online.
Nov 8 00:04:51.912037 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Nov 8 00:04:51.915082 systemd[1]: Reached target sysinit.target - System Initialization.
Nov 8 00:04:51.917819 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Nov 8 00:04:51.920734 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Nov 8 00:04:51.923802 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Nov 8 00:04:51.926491 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Nov 8 00:04:51.929213 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Nov 8 00:04:51.932001 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Nov 8 00:04:51.932067 systemd[1]: Reached target paths.target - Path Units.
Nov 8 00:04:51.934995 systemd[1]: Reached target timers.target - Timer Units.
Nov 8 00:04:51.938809 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Nov 8 00:04:51.944791 systemd[1]: Starting docker.socket - Docker Socket for the API...
Nov 8 00:04:51.950217 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Nov 8 00:04:51.954321 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Nov 8 00:04:51.956977 systemd[1]: Reached target sockets.target - Socket Units.
Nov 8 00:04:51.959351 systemd[1]: Reached target basic.target - Basic System.
Nov 8 00:04:51.961985 systemd[1]: System is tainted: cgroupsv1
Nov 8 00:04:51.962081 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Nov 8 00:04:51.962506 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Nov 8 00:04:51.977433 systemd[1]: Starting containerd.service - containerd container runtime...
Nov 8 00:04:51.986225 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Nov 8 00:04:51.994472 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Nov 8 00:04:52.008544 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Nov 8 00:04:52.019910 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Nov 8 00:04:52.023461 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Nov 8 00:04:52.035202 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 8 00:04:52.043862 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Nov 8 00:04:52.063420 systemd[1]: Started ntpd.service - Network Time Service.
Nov 8 00:04:52.072484 jq[2084]: false
Nov 8 00:04:52.092623 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Nov 8 00:04:52.119392 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Nov 8 00:04:52.131627 systemd[1]: Starting setup-oem.service - Setup OEM...
Nov 8 00:04:52.155279 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Nov 8 00:04:52.169471 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Nov 8 00:04:52.193457 systemd[1]: Starting systemd-logind.service - User Login Management...
Nov 8 00:04:52.210183 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Nov 8 00:04:52.216233 extend-filesystems[2085]: Found loop4
Nov 8 00:04:52.216233 extend-filesystems[2085]: Found loop5
Nov 8 00:04:52.216233 extend-filesystems[2085]: Found loop6
Nov 8 00:04:52.216233 extend-filesystems[2085]: Found loop7
Nov 8 00:04:52.216233 extend-filesystems[2085]: Found nvme0n1
Nov 8 00:04:52.216233 extend-filesystems[2085]: Found nvme0n1p1
Nov 8 00:04:52.216233 extend-filesystems[2085]: Found nvme0n1p2
Nov 8 00:04:52.216233 extend-filesystems[2085]: Found nvme0n1p3
Nov 8 00:04:52.216233 extend-filesystems[2085]: Found usr
Nov 8 00:04:52.251443 extend-filesystems[2085]: Found nvme0n1p4
Nov 8 00:04:52.251443 extend-filesystems[2085]: Found nvme0n1p6
Nov 8 00:04:52.251443 extend-filesystems[2085]: Found nvme0n1p7
Nov 8 00:04:52.251443 extend-filesystems[2085]: Found nvme0n1p9
Nov 8 00:04:52.251443 extend-filesystems[2085]: Checking size of /dev/nvme0n1p9
Nov 8 00:04:52.236536 systemd[1]: Starting update-engine.service - Update Engine...
Nov 8 00:04:52.280953 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Nov 8 00:04:52.290301 dbus-daemon[2083]: [system] SELinux support is enabled
Nov 8 00:04:52.292971 dbus-daemon[2083]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1688 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Nov 8 00:04:52.295254 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Nov 8 00:04:52.342533 ntpd[2088]: 8 Nov 00:04:52 ntpd[2088]: ntpd 4.2.8p17@1.4004-o Fri Nov 7 22:04:46 UTC 2025 (1): Starting
Nov 8 00:04:52.342533 ntpd[2088]: 8 Nov 00:04:52 ntpd[2088]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Nov 8 00:04:52.342533 ntpd[2088]: 8 Nov 00:04:52 ntpd[2088]: ----------------------------------------------------
Nov 8 00:04:52.342533 ntpd[2088]: 8 Nov 00:04:52 ntpd[2088]: ntp-4 is maintained by Network Time Foundation,
Nov 8 00:04:52.342533 ntpd[2088]: 8 Nov 00:04:52 ntpd[2088]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Nov 8 00:04:52.342533 ntpd[2088]: 8 Nov 00:04:52 ntpd[2088]: corporation. Support and training for ntp-4 are
Nov 8 00:04:52.342533 ntpd[2088]: 8 Nov 00:04:52 ntpd[2088]: available at https://www.nwtime.org/support
Nov 8 00:04:52.342533 ntpd[2088]: 8 Nov 00:04:52 ntpd[2088]: ----------------------------------------------------
Nov 8 00:04:52.339983 ntpd[2088]: ntpd 4.2.8p17@1.4004-o Fri Nov 7 22:04:46 UTC 2025 (1): Starting
Nov 8 00:04:52.420349 ntpd[2088]: 8 Nov 00:04:52 ntpd[2088]: proto: precision = 0.096 usec (-23)
Nov 8 00:04:52.420349 ntpd[2088]: 8 Nov 00:04:52 ntpd[2088]: basedate set to 2025-10-26
Nov 8 00:04:52.420349 ntpd[2088]: 8 Nov 00:04:52 ntpd[2088]: gps base set to 2025-10-26 (week 2390)
Nov 8 00:04:52.420349 ntpd[2088]: 8 Nov 00:04:52 ntpd[2088]: Listen and drop on 0 v6wildcard [::]:123
Nov 8 00:04:52.420349 ntpd[2088]: 8 Nov 00:04:52 ntpd[2088]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Nov 8 00:04:52.420349 ntpd[2088]: 8 Nov 00:04:52 ntpd[2088]: Listen normally on 2 lo 127.0.0.1:123
Nov 8 00:04:52.420349 ntpd[2088]: 8 Nov 00:04:52 ntpd[2088]: Listen normally on 3 eth0 172.31.28.187:123
Nov 8 00:04:52.420349 ntpd[2088]: 8 Nov 00:04:52 ntpd[2088]: Listen normally on 4 lo [::1]:123
Nov 8 00:04:52.420349 ntpd[2088]: 8 Nov 00:04:52 ntpd[2088]: Listen normally on 5 eth0 [fe80::4a4:9ff:fe39:6ad1%2]:123
Nov 8 00:04:52.420349 ntpd[2088]: 8 Nov 00:04:52 ntpd[2088]: Listening on routing socket on fd #22 for interface updates
Nov 8 00:04:52.420349 ntpd[2088]: 8 Nov 00:04:52 ntpd[2088]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Nov 8 00:04:52.420349 ntpd[2088]: 8 Nov 00:04:52 ntpd[2088]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Nov 8 00:04:52.384259 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Nov 8 00:04:52.421079 extend-filesystems[2085]: Resized partition /dev/nvme0n1p9
Nov 8 00:04:52.340041 ntpd[2088]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Nov 8 00:04:52.384832 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Nov 8 00:04:52.478303 extend-filesystems[2131]: resize2fs 1.47.1 (20-May-2024)
Nov 8 00:04:52.520598 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 3587067 blocks
Nov 8 00:04:52.340062 ntpd[2088]: ----------------------------------------------------
Nov 8 00:04:52.394873 systemd[1]: motdgen.service: Deactivated successfully.
Nov 8 00:04:52.524627 jq[2113]: true
Nov 8 00:04:52.340083 ntpd[2088]: ntp-4 is maintained by Network Time Foundation,
Nov 8 00:04:52.395479 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Nov 8 00:04:52.340103 ntpd[2088]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Nov 8 00:04:52.415329 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Nov 8 00:04:52.340159 ntpd[2088]: corporation. Support and training for ntp-4 are
Nov 8 00:04:52.415916 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Nov 8 00:04:52.340181 ntpd[2088]: available at https://www.nwtime.org/support
Nov 8 00:04:52.450025 (ntainerd)[2134]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Nov 8 00:04:52.340200 ntpd[2088]: ----------------------------------------------------
Nov 8 00:04:52.508930 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Nov 8 00:04:52.350188 ntpd[2088]: proto: precision = 0.096 usec (-23)
Nov 8 00:04:52.557810 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Nov 8 00:04:52.351194 ntpd[2088]: basedate set to 2025-10-26
Nov 8 00:04:52.557880 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Nov 8 00:04:52.351231 ntpd[2088]: gps base set to 2025-10-26 (week 2390)
Nov 8 00:04:52.561007 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Nov 8 00:04:52.355979 ntpd[2088]: Listen and drop on 0 v6wildcard [::]:123
Nov 8 00:04:52.561049 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Nov 8 00:04:52.356072 ntpd[2088]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Nov 8 00:04:52.356479 ntpd[2088]: Listen normally on 2 lo 127.0.0.1:123
Nov 8 00:04:52.356567 ntpd[2088]: Listen normally on 3 eth0 172.31.28.187:123
Nov 8 00:04:52.356639 ntpd[2088]: Listen normally on 4 lo [::1]:123
Nov 8 00:04:52.356721 ntpd[2088]: Listen normally on 5 eth0 [fe80::4a4:9ff:fe39:6ad1%2]:123
Nov 8 00:04:52.356801 ntpd[2088]: Listening on routing socket on fd #22 for interface updates
Nov 8 00:04:52.382372 ntpd[2088]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Nov 8 00:04:52.382437 ntpd[2088]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Nov 8 00:04:52.569950 dbus-daemon[2083]: [system] Successfully activated service 'org.freedesktop.systemd1'
Nov 8 00:04:52.613469 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
Nov 8 00:04:52.629996 jq[2141]: true
Nov 8 00:04:52.683788 tar[2129]: linux-arm64/LICENSE
Nov 8 00:04:52.698456 tar[2129]: linux-arm64/helm
Nov 8 00:04:52.724913 systemd[1]: Finished setup-oem.service - Setup OEM.
Nov 8 00:04:52.733996 coreos-metadata[2081]: Nov 08 00:04:52.733 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Nov 8 00:04:52.754787 update_engine[2111]: I20251108 00:04:52.754350 2111 main.cc:92] Flatcar Update Engine starting
Nov 8 00:04:52.756133 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent.
Nov 8 00:04:52.760506 coreos-metadata[2081]: Nov 08 00:04:52.758 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1
Nov 8 00:04:52.772939 systemd[1]: Started update-engine.service - Update Engine.
Nov 8 00:04:52.776321 update_engine[2111]: I20251108 00:04:52.773036 2111 update_check_scheduler.cc:74] Next update check in 10m21s
Nov 8 00:04:52.776491 coreos-metadata[2081]: Nov 08 00:04:52.776 INFO Fetch successful
Nov 8 00:04:52.776562 coreos-metadata[2081]: Nov 08 00:04:52.776 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1
Nov 8 00:04:52.778486 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Nov 8 00:04:52.780691 coreos-metadata[2081]: Nov 08 00:04:52.780 INFO Fetch successful
Nov 8 00:04:52.780691 coreos-metadata[2081]: Nov 08 00:04:52.780 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1
Nov 8 00:04:52.786385 systemd[1]: Started locksmithd.service - Cluster reboot manager.
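ntpd's "proto: precision = 0.096 usec (-23)" pairs a measured clock-read granularity with a base-2 exponent: log2 of 0.096 microseconds expressed in seconds is about -23.3, matching the -23 shown (ntpd stores precision as a power of two). A worked check, assuming only that reading of the line:

import math

measured_seconds = 0.096e-6                    # 0.096 usec from the ntpd log line
print(round(math.log2(measured_seconds), 2))   # -23.31, reported as (-23)
print(round(2 ** -23 * 1e6, 3))                # 0.119 usec, what 2**-23 s works out to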
Nov 8 00:04:52.796829 coreos-metadata[2081]: Nov 08 00:04:52.796 INFO Fetch successful
Nov 8 00:04:52.802172 coreos-metadata[2081]: Nov 08 00:04:52.800 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1
Nov 8 00:04:52.815303 coreos-metadata[2081]: Nov 08 00:04:52.815 INFO Fetch successful
Nov 8 00:04:52.815303 coreos-metadata[2081]: Nov 08 00:04:52.815 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1
Nov 8 00:04:52.819410 coreos-metadata[2081]: Nov 08 00:04:52.819 INFO Fetch failed with 404: resource not found
Nov 8 00:04:52.819410 coreos-metadata[2081]: Nov 08 00:04:52.819 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1
Nov 8 00:04:52.829263 coreos-metadata[2081]: Nov 08 00:04:52.829 INFO Fetch successful
Nov 8 00:04:52.829263 coreos-metadata[2081]: Nov 08 00:04:52.829 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1
Nov 8 00:04:52.832929 coreos-metadata[2081]: Nov 08 00:04:52.831 INFO Fetch successful
Nov 8 00:04:52.832929 coreos-metadata[2081]: Nov 08 00:04:52.831 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1
Nov 8 00:04:52.838213 coreos-metadata[2081]: Nov 08 00:04:52.838 INFO Fetch successful
Nov 8 00:04:52.838213 coreos-metadata[2081]: Nov 08 00:04:52.838 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1
Nov 8 00:04:52.844156 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 3587067
Nov 8 00:04:52.851819 coreos-metadata[2081]: Nov 08 00:04:52.851 INFO Fetch successful
Nov 8 00:04:52.851819 coreos-metadata[2081]: Nov 08 00:04:52.851 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1
Nov 8 00:04:52.856216 coreos-metadata[2081]: Nov 08 00:04:52.856 INFO Fetch successful
Nov 8 00:04:52.872350 extend-filesystems[2131]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required
Nov 8 00:04:52.872350 extend-filesystems[2131]: old_desc_blocks = 1, new_desc_blocks = 2
Nov 8 00:04:52.872350 extend-filesystems[2131]: The filesystem on /dev/nvme0n1p9 is now 3587067 (4k) blocks long.
Nov 8 00:04:52.897357 extend-filesystems[2085]: Resized filesystem in /dev/nvme0n1p9
Nov 8 00:04:52.882669 systemd[1]: extend-filesystems.service: Deactivated successfully.
Nov 8 00:04:52.886550 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Nov 8 00:04:52.957015 systemd-logind[2107]: Watching system buttons on /dev/input/event0 (Power Button)
Nov 8 00:04:52.957075 systemd-logind[2107]: Watching system buttons on /dev/input/event1 (Sleep Button)
Nov 8 00:04:52.972349 systemd-logind[2107]: New seat seat0.
Nov 8 00:04:52.986307 systemd[1]: Started systemd-logind.service - User Login Management.
Nov 8 00:04:53.005130 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Nov 8 00:04:53.009943 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Nov 8 00:04:53.079513 bash[2211]: Updated "/home/core/.ssh/authorized_keys"
Nov 8 00:04:53.093035 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Nov 8 00:04:53.099168 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 37 scanned by (udev-worker) (2175)
Nov 8 00:04:53.112730 systemd[1]: Starting sshkeys.service...
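The resize messages above are easy to cross-check: with the 4 KiB blocks ext4 reports, growing /dev/nvme0n1p9 from 553472 to 3587067 blocks takes the root filesystem from roughly 2.1 GiB to roughly 13.7 GiB, i.e. extend-filesystems expanded it on first boot to fill the remaining EBS volume. The arithmetic (block counts taken from the log; nothing else assumed):

BLOCK = 4096                                # "(4k) blocks" per the kernel message
old_blocks, new_blocks = 553472, 3587067    # counts from the resize messages
GiB = 1024 ** 3
print(round(old_blocks * BLOCK / GiB, 2))   # 2.11 GiB before the resize
print(round(new_blocks * BLOCK / GiB, 2))   # 13.68 GiB after the resize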
Nov 8 00:04:53.219102 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Nov 8 00:04:53.275652 amazon-ssm-agent[2168]: Initializing new seelog logger Nov 8 00:04:53.286022 amazon-ssm-agent[2168]: New Seelog Logger Creation Complete Nov 8 00:04:53.286022 amazon-ssm-agent[2168]: 2025/11/08 00:04:53 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Nov 8 00:04:53.286022 amazon-ssm-agent[2168]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Nov 8 00:04:53.286022 amazon-ssm-agent[2168]: 2025/11/08 00:04:53 processing appconfig overrides Nov 8 00:04:53.286437 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Nov 8 00:04:53.301090 amazon-ssm-agent[2168]: 2025/11/08 00:04:53 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Nov 8 00:04:53.301090 amazon-ssm-agent[2168]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Nov 8 00:04:53.301090 amazon-ssm-agent[2168]: 2025/11/08 00:04:53 processing appconfig overrides Nov 8 00:04:53.301090 amazon-ssm-agent[2168]: 2025/11/08 00:04:53 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Nov 8 00:04:53.301090 amazon-ssm-agent[2168]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Nov 8 00:04:53.301090 amazon-ssm-agent[2168]: 2025/11/08 00:04:53 processing appconfig overrides Nov 8 00:04:53.301090 amazon-ssm-agent[2168]: 2025-11-08 00:04:53 INFO Proxy environment variables: Nov 8 00:04:53.320753 amazon-ssm-agent[2168]: 2025/11/08 00:04:53 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Nov 8 00:04:53.320753 amazon-ssm-agent[2168]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Nov 8 00:04:53.320985 amazon-ssm-agent[2168]: 2025/11/08 00:04:53 processing appconfig overrides Nov 8 00:04:53.418242 amazon-ssm-agent[2168]: 2025-11-08 00:04:53 INFO no_proxy: Nov 8 00:04:53.437253 locksmithd[2169]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Nov 8 00:04:53.527223 amazon-ssm-agent[2168]: 2025-11-08 00:04:53 INFO https_proxy: Nov 8 00:04:53.601794 dbus-daemon[2083]: [system] Successfully activated service 'org.freedesktop.hostname1' Nov 8 00:04:53.602052 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Nov 8 00:04:53.606144 dbus-daemon[2083]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.7' (uid=0 pid=2153 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Nov 8 00:04:53.616982 coreos-metadata[2227]: Nov 08 00:04:53.615 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Nov 8 00:04:53.714223 coreos-metadata[2227]: Nov 08 00:04:53.620 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Nov 8 00:04:53.714223 coreos-metadata[2227]: Nov 08 00:04:53.629 INFO Fetch successful Nov 8 00:04:53.714223 coreos-metadata[2227]: Nov 08 00:04:53.630 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Nov 8 00:04:53.714223 coreos-metadata[2227]: Nov 08 00:04:53.631 INFO Fetch successful Nov 8 00:04:53.635387 unknown[2227]: wrote ssh authorized keys file for user: core Nov 8 00:04:53.714970 amazon-ssm-agent[2168]: 2025-11-08 00:04:53 INFO http_proxy: Nov 8 00:04:53.705977 systemd[1]: Starting polkit.service - Authorization Manager... 
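[Editor's note] The coreos-metadata-sshkeys@core instance above fetches the instance's public key from IMDS and hands it to update-ssh-keys, which rewrites /home/core/.ssh/authorized_keys (its confirmation appears just below). A minimal sketch of the writing half, assuming the key text is already fetched; the 0700/0600 modes are the usual sshd requirements rather than anything taken from this log, and the key string is a placeholder:

    package main

    import (
        "os"
        "path/filepath"
    )

    // writeAuthorizedKey installs a single public key for a user with the
    // directory and file permissions sshd expects.
    func writeAuthorizedKey(home, key string) error {
        dir := filepath.Join(home, ".ssh")
        if err := os.MkdirAll(dir, 0o700); err != nil {
            return err
        }
        path := filepath.Join(dir, "authorized_keys")
        return os.WriteFile(path, []byte(key+"\n"), 0o600)
    }

    func main() {
        // Placeholder key material, not the key fetched above.
        if err := writeAuthorizedKey("/home/core", "ssh-ed25519 AAAA... core"); err != nil {
            panic(err)
        }
    }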
Nov 8 00:04:53.732260 amazon-ssm-agent[2168]: 2025-11-08 00:04:53 INFO Checking if agent identity type OnPrem can be assumed Nov 8 00:04:53.814252 update-ssh-keys[2304]: Updated "/home/core/.ssh/authorized_keys" Nov 8 00:04:53.801059 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Nov 8 00:04:53.824415 systemd[1]: Finished sshkeys.service. Nov 8 00:04:53.827189 amazon-ssm-agent[2168]: 2025-11-08 00:04:53 INFO Checking if agent identity type EC2 can be assumed Nov 8 00:04:53.867922 polkitd[2293]: Started polkitd version 121 Nov 8 00:04:53.911145 containerd[2134]: time="2025-11-08T00:04:53.907190967Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Nov 8 00:04:53.927438 amazon-ssm-agent[2168]: 2025-11-08 00:04:53 INFO Agent will take identity from EC2 Nov 8 00:04:53.976325 polkitd[2293]: Loading rules from directory /etc/polkit-1/rules.d Nov 8 00:04:53.985542 polkitd[2293]: Loading rules from directory /usr/share/polkit-1/rules.d Nov 8 00:04:53.997373 polkitd[2293]: Finished loading, compiling and executing 2 rules Nov 8 00:04:54.007774 dbus-daemon[2083]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Nov 8 00:04:54.012444 systemd[1]: Started polkit.service - Authorization Manager. Nov 8 00:04:54.018438 polkitd[2293]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Nov 8 00:04:54.027650 amazon-ssm-agent[2168]: 2025-11-08 00:04:53 INFO [amazon-ssm-agent] using named pipe channel for IPC Nov 8 00:04:54.082207 systemd-hostnamed[2153]: Hostname set to (transient) Nov 8 00:04:54.082374 systemd-resolved[2026]: System hostname changed to 'ip-172-31-28-187'. Nov 8 00:04:54.133349 amazon-ssm-agent[2168]: 2025-11-08 00:04:53 INFO [amazon-ssm-agent] using named pipe channel for IPC Nov 8 00:04:54.158784 containerd[2134]: time="2025-11-08T00:04:54.158660712Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Nov 8 00:04:54.168135 containerd[2134]: time="2025-11-08T00:04:54.167514180Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.113-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Nov 8 00:04:54.168135 containerd[2134]: time="2025-11-08T00:04:54.167610564Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Nov 8 00:04:54.168135 containerd[2134]: time="2025-11-08T00:04:54.167679708Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Nov 8 00:04:54.170271 containerd[2134]: time="2025-11-08T00:04:54.170179860Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Nov 8 00:04:54.170271 containerd[2134]: time="2025-11-08T00:04:54.170267220Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Nov 8 00:04:54.171495 containerd[2134]: time="2025-11-08T00:04:54.170519412Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Nov 8 00:04:54.171495 containerd[2134]: time="2025-11-08T00:04:54.170566872Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
type=io.containerd.snapshotter.v1 Nov 8 00:04:54.174721 containerd[2134]: time="2025-11-08T00:04:54.173536116Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Nov 8 00:04:54.174721 containerd[2134]: time="2025-11-08T00:04:54.173618220Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Nov 8 00:04:54.174721 containerd[2134]: time="2025-11-08T00:04:54.173687220Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Nov 8 00:04:54.174721 containerd[2134]: time="2025-11-08T00:04:54.173715900Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Nov 8 00:04:54.174721 containerd[2134]: time="2025-11-08T00:04:54.174030972Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Nov 8 00:04:54.174721 containerd[2134]: time="2025-11-08T00:04:54.174667872Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Nov 8 00:04:54.178172 containerd[2134]: time="2025-11-08T00:04:54.177632100Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Nov 8 00:04:54.178172 containerd[2134]: time="2025-11-08T00:04:54.177713976Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Nov 8 00:04:54.178172 containerd[2134]: time="2025-11-08T00:04:54.178068132Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Nov 8 00:04:54.178434 containerd[2134]: time="2025-11-08T00:04:54.178340388Z" level=info msg="metadata content store policy set" policy=shared Nov 8 00:04:54.186881 containerd[2134]: time="2025-11-08T00:04:54.186809928Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Nov 8 00:04:54.187014 containerd[2134]: time="2025-11-08T00:04:54.186913536Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Nov 8 00:04:54.187014 containerd[2134]: time="2025-11-08T00:04:54.186951936Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Nov 8 00:04:54.187014 containerd[2134]: time="2025-11-08T00:04:54.187001772Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Nov 8 00:04:54.187209 containerd[2134]: time="2025-11-08T00:04:54.187036548Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Nov 8 00:04:54.189526 containerd[2134]: time="2025-11-08T00:04:54.187351488Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Nov 8 00:04:54.189526 containerd[2134]: time="2025-11-08T00:04:54.187911168Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." 
type=io.containerd.runtime.v2 Nov 8 00:04:54.189526 containerd[2134]: time="2025-11-08T00:04:54.188184240Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Nov 8 00:04:54.189526 containerd[2134]: time="2025-11-08T00:04:54.188222976Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Nov 8 00:04:54.189526 containerd[2134]: time="2025-11-08T00:04:54.188256120Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Nov 8 00:04:54.189526 containerd[2134]: time="2025-11-08T00:04:54.188288676Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Nov 8 00:04:54.189526 containerd[2134]: time="2025-11-08T00:04:54.188319024Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Nov 8 00:04:54.189526 containerd[2134]: time="2025-11-08T00:04:54.188350080Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Nov 8 00:04:54.189526 containerd[2134]: time="2025-11-08T00:04:54.188400396Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Nov 8 00:04:54.189526 containerd[2134]: time="2025-11-08T00:04:54.188436816Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Nov 8 00:04:54.189526 containerd[2134]: time="2025-11-08T00:04:54.188467116Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Nov 8 00:04:54.189526 containerd[2134]: time="2025-11-08T00:04:54.188506884Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Nov 8 00:04:54.189526 containerd[2134]: time="2025-11-08T00:04:54.188540172Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Nov 8 00:04:54.189526 containerd[2134]: time="2025-11-08T00:04:54.188583348Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Nov 8 00:04:54.190205 containerd[2134]: time="2025-11-08T00:04:54.188617224Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Nov 8 00:04:54.190205 containerd[2134]: time="2025-11-08T00:04:54.188646756Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Nov 8 00:04:54.190205 containerd[2134]: time="2025-11-08T00:04:54.188677704Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Nov 8 00:04:54.190205 containerd[2134]: time="2025-11-08T00:04:54.188709516Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Nov 8 00:04:54.190205 containerd[2134]: time="2025-11-08T00:04:54.188741220Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Nov 8 00:04:54.190205 containerd[2134]: time="2025-11-08T00:04:54.188780352Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Nov 8 00:04:54.190205 containerd[2134]: time="2025-11-08T00:04:54.188811996Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." 
type=io.containerd.grpc.v1 Nov 8 00:04:54.190205 containerd[2134]: time="2025-11-08T00:04:54.188844360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Nov 8 00:04:54.190205 containerd[2134]: time="2025-11-08T00:04:54.188880012Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Nov 8 00:04:54.190205 containerd[2134]: time="2025-11-08T00:04:54.188910756Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Nov 8 00:04:54.190205 containerd[2134]: time="2025-11-08T00:04:54.188940468Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Nov 8 00:04:54.190205 containerd[2134]: time="2025-11-08T00:04:54.188972328Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Nov 8 00:04:54.190205 containerd[2134]: time="2025-11-08T00:04:54.189012480Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Nov 8 00:04:54.190205 containerd[2134]: time="2025-11-08T00:04:54.189057480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Nov 8 00:04:54.190205 containerd[2134]: time="2025-11-08T00:04:54.189089508Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Nov 8 00:04:54.198010 containerd[2134]: time="2025-11-08T00:04:54.196448268Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Nov 8 00:04:54.198010 containerd[2134]: time="2025-11-08T00:04:54.196707744Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Nov 8 00:04:54.198010 containerd[2134]: time="2025-11-08T00:04:54.196752408Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Nov 8 00:04:54.198010 containerd[2134]: time="2025-11-08T00:04:54.196779984Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Nov 8 00:04:54.198010 containerd[2134]: time="2025-11-08T00:04:54.196809828Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Nov 8 00:04:54.198010 containerd[2134]: time="2025-11-08T00:04:54.196834884Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Nov 8 00:04:54.198010 containerd[2134]: time="2025-11-08T00:04:54.196865016Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Nov 8 00:04:54.202630 containerd[2134]: time="2025-11-08T00:04:54.197207232Z" level=info msg="NRI interface is disabled by configuration." Nov 8 00:04:54.202630 containerd[2134]: time="2025-11-08T00:04:54.202213836Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Nov 8 00:04:54.203094 containerd[2134]: time="2025-11-08T00:04:54.202949556Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Nov 8 00:04:54.210148 containerd[2134]: time="2025-11-08T00:04:54.203104920Z" level=info msg="Connect containerd service" Nov 8 00:04:54.210148 containerd[2134]: time="2025-11-08T00:04:54.208310628Z" level=info msg="using legacy CRI server" Nov 8 00:04:54.210148 containerd[2134]: time="2025-11-08T00:04:54.208335180Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Nov 8 00:04:54.210148 containerd[2134]: time="2025-11-08T00:04:54.208538244Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Nov 8 00:04:54.210148 containerd[2134]: time="2025-11-08T00:04:54.209856372Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 8 00:04:54.220068 
containerd[2134]: time="2025-11-08T00:04:54.219997992Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Nov 8 00:04:54.221367 containerd[2134]: time="2025-11-08T00:04:54.221312676Z" level=info msg=serving... address=/run/containerd/containerd.sock Nov 8 00:04:54.223951 containerd[2134]: time="2025-11-08T00:04:54.222375636Z" level=info msg="Start subscribing containerd event" Nov 8 00:04:54.223951 containerd[2134]: time="2025-11-08T00:04:54.222642168Z" level=info msg="Start recovering state" Nov 8 00:04:54.223951 containerd[2134]: time="2025-11-08T00:04:54.222823764Z" level=info msg="Start event monitor" Nov 8 00:04:54.223951 containerd[2134]: time="2025-11-08T00:04:54.222858828Z" level=info msg="Start snapshots syncer" Nov 8 00:04:54.223951 containerd[2134]: time="2025-11-08T00:04:54.222882036Z" level=info msg="Start cni network conf syncer for default" Nov 8 00:04:54.223951 containerd[2134]: time="2025-11-08T00:04:54.222901368Z" level=info msg="Start streaming server" Nov 8 00:04:54.224675 systemd[1]: Started containerd.service - containerd container runtime. Nov 8 00:04:54.228457 containerd[2134]: time="2025-11-08T00:04:54.228047317Z" level=info msg="containerd successfully booted in 0.341117s" Nov 8 00:04:54.231645 amazon-ssm-agent[2168]: 2025-11-08 00:04:53 INFO [amazon-ssm-agent] using named pipe channel for IPC Nov 8 00:04:54.331649 amazon-ssm-agent[2168]: 2025-11-08 00:04:53 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Nov 8 00:04:54.434383 amazon-ssm-agent[2168]: 2025-11-08 00:04:53 INFO [amazon-ssm-agent] OS: linux, Arch: arm64 Nov 8 00:04:54.487341 sshd_keygen[2126]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Nov 8 00:04:54.536158 amazon-ssm-agent[2168]: 2025-11-08 00:04:53 INFO [amazon-ssm-agent] Starting Core Agent Nov 8 00:04:54.580986 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Nov 8 00:04:54.603705 systemd[1]: Starting issuegen.service - Generate /run/issue... Nov 8 00:04:54.634224 amazon-ssm-agent[2168]: 2025-11-08 00:04:53 INFO [amazon-ssm-agent] registrar detected. Attempting registration Nov 8 00:04:54.648034 systemd[1]: issuegen.service: Deactivated successfully. Nov 8 00:04:54.648640 systemd[1]: Finished issuegen.service - Generate /run/issue. Nov 8 00:04:54.666640 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Nov 8 00:04:54.720025 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Nov 8 00:04:54.738384 amazon-ssm-agent[2168]: 2025-11-08 00:04:53 INFO [Registrar] Starting registrar module Nov 8 00:04:54.739356 systemd[1]: Started getty@tty1.service - Getty on tty1. Nov 8 00:04:54.754725 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Nov 8 00:04:54.757846 systemd[1]: Reached target getty.target - Login Prompts. Nov 8 00:04:54.840222 amazon-ssm-agent[2168]: 2025-11-08 00:04:53 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Nov 8 00:04:54.895502 tar[2129]: linux-arm64/README.md Nov 8 00:04:54.928686 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Nov 8 00:04:55.225937 amazon-ssm-agent[2168]: 2025-11-08 00:04:55 INFO [EC2Identity] EC2 registration was successful. 
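[Editor's note] containerd is now up and serving its API on /run/containerd/containerd.sock (gRPC and ttrpc, per the msg=serving... lines above). A minimal client sketch using the official Go module github.com/containerd/containerd; the socket path is the one logged, while the "default" namespace is an assumption for a standalone test:

    package main

    import (
        "context"
        "fmt"

        "github.com/containerd/containerd"
        "github.com/containerd/containerd/namespaces"
    )

    func main() {
        client, err := containerd.New("/run/containerd/containerd.sock")
        if err != nil {
            panic(err)
        }
        defer client.Close()

        // All containerd operations are namespaced.
        ctx := namespaces.WithNamespace(context.Background(), "default")

        v, err := client.Version(ctx)
        if err != nil {
            panic(err)
        }
        fmt.Println("containerd", v.Version) // v1.7.21 on this host, per the log
    }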
Nov 8 00:04:55.257433 amazon-ssm-agent[2168]: 2025-11-08 00:04:55 INFO [CredentialRefresher] credentialRefresher has started Nov 8 00:04:55.257571 amazon-ssm-agent[2168]: 2025-11-08 00:04:55 INFO [CredentialRefresher] Starting credentials refresher loop Nov 8 00:04:55.257571 amazon-ssm-agent[2168]: 2025-11-08 00:04:55 INFO EC2RoleProvider Successfully connected with instance profile role credentials Nov 8 00:04:55.326756 amazon-ssm-agent[2168]: 2025-11-08 00:04:55 INFO [CredentialRefresher] Next credential rotation will be in 31.033324435733334 minutes Nov 8 00:04:55.839542 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 00:04:55.845051 systemd[1]: Reached target multi-user.target - Multi-User System. Nov 8 00:04:55.850574 systemd[1]: Startup finished in 10.528s (kernel) + 10.829s (userspace) = 21.357s. Nov 8 00:04:55.859879 (kubelet)[2376]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 8 00:04:55.924679 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Nov 8 00:04:55.931620 systemd[1]: Started sshd@0-172.31.28.187:22-139.178.89.65:33062.service - OpenSSH per-connection server daemon (139.178.89.65:33062). Nov 8 00:04:56.152351 sshd[2381]: Accepted publickey for core from 139.178.89.65 port 33062 ssh2: RSA SHA256:tnEXpnDY8gLTej7GJ+T99WI4otIwvlI9IcMNDF42aqw Nov 8 00:04:56.155213 sshd[2381]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:04:56.176904 systemd-logind[2107]: New session 1 of user core. Nov 8 00:04:56.179502 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Nov 8 00:04:56.186608 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Nov 8 00:04:56.231170 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Nov 8 00:04:56.239555 systemd[1]: Starting user@500.service - User Manager for UID 500... Nov 8 00:04:56.263667 (systemd)[2391]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Nov 8 00:04:56.315039 amazon-ssm-agent[2168]: 2025-11-08 00:04:56 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Nov 8 00:04:56.414518 amazon-ssm-agent[2168]: 2025-11-08 00:04:56 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2396) started Nov 8 00:04:56.518141 amazon-ssm-agent[2168]: 2025-11-08 00:04:56 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Nov 8 00:04:56.527233 systemd[2391]: Queued start job for default target default.target. Nov 8 00:04:56.527938 systemd[2391]: Created slice app.slice - User Application Slice. Nov 8 00:04:56.527995 systemd[2391]: Reached target paths.target - Paths. Nov 8 00:04:56.528028 systemd[2391]: Reached target timers.target - Timers. Nov 8 00:04:56.534521 systemd[2391]: Starting dbus.socket - D-Bus User Message Bus Socket... Nov 8 00:04:56.566450 systemd[2391]: Listening on dbus.socket - D-Bus User Message Bus Socket. Nov 8 00:04:56.568959 systemd[2391]: Reached target sockets.target - Sockets. Nov 8 00:04:56.569014 systemd[2391]: Reached target basic.target - Basic System. Nov 8 00:04:56.569138 systemd[2391]: Reached target default.target - Main User Target. Nov 8 00:04:56.569203 systemd[2391]: Startup finished in 289ms. 
Nov 8 00:04:56.569649 systemd[1]: Started user@500.service - User Manager for UID 500. Nov 8 00:04:56.586450 systemd[1]: Started session-1.scope - Session 1 of User core. Nov 8 00:04:56.748608 systemd[1]: Started sshd@1-172.31.28.187:22-139.178.89.65:33076.service - OpenSSH per-connection server daemon (139.178.89.65:33076). Nov 8 00:04:56.944197 sshd[2414]: Accepted publickey for core from 139.178.89.65 port 33076 ssh2: RSA SHA256:tnEXpnDY8gLTej7GJ+T99WI4otIwvlI9IcMNDF42aqw Nov 8 00:04:56.947587 sshd[2414]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:04:56.957506 systemd-logind[2107]: New session 2 of user core. Nov 8 00:04:56.965801 systemd[1]: Started session-2.scope - Session 2 of User core. Nov 8 00:04:57.102413 sshd[2414]: pam_unix(sshd:session): session closed for user core Nov 8 00:04:57.109998 systemd[1]: sshd@1-172.31.28.187:22-139.178.89.65:33076.service: Deactivated successfully. Nov 8 00:04:57.120186 systemd[1]: session-2.scope: Deactivated successfully. Nov 8 00:04:57.123385 systemd-logind[2107]: Session 2 logged out. Waiting for processes to exit. Nov 8 00:04:57.136663 systemd[1]: Started sshd@2-172.31.28.187:22-139.178.89.65:33090.service - OpenSSH per-connection server daemon (139.178.89.65:33090). Nov 8 00:04:57.137478 systemd-logind[2107]: Removed session 2. Nov 8 00:04:57.216923 kubelet[2376]: E1108 00:04:57.216794 2376 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 8 00:04:57.221479 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 8 00:04:57.221886 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 8 00:04:57.315751 sshd[2423]: Accepted publickey for core from 139.178.89.65 port 33090 ssh2: RSA SHA256:tnEXpnDY8gLTej7GJ+T99WI4otIwvlI9IcMNDF42aqw Nov 8 00:04:57.318365 sshd[2423]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:04:57.327561 systemd-logind[2107]: New session 3 of user core. Nov 8 00:04:57.339611 systemd[1]: Started session-3.scope - Session 3 of User core. Nov 8 00:04:57.461536 sshd[2423]: pam_unix(sshd:session): session closed for user core Nov 8 00:04:57.469543 systemd[1]: sshd@2-172.31.28.187:22-139.178.89.65:33090.service: Deactivated successfully. Nov 8 00:04:57.469796 systemd-logind[2107]: Session 3 logged out. Waiting for processes to exit. Nov 8 00:04:57.475807 systemd[1]: session-3.scope: Deactivated successfully. Nov 8 00:04:57.476905 systemd-logind[2107]: Removed session 3. Nov 8 00:04:57.495629 systemd[1]: Started sshd@3-172.31.28.187:22-139.178.89.65:33098.service - OpenSSH per-connection server daemon (139.178.89.65:33098). Nov 8 00:04:57.675736 sshd[2433]: Accepted publickey for core from 139.178.89.65 port 33098 ssh2: RSA SHA256:tnEXpnDY8gLTej7GJ+T99WI4otIwvlI9IcMNDF42aqw Nov 8 00:04:57.678461 sshd[2433]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:04:57.687309 systemd-logind[2107]: New session 4 of user core. Nov 8 00:04:57.699616 systemd[1]: Started session-4.scope - Session 4 of User core. Nov 8 00:04:57.832663 sshd[2433]: pam_unix(sshd:session): session closed for user core Nov 8 00:04:57.838499 systemd[1]: sshd@3-172.31.28.187:22-139.178.89.65:33098.service: Deactivated successfully. 
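[Editor's note] The kubelet failure above (and the identical ones at each restart below) is the expected state of a node that has not yet joined a cluster: /var/lib/kubelet/config.yaml is normally generated by kubeadm during init/join, so until then the unit crash-loops. For orientation, a hand-written KubeletConfiguration has roughly this shape; this is a hedged sketch, not the file kubeadm later writes, with the cgroupfs driver matching the container-manager dump near the end of this log and the kubeadm-default cluster DNS values:

    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    authentication:
      anonymous:
        enabled: false
      webhook:
        enabled: true
    authorization:
      mode: Webhook
    cgroupDriver: cgroupfs
    clusterDomain: cluster.local
    clusterDNS:
      - 10.96.0.10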
Nov 8 00:04:57.840220 systemd-logind[2107]: Session 4 logged out. Waiting for processes to exit. Nov 8 00:04:57.844311 systemd[1]: session-4.scope: Deactivated successfully. Nov 8 00:04:57.847385 systemd-logind[2107]: Removed session 4. Nov 8 00:04:57.861622 systemd[1]: Started sshd@4-172.31.28.187:22-139.178.89.65:33108.service - OpenSSH per-connection server daemon (139.178.89.65:33108). Nov 8 00:04:58.044864 sshd[2441]: Accepted publickey for core from 139.178.89.65 port 33108 ssh2: RSA SHA256:tnEXpnDY8gLTej7GJ+T99WI4otIwvlI9IcMNDF42aqw Nov 8 00:04:58.047550 sshd[2441]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:04:58.055262 systemd-logind[2107]: New session 5 of user core. Nov 8 00:04:58.064730 systemd[1]: Started session-5.scope - Session 5 of User core. Nov 8 00:04:58.208049 sudo[2445]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Nov 8 00:04:58.208786 sudo[2445]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 8 00:04:58.227006 sudo[2445]: pam_unix(sudo:session): session closed for user root Nov 8 00:04:58.252562 sshd[2441]: pam_unix(sshd:session): session closed for user core Nov 8 00:04:58.258920 systemd[1]: sshd@4-172.31.28.187:22-139.178.89.65:33108.service: Deactivated successfully. Nov 8 00:04:58.264962 systemd[1]: session-5.scope: Deactivated successfully. Nov 8 00:04:58.266388 systemd-logind[2107]: Session 5 logged out. Waiting for processes to exit. Nov 8 00:04:58.268929 systemd-logind[2107]: Removed session 5. Nov 8 00:04:58.291794 systemd[1]: Started sshd@5-172.31.28.187:22-139.178.89.65:33116.service - OpenSSH per-connection server daemon (139.178.89.65:33116). Nov 8 00:04:58.465650 sshd[2450]: Accepted publickey for core from 139.178.89.65 port 33116 ssh2: RSA SHA256:tnEXpnDY8gLTej7GJ+T99WI4otIwvlI9IcMNDF42aqw Nov 8 00:04:58.468779 sshd[2450]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:04:58.478169 systemd-logind[2107]: New session 6 of user core. Nov 8 00:04:58.482179 systemd[1]: Started session-6.scope - Session 6 of User core. Nov 8 00:04:58.590285 sudo[2455]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Nov 8 00:04:58.590921 sudo[2455]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 8 00:04:58.596914 sudo[2455]: pam_unix(sudo:session): session closed for user root Nov 8 00:04:58.607019 sudo[2454]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Nov 8 00:04:58.607717 sudo[2454]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 8 00:04:58.634576 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Nov 8 00:04:58.637938 auditctl[2458]: No rules Nov 8 00:04:58.641197 systemd[1]: audit-rules.service: Deactivated successfully. Nov 8 00:04:58.641730 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Nov 8 00:04:58.650753 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Nov 8 00:04:58.703332 augenrules[2477]: No rules Nov 8 00:04:58.706845 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. 
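[Editor's note] The sudo session above removes the shipped audit rule files and restarts audit-rules.service, leaving auditd with an empty ruleset (the "No rules" lines from both auditctl and augenrules). augenrules works by concatenating /etc/audit/rules.d/*.rules into /etc/audit/audit.rules and loading the result, so a drop-in like the following — a purely hypothetical example, not one of the files deleted above — would be picked up on the next load:

    # /etc/audit/rules.d/10-sshd.rules -- hypothetical example
    # Watch sshd configuration for writes and attribute changes.
    -w /etc/ssh/sshd_config -p wa -k sshd_config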
Nov 8 00:04:58.711682 sudo[2454]: pam_unix(sudo:session): session closed for user root Nov 8 00:04:58.735691 sshd[2450]: pam_unix(sshd:session): session closed for user core Nov 8 00:04:58.742684 systemd[1]: sshd@5-172.31.28.187:22-139.178.89.65:33116.service: Deactivated successfully. Nov 8 00:04:58.748905 systemd[1]: session-6.scope: Deactivated successfully. Nov 8 00:04:58.749209 systemd-logind[2107]: Session 6 logged out. Waiting for processes to exit. Nov 8 00:04:58.751762 systemd-logind[2107]: Removed session 6. Nov 8 00:04:58.765635 systemd[1]: Started sshd@6-172.31.28.187:22-139.178.89.65:33132.service - OpenSSH per-connection server daemon (139.178.89.65:33132). Nov 8 00:04:58.942028 sshd[2486]: Accepted publickey for core from 139.178.89.65 port 33132 ssh2: RSA SHA256:tnEXpnDY8gLTej7GJ+T99WI4otIwvlI9IcMNDF42aqw Nov 8 00:04:58.944577 sshd[2486]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:04:58.952980 systemd-logind[2107]: New session 7 of user core. Nov 8 00:04:58.962743 systemd[1]: Started session-7.scope - Session 7 of User core. Nov 8 00:04:59.069241 sudo[2490]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Nov 8 00:04:59.070536 sudo[2490]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 8 00:04:59.840619 systemd-resolved[2026]: Clock change detected. Flushing caches. Nov 8 00:05:00.192327 systemd[1]: Starting docker.service - Docker Application Container Engine... Nov 8 00:05:00.210243 (dockerd)[2506]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Nov 8 00:05:00.778621 dockerd[2506]: time="2025-11-08T00:05:00.778481540Z" level=info msg="Starting up" Nov 8 00:05:01.668657 systemd[1]: var-lib-docker-metacopy\x2dcheck907506938-merged.mount: Deactivated successfully. Nov 8 00:05:01.679474 dockerd[2506]: time="2025-11-08T00:05:01.679382841Z" level=info msg="Loading containers: start." Nov 8 00:05:01.929603 kernel: Initializing XFRM netlink socket Nov 8 00:05:01.995427 (udev-worker)[2529]: Network interface NamePolicy= disabled on kernel command line. Nov 8 00:05:02.093767 systemd-networkd[1688]: docker0: Link UP Nov 8 00:05:02.117596 dockerd[2506]: time="2025-11-08T00:05:02.117135847Z" level=info msg="Loading containers: done." Nov 8 00:05:02.155399 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3469502083-merged.mount: Deactivated successfully. Nov 8 00:05:02.164255 dockerd[2506]: time="2025-11-08T00:05:02.163503067Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Nov 8 00:05:02.164255 dockerd[2506]: time="2025-11-08T00:05:02.163676479Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Nov 8 00:05:02.164255 dockerd[2506]: time="2025-11-08T00:05:02.163868047Z" level=info msg="Daemon has completed initialization" Nov 8 00:05:02.215322 systemd[1]: Started docker.service - Docker Application Container Engine. 
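[Editor's note] dockerd confirms "API listen on /run/docker.sock" on the first line below, at which point the Engine HTTP API is reachable over that Unix socket. A minimal Go probe of the /_ping endpoint that dials the socket directly instead of pulling in the Docker SDK; the socket path is the one logged, and the "docker" host in the URL is a placeholder the custom transport ignores:

    package main

    import (
        "context"
        "fmt"
        "io"
        "net"
        "net/http"
    )

    func main() {
        // Route every request over the daemon's Unix socket.
        client := &http.Client{
            Transport: &http.Transport{
                DialContext: func(ctx context.Context, _, _ string) (net.Conn, error) {
                    return (&net.Dialer{}).DialContext(ctx, "unix", "/run/docker.sock")
                },
            },
        }

        resp, err := client.Get("http://docker/_ping") // host part is ignored
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Println(resp.Status, string(body)) // a healthy daemon answers 200 "OK"
    }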
Nov 8 00:05:02.216128 dockerd[2506]: time="2025-11-08T00:05:02.215160787Z" level=info msg="API listen on /run/docker.sock" Nov 8 00:05:03.605809 containerd[2134]: time="2025-11-08T00:05:03.605308798Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\"" Nov 8 00:05:04.277789 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2306558514.mount: Deactivated successfully. Nov 8 00:05:05.780032 containerd[2134]: time="2025-11-08T00:05:05.779674909Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:05:05.781941 containerd[2134]: time="2025-11-08T00:05:05.781862029Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.9: active requests=0, bytes read=26363685" Nov 8 00:05:05.782858 containerd[2134]: time="2025-11-08T00:05:05.782776381Z" level=info msg="ImageCreate event name:\"sha256:02ea53851f07db91ed471dab1ab11541f5c294802371cd8f0cfd423cd5c71002\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:05:05.788926 containerd[2134]: time="2025-11-08T00:05:05.788857249Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:05:05.792013 containerd[2134]: time="2025-11-08T00:05:05.791606917Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.9\" with image id \"sha256:02ea53851f07db91ed471dab1ab11541f5c294802371cd8f0cfd423cd5c71002\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\", size \"26360284\" in 2.186237879s" Nov 8 00:05:05.792013 containerd[2134]: time="2025-11-08T00:05:05.791670409Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\" returns image reference \"sha256:02ea53851f07db91ed471dab1ab11541f5c294802371cd8f0cfd423cd5c71002\"" Nov 8 00:05:05.793925 containerd[2134]: time="2025-11-08T00:05:05.792960757Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\"" Nov 8 00:05:07.429702 containerd[2134]: time="2025-11-08T00:05:07.429637537Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:05:07.432247 containerd[2134]: time="2025-11-08T00:05:07.432183265Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.9: active requests=0, bytes read=22531200" Nov 8 00:05:07.433301 containerd[2134]: time="2025-11-08T00:05:07.433225393Z" level=info msg="ImageCreate event name:\"sha256:f0bcbad5082c944520b370596a2384affda710b9d7daf84e8a48352699af8e4b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:05:07.439811 containerd[2134]: time="2025-11-08T00:05:07.439694077Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:05:07.443061 containerd[2134]: time="2025-11-08T00:05:07.442207513Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.9\" with image id \"sha256:f0bcbad5082c944520b370596a2384affda710b9d7daf84e8a48352699af8e4b\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.9\", repo digest 
\"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\", size \"24099975\" in 1.649181476s" Nov 8 00:05:07.443061 containerd[2134]: time="2025-11-08T00:05:07.442636765Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\" returns image reference \"sha256:f0bcbad5082c944520b370596a2384affda710b9d7daf84e8a48352699af8e4b\"" Nov 8 00:05:07.443658 containerd[2134]: time="2025-11-08T00:05:07.443608045Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\"" Nov 8 00:05:07.971124 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Nov 8 00:05:07.984014 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 8 00:05:08.368138 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 00:05:08.378224 (kubelet)[2724]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 8 00:05:08.480649 kubelet[2724]: E1108 00:05:08.480525 2724 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 8 00:05:08.491658 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 8 00:05:08.494504 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 8 00:05:08.893625 containerd[2134]: time="2025-11-08T00:05:08.893523028Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:05:08.895821 containerd[2134]: time="2025-11-08T00:05:08.895763488Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.9: active requests=0, bytes read=17484324" Nov 8 00:05:08.897806 containerd[2134]: time="2025-11-08T00:05:08.897752344Z" level=info msg="ImageCreate event name:\"sha256:1d625baf81b59592006d97a6741bc947698ed222b612ac10efa57b7aa96d2a27\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:05:08.905235 containerd[2134]: time="2025-11-08T00:05:08.904389281Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:05:08.906850 containerd[2134]: time="2025-11-08T00:05:08.906771689Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.9\" with image id \"sha256:1d625baf81b59592006d97a6741bc947698ed222b612ac10efa57b7aa96d2a27\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\", size \"19053117\" in 1.463093048s" Nov 8 00:05:08.906961 containerd[2134]: time="2025-11-08T00:05:08.906848165Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\" returns image reference \"sha256:1d625baf81b59592006d97a6741bc947698ed222b612ac10efa57b7aa96d2a27\"" Nov 8 00:05:08.907519 containerd[2134]: time="2025-11-08T00:05:08.907378097Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\"" Nov 8 00:05:10.162374 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1967500401.mount: Deactivated successfully. 
Nov 8 00:05:10.727239 containerd[2134]: time="2025-11-08T00:05:10.727144026Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:05:10.729872 containerd[2134]: time="2025-11-08T00:05:10.729453942Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.9: active requests=0, bytes read=27417817" Nov 8 00:05:10.730933 containerd[2134]: time="2025-11-08T00:05:10.730860126Z" level=info msg="ImageCreate event name:\"sha256:72b57ec14d31e8422925ef4c3eff44822cdc04a11fd30d13824f1897d83a16d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:05:10.735755 containerd[2134]: time="2025-11-08T00:05:10.735688206Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:05:10.737450 containerd[2134]: time="2025-11-08T00:05:10.737395170Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.9\" with image id \"sha256:72b57ec14d31e8422925ef4c3eff44822cdc04a11fd30d13824f1897d83a16d4\", repo tag \"registry.k8s.io/kube-proxy:v1.32.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\", size \"27416836\" in 1.829944053s" Nov 8 00:05:10.737824 containerd[2134]: time="2025-11-08T00:05:10.737652978Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\" returns image reference \"sha256:72b57ec14d31e8422925ef4c3eff44822cdc04a11fd30d13824f1897d83a16d4\"" Nov 8 00:05:10.739197 containerd[2134]: time="2025-11-08T00:05:10.739141026Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Nov 8 00:05:11.358324 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1384250723.mount: Deactivated successfully. 
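[Editor's note] The PullImage sequence running through this stretch is containerd resolving, fetching, and unpacking each control-plane image; the byte counts and durations it logs work out to pull rates of roughly 8 to 15 MB/s (kube-proxy above: 27417817 bytes in about 1.83 s ≈ 15 MB/s). The same pull can be driven directly with the containerd Go client from the earlier sketch; kubelet's images live in the k8s.io namespace, and WithPullUnpack also unpacks the layers into the overlayfs snapshotter:

    package main

    import (
        "context"
        "fmt"

        "github.com/containerd/containerd"
        "github.com/containerd/containerd/namespaces"
    )

    func main() {
        client, err := containerd.New("/run/containerd/containerd.sock")
        if err != nil {
            panic(err)
        }
        defer client.Close()

        // kubelet pulls into the k8s.io namespace.
        ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

        // pause:3.10 is one of the images pulled just below in the log.
        img, err := client.Pull(ctx, "registry.k8s.io/pause:3.10", containerd.WithPullUnpack)
        if err != nil {
            panic(err)
        }
        fmt.Println("pulled", img.Name())
    }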
Nov 8 00:05:12.702479 containerd[2134]: time="2025-11-08T00:05:12.702417967Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:05:12.715298 containerd[2134]: time="2025-11-08T00:05:12.715231495Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951622" Nov 8 00:05:12.722592 containerd[2134]: time="2025-11-08T00:05:12.721772443Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:05:12.730852 containerd[2134]: time="2025-11-08T00:05:12.730737956Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:05:12.734822 containerd[2134]: time="2025-11-08T00:05:12.733647344Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.994439346s" Nov 8 00:05:12.734822 containerd[2134]: time="2025-11-08T00:05:12.733715636Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Nov 8 00:05:12.735400 containerd[2134]: time="2025-11-08T00:05:12.735351680Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Nov 8 00:05:13.812864 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3847222093.mount: Deactivated successfully. 
Nov 8 00:05:13.822314 containerd[2134]: time="2025-11-08T00:05:13.820727217Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:05:13.822314 containerd[2134]: time="2025-11-08T00:05:13.822258129Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703" Nov 8 00:05:13.823222 containerd[2134]: time="2025-11-08T00:05:13.823177461Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:05:13.828470 containerd[2134]: time="2025-11-08T00:05:13.828412605Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:05:13.829735 containerd[2134]: time="2025-11-08T00:05:13.829666509Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 1.094147477s" Nov 8 00:05:13.829735 containerd[2134]: time="2025-11-08T00:05:13.829726377Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Nov 8 00:05:13.830781 containerd[2134]: time="2025-11-08T00:05:13.830631933Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Nov 8 00:05:14.406325 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1637322950.mount: Deactivated successfully. Nov 8 00:05:16.859654 containerd[2134]: time="2025-11-08T00:05:16.858876780Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:05:16.861465 containerd[2134]: time="2025-11-08T00:05:16.861395388Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67943165" Nov 8 00:05:16.864048 containerd[2134]: time="2025-11-08T00:05:16.863976948Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:05:16.870595 containerd[2134]: time="2025-11-08T00:05:16.870472092Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:05:16.873058 containerd[2134]: time="2025-11-08T00:05:16.873006336Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 3.042277923s" Nov 8 00:05:16.873321 containerd[2134]: time="2025-11-08T00:05:16.873181032Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\"" Nov 8 00:05:18.742180 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
Nov 8 00:05:18.751050 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 8 00:05:19.095818 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 00:05:19.114149 (kubelet)[2882]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 8 00:05:19.202103 kubelet[2882]: E1108 00:05:19.202044 2882 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 8 00:05:19.207043 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 8 00:05:19.207440 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 8 00:05:24.596080 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Nov 8 00:05:25.164711 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 00:05:25.176057 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 8 00:05:25.223124 systemd[1]: Reloading requested from client PID 2902 ('systemctl') (unit session-7.scope)... Nov 8 00:05:25.223148 systemd[1]: Reloading... Nov 8 00:05:25.427615 zram_generator::config[2945]: No configuration found. Nov 8 00:05:25.695080 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 8 00:05:25.866676 systemd[1]: Reloading finished in 642 ms. Nov 8 00:05:25.955037 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Nov 8 00:05:25.955902 systemd[1]: kubelet.service: Failed with result 'signal'. Nov 8 00:05:25.956953 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 00:05:25.972291 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 8 00:05:26.284897 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 00:05:26.303277 (kubelet)[3017]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 8 00:05:26.379919 kubelet[3017]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 8 00:05:26.380504 kubelet[3017]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 8 00:05:26.380622 kubelet[3017]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
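[Editor's note] The systemctl daemon-reload at 00:05:25 followed by a kubelet restart that suddenly carries real flags (the deprecation warnings just above) is the usual kubeadm arrangement: kubelet.service itself stays generic, and a drop-in injects --config and the runtime flags through environment variables, which is also why the earlier runs warned that KUBELET_KUBEADM_ARGS and KUBELET_EXTRA_ARGS were unset. The stock kubeadm drop-in looks roughly like this — a sketch from kubeadm's documented packaging, not read from this host, and paths vary by distribution:

    # /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (typical layout)
    [Service]
    Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf"
    Environment="KUBELET_CONFIG_ARGS=--config=/var/lib/kubelet/config.yaml"
    # kubeadm writes the runtime flags here at init/join time.
    EnvironmentFile=-/var/lib/kubelet/kubeadm-flags.env
    # An empty ExecStart= clears the unit's own ExecStart before redefining it.
    ExecStart=
    ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS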
Nov 8 00:05:26.380872 kubelet[3017]: I1108 00:05:26.380829 3017 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 8 00:05:28.503588 kubelet[3017]: I1108 00:05:28.502976 3017 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Nov 8 00:05:28.503588 kubelet[3017]: I1108 00:05:28.503028 3017 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 8 00:05:28.503588 kubelet[3017]: I1108 00:05:28.503480 3017 server.go:954] "Client rotation is on, will bootstrap in background" Nov 8 00:05:28.551446 kubelet[3017]: E1108 00:05:28.551382 3017 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.31.28.187:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.28.187:6443: connect: connection refused" logger="UnhandledError" Nov 8 00:05:28.554981 kubelet[3017]: I1108 00:05:28.554939 3017 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 8 00:05:28.570462 kubelet[3017]: E1108 00:05:28.570390 3017 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Nov 8 00:05:28.570462 kubelet[3017]: I1108 00:05:28.570461 3017 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Nov 8 00:05:28.576599 kubelet[3017]: I1108 00:05:28.575935 3017 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Nov 8 00:05:28.576909 kubelet[3017]: I1108 00:05:28.576861 3017 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 8 00:05:28.577297 kubelet[3017]: I1108 00:05:28.577009 3017 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-28-187","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Nov 8 00:05:28.577702 kubelet[3017]: I1108 00:05:28.577681 3017 topology_manager.go:138] "Creating topology manager with none policy" Nov 8 00:05:28.577811 kubelet[3017]: I1108 00:05:28.577794 3017 container_manager_linux.go:304] "Creating device plugin manager" Nov 8 00:05:28.578238 kubelet[3017]: I1108 00:05:28.578217 3017 state_mem.go:36] "Initialized new in-memory state store" Nov 8 00:05:28.584299 kubelet[3017]: I1108 00:05:28.584263 3017 kubelet.go:446] "Attempting to sync node with API server" Nov 8 00:05:28.584480 kubelet[3017]: I1108 00:05:28.584460 3017 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 8 00:05:28.584624 kubelet[3017]: I1108 00:05:28.584600 3017 kubelet.go:352] "Adding apiserver pod source" Nov 8 00:05:28.584733 kubelet[3017]: I1108 00:05:28.584715 3017 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 8 00:05:28.590200 kubelet[3017]: W1108 00:05:28.590108 3017 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.28.187:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-28-187&limit=500&resourceVersion=0": dial tcp 172.31.28.187:6443: connect: connection refused Nov 8 00:05:28.590323 kubelet[3017]: E1108 00:05:28.590212 3017 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.28.187:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-28-187&limit=500&resourceVersion=0\": dial tcp 172.31.28.187:6443: connect: connection refused" logger="UnhandledError" Nov 8 00:05:28.593147 kubelet[3017]: W1108 00:05:28.593059 
3017 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.28.187:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.28.187:6443: connect: connection refused Nov 8 00:05:28.593310 kubelet[3017]: E1108 00:05:28.593159 3017 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.28.187:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.28.187:6443: connect: connection refused" logger="UnhandledError" Nov 8 00:05:28.593310 kubelet[3017]: I1108 00:05:28.593301 3017 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Nov 8 00:05:28.595767 kubelet[3017]: I1108 00:05:28.595241 3017 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Nov 8 00:05:28.595767 kubelet[3017]: W1108 00:05:28.595475 3017 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Nov 8 00:05:28.599035 kubelet[3017]: I1108 00:05:28.598937 3017 watchdog_linux.go:99] "Systemd watchdog is not enabled" Nov 8 00:05:28.599035 kubelet[3017]: I1108 00:05:28.599000 3017 server.go:1287] "Started kubelet" Nov 8 00:05:28.602258 kubelet[3017]: I1108 00:05:28.602195 3017 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 8 00:05:28.608255 kubelet[3017]: I1108 00:05:28.608205 3017 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Nov 8 00:05:28.611617 kubelet[3017]: I1108 00:05:28.609974 3017 server.go:479] "Adding debug handlers to kubelet server" Nov 8 00:05:28.611879 kubelet[3017]: I1108 00:05:28.611807 3017 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 8 00:05:28.612324 kubelet[3017]: I1108 00:05:28.612294 3017 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 8 00:05:28.614752 kubelet[3017]: I1108 00:05:28.614690 3017 volume_manager.go:297] "Starting Kubelet Volume Manager" Nov 8 00:05:28.616224 kubelet[3017]: E1108 00:05:28.615256 3017 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-28-187\" not found" Nov 8 00:05:28.616224 kubelet[3017]: I1108 00:05:28.615868 3017 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Nov 8 00:05:28.616224 kubelet[3017]: I1108 00:05:28.615955 3017 reconciler.go:26] "Reconciler: start to sync state" Nov 8 00:05:28.618363 kubelet[3017]: I1108 00:05:28.618319 3017 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 8 00:05:28.622628 kubelet[3017]: E1108 00:05:28.622154 3017 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.28.187:6443/api/v1/namespaces/default/events\": dial tcp 172.31.28.187:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-28-187.1875df47c956691e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-28-187,UID:ip-172-31-28-187,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting 
kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-28-187,},FirstTimestamp:2025-11-08 00:05:28.598972702 +0000 UTC m=+2.288666352,LastTimestamp:2025-11-08 00:05:28.598972702 +0000 UTC m=+2.288666352,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-28-187,}" Nov 8 00:05:28.623362 kubelet[3017]: W1108 00:05:28.623298 3017 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.28.187:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.28.187:6443: connect: connection refused Nov 8 00:05:28.623539 kubelet[3017]: E1108 00:05:28.623510 3017 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.28.187:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.28.187:6443: connect: connection refused" logger="UnhandledError" Nov 8 00:05:28.623796 kubelet[3017]: E1108 00:05:28.623759 3017 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.28.187:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-28-187?timeout=10s\": dial tcp 172.31.28.187:6443: connect: connection refused" interval="200ms" Nov 8 00:05:28.624201 kubelet[3017]: I1108 00:05:28.624176 3017 factory.go:221] Registration of the systemd container factory successfully Nov 8 00:05:28.624885 kubelet[3017]: I1108 00:05:28.624847 3017 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 8 00:05:28.626441 kubelet[3017]: E1108 00:05:28.626404 3017 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 8 00:05:28.627041 kubelet[3017]: I1108 00:05:28.627009 3017 factory.go:221] Registration of the containerd container factory successfully Nov 8 00:05:28.640040 kubelet[3017]: I1108 00:05:28.639961 3017 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Nov 8 00:05:28.642610 kubelet[3017]: I1108 00:05:28.642497 3017 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Nov 8 00:05:28.642610 kubelet[3017]: I1108 00:05:28.642552 3017 status_manager.go:227] "Starting to sync pod status with apiserver" Nov 8 00:05:28.642610 kubelet[3017]: I1108 00:05:28.642613 3017 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
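The repeated "connect: connection refused" failures above, together with the lease-controller retries that follow (interval="200ms" here, then "400ms", "800ms", and "1.6s" later in the log), show kubelet dialing the apiserver at 172.31.28.187:6443 before the kube-apiserver static pod has started listening, backing off with a doubling interval between attempts. A minimal standalone Go sketch of the same probe-with-doubling-backoff pattern; the address and initial interval are taken from the log, while the attempt count and cap are illustrative, not kubelet's actual values:

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        const addr = "172.31.28.187:6443"  // apiserver endpoint from the log
        interval := 200 * time.Millisecond // first retry interval observed above
        for attempt := 1; attempt <= 5; attempt++ {
            conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
            if err == nil {
                conn.Close()
                fmt.Println("apiserver is accepting connections")
                return
            }
            fmt.Printf("attempt %d: %v; retrying in %v\n", attempt, err, interval)
            time.Sleep(interval)
            interval *= 2 // doubling matches the 200ms -> 400ms -> 800ms -> 1.6s progression
            if interval > 7*time.Second {
                interval = 7 * time.Second // illustrative cap, not kubelet's
            }
        }
    }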
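The "Creating Container Manager object based on Node Config" entry further up dumps kubelet's effective HardEvictionThresholds as JSON: memory.available < 100Mi, nodefs.available < 10%, nodefs.inodesFree < 5%, imagefs.available < 15%, imagefs.inodesFree < 5% (the kubelet defaults). A small sketch that decodes that fragment into readable form; the struct below is hand-rolled for illustration and is not kubelet's internal type, and the raw JSON is trimmed to the fields of interest:

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // threshold mirrors just the fields of interest from the logged nodeConfig.
    type threshold struct {
        Signal   string
        Operator string
        Value    struct {
            Quantity   *string
            Percentage float64
        }
    }

    func main() {
        // Trimmed verbatim from the "HardEvictionThresholds" array in the log.
        raw := `[
          {"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0}},
          {"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1}},
          {"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05}},
          {"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15}},
          {"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05}}
        ]`
        var ts []threshold
        if err := json.Unmarshal([]byte(raw), &ts); err != nil {
            panic(err)
        }
        for _, t := range ts {
            if t.Value.Quantity != nil {
                fmt.Printf("evict when %s %s %s\n", t.Signal, t.Operator, *t.Value.Quantity)
            } else {
                fmt.Printf("evict when %s %s %.0f%%\n", t.Signal, t.Operator, t.Value.Percentage*100)
            }
        }
    }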
Nov 8 00:05:28.642873 kubelet[3017]: I1108 00:05:28.642628 3017 kubelet.go:2382] "Starting kubelet main sync loop" Nov 8 00:05:28.642873 kubelet[3017]: E1108 00:05:28.642701 3017 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 8 00:05:28.681584 kubelet[3017]: W1108 00:05:28.680802 3017 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.28.187:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.28.187:6443: connect: connection refused Nov 8 00:05:28.681584 kubelet[3017]: E1108 00:05:28.680895 3017 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.28.187:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.28.187:6443: connect: connection refused" logger="UnhandledError" Nov 8 00:05:28.695184 kubelet[3017]: I1108 00:05:28.695151 3017 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 8 00:05:28.695540 kubelet[3017]: I1108 00:05:28.695467 3017 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 8 00:05:28.695740 kubelet[3017]: I1108 00:05:28.695723 3017 state_mem.go:36] "Initialized new in-memory state store" Nov 8 00:05:28.700539 kubelet[3017]: I1108 00:05:28.700512 3017 policy_none.go:49] "None policy: Start" Nov 8 00:05:28.700696 kubelet[3017]: I1108 00:05:28.700677 3017 memory_manager.go:186] "Starting memorymanager" policy="None" Nov 8 00:05:28.700795 kubelet[3017]: I1108 00:05:28.700778 3017 state_mem.go:35] "Initializing new in-memory state store" Nov 8 00:05:28.714913 kubelet[3017]: I1108 00:05:28.714874 3017 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Nov 8 00:05:28.715518 kubelet[3017]: I1108 00:05:28.715451 3017 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 8 00:05:28.715725 kubelet[3017]: I1108 00:05:28.715477 3017 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 8 00:05:28.718354 kubelet[3017]: I1108 00:05:28.718145 3017 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 8 00:05:28.718476 kubelet[3017]: E1108 00:05:28.718447 3017 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Nov 8 00:05:28.718530 kubelet[3017]: E1108 00:05:28.718497 3017 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-28-187\" not found" Nov 8 00:05:28.760615 kubelet[3017]: E1108 00:05:28.759278 3017 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-28-187\" not found" node="ip-172-31-28-187" Nov 8 00:05:28.760615 kubelet[3017]: E1108 00:05:28.760004 3017 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-28-187\" not found" node="ip-172-31-28-187" Nov 8 00:05:28.765738 kubelet[3017]: E1108 00:05:28.765644 3017 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-28-187\" not found" node="ip-172-31-28-187" Nov 8 00:05:28.816843 kubelet[3017]: I1108 00:05:28.816800 3017 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/54dfacb1860fb1b075746674f9385909-kubeconfig\") pod \"kube-scheduler-ip-172-31-28-187\" (UID: \"54dfacb1860fb1b075746674f9385909\") " pod="kube-system/kube-scheduler-ip-172-31-28-187" Nov 8 00:05:28.817021 kubelet[3017]: I1108 00:05:28.816993 3017 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3e012f342be822a9cf510d94bb2d9ea4-k8s-certs\") pod \"kube-apiserver-ip-172-31-28-187\" (UID: \"3e012f342be822a9cf510d94bb2d9ea4\") " pod="kube-system/kube-apiserver-ip-172-31-28-187" Nov 8 00:05:28.817175 kubelet[3017]: I1108 00:05:28.817147 3017 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/40d6661684b6a60cb66075405dddfa6d-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-28-187\" (UID: \"40d6661684b6a60cb66075405dddfa6d\") " pod="kube-system/kube-controller-manager-ip-172-31-28-187" Nov 8 00:05:28.817343 kubelet[3017]: I1108 00:05:28.817317 3017 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/40d6661684b6a60cb66075405dddfa6d-k8s-certs\") pod \"kube-controller-manager-ip-172-31-28-187\" (UID: \"40d6661684b6a60cb66075405dddfa6d\") " pod="kube-system/kube-controller-manager-ip-172-31-28-187" Nov 8 00:05:28.817473 kubelet[3017]: I1108 00:05:28.817450 3017 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/40d6661684b6a60cb66075405dddfa6d-kubeconfig\") pod \"kube-controller-manager-ip-172-31-28-187\" (UID: \"40d6661684b6a60cb66075405dddfa6d\") " pod="kube-system/kube-controller-manager-ip-172-31-28-187" Nov 8 00:05:28.817620 kubelet[3017]: I1108 00:05:28.817594 3017 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/40d6661684b6a60cb66075405dddfa6d-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-28-187\" (UID: \"40d6661684b6a60cb66075405dddfa6d\") " pod="kube-system/kube-controller-manager-ip-172-31-28-187" Nov 8 00:05:28.817772 kubelet[3017]: I1108 00:05:28.817747 3017 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3e012f342be822a9cf510d94bb2d9ea4-ca-certs\") pod \"kube-apiserver-ip-172-31-28-187\" (UID: \"3e012f342be822a9cf510d94bb2d9ea4\") " pod="kube-system/kube-apiserver-ip-172-31-28-187" Nov 8 00:05:28.818488 kubelet[3017]: I1108 00:05:28.818260 3017 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3e012f342be822a9cf510d94bb2d9ea4-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-28-187\" (UID: \"3e012f342be822a9cf510d94bb2d9ea4\") " pod="kube-system/kube-apiserver-ip-172-31-28-187" Nov 8 00:05:28.818488 kubelet[3017]: I1108 00:05:28.818302 3017 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/40d6661684b6a60cb66075405dddfa6d-ca-certs\") pod \"kube-controller-manager-ip-172-31-28-187\" (UID: \"40d6661684b6a60cb66075405dddfa6d\") " pod="kube-system/kube-controller-manager-ip-172-31-28-187" Nov 8 00:05:28.818697 kubelet[3017]: I1108 00:05:28.818517 3017 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-28-187" Nov 8 00:05:28.819426 kubelet[3017]: E1108 00:05:28.819387 3017 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.28.187:6443/api/v1/nodes\": dial tcp 172.31.28.187:6443: connect: connection refused" node="ip-172-31-28-187" Nov 8 00:05:28.824905 kubelet[3017]: E1108 00:05:28.824858 3017 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.28.187:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-28-187?timeout=10s\": dial tcp 172.31.28.187:6443: connect: connection refused" interval="400ms" Nov 8 00:05:29.021971 kubelet[3017]: I1108 00:05:29.021738 3017 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-28-187" Nov 8 00:05:29.022335 kubelet[3017]: E1108 00:05:29.022261 3017 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.28.187:6443/api/v1/nodes\": dial tcp 172.31.28.187:6443: connect: connection refused" node="ip-172-31-28-187" Nov 8 00:05:29.061646 containerd[2134]: time="2025-11-08T00:05:29.061268217Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-28-187,Uid:40d6661684b6a60cb66075405dddfa6d,Namespace:kube-system,Attempt:0,}" Nov 8 00:05:29.062670 containerd[2134]: time="2025-11-08T00:05:29.062400849Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-28-187,Uid:3e012f342be822a9cf510d94bb2d9ea4,Namespace:kube-system,Attempt:0,}" Nov 8 00:05:29.068061 containerd[2134]: time="2025-11-08T00:05:29.067828749Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-28-187,Uid:54dfacb1860fb1b075746674f9385909,Namespace:kube-system,Attempt:0,}" Nov 8 00:05:29.225795 kubelet[3017]: E1108 00:05:29.225711 3017 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.28.187:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-28-187?timeout=10s\": dial tcp 172.31.28.187:6443: connect: connection refused" interval="800ms" Nov 8 00:05:29.424277 kubelet[3017]: I1108 00:05:29.424216 3017 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-28-187" Nov 8 00:05:29.424770 kubelet[3017]: 
E1108 00:05:29.424725 3017 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.28.187:6443/api/v1/nodes\": dial tcp 172.31.28.187:6443: connect: connection refused" node="ip-172-31-28-187" Nov 8 00:05:29.472748 kubelet[3017]: W1108 00:05:29.472620 3017 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.28.187:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.28.187:6443: connect: connection refused Nov 8 00:05:29.472932 kubelet[3017]: E1108 00:05:29.472756 3017 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.28.187:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.28.187:6443: connect: connection refused" logger="UnhandledError" Nov 8 00:05:29.570449 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2430644285.mount: Deactivated successfully. Nov 8 00:05:29.585586 containerd[2134]: time="2025-11-08T00:05:29.583910987Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 8 00:05:29.591409 containerd[2134]: time="2025-11-08T00:05:29.591366215Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" Nov 8 00:05:29.593134 containerd[2134]: time="2025-11-08T00:05:29.593091695Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 8 00:05:29.596433 containerd[2134]: time="2025-11-08T00:05:29.596358359Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Nov 8 00:05:29.607474 containerd[2134]: time="2025-11-08T00:05:29.607421519Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Nov 8 00:05:29.607845 containerd[2134]: time="2025-11-08T00:05:29.607804859Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 8 00:05:29.614904 containerd[2134]: time="2025-11-08T00:05:29.614834123Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 546.89231ms" Nov 8 00:05:29.617050 containerd[2134]: time="2025-11-08T00:05:29.616999907Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 8 00:05:29.620444 containerd[2134]: time="2025-11-08T00:05:29.620385131Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 557.895026ms" Nov 8 00:05:29.622760 containerd[2134]: 
time="2025-11-08T00:05:29.622666643Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 8 00:05:29.634952 containerd[2134]: time="2025-11-08T00:05:29.634816487Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 573.441038ms" Nov 8 00:05:29.672836 kubelet[3017]: W1108 00:05:29.672766 3017 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.28.187:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.28.187:6443: connect: connection refused Nov 8 00:05:29.673928 kubelet[3017]: E1108 00:05:29.673600 3017 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.28.187:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.28.187:6443: connect: connection refused" logger="UnhandledError" Nov 8 00:05:29.911205 containerd[2134]: time="2025-11-08T00:05:29.910821625Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:05:29.911205 containerd[2134]: time="2025-11-08T00:05:29.910924237Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:05:29.911205 containerd[2134]: time="2025-11-08T00:05:29.910955389Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:05:29.914694 containerd[2134]: time="2025-11-08T00:05:29.914210557Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:05:29.914694 containerd[2134]: time="2025-11-08T00:05:29.914303881Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:05:29.914694 containerd[2134]: time="2025-11-08T00:05:29.914328949Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:05:29.914694 containerd[2134]: time="2025-11-08T00:05:29.914484781Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:05:29.915802 containerd[2134]: time="2025-11-08T00:05:29.915530737Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:05:29.924254 containerd[2134]: time="2025-11-08T00:05:29.923864161Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:05:29.924254 containerd[2134]: time="2025-11-08T00:05:29.923950765Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:05:29.924254 containerd[2134]: time="2025-11-08T00:05:29.923976205Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:05:29.924254 containerd[2134]: time="2025-11-08T00:05:29.924128905Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:05:29.938842 kubelet[3017]: W1108 00:05:29.938771 3017 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.28.187:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-28-187&limit=500&resourceVersion=0": dial tcp 172.31.28.187:6443: connect: connection refused Nov 8 00:05:29.939278 kubelet[3017]: E1108 00:05:29.939147 3017 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.28.187:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-28-187&limit=500&resourceVersion=0\": dial tcp 172.31.28.187:6443: connect: connection refused" logger="UnhandledError" Nov 8 00:05:30.021595 kubelet[3017]: W1108 00:05:30.021497 3017 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.28.187:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.28.187:6443: connect: connection refused Nov 8 00:05:30.021772 kubelet[3017]: E1108 00:05:30.021614 3017 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.28.187:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.28.187:6443: connect: connection refused" logger="UnhandledError" Nov 8 00:05:30.028180 kubelet[3017]: E1108 00:05:30.028105 3017 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.28.187:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-28-187?timeout=10s\": dial tcp 172.31.28.187:6443: connect: connection refused" interval="1.6s" Nov 8 00:05:30.066759 containerd[2134]: time="2025-11-08T00:05:30.066691810Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-28-187,Uid:40d6661684b6a60cb66075405dddfa6d,Namespace:kube-system,Attempt:0,} returns sandbox id \"ab37493301114debfa5c6983e69a998cb114b993b2820dafbf04c9184ed722ee\"" Nov 8 00:05:30.082325 containerd[2134]: time="2025-11-08T00:05:30.081843658Z" level=info msg="CreateContainer within sandbox \"ab37493301114debfa5c6983e69a998cb114b993b2820dafbf04c9184ed722ee\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Nov 8 00:05:30.089746 containerd[2134]: time="2025-11-08T00:05:30.089156182Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-28-187,Uid:54dfacb1860fb1b075746674f9385909,Namespace:kube-system,Attempt:0,} returns sandbox id \"145a444fd072d39f3615a6bc889f37ea2026788e24ac650b346375114033e3bb\"" Nov 8 00:05:30.097127 containerd[2134]: time="2025-11-08T00:05:30.096957742Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-28-187,Uid:3e012f342be822a9cf510d94bb2d9ea4,Namespace:kube-system,Attempt:0,} returns sandbox id \"783d168dd2aa39b64bb281023b41ea9f5a12381fe1e221eba5aa29d67e42d9de\"" Nov 8 00:05:30.100682 containerd[2134]: 
time="2025-11-08T00:05:30.100469818Z" level=info msg="CreateContainer within sandbox \"145a444fd072d39f3615a6bc889f37ea2026788e24ac650b346375114033e3bb\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Nov 8 00:05:30.104155 containerd[2134]: time="2025-11-08T00:05:30.104043634Z" level=info msg="CreateContainer within sandbox \"783d168dd2aa39b64bb281023b41ea9f5a12381fe1e221eba5aa29d67e42d9de\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Nov 8 00:05:30.133528 containerd[2134]: time="2025-11-08T00:05:30.133305238Z" level=info msg="CreateContainer within sandbox \"ab37493301114debfa5c6983e69a998cb114b993b2820dafbf04c9184ed722ee\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"0f51fe1f4aca9cf135085e8172204f5d0ac21a91b3a75b91a52e5fc2989fc44a\"" Nov 8 00:05:30.134699 containerd[2134]: time="2025-11-08T00:05:30.134344018Z" level=info msg="StartContainer for \"0f51fe1f4aca9cf135085e8172204f5d0ac21a91b3a75b91a52e5fc2989fc44a\"" Nov 8 00:05:30.153143 containerd[2134]: time="2025-11-08T00:05:30.153087778Z" level=info msg="CreateContainer within sandbox \"145a444fd072d39f3615a6bc889f37ea2026788e24ac650b346375114033e3bb\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"773419e36f5d69778e680842e338e39b2bd7ba797cb58354c330f1a400f82e3b\"" Nov 8 00:05:30.154921 containerd[2134]: time="2025-11-08T00:05:30.154708546Z" level=info msg="StartContainer for \"773419e36f5d69778e680842e338e39b2bd7ba797cb58354c330f1a400f82e3b\"" Nov 8 00:05:30.162003 containerd[2134]: time="2025-11-08T00:05:30.161484898Z" level=info msg="CreateContainer within sandbox \"783d168dd2aa39b64bb281023b41ea9f5a12381fe1e221eba5aa29d67e42d9de\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"87f6dec73072794f53c24e02eaf689f3ea4ec7a6486d6602389d4daf095bbb96\"" Nov 8 00:05:30.167625 containerd[2134]: time="2025-11-08T00:05:30.165893758Z" level=info msg="StartContainer for \"87f6dec73072794f53c24e02eaf689f3ea4ec7a6486d6602389d4daf095bbb96\"" Nov 8 00:05:30.235245 kubelet[3017]: I1108 00:05:30.235196 3017 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-28-187" Nov 8 00:05:30.236474 kubelet[3017]: E1108 00:05:30.236397 3017 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.28.187:6443/api/v1/nodes\": dial tcp 172.31.28.187:6443: connect: connection refused" node="ip-172-31-28-187" Nov 8 00:05:30.320443 containerd[2134]: time="2025-11-08T00:05:30.320389283Z" level=info msg="StartContainer for \"0f51fe1f4aca9cf135085e8172204f5d0ac21a91b3a75b91a52e5fc2989fc44a\" returns successfully" Nov 8 00:05:30.392766 containerd[2134]: time="2025-11-08T00:05:30.392175515Z" level=info msg="StartContainer for \"87f6dec73072794f53c24e02eaf689f3ea4ec7a6486d6602389d4daf095bbb96\" returns successfully" Nov 8 00:05:30.411683 containerd[2134]: time="2025-11-08T00:05:30.410864843Z" level=info msg="StartContainer for \"773419e36f5d69778e680842e338e39b2bd7ba797cb58354c330f1a400f82e3b\" returns successfully" Nov 8 00:05:30.702533 kubelet[3017]: E1108 00:05:30.702492 3017 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-28-187\" not found" node="ip-172-31-28-187" Nov 8 00:05:30.715579 kubelet[3017]: E1108 00:05:30.713379 3017 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-28-187\" not found" node="ip-172-31-28-187" Nov 8 
00:05:30.721014 kubelet[3017]: E1108 00:05:30.720731 3017 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-28-187\" not found" node="ip-172-31-28-187" Nov 8 00:05:31.724728 kubelet[3017]: E1108 00:05:31.724153 3017 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-28-187\" not found" node="ip-172-31-28-187" Nov 8 00:05:31.728580 kubelet[3017]: E1108 00:05:31.726943 3017 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-28-187\" not found" node="ip-172-31-28-187" Nov 8 00:05:31.734596 kubelet[3017]: E1108 00:05:31.731788 3017 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-28-187\" not found" node="ip-172-31-28-187" Nov 8 00:05:31.840577 kubelet[3017]: I1108 00:05:31.839098 3017 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-28-187" Nov 8 00:05:32.722581 kubelet[3017]: E1108 00:05:32.721781 3017 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-28-187\" not found" node="ip-172-31-28-187" Nov 8 00:05:34.594071 kubelet[3017]: I1108 00:05:34.593969 3017 apiserver.go:52] "Watching apiserver" Nov 8 00:05:34.687954 kubelet[3017]: E1108 00:05:34.687875 3017 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-28-187\" not found" node="ip-172-31-28-187" Nov 8 00:05:34.716504 kubelet[3017]: I1108 00:05:34.716445 3017 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Nov 8 00:05:34.724854 kubelet[3017]: I1108 00:05:34.723125 3017 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-28-187" Nov 8 00:05:34.816935 kubelet[3017]: I1108 00:05:34.816890 3017 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-28-187" Nov 8 00:05:34.910209 kubelet[3017]: E1108 00:05:34.910082 3017 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-28-187\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ip-172-31-28-187" Nov 8 00:05:34.910209 kubelet[3017]: I1108 00:05:34.910149 3017 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-28-187" Nov 8 00:05:34.925607 kubelet[3017]: E1108 00:05:34.925198 3017 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-28-187\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ip-172-31-28-187" Nov 8 00:05:34.925607 kubelet[3017]: I1108 00:05:34.925248 3017 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-28-187" Nov 8 00:05:34.929464 kubelet[3017]: E1108 00:05:34.929407 3017 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-28-187\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ip-172-31-28-187" Nov 8 00:05:36.315150 kubelet[3017]: I1108 00:05:36.315102 3017 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-28-187" Nov 8 00:05:36.777912 systemd[1]: Reloading requested from client PID 3296 ('systemctl') (unit 
session-7.scope)... Nov 8 00:05:36.777944 systemd[1]: Reloading... Nov 8 00:05:36.959631 zram_generator::config[3351]: No configuration found. Nov 8 00:05:37.164515 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 8 00:05:37.404822 systemd[1]: Reloading finished in 626 ms. Nov 8 00:05:37.460483 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Nov 8 00:05:37.477518 systemd[1]: kubelet.service: Deactivated successfully. Nov 8 00:05:37.479084 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 00:05:37.489340 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 8 00:05:37.849888 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 00:05:37.866351 (kubelet)[3406]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 8 00:05:37.972099 kubelet[3406]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 8 00:05:37.972099 kubelet[3406]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 8 00:05:37.972099 kubelet[3406]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 8 00:05:37.972099 kubelet[3406]: I1108 00:05:37.971944 3406 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 8 00:05:37.990168 kubelet[3406]: I1108 00:05:37.989229 3406 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Nov 8 00:05:37.990168 kubelet[3406]: I1108 00:05:37.989282 3406 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 8 00:05:37.991180 kubelet[3406]: I1108 00:05:37.991011 3406 server.go:954] "Client rotation is on, will bootstrap in background" Nov 8 00:05:37.993992 kubelet[3406]: I1108 00:05:37.993925 3406 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Nov 8 00:05:37.999665 kubelet[3406]: I1108 00:05:37.999616 3406 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 8 00:05:38.008286 kubelet[3406]: E1108 00:05:38.008231 3406 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Nov 8 00:05:38.008286 kubelet[3406]: I1108 00:05:38.008286 3406 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Nov 8 00:05:38.013714 kubelet[3406]: I1108 00:05:38.013622 3406 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Nov 8 00:05:38.014684 kubelet[3406]: I1108 00:05:38.014575 3406 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 8 00:05:38.015086 kubelet[3406]: I1108 00:05:38.014631 3406 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-28-187","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Nov 8 00:05:38.015086 kubelet[3406]: I1108 00:05:38.014956 3406 topology_manager.go:138] "Creating topology manager with none policy" Nov 8 00:05:38.015086 kubelet[3406]: I1108 00:05:38.014976 3406 container_manager_linux.go:304] "Creating device plugin manager" Nov 8 00:05:38.015086 kubelet[3406]: I1108 00:05:38.015046 3406 state_mem.go:36] "Initialized new in-memory state store" Nov 8 00:05:38.015421 kubelet[3406]: I1108 00:05:38.015307 3406 kubelet.go:446] "Attempting to sync node with API server" Nov 8 00:05:38.015421 kubelet[3406]: I1108 00:05:38.015330 3406 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 8 00:05:38.018933 kubelet[3406]: I1108 00:05:38.016728 3406 kubelet.go:352] "Adding apiserver pod source" Nov 8 00:05:38.018933 kubelet[3406]: I1108 00:05:38.016780 3406 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 8 00:05:38.035165 kubelet[3406]: I1108 00:05:38.035101 3406 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Nov 8 00:05:38.037234 kubelet[3406]: I1108 00:05:38.036006 3406 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Nov 8 00:05:38.037234 kubelet[3406]: I1108 00:05:38.036852 3406 watchdog_linux.go:99] "Systemd watchdog is not enabled" Nov 8 00:05:38.037234 kubelet[3406]: I1108 00:05:38.036899 3406 server.go:1287] "Started kubelet" Nov 8 00:05:38.043584 kubelet[3406]: I1108 00:05:38.043327 3406 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 8 00:05:38.053742 kubelet[3406]: I1108 00:05:38.053671 3406 server.go:169] "Starting to listen" 
address="0.0.0.0" port=10250 Nov 8 00:05:38.056512 kubelet[3406]: I1108 00:05:38.056449 3406 server.go:479] "Adding debug handlers to kubelet server" Nov 8 00:05:38.058080 kubelet[3406]: I1108 00:05:38.057506 3406 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 8 00:05:38.065628 kubelet[3406]: I1108 00:05:38.062439 3406 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 8 00:05:38.065628 kubelet[3406]: I1108 00:05:38.063222 3406 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 8 00:05:38.066598 kubelet[3406]: I1108 00:05:38.066445 3406 volume_manager.go:297] "Starting Kubelet Volume Manager" Nov 8 00:05:38.067592 kubelet[3406]: I1108 00:05:38.066843 3406 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Nov 8 00:05:38.067592 kubelet[3406]: I1108 00:05:38.067083 3406 reconciler.go:26] "Reconciler: start to sync state" Nov 8 00:05:38.067592 kubelet[3406]: E1108 00:05:38.067135 3406 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-28-187\" not found" Nov 8 00:05:38.077634 kubelet[3406]: I1108 00:05:38.077089 3406 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Nov 8 00:05:38.079786 kubelet[3406]: I1108 00:05:38.079426 3406 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Nov 8 00:05:38.079786 kubelet[3406]: I1108 00:05:38.079481 3406 status_manager.go:227] "Starting to sync pod status with apiserver" Nov 8 00:05:38.079786 kubelet[3406]: I1108 00:05:38.079515 3406 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Nov 8 00:05:38.079786 kubelet[3406]: I1108 00:05:38.079529 3406 kubelet.go:2382] "Starting kubelet main sync loop" Nov 8 00:05:38.079786 kubelet[3406]: E1108 00:05:38.079620 3406 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 8 00:05:38.098274 kubelet[3406]: I1108 00:05:38.097819 3406 factory.go:221] Registration of the systemd container factory successfully Nov 8 00:05:38.098274 kubelet[3406]: I1108 00:05:38.098038 3406 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 8 00:05:38.106605 kubelet[3406]: I1108 00:05:38.104144 3406 factory.go:221] Registration of the containerd container factory successfully Nov 8 00:05:38.153030 kubelet[3406]: E1108 00:05:38.152944 3406 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 8 00:05:38.180345 kubelet[3406]: E1108 00:05:38.179893 3406 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Nov 8 00:05:38.287310 kubelet[3406]: I1108 00:05:38.287259 3406 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 8 00:05:38.287310 kubelet[3406]: I1108 00:05:38.287296 3406 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 8 00:05:38.287511 kubelet[3406]: I1108 00:05:38.287333 3406 state_mem.go:36] "Initialized new in-memory state store" Nov 8 00:05:38.288007 kubelet[3406]: I1108 00:05:38.287809 3406 state_mem.go:88] "Updated default CPUSet" cpuSet="" Nov 8 00:05:38.288080 kubelet[3406]: I1108 00:05:38.288004 3406 state_mem.go:96] "Updated CPUSet assignments" assignments={} Nov 8 00:05:38.288080 kubelet[3406]: I1108 00:05:38.288042 3406 policy_none.go:49] "None policy: Start" Nov 8 00:05:38.288080 kubelet[3406]: I1108 00:05:38.288061 3406 memory_manager.go:186] "Starting memorymanager" policy="None" Nov 8 00:05:38.288248 kubelet[3406]: I1108 00:05:38.288084 3406 state_mem.go:35] "Initializing new in-memory state store" Nov 8 00:05:38.288313 kubelet[3406]: I1108 00:05:38.288283 3406 state_mem.go:75] "Updated machine memory state" Nov 8 00:05:38.292396 kubelet[3406]: I1108 00:05:38.292331 3406 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Nov 8 00:05:38.292693 kubelet[3406]: I1108 00:05:38.292660 3406 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 8 00:05:38.292787 kubelet[3406]: I1108 00:05:38.292698 3406 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 8 00:05:38.294585 kubelet[3406]: I1108 00:05:38.294537 3406 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 8 00:05:38.300993 kubelet[3406]: E1108 00:05:38.300943 3406 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Nov 8 00:05:38.381615 kubelet[3406]: I1108 00:05:38.381534 3406 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-28-187" Nov 8 00:05:38.383187 kubelet[3406]: I1108 00:05:38.383115 3406 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-28-187" Nov 8 00:05:38.384200 kubelet[3406]: I1108 00:05:38.383695 3406 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-28-187" Nov 8 00:05:38.397805 kubelet[3406]: E1108 00:05:38.397748 3406 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-28-187\" already exists" pod="kube-system/kube-apiserver-ip-172-31-28-187" Nov 8 00:05:38.414603 kubelet[3406]: I1108 00:05:38.414422 3406 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-28-187" Nov 8 00:05:38.426777 kubelet[3406]: I1108 00:05:38.426258 3406 kubelet_node_status.go:124] "Node was previously registered" node="ip-172-31-28-187" Nov 8 00:05:38.426777 kubelet[3406]: I1108 00:05:38.426379 3406 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-28-187" Nov 8 00:05:38.471754 kubelet[3406]: I1108 00:05:38.471680 3406 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/40d6661684b6a60cb66075405dddfa6d-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-28-187\" (UID: \"40d6661684b6a60cb66075405dddfa6d\") " pod="kube-system/kube-controller-manager-ip-172-31-28-187" Nov 8 00:05:38.472055 kubelet[3406]: I1108 00:05:38.472018 3406 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/40d6661684b6a60cb66075405dddfa6d-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-28-187\" (UID: \"40d6661684b6a60cb66075405dddfa6d\") " pod="kube-system/kube-controller-manager-ip-172-31-28-187" Nov 8 00:05:38.472344 kubelet[3406]: I1108 00:05:38.472292 3406 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/54dfacb1860fb1b075746674f9385909-kubeconfig\") pod \"kube-scheduler-ip-172-31-28-187\" (UID: \"54dfacb1860fb1b075746674f9385909\") " pod="kube-system/kube-scheduler-ip-172-31-28-187" Nov 8 00:05:38.472829 kubelet[3406]: I1108 00:05:38.472492 3406 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3e012f342be822a9cf510d94bb2d9ea4-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-28-187\" (UID: \"3e012f342be822a9cf510d94bb2d9ea4\") " pod="kube-system/kube-apiserver-ip-172-31-28-187" Nov 8 00:05:38.472829 kubelet[3406]: I1108 00:05:38.472537 3406 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/40d6661684b6a60cb66075405dddfa6d-ca-certs\") pod \"kube-controller-manager-ip-172-31-28-187\" (UID: \"40d6661684b6a60cb66075405dddfa6d\") " pod="kube-system/kube-controller-manager-ip-172-31-28-187" Nov 8 00:05:38.472829 kubelet[3406]: I1108 00:05:38.472595 3406 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/40d6661684b6a60cb66075405dddfa6d-kubeconfig\") pod \"kube-controller-manager-ip-172-31-28-187\" (UID: \"40d6661684b6a60cb66075405dddfa6d\") " pod="kube-system/kube-controller-manager-ip-172-31-28-187" Nov 8 00:05:38.472829 kubelet[3406]: I1108 00:05:38.472633 3406 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3e012f342be822a9cf510d94bb2d9ea4-ca-certs\") pod \"kube-apiserver-ip-172-31-28-187\" (UID: \"3e012f342be822a9cf510d94bb2d9ea4\") " pod="kube-system/kube-apiserver-ip-172-31-28-187" Nov 8 00:05:38.472829 kubelet[3406]: I1108 00:05:38.472672 3406 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3e012f342be822a9cf510d94bb2d9ea4-k8s-certs\") pod \"kube-apiserver-ip-172-31-28-187\" (UID: \"3e012f342be822a9cf510d94bb2d9ea4\") " pod="kube-system/kube-apiserver-ip-172-31-28-187" Nov 8 00:05:38.473109 kubelet[3406]: I1108 00:05:38.472738 3406 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/40d6661684b6a60cb66075405dddfa6d-k8s-certs\") pod \"kube-controller-manager-ip-172-31-28-187\" (UID: \"40d6661684b6a60cb66075405dddfa6d\") " pod="kube-system/kube-controller-manager-ip-172-31-28-187" Nov 8 00:05:39.007078 update_engine[2111]: I20251108 00:05:39.006408 2111 update_attempter.cc:509] Updating boot flags... Nov 8 00:05:39.024034 kubelet[3406]: I1108 00:05:39.019071 3406 apiserver.go:52] "Watching apiserver" Nov 8 00:05:39.068404 kubelet[3406]: I1108 00:05:39.068329 3406 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Nov 8 00:05:39.146751 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 37 scanned by (udev-worker) (3456) Nov 8 00:05:39.351783 kubelet[3406]: I1108 00:05:39.350501 3406 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-28-187" podStartSLOduration=1.350476112 podStartE2EDuration="1.350476112s" podCreationTimestamp="2025-11-08 00:05:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 00:05:39.345283076 +0000 UTC m=+1.471148673" watchObservedRunningTime="2025-11-08 00:05:39.350476112 +0000 UTC m=+1.476341685" Nov 8 00:05:39.351783 kubelet[3406]: I1108 00:05:39.350688 3406 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-28-187" podStartSLOduration=1.350676908 podStartE2EDuration="1.350676908s" podCreationTimestamp="2025-11-08 00:05:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 00:05:39.319517852 +0000 UTC m=+1.445383425" watchObservedRunningTime="2025-11-08 00:05:39.350676908 +0000 UTC m=+1.476542481" Nov 8 00:05:39.417589 kubelet[3406]: I1108 00:05:39.414987 3406 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-28-187" podStartSLOduration=3.414967424 podStartE2EDuration="3.414967424s" podCreationTimestamp="2025-11-08 00:05:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 00:05:39.371374964 +0000 UTC 
m=+1.497240549" watchObservedRunningTime="2025-11-08 00:05:39.414967424 +0000 UTC m=+1.540833009" Nov 8 00:05:39.798607 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 37 scanned by (udev-worker) (3458) Nov 8 00:05:41.850729 kubelet[3406]: I1108 00:05:41.850615 3406 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Nov 8 00:05:41.851769 containerd[2134]: time="2025-11-08T00:05:41.851614344Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Nov 8 00:05:41.852303 kubelet[3406]: I1108 00:05:41.852118 3406 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Nov 8 00:05:42.720441 kubelet[3406]: I1108 00:05:42.720336 3406 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/aa40ae5d-dea1-4e34-9f20-c384b7505624-kube-proxy\") pod \"kube-proxy-qwnwq\" (UID: \"aa40ae5d-dea1-4e34-9f20-c384b7505624\") " pod="kube-system/kube-proxy-qwnwq" Nov 8 00:05:42.720800 kubelet[3406]: I1108 00:05:42.720668 3406 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/aa40ae5d-dea1-4e34-9f20-c384b7505624-xtables-lock\") pod \"kube-proxy-qwnwq\" (UID: \"aa40ae5d-dea1-4e34-9f20-c384b7505624\") " pod="kube-system/kube-proxy-qwnwq" Nov 8 00:05:42.721592 kubelet[3406]: I1108 00:05:42.720932 3406 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/aa40ae5d-dea1-4e34-9f20-c384b7505624-lib-modules\") pod \"kube-proxy-qwnwq\" (UID: \"aa40ae5d-dea1-4e34-9f20-c384b7505624\") " pod="kube-system/kube-proxy-qwnwq" Nov 8 00:05:42.826005 kubelet[3406]: I1108 00:05:42.821995 3406 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fx9d2\" (UniqueName: \"kubernetes.io/projected/aa40ae5d-dea1-4e34-9f20-c384b7505624-kube-api-access-fx9d2\") pod \"kube-proxy-qwnwq\" (UID: \"aa40ae5d-dea1-4e34-9f20-c384b7505624\") " pod="kube-system/kube-proxy-qwnwq" Nov 8 00:05:42.967937 containerd[2134]: time="2025-11-08T00:05:42.967873790Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-qwnwq,Uid:aa40ae5d-dea1-4e34-9f20-c384b7505624,Namespace:kube-system,Attempt:0,}" Nov 8 00:05:43.052580 containerd[2134]: time="2025-11-08T00:05:43.051627106Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:05:43.052580 containerd[2134]: time="2025-11-08T00:05:43.051742294Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:05:43.052580 containerd[2134]: time="2025-11-08T00:05:43.051808474Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:05:43.052580 containerd[2134]: time="2025-11-08T00:05:43.052024702Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:05:43.174990 containerd[2134]: time="2025-11-08T00:05:43.174932735Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-qwnwq,Uid:aa40ae5d-dea1-4e34-9f20-c384b7505624,Namespace:kube-system,Attempt:0,} returns sandbox id \"293adfd488f3257658b56216a2b8aea8703596b1d50528008df363b6218eb55a\"" Nov 8 00:05:43.181822 containerd[2134]: time="2025-11-08T00:05:43.181756595Z" level=info msg="CreateContainer within sandbox \"293adfd488f3257658b56216a2b8aea8703596b1d50528008df363b6218eb55a\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Nov 8 00:05:43.214091 containerd[2134]: time="2025-11-08T00:05:43.214037675Z" level=info msg="CreateContainer within sandbox \"293adfd488f3257658b56216a2b8aea8703596b1d50528008df363b6218eb55a\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"a8453d2834614cc7518ceee4255efb1fd83422fc45f113136e6b601e66b55549\"" Nov 8 00:05:43.215658 containerd[2134]: time="2025-11-08T00:05:43.215133839Z" level=info msg="StartContainer for \"a8453d2834614cc7518ceee4255efb1fd83422fc45f113136e6b601e66b55549\"" Nov 8 00:05:43.230851 kubelet[3406]: I1108 00:05:43.230332 3406 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/30a89200-25f3-429d-b0c3-e0692168d038-var-lib-calico\") pod \"tigera-operator-7dcd859c48-99hvp\" (UID: \"30a89200-25f3-429d-b0c3-e0692168d038\") " pod="tigera-operator/tigera-operator-7dcd859c48-99hvp" Nov 8 00:05:43.230851 kubelet[3406]: I1108 00:05:43.230752 3406 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b7lzn\" (UniqueName: \"kubernetes.io/projected/30a89200-25f3-429d-b0c3-e0692168d038-kube-api-access-b7lzn\") pod \"tigera-operator-7dcd859c48-99hvp\" (UID: \"30a89200-25f3-429d-b0c3-e0692168d038\") " pod="tigera-operator/tigera-operator-7dcd859c48-99hvp" Nov 8 00:05:43.332924 containerd[2134]: time="2025-11-08T00:05:43.330135504Z" level=info msg="StartContainer for \"a8453d2834614cc7518ceee4255efb1fd83422fc45f113136e6b601e66b55549\" returns successfully" Nov 8 00:05:43.401026 containerd[2134]: time="2025-11-08T00:05:43.400953048Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-99hvp,Uid:30a89200-25f3-429d-b0c3-e0692168d038,Namespace:tigera-operator,Attempt:0,}" Nov 8 00:05:43.455655 containerd[2134]: time="2025-11-08T00:05:43.453325548Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:05:43.455655 containerd[2134]: time="2025-11-08T00:05:43.453437952Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:05:43.455655 containerd[2134]: time="2025-11-08T00:05:43.453464280Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:05:43.455655 containerd[2134]: time="2025-11-08T00:05:43.453821904Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:05:43.582178 containerd[2134]: time="2025-11-08T00:05:43.581954797Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-99hvp,Uid:30a89200-25f3-429d-b0c3-e0692168d038,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"d8f297df3162929d321336c0655350ba31279e7fb58916bf3ca1ba271c6549a8\"" Nov 8 00:05:43.588938 containerd[2134]: time="2025-11-08T00:05:43.588142705Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Nov 8 00:05:44.935627 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4151068813.mount: Deactivated successfully. Nov 8 00:05:45.647119 containerd[2134]: time="2025-11-08T00:05:45.645625863Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:05:45.647119 containerd[2134]: time="2025-11-08T00:05:45.646646403Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=22152004" Nov 8 00:05:45.648181 containerd[2134]: time="2025-11-08T00:05:45.648132027Z" level=info msg="ImageCreate event name:\"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:05:45.654612 containerd[2134]: time="2025-11-08T00:05:45.654518811Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:05:45.658582 containerd[2134]: time="2025-11-08T00:05:45.658484247Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"22147999\" in 2.070276022s" Nov 8 00:05:45.658826 containerd[2134]: time="2025-11-08T00:05:45.658789863Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\"" Nov 8 00:05:45.668354 containerd[2134]: time="2025-11-08T00:05:45.668300907Z" level=info msg="CreateContainer within sandbox \"d8f297df3162929d321336c0655350ba31279e7fb58916bf3ca1ba271c6549a8\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Nov 8 00:05:45.687029 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2908276515.mount: Deactivated successfully. 
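The operator pull above also gives a rough transfer rate: containerd reports 22152004 bytes read for a pull that completed "in 2.070276022s". A minimal Go sketch of that arithmetic, using only figures copied from the log ("bytes read" counts compressed registry bytes, so treat the result as an approximation of network throughput, not unpacked image size):

package main

import "fmt"

func main() {
	// Figures copied from the containerd entries above.
	const bytesRead = 22152004      // "active requests=0, bytes read=22152004"
	const pullSeconds = 2.070276022 // Pulled quay.io/tigera/operator:v1.38.7 "in 2.070276022s"

	rate := bytesRead / pullSeconds / (1 << 20) // MiB per second
	fmt.Printf("approx. pull throughput: %.1f MiB/s\n", rate) // ~10.2 MiB/s
}

The reported repo size ("22147999") differs slightly from the bytes read, presumably because the two measure different things (declared blob sizes versus bytes actually transferred).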
Nov 8 00:05:45.690265 containerd[2134]: time="2025-11-08T00:05:45.690201867Z" level=info msg="CreateContainer within sandbox \"d8f297df3162929d321336c0655350ba31279e7fb58916bf3ca1ba271c6549a8\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"96a78d9ff47bc26dfe16225ec7f7ca53c44f28713bebdc2dbc8495a94f897dc1\"" Nov 8 00:05:45.693660 containerd[2134]: time="2025-11-08T00:05:45.692116587Z" level=info msg="StartContainer for \"96a78d9ff47bc26dfe16225ec7f7ca53c44f28713bebdc2dbc8495a94f897dc1\"" Nov 8 00:05:45.785910 kubelet[3406]: I1108 00:05:45.784840 3406 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-qwnwq" podStartSLOduration=3.7848175360000003 podStartE2EDuration="3.784817536s" podCreationTimestamp="2025-11-08 00:05:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 00:05:44.273153564 +0000 UTC m=+6.399019161" watchObservedRunningTime="2025-11-08 00:05:45.784817536 +0000 UTC m=+7.910683097" Nov 8 00:05:45.805855 containerd[2134]: time="2025-11-08T00:05:45.805799956Z" level=info msg="StartContainer for \"96a78d9ff47bc26dfe16225ec7f7ca53c44f28713bebdc2dbc8495a94f897dc1\" returns successfully" Nov 8 00:05:47.715604 kubelet[3406]: I1108 00:05:47.714142 3406 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-99hvp" podStartSLOduration=3.638555871 podStartE2EDuration="5.714118361s" podCreationTimestamp="2025-11-08 00:05:42 +0000 UTC" firstStartedPulling="2025-11-08 00:05:43.586846573 +0000 UTC m=+5.712712134" lastFinishedPulling="2025-11-08 00:05:45.662409063 +0000 UTC m=+7.788274624" observedRunningTime="2025-11-08 00:05:46.320323706 +0000 UTC m=+8.446189315" watchObservedRunningTime="2025-11-08 00:05:47.714118361 +0000 UTC m=+9.839983934" Nov 8 00:05:52.426880 sudo[2490]: pam_unix(sudo:session): session closed for user root Nov 8 00:05:52.451162 sshd[2486]: pam_unix(sshd:session): session closed for user core Nov 8 00:05:52.457538 systemd-logind[2107]: Session 7 logged out. Waiting for processes to exit. Nov 8 00:05:52.458548 systemd[1]: sshd@6-172.31.28.187:22-139.178.89.65:33132.service: Deactivated successfully. Nov 8 00:05:52.467090 systemd[1]: session-7.scope: Deactivated successfully. Nov 8 00:05:52.470702 systemd-logind[2107]: Removed session 7. 
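The pod_startup_latency_tracker fields above are internally consistent: podStartSLOduration is the end-to-end startup duration minus the time spent pulling images. A small Go check using the monotonic m=+ offsets copied from the tigera-operator entry (it only verifies the arithmetic and makes no claim about how the tracker derives its other fields):

package main

import (
	"fmt"
	"time"
)

func main() {
	// m=+ offsets copied from the tigera-operator-7dcd859c48-99hvp entry above.
	const firstStartedPulling = 5.712712134
	const lastFinishedPulling = 7.788274624
	const e2e = 5.714118361 // podStartE2EDuration, in seconds

	pull := lastFinishedPulling - firstStartedPulling
	slo := e2e - pull
	fmt.Println("image pull:", time.Duration(pull*float64(time.Second))) // ~2.07556249s
	fmt.Printf("podStartSLOduration: %.9f (logged: 3.638555871)\n", slo)
}

The kube-proxy entry further up shows the degenerate case: nothing was pulled, firstStartedPulling/lastFinishedPulling stay at the zero time 0001-01-01, and the SLO and E2E durations coincide at 3.784817536s.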
Nov 8 00:06:10.924197 kubelet[3406]: I1108 00:06:10.924131 3406 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bb650cc-4823-4032-aaac-08041cd7f200-tigera-ca-bundle\") pod \"calico-typha-6d9665d699-qxx8d\" (UID: \"1bb650cc-4823-4032-aaac-08041cd7f200\") " pod="calico-system/calico-typha-6d9665d699-qxx8d" Nov 8 00:06:10.926402 kubelet[3406]: I1108 00:06:10.924248 3406 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/1bb650cc-4823-4032-aaac-08041cd7f200-typha-certs\") pod \"calico-typha-6d9665d699-qxx8d\" (UID: \"1bb650cc-4823-4032-aaac-08041cd7f200\") " pod="calico-system/calico-typha-6d9665d699-qxx8d" Nov 8 00:06:10.926402 kubelet[3406]: I1108 00:06:10.924294 3406 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rpxhh\" (UniqueName: \"kubernetes.io/projected/1bb650cc-4823-4032-aaac-08041cd7f200-kube-api-access-rpxhh\") pod \"calico-typha-6d9665d699-qxx8d\" (UID: \"1bb650cc-4823-4032-aaac-08041cd7f200\") " pod="calico-system/calico-typha-6d9665d699-qxx8d" Nov 8 00:06:11.093384 containerd[2134]: time="2025-11-08T00:06:11.093311281Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6d9665d699-qxx8d,Uid:1bb650cc-4823-4032-aaac-08041cd7f200,Namespace:calico-system,Attempt:0,}" Nov 8 00:06:11.104533 kubelet[3406]: E1108 00:06:11.104453 3406 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tw22z" podUID="5962793e-cd47-45ea-84d0-190de5cbdb54" Nov 8 00:06:11.128928 kubelet[3406]: I1108 00:06:11.128137 3406 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/dd2b7391-99be-4a7a-85d0-32561f1f5276-cni-log-dir\") pod \"calico-node-wkt44\" (UID: \"dd2b7391-99be-4a7a-85d0-32561f1f5276\") " pod="calico-system/calico-node-wkt44" Nov 8 00:06:11.128928 kubelet[3406]: I1108 00:06:11.128220 3406 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/dd2b7391-99be-4a7a-85d0-32561f1f5276-xtables-lock\") pod \"calico-node-wkt44\" (UID: \"dd2b7391-99be-4a7a-85d0-32561f1f5276\") " pod="calico-system/calico-node-wkt44" Nov 8 00:06:11.128928 kubelet[3406]: I1108 00:06:11.128266 3406 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/dd2b7391-99be-4a7a-85d0-32561f1f5276-var-run-calico\") pod \"calico-node-wkt44\" (UID: \"dd2b7391-99be-4a7a-85d0-32561f1f5276\") " pod="calico-system/calico-node-wkt44" Nov 8 00:06:11.128928 kubelet[3406]: I1108 00:06:11.128307 3406 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/dd2b7391-99be-4a7a-85d0-32561f1f5276-flexvol-driver-host\") pod \"calico-node-wkt44\" (UID: \"dd2b7391-99be-4a7a-85d0-32561f1f5276\") " pod="calico-system/calico-node-wkt44" Nov 8 00:06:11.128928 kubelet[3406]: I1108 00:06:11.128367 3406 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-29zrr\" (UniqueName: \"kubernetes.io/projected/dd2b7391-99be-4a7a-85d0-32561f1f5276-kube-api-access-29zrr\") pod \"calico-node-wkt44\" (UID: \"dd2b7391-99be-4a7a-85d0-32561f1f5276\") " pod="calico-system/calico-node-wkt44" Nov 8 00:06:11.129306 kubelet[3406]: I1108 00:06:11.128418 3406 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/dd2b7391-99be-4a7a-85d0-32561f1f5276-cni-bin-dir\") pod \"calico-node-wkt44\" (UID: \"dd2b7391-99be-4a7a-85d0-32561f1f5276\") " pod="calico-system/calico-node-wkt44" Nov 8 00:06:11.129306 kubelet[3406]: I1108 00:06:11.128456 3406 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/dd2b7391-99be-4a7a-85d0-32561f1f5276-cni-net-dir\") pod \"calico-node-wkt44\" (UID: \"dd2b7391-99be-4a7a-85d0-32561f1f5276\") " pod="calico-system/calico-node-wkt44" Nov 8 00:06:11.129306 kubelet[3406]: I1108 00:06:11.128495 3406 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/dd2b7391-99be-4a7a-85d0-32561f1f5276-node-certs\") pod \"calico-node-wkt44\" (UID: \"dd2b7391-99be-4a7a-85d0-32561f1f5276\") " pod="calico-system/calico-node-wkt44" Nov 8 00:06:11.129306 kubelet[3406]: I1108 00:06:11.128533 3406 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/dd2b7391-99be-4a7a-85d0-32561f1f5276-policysync\") pod \"calico-node-wkt44\" (UID: \"dd2b7391-99be-4a7a-85d0-32561f1f5276\") " pod="calico-system/calico-node-wkt44" Nov 8 00:06:11.129306 kubelet[3406]: I1108 00:06:11.128609 3406 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/dd2b7391-99be-4a7a-85d0-32561f1f5276-lib-modules\") pod \"calico-node-wkt44\" (UID: \"dd2b7391-99be-4a7a-85d0-32561f1f5276\") " pod="calico-system/calico-node-wkt44" Nov 8 00:06:11.129607 kubelet[3406]: I1108 00:06:11.128648 3406 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/dd2b7391-99be-4a7a-85d0-32561f1f5276-var-lib-calico\") pod \"calico-node-wkt44\" (UID: \"dd2b7391-99be-4a7a-85d0-32561f1f5276\") " pod="calico-system/calico-node-wkt44" Nov 8 00:06:11.129607 kubelet[3406]: I1108 00:06:11.128687 3406 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dd2b7391-99be-4a7a-85d0-32561f1f5276-tigera-ca-bundle\") pod \"calico-node-wkt44\" (UID: \"dd2b7391-99be-4a7a-85d0-32561f1f5276\") " pod="calico-system/calico-node-wkt44" Nov 8 00:06:11.208529 containerd[2134]: time="2025-11-08T00:06:11.208248158Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:06:11.212508 containerd[2134]: time="2025-11-08T00:06:11.210673694Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:06:11.224108 containerd[2134]: time="2025-11-08T00:06:11.220660586Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:06:11.224108 containerd[2134]: time="2025-11-08T00:06:11.220883342Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:06:11.230867 kubelet[3406]: I1108 00:06:11.229305 3406 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/5962793e-cd47-45ea-84d0-190de5cbdb54-socket-dir\") pod \"csi-node-driver-tw22z\" (UID: \"5962793e-cd47-45ea-84d0-190de5cbdb54\") " pod="calico-system/csi-node-driver-tw22z" Nov 8 00:06:11.236616 kubelet[3406]: I1108 00:06:11.234845 3406 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/5962793e-cd47-45ea-84d0-190de5cbdb54-registration-dir\") pod \"csi-node-driver-tw22z\" (UID: \"5962793e-cd47-45ea-84d0-190de5cbdb54\") " pod="calico-system/csi-node-driver-tw22z" Nov 8 00:06:11.236616 kubelet[3406]: I1108 00:06:11.236448 3406 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6lp72\" (UniqueName: \"kubernetes.io/projected/5962793e-cd47-45ea-84d0-190de5cbdb54-kube-api-access-6lp72\") pod \"csi-node-driver-tw22z\" (UID: \"5962793e-cd47-45ea-84d0-190de5cbdb54\") " pod="calico-system/csi-node-driver-tw22z" Nov 8 00:06:11.239901 kubelet[3406]: I1108 00:06:11.239047 3406 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5962793e-cd47-45ea-84d0-190de5cbdb54-kubelet-dir\") pod \"csi-node-driver-tw22z\" (UID: \"5962793e-cd47-45ea-84d0-190de5cbdb54\") " pod="calico-system/csi-node-driver-tw22z" Nov 8 00:06:11.241481 kubelet[3406]: I1108 00:06:11.240795 3406 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/5962793e-cd47-45ea-84d0-190de5cbdb54-varrun\") pod \"csi-node-driver-tw22z\" (UID: \"5962793e-cd47-45ea-84d0-190de5cbdb54\") " pod="calico-system/csi-node-driver-tw22z" Nov 8 00:06:11.281370 kubelet[3406]: E1108 00:06:11.278532 3406 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:06:11.281370 kubelet[3406]: W1108 00:06:11.278644 3406 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:06:11.281370 kubelet[3406]: E1108 00:06:11.278700 3406 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:06:11.283170 kubelet[3406]: E1108 00:06:11.283131 3406 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:06:11.286980 kubelet[3406]: W1108 00:06:11.286827 3406 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:06:11.287629 kubelet[3406]: E1108 00:06:11.287293 3406 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:06:11.291119 kubelet[3406]: E1108 00:06:11.289524 3406 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:06:11.291119 kubelet[3406]: W1108 00:06:11.289593 3406 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:06:11.291119 kubelet[3406]: E1108 00:06:11.289629 3406 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:06:11.293612 kubelet[3406]: E1108 00:06:11.291640 3406 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:06:11.293612 kubelet[3406]: W1108 00:06:11.291676 3406 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:06:11.293612 kubelet[3406]: E1108 00:06:11.291711 3406 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:06:11.310789 kubelet[3406]: E1108 00:06:11.310743 3406 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:06:11.311026 kubelet[3406]: W1108 00:06:11.310997 3406 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:06:11.311336 kubelet[3406]: E1108 00:06:11.311122 3406 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:06:11.311675 kubelet[3406]: E1108 00:06:11.311655 3406 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:06:11.311792 kubelet[3406]: W1108 00:06:11.311771 3406 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:06:11.311893 kubelet[3406]: E1108 00:06:11.311873 3406 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:06:11.312238 kubelet[3406]: E1108 00:06:11.312219 3406 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:06:11.312355 kubelet[3406]: W1108 00:06:11.312332 3406 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:06:11.312454 kubelet[3406]: E1108 00:06:11.312433 3406 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:06:11.313373 kubelet[3406]: E1108 00:06:11.313346 3406 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:06:11.313537 kubelet[3406]: W1108 00:06:11.313514 3406 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:06:11.314352 kubelet[3406]: E1108 00:06:11.314318 3406 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:06:11.314964 kubelet[3406]: E1108 00:06:11.314935 3406 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:06:11.315894 kubelet[3406]: W1108 00:06:11.315632 3406 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:06:11.315894 kubelet[3406]: E1108 00:06:11.315692 3406 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:06:11.318037 kubelet[3406]: E1108 00:06:11.317999 3406 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:06:11.323355 kubelet[3406]: W1108 00:06:11.323317 3406 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:06:11.324314 kubelet[3406]: E1108 00:06:11.324244 3406 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:06:11.327376 kubelet[3406]: E1108 00:06:11.326697 3406 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:06:11.327376 kubelet[3406]: W1108 00:06:11.327123 3406 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:06:11.328639 kubelet[3406]: E1108 00:06:11.327841 3406 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:06:11.328639 kubelet[3406]: W1108 00:06:11.327880 3406 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:06:11.328639 kubelet[3406]: E1108 00:06:11.328188 3406 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:06:11.328639 kubelet[3406]: W1108 00:06:11.328205 3406 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:06:11.328639 kubelet[3406]: E1108 00:06:11.328456 3406 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:06:11.328639 kubelet[3406]: W1108 00:06:11.328469 3406 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:06:11.329018 kubelet[3406]: E1108 00:06:11.328841 3406 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:06:11.329018 kubelet[3406]: W1108 00:06:11.328859 3406 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:06:11.330279 kubelet[3406]: E1108 00:06:11.329118 3406 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:06:11.330279 kubelet[3406]: W1108 00:06:11.329679 3406 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:06:11.330279 kubelet[3406]: E1108 00:06:11.329740 3406 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:06:11.330279 kubelet[3406]: E1108 00:06:11.329797 3406 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:06:11.330279 kubelet[3406]: E1108 00:06:11.330234 3406 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:06:11.334425 kubelet[3406]: E1108 00:06:11.331023 3406 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:06:11.334425 kubelet[3406]: W1108 00:06:11.331081 3406 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:06:11.334425 kubelet[3406]: E1108 00:06:11.331114 3406 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:06:11.334425 kubelet[3406]: E1108 00:06:11.331152 3406 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:06:11.334425 kubelet[3406]: E1108 00:06:11.331185 3406 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:06:11.334425 kubelet[3406]: E1108 00:06:11.331991 3406 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:06:11.334425 kubelet[3406]: E1108 00:06:11.332989 3406 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:06:11.334425 kubelet[3406]: W1108 00:06:11.333138 3406 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:06:11.334425 kubelet[3406]: E1108 00:06:11.333175 3406 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:06:11.335087 kubelet[3406]: E1108 00:06:11.334665 3406 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:06:11.335087 kubelet[3406]: W1108 00:06:11.334692 3406 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:06:11.335087 kubelet[3406]: E1108 00:06:11.334724 3406 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:06:11.335747 kubelet[3406]: E1108 00:06:11.335714 3406 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:06:11.335747 kubelet[3406]: W1108 00:06:11.335745 3406 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:06:11.336210 kubelet[3406]: E1108 00:06:11.335876 3406 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:06:11.337023 kubelet[3406]: E1108 00:06:11.336974 3406 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:06:11.338636 kubelet[3406]: W1108 00:06:11.337010 3406 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:06:11.338636 kubelet[3406]: E1108 00:06:11.337153 3406 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:06:11.338636 kubelet[3406]: E1108 00:06:11.337895 3406 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:06:11.338636 kubelet[3406]: W1108 00:06:11.337917 3406 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:06:11.338636 kubelet[3406]: E1108 00:06:11.337974 3406 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:06:11.340773 kubelet[3406]: E1108 00:06:11.340728 3406 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:06:11.340773 kubelet[3406]: W1108 00:06:11.340764 3406 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:06:11.340989 kubelet[3406]: E1108 00:06:11.340795 3406 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:06:11.345630 kubelet[3406]: E1108 00:06:11.344883 3406 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:06:11.346682 kubelet[3406]: W1108 00:06:11.344918 3406 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:06:11.346953 kubelet[3406]: E1108 00:06:11.346884 3406 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:06:11.348063 kubelet[3406]: E1108 00:06:11.348021 3406 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:06:11.348063 kubelet[3406]: W1108 00:06:11.348055 3406 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:06:11.348063 kubelet[3406]: E1108 00:06:11.348101 3406 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:06:11.348499 kubelet[3406]: E1108 00:06:11.348465 3406 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:06:11.348499 kubelet[3406]: W1108 00:06:11.348494 3406 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:06:11.348755 kubelet[3406]: E1108 00:06:11.348730 3406 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:06:11.349274 kubelet[3406]: E1108 00:06:11.348863 3406 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:06:11.349274 kubelet[3406]: W1108 00:06:11.348890 3406 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:06:11.349274 kubelet[3406]: E1108 00:06:11.348924 3406 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:06:11.349938 kubelet[3406]: E1108 00:06:11.349254 3406 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:06:11.349938 kubelet[3406]: W1108 00:06:11.349746 3406 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:06:11.350299 kubelet[3406]: E1108 00:06:11.349789 3406 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:06:11.350299 kubelet[3406]: E1108 00:06:11.350236 3406 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:06:11.350299 kubelet[3406]: W1108 00:06:11.350256 3406 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:06:11.350299 kubelet[3406]: E1108 00:06:11.350280 3406 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:06:11.351285 kubelet[3406]: E1108 00:06:11.350860 3406 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:06:11.351285 kubelet[3406]: W1108 00:06:11.350893 3406 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:06:11.351738 kubelet[3406]: E1108 00:06:11.351469 3406 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:06:11.352252 kubelet[3406]: E1108 00:06:11.351978 3406 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:06:11.352252 kubelet[3406]: W1108 00:06:11.352005 3406 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:06:11.352252 kubelet[3406]: E1108 00:06:11.352175 3406 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:06:11.354892 kubelet[3406]: E1108 00:06:11.354838 3406 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:06:11.354892 kubelet[3406]: W1108 00:06:11.354879 3406 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:06:11.355369 kubelet[3406]: E1108 00:06:11.354943 3406 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:06:11.356806 kubelet[3406]: E1108 00:06:11.356765 3406 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:06:11.356806 kubelet[3406]: W1108 00:06:11.356800 3406 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:06:11.357199 kubelet[3406]: E1108 00:06:11.356939 3406 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:06:11.358030 kubelet[3406]: E1108 00:06:11.357956 3406 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:06:11.358030 kubelet[3406]: W1108 00:06:11.357982 3406 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:06:11.358477 kubelet[3406]: E1108 00:06:11.358333 3406 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:06:11.359749 kubelet[3406]: E1108 00:06:11.359701 3406 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:06:11.360122 kubelet[3406]: W1108 00:06:11.359909 3406 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:06:11.360350 kubelet[3406]: E1108 00:06:11.360304 3406 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:06:11.362746 kubelet[3406]: E1108 00:06:11.362491 3406 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:06:11.362746 kubelet[3406]: W1108 00:06:11.362523 3406 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:06:11.363380 kubelet[3406]: E1108 00:06:11.363350 3406 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:06:11.363803 kubelet[3406]: E1108 00:06:11.363666 3406 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:06:11.363803 kubelet[3406]: W1108 00:06:11.363767 3406 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:06:11.364226 kubelet[3406]: E1108 00:06:11.364062 3406 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:06:11.364641 kubelet[3406]: E1108 00:06:11.364533 3406 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:06:11.364641 kubelet[3406]: W1108 00:06:11.364609 3406 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:06:11.364959 kubelet[3406]: E1108 00:06:11.364885 3406 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:06:11.365503 kubelet[3406]: E1108 00:06:11.365363 3406 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:06:11.365503 kubelet[3406]: W1108 00:06:11.365384 3406 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:06:11.365960 kubelet[3406]: E1108 00:06:11.365795 3406 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:06:11.366720 kubelet[3406]: E1108 00:06:11.366683 3406 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:06:11.366993 kubelet[3406]: W1108 00:06:11.366841 3406 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:06:11.366993 kubelet[3406]: E1108 00:06:11.366925 3406 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:06:11.368158 kubelet[3406]: E1108 00:06:11.367862 3406 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:06:11.368158 kubelet[3406]: W1108 00:06:11.367889 3406 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:06:11.368823 kubelet[3406]: E1108 00:06:11.368506 3406 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:06:11.370042 kubelet[3406]: E1108 00:06:11.369816 3406 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:06:11.370042 kubelet[3406]: W1108 00:06:11.369878 3406 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:06:11.370977 kubelet[3406]: E1108 00:06:11.370880 3406 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:06:11.371235 kubelet[3406]: W1108 00:06:11.371062 3406 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:06:11.371627 kubelet[3406]: E1108 00:06:11.371608 3406 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:06:11.371799 kubelet[3406]: W1108 00:06:11.371711 3406 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:06:11.372006 kubelet[3406]: E1108 00:06:11.371869 3406 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:06:11.372498 kubelet[3406]: E1108 00:06:11.372191 3406 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:06:11.372498 kubelet[3406]: E1108 00:06:11.372252 3406 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:06:11.373013 kubelet[3406]: E1108 00:06:11.372891 3406 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:06:11.373013 kubelet[3406]: W1108 00:06:11.372910 3406 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:06:11.374615 kubelet[3406]: E1108 00:06:11.373658 3406 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:06:11.376904 kubelet[3406]: E1108 00:06:11.376867 3406 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:06:11.377248 kubelet[3406]: W1108 00:06:11.377214 3406 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:06:11.379044 kubelet[3406]: E1108 00:06:11.378762 3406 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:06:11.381953 kubelet[3406]: E1108 00:06:11.381919 3406 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:06:11.382445 kubelet[3406]: W1108 00:06:11.382089 3406 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:06:11.382445 kubelet[3406]: E1108 00:06:11.382127 3406 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:06:11.383785 kubelet[3406]: E1108 00:06:11.383594 3406 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:06:11.383785 kubelet[3406]: W1108 00:06:11.383622 3406 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:06:11.383785 kubelet[3406]: E1108 00:06:11.383649 3406 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:06:11.394429 kubelet[3406]: E1108 00:06:11.394042 3406 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:06:11.394429 kubelet[3406]: W1108 00:06:11.394180 3406 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:06:11.394429 kubelet[3406]: E1108 00:06:11.394240 3406 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:06:11.411995 containerd[2134]: time="2025-11-08T00:06:11.411945195Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6d9665d699-qxx8d,Uid:1bb650cc-4823-4032-aaac-08041cd7f200,Namespace:calico-system,Attempt:0,} returns sandbox id \"bc02ea3e7fa85a385086be163d3cddb945b141ac02ec9abdb8e2d8d8eb246907\"" Nov 8 00:06:11.415615 containerd[2134]: time="2025-11-08T00:06:11.415538007Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Nov 8 00:06:11.577201 containerd[2134]: time="2025-11-08T00:06:11.576982420Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-wkt44,Uid:dd2b7391-99be-4a7a-85d0-32561f1f5276,Namespace:calico-system,Attempt:0,}" Nov 8 00:06:11.624201 containerd[2134]: time="2025-11-08T00:06:11.623526196Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:06:11.624201 containerd[2134]: time="2025-11-08T00:06:11.623791012Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:06:11.624201 containerd[2134]: time="2025-11-08T00:06:11.623827936Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:06:11.624201 containerd[2134]: time="2025-11-08T00:06:11.624036448Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:06:11.687378 containerd[2134]: time="2025-11-08T00:06:11.687220564Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-wkt44,Uid:dd2b7391-99be-4a7a-85d0-32561f1f5276,Namespace:calico-system,Attempt:0,} returns sandbox id \"2847a2f8347481018f5ed6ddba3be185e94342f904d6f9e59d811ccacf08808b\"" Nov 8 00:06:12.740354 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount123210522.mount: Deactivated successfully. 
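The driver-call.go / plugins.go triplets that dominate the lines above are one failure reported three ways: kubelet execs the FlexVolume driver at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds with the init argument, the binary is not there yet (it is normally installed by Calico's pod2daemon-flexvol init container, whose image pull appears just below), so the captured output is empty, and unmarshalling an empty byte slice is exactly what produces Go's "unexpected end of JSON input". A minimal sketch of that failure path, not kubelet's actual code; the driverStatus type and callDriver helper here are illustrative stand-ins:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// Illustrative stand-in for the status document a FlexVolume driver
// is expected to print in response to "init".
type driverStatus struct {
	Status string `json:"status"`
}

func callDriver(path string) error {
	// With the binary missing, Output() fails and output stays empty;
	// kubelet's own executor reports this as "executable file not found in $PATH".
	output, err := exec.Command(path, "init").Output()
	if err != nil {
		fmt.Printf("driver call failed: %v, output: %q\n", err, output)
	}
	var st driverStatus
	// json.Unmarshal over empty input returns "unexpected end of JSON input",
	// the same error string the kubelet lines above keep logging.
	return json.Unmarshal(output, &st)
}

func main() {
	err := callDriver("/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds")
	fmt.Println("unmarshal error:", err)
}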
Nov 8 00:06:13.080007 kubelet[3406]: E1108 00:06:13.079860 3406 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tw22z" podUID="5962793e-cd47-45ea-84d0-190de5cbdb54" Nov 8 00:06:13.534082 containerd[2134]: time="2025-11-08T00:06:13.534024186Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:06:13.536330 containerd[2134]: time="2025-11-08T00:06:13.535995486Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=33090687" Nov 8 00:06:13.538750 containerd[2134]: time="2025-11-08T00:06:13.538683738Z" level=info msg="ImageCreate event name:\"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:06:13.543714 containerd[2134]: time="2025-11-08T00:06:13.543660990Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:06:13.545602 containerd[2134]: time="2025-11-08T00:06:13.544963026Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"33090541\" in 2.129093543s" Nov 8 00:06:13.545602 containerd[2134]: time="2025-11-08T00:06:13.545019810Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\"" Nov 8 00:06:13.547543 containerd[2134]: time="2025-11-08T00:06:13.547480470Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Nov 8 00:06:13.573002 containerd[2134]: time="2025-11-08T00:06:13.572789502Z" level=info msg="CreateContainer within sandbox \"bc02ea3e7fa85a385086be163d3cddb945b141ac02ec9abdb8e2d8d8eb246907\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Nov 8 00:06:13.606953 containerd[2134]: time="2025-11-08T00:06:13.606894462Z" level=info msg="CreateContainer within sandbox \"bc02ea3e7fa85a385086be163d3cddb945b141ac02ec9abdb8e2d8d8eb246907\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"05bc36bfbc8dcec0956d3f548c4edb136ce76b25664e7acf4095c3cb5632002f\"" Nov 8 00:06:13.608831 containerd[2134]: time="2025-11-08T00:06:13.607811970Z" level=info msg="StartContainer for \"05bc36bfbc8dcec0956d3f548c4edb136ce76b25664e7acf4095c3cb5632002f\"" Nov 8 00:06:13.748592 containerd[2134]: time="2025-11-08T00:06:13.747366475Z" level=info msg="StartContainer for \"05bc36bfbc8dcec0956d3f548c4edb136ce76b25664e7acf4095c3cb5632002f\" returns successfully" Nov 8 00:06:14.407360 kubelet[3406]: I1108 00:06:14.407213 3406 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-6d9665d699-qxx8d" podStartSLOduration=2.274851723 podStartE2EDuration="4.407158674s" podCreationTimestamp="2025-11-08 00:06:10 +0000 UTC" firstStartedPulling="2025-11-08 00:06:11.414901227 +0000 UTC m=+33.540766800" 
lastFinishedPulling="2025-11-08 00:06:13.54720819 +0000 UTC m=+35.673073751" observedRunningTime="2025-11-08 00:06:14.405016386 +0000 UTC m=+36.530881959" watchObservedRunningTime="2025-11-08 00:06:14.407158674 +0000 UTC m=+36.533024259"
Nov 8 00:06:14.468827 kubelet[3406]: E1108 00:06:14.468549 3406 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 8 00:06:14.468827 kubelet[3406]: W1108 00:06:14.468613 3406 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 8 00:06:14.468827 kubelet[3406]: E1108 00:06:14.468646 3406 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
[the same driver-call.go:262 / driver-call.go:149 / plugins.go:695 triplet repeats verbatim about thirty more times between 00:06:14.469687 and 00:06:14.500348; the duplicate entries are omitted here]
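For context on the triplet above: the kubelet probes each directory under the FlexVolume plugin root by executing the driver binary with the single argument init and parsing its stdout as JSON, so a missing binary produces empty output and the "unexpected end of JSON input" decode failure. A minimal sketch of a conforming driver entry point, assuming only the documented FlexVolume calling convention (illustrative, not the actual nodeagent~uds driver):

    package main

    import (
        "encoding/json"
        "fmt"
        "os"
    )

    // driverStatus mirrors the JSON object the kubelet expects on stdout
    // from every FlexVolume driver invocation.
    type driverStatus struct {
        Status       string          `json:"status"`
        Message      string          `json:"message,omitempty"`
        Capabilities map[string]bool `json:"capabilities,omitempty"`
    }

    func main() {
        if len(os.Args) < 2 {
            os.Exit(1)
        }
        switch os.Args[1] {
        case "init":
            // Advertise success and opt out of attach/detach so the
            // kubelet will not issue those calls to this driver.
            out, _ := json.Marshal(driverStatus{
                Status:       "Success",
                Capabilities: map[string]bool{"attach": false},
            })
            fmt.Println(string(out))
        default:
            // Unimplemented calls must still return valid JSON.
            out, _ := json.Marshal(driverStatus{Status: "Not supported"})
            fmt.Println(string(out))
        }
    }

Printing nothing at all, which is what an absent uds binary amounts to, is exactly what trips the JSON decoder in driver-call.go above.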
Nov 8 00:06:15.049128 containerd[2134]: time="2025-11-08T00:06:15.049051577Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 8 00:06:15.051088 containerd[2134]: time="2025-11-08T00:06:15.050740001Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4266741"
Nov 8 00:06:15.053413 containerd[2134]: time="2025-11-08T00:06:15.052875977Z" level=info msg="ImageCreate event name:\"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 8 00:06:15.057655 containerd[2134]: time="2025-11-08T00:06:15.057599825Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 8 00:06:15.058977 containerd[2134]: time="2025-11-08T00:06:15.058915529Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5636392\" in 1.511372119s"
Nov 8 00:06:15.059128 containerd[2134]: time="2025-11-08T00:06:15.058975601Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\""
Nov 8 00:06:15.065486 containerd[2134]: time="2025-11-08T00:06:15.065424173Z" level=info msg="CreateContainer within sandbox \"2847a2f8347481018f5ed6ddba3be185e94342f904d6f9e59d811ccacf08808b\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Nov 8 00:06:15.081134 kubelet[3406]: E1108 00:06:15.080154 3406 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tw22z" podUID="5962793e-cd47-45ea-84d0-190de5cbdb54"
Nov 8 00:06:15.097162 containerd[2134]: time="2025-11-08T00:06:15.097086617Z" level=info msg="CreateContainer within sandbox \"2847a2f8347481018f5ed6ddba3be185e94342f904d6f9e59d811ccacf08808b\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"09fc3fe3e4dbf900b766b17f926f0c3cf10d03c760e17440c95b7237554d61c3\""
Nov 8 00:06:15.099080 containerd[2134]: time="2025-11-08T00:06:15.098811437Z" level=info msg="StartContainer for \"09fc3fe3e4dbf900b766b17f926f0c3cf10d03c760e17440c95b7237554d61c3\""
Nov 8 00:06:15.228329 containerd[2134]: time="2025-11-08T00:06:15.228216318Z" level=info msg="StartContainer for \"09fc3fe3e4dbf900b766b17f926f0c3cf10d03c760e17440c95b7237554d61c3\" returns successfully"
Nov 8 00:06:15.298751 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-09fc3fe3e4dbf900b766b17f926f0c3cf10d03c760e17440c95b7237554d61c3-rootfs.mount: Deactivated successfully.
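As a rough sanity check on the pull above (an illustrative computation, not part of the log; it assumes "bytes read" counts the bytes transferred from the registry while the reported size is the stored image): 4266741 bytes over 1.511372119s comes to roughly 2.7 MiB/s.

    package main

    import "fmt"

    func main() {
        // Figures taken from the containerd entries above; treating
        // "bytes read" as the transferred byte count is an assumption.
        const bytesRead = 4266741
        const seconds = 1.511372119
        rate := bytesRead / seconds
        fmt.Printf("transfer rate: %.0f B/s (%.2f MiB/s)\n", rate, rate/(1<<20))
    }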
Nov 8 00:06:15.666714 containerd[2134]: time="2025-11-08T00:06:15.666319040Z" level=info msg="shim disconnected" id=09fc3fe3e4dbf900b766b17f926f0c3cf10d03c760e17440c95b7237554d61c3 namespace=k8s.io
Nov 8 00:06:15.666714 containerd[2134]: time="2025-11-08T00:06:15.666392072Z" level=warning msg="cleaning up after shim disconnected" id=09fc3fe3e4dbf900b766b17f926f0c3cf10d03c760e17440c95b7237554d61c3 namespace=k8s.io
Nov 8 00:06:15.666714 containerd[2134]: time="2025-11-08T00:06:15.666412712Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Nov 8 00:06:16.403985 containerd[2134]: time="2025-11-08T00:06:16.403891424Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\""
Nov 8 00:06:17.081349 kubelet[3406]: E1108 00:06:17.080880 3406 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tw22z" podUID="5962793e-cd47-45ea-84d0-190de5cbdb54"
Nov 8 00:06:19.081065 kubelet[3406]: E1108 00:06:19.080171 3406 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tw22z" podUID="5962793e-cd47-45ea-84d0-190de5cbdb54"
Nov 8 00:06:19.362686 containerd[2134]: time="2025-11-08T00:06:19.361157914Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 8 00:06:19.363584 containerd[2134]: time="2025-11-08T00:06:19.363517834Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=65925816"
Nov 8 00:06:19.365692 containerd[2134]: time="2025-11-08T00:06:19.365649623Z" level=info msg="ImageCreate event name:\"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 8 00:06:19.372042 containerd[2134]: time="2025-11-08T00:06:19.371990651Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 8 00:06:19.373421 containerd[2134]: time="2025-11-08T00:06:19.373341875Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"67295507\" in 2.969387571s"
Nov 8 00:06:19.373421 containerd[2134]: time="2025-11-08T00:06:19.373398227Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\""
Nov 8 00:06:19.380485 containerd[2134]: time="2025-11-08T00:06:19.380402795Z" level=info msg="CreateContainer within sandbox \"2847a2f8347481018f5ed6ddba3be185e94342f904d6f9e59d811ccacf08808b\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Nov 8 00:06:19.412206 containerd[2134]: time="2025-11-08T00:06:19.412101263Z" level=info msg="CreateContainer within sandbox \"2847a2f8347481018f5ed6ddba3be185e94342f904d6f9e59d811ccacf08808b\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"615f24f71846f955a9cb1923ccfc6a4b22198c068c08d117f2718be1dccf8e09\""
Nov 8 00:06:19.417252 containerd[2134]: time="2025-11-08T00:06:19.414324551Z" level=info msg="StartContainer for \"615f24f71846f955a9cb1923ccfc6a4b22198c068c08d117f2718be1dccf8e09\""
Nov 8 00:06:19.536534 containerd[2134]: time="2025-11-08T00:06:19.536468627Z" level=info msg="StartContainer for \"615f24f71846f955a9cb1923ccfc6a4b22198c068c08d117f2718be1dccf8e09\" returns successfully"
Nov 8 00:06:20.480005 kubelet[3406]: I1108 00:06:20.479193 3406 kubelet_node_status.go:501] "Fast updating node status as it just became ready"
Nov 8 00:06:20.553320 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-615f24f71846f955a9cb1923ccfc6a4b22198c068c08d117f2718be1dccf8e09-rootfs.mount: Deactivated successfully.
Nov 8 00:06:20.750270 kubelet[3406]: I1108 00:06:20.748569 3406 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gjq8z\" (UniqueName: \"kubernetes.io/projected/19e663c5-ada4-41f4-b329-6d803ea3d32d-kube-api-access-gjq8z\") pod \"goldmane-666569f655-qnsdl\" (UID: \"19e663c5-ada4-41f4-b329-6d803ea3d32d\") " pod="calico-system/goldmane-666569f655-qnsdl"
Nov 8 00:06:20.750270 kubelet[3406]: I1108 00:06:20.748647 3406 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/68944657-8bd2-4013-b4a4-b0605b236b8d-whisker-backend-key-pair\") pod \"whisker-8d85fcbf5-tlmjx\" (UID: \"68944657-8bd2-4013-b4a4-b0605b236b8d\") " pod="calico-system/whisker-8d85fcbf5-tlmjx"
Nov 8 00:06:20.750270 kubelet[3406]: I1108 00:06:20.748690 3406 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5fj5z\" (UniqueName: \"kubernetes.io/projected/e2b4786b-bdcd-41e2-8651-d03da4e624c0-kube-api-access-5fj5z\") pod \"calico-apiserver-bc8bf555f-2vp5h\" (UID: \"e2b4786b-bdcd-41e2-8651-d03da4e624c0\") " pod="calico-apiserver/calico-apiserver-bc8bf555f-2vp5h"
Nov 8 00:06:20.750270 kubelet[3406]: I1108 00:06:20.748737 3406 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b6eae301-8fc0-4763-acc1-9e144d4c979d-config-volume\") pod \"coredns-668d6bf9bc-z6wmz\" (UID: \"b6eae301-8fc0-4763-acc1-9e144d4c979d\") " pod="kube-system/coredns-668d6bf9bc-z6wmz"
Nov 8 00:06:20.750270 kubelet[3406]: I1108 00:06:20.748777 3406 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/e2b4786b-bdcd-41e2-8651-d03da4e624c0-calico-apiserver-certs\") pod \"calico-apiserver-bc8bf555f-2vp5h\" (UID: \"e2b4786b-bdcd-41e2-8651-d03da4e624c0\") " pod="calico-apiserver/calico-apiserver-bc8bf555f-2vp5h"
Nov 8 00:06:20.752001 kubelet[3406]: I1108 00:06:20.748817 3406 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/19e663c5-ada4-41f4-b329-6d803ea3d32d-goldmane-ca-bundle\") pod \"goldmane-666569f655-qnsdl\" (UID: \"19e663c5-ada4-41f4-b329-6d803ea3d32d\") " pod="calico-system/goldmane-666569f655-qnsdl"
Nov 8 00:06:20.752001 kubelet[3406]: I1108 00:06:20.748859 3406 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/68944657-8bd2-4013-b4a4-b0605b236b8d-whisker-ca-bundle\") pod \"whisker-8d85fcbf5-tlmjx\" (UID: \"68944657-8bd2-4013-b4a4-b0605b236b8d\") " pod="calico-system/whisker-8d85fcbf5-tlmjx"
Nov 8 00:06:20.752001 kubelet[3406]: I1108 00:06:20.748895 3406 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xtr6c\" (UniqueName: \"kubernetes.io/projected/68944657-8bd2-4013-b4a4-b0605b236b8d-kube-api-access-xtr6c\") pod \"whisker-8d85fcbf5-tlmjx\" (UID: \"68944657-8bd2-4013-b4a4-b0605b236b8d\") " pod="calico-system/whisker-8d85fcbf5-tlmjx"
Nov 8 00:06:20.752001 kubelet[3406]: I1108 00:06:20.748936 3406 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5zr5k\" (UniqueName: \"kubernetes.io/projected/b6eae301-8fc0-4763-acc1-9e144d4c979d-kube-api-access-5zr5k\") pod \"coredns-668d6bf9bc-z6wmz\" (UID: \"b6eae301-8fc0-4763-acc1-9e144d4c979d\") " pod="kube-system/coredns-668d6bf9bc-z6wmz"
Nov 8 00:06:20.752001 kubelet[3406]: I1108 00:06:20.748971 3406 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6a53e75e-3508-43e1-9046-febaec8a3194-config-volume\") pod \"coredns-668d6bf9bc-dl6ch\" (UID: \"6a53e75e-3508-43e1-9046-febaec8a3194\") " pod="kube-system/coredns-668d6bf9bc-dl6ch"
Nov 8 00:06:20.752540 kubelet[3406]: I1108 00:06:20.749007 3406 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vpc96\" (UniqueName: \"kubernetes.io/projected/6a53e75e-3508-43e1-9046-febaec8a3194-kube-api-access-vpc96\") pod \"coredns-668d6bf9bc-dl6ch\" (UID: \"6a53e75e-3508-43e1-9046-febaec8a3194\") " pod="kube-system/coredns-668d6bf9bc-dl6ch"
Nov 8 00:06:20.752540 kubelet[3406]: I1108 00:06:20.749044 3406 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-twthk\" (UniqueName: \"kubernetes.io/projected/a118c8b1-dc8a-49b1-956e-fabb0c90510f-kube-api-access-twthk\") pod \"calico-kube-controllers-7cd4d69d7c-ptmh4\" (UID: \"a118c8b1-dc8a-49b1-956e-fabb0c90510f\") " pod="calico-system/calico-kube-controllers-7cd4d69d7c-ptmh4"
Nov 8 00:06:20.752540 kubelet[3406]: I1108 00:06:20.749087 3406 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/5611c66d-4585-41a1-9c50-eb23da03916c-calico-apiserver-certs\") pod \"calico-apiserver-bc8bf555f-bhc54\" (UID: \"5611c66d-4585-41a1-9c50-eb23da03916c\") " pod="calico-apiserver/calico-apiserver-bc8bf555f-bhc54"
Nov 8 00:06:20.752540 kubelet[3406]: I1108 00:06:20.749140 3406 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/19e663c5-ada4-41f4-b329-6d803ea3d32d-config\") pod \"goldmane-666569f655-qnsdl\" (UID: \"19e663c5-ada4-41f4-b329-6d803ea3d32d\") " pod="calico-system/goldmane-666569f655-qnsdl"
Nov 8 00:06:20.752540 kubelet[3406]: I1108 00:06:20.749176 3406 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/19e663c5-ada4-41f4-b329-6d803ea3d32d-goldmane-key-pair\") pod \"goldmane-666569f655-qnsdl\" (UID: \"19e663c5-ada4-41f4-b329-6d803ea3d32d\") " pod="calico-system/goldmane-666569f655-qnsdl"
Nov 8 00:06:20.753037 kubelet[3406]: I1108 00:06:20.749226 3406 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xrpkm\" (UniqueName: \"kubernetes.io/projected/5611c66d-4585-41a1-9c50-eb23da03916c-kube-api-access-xrpkm\") pod \"calico-apiserver-bc8bf555f-bhc54\" (UID: \"5611c66d-4585-41a1-9c50-eb23da03916c\") " pod="calico-apiserver/calico-apiserver-bc8bf555f-bhc54"
Nov 8 00:06:20.753037 kubelet[3406]: I1108 00:06:20.749268 3406 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a118c8b1-dc8a-49b1-956e-fabb0c90510f-tigera-ca-bundle\") pod \"calico-kube-controllers-7cd4d69d7c-ptmh4\" (UID: \"a118c8b1-dc8a-49b1-956e-fabb0c90510f\") " pod="calico-system/calico-kube-controllers-7cd4d69d7c-ptmh4"
Nov 8 00:06:20.963955 containerd[2134]: time="2025-11-08T00:06:20.963902978Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-8d85fcbf5-tlmjx,Uid:68944657-8bd2-4013-b4a4-b0605b236b8d,Namespace:calico-system,Attempt:0,}"
Nov 8 00:06:20.965804 containerd[2134]: time="2025-11-08T00:06:20.963942578Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-bc8bf555f-bhc54,Uid:5611c66d-4585-41a1-9c50-eb23da03916c,Namespace:calico-apiserver,Attempt:0,}"
Nov 8 00:06:20.974105 containerd[2134]: time="2025-11-08T00:06:20.974036714Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-z6wmz,Uid:b6eae301-8fc0-4763-acc1-9e144d4c979d,Namespace:kube-system,Attempt:0,}"
Nov 8 00:06:20.988237 containerd[2134]: time="2025-11-08T00:06:20.987876195Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-qnsdl,Uid:19e663c5-ada4-41f4-b329-6d803ea3d32d,Namespace:calico-system,Attempt:0,}"
Nov 8 00:06:21.086230 containerd[2134]: time="2025-11-08T00:06:21.086076143Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-tw22z,Uid:5962793e-cd47-45ea-84d0-190de5cbdb54,Namespace:calico-system,Attempt:0,}"
Nov 8 00:06:21.161038 containerd[2134]: time="2025-11-08T00:06:21.160952339Z" level=info msg="shim disconnected" id=615f24f71846f955a9cb1923ccfc6a4b22198c068c08d117f2718be1dccf8e09 namespace=k8s.io
Nov 8 00:06:21.161038 containerd[2134]: time="2025-11-08T00:06:21.161030531Z" level=warning msg="cleaning up after shim disconnected" id=615f24f71846f955a9cb1923ccfc6a4b22198c068c08d117f2718be1dccf8e09 namespace=k8s.io
Nov 8 00:06:21.161507 containerd[2134]: time="2025-11-08T00:06:21.161054459Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Nov 8 00:06:21.275720 containerd[2134]: time="2025-11-08T00:06:21.275640444Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7cd4d69d7c-ptmh4,Uid:a118c8b1-dc8a-49b1-956e-fabb0c90510f,Namespace:calico-system,Attempt:0,}"
Nov 8 00:06:21.276294 containerd[2134]: time="2025-11-08T00:06:21.276059976Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-dl6ch,Uid:6a53e75e-3508-43e1-9046-febaec8a3194,Namespace:kube-system,Attempt:0,}"
Nov 8 00:06:21.284293 containerd[2134]: time="2025-11-08T00:06:21.283978668Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-bc8bf555f-2vp5h,Uid:e2b4786b-bdcd-41e2-8651-d03da4e624c0,Namespace:calico-apiserver,Attempt:0,}"
Nov 8 00:06:21.451098 containerd[2134]: time="2025-11-08T00:06:21.451024693Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\""
time="2025-11-08T00:06:21.673667138Z" level=error msg="Failed to destroy network for sandbox \"4835cf21f53358b2b2d9fab993d2d36eb58a01accb047d1779856c2347e825e0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:06:21.676062 containerd[2134]: time="2025-11-08T00:06:21.675975194Z" level=error msg="encountered an error cleaning up failed sandbox \"4835cf21f53358b2b2d9fab993d2d36eb58a01accb047d1779856c2347e825e0\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:06:21.676153 containerd[2134]: time="2025-11-08T00:06:21.676071230Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-z6wmz,Uid:b6eae301-8fc0-4763-acc1-9e144d4c979d,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"4835cf21f53358b2b2d9fab993d2d36eb58a01accb047d1779856c2347e825e0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:06:21.678608 kubelet[3406]: E1108 00:06:21.676336 3406 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4835cf21f53358b2b2d9fab993d2d36eb58a01accb047d1779856c2347e825e0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:06:21.678608 kubelet[3406]: E1108 00:06:21.676436 3406 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4835cf21f53358b2b2d9fab993d2d36eb58a01accb047d1779856c2347e825e0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-z6wmz" Nov 8 00:06:21.678608 kubelet[3406]: E1108 00:06:21.676470 3406 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4835cf21f53358b2b2d9fab993d2d36eb58a01accb047d1779856c2347e825e0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-z6wmz" Nov 8 00:06:21.679390 kubelet[3406]: E1108 00:06:21.679209 3406 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-z6wmz_kube-system(b6eae301-8fc0-4763-acc1-9e144d4c979d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-z6wmz_kube-system(b6eae301-8fc0-4763-acc1-9e144d4c979d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4835cf21f53358b2b2d9fab993d2d36eb58a01accb047d1779856c2347e825e0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-z6wmz" podUID="b6eae301-8fc0-4763-acc1-9e144d4c979d" 
Nov 8 00:06:21.684279 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4835cf21f53358b2b2d9fab993d2d36eb58a01accb047d1779856c2347e825e0-shm.mount: Deactivated successfully.
[the intermediate containerd "encountered an error cleaning up failed sandbox ... SANDBOX_UNKNOWN" and "RunPodSandbox ... failed" entries and the kubelet log.go:32 / kuberuntime_sandbox.go:72 / kuberuntime_manager.go:1237 entries repeat the coredns-668d6bf9bc-z6wmz sequence above verbatim for each sandbox that follows, differing only in sandbox ID and pod name; those repeated entries are omitted below]
Nov 8 00:06:21.732241 containerd[2134]: time="2025-11-08T00:06:21.731234798Z" level=error msg="Failed to destroy network for sandbox \"2c3bb45ee9f12b87e3a609ea8b6541c8cd9a72a4111e59abc3de86b887d9062d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 8 00:06:21.743331 kubelet[3406]: E1108 00:06:21.733970 3406 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-bc8bf555f-bhc54_calico-apiserver(5611c66d-4585-41a1-9c50-eb23da03916c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-bc8bf555f-bhc54_calico-apiserver(5611c66d-4585-41a1-9c50-eb23da03916c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2c3bb45ee9f12b87e3a609ea8b6541c8cd9a72a4111e59abc3de86b887d9062d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-bc8bf555f-bhc54" podUID="5611c66d-4585-41a1-9c50-eb23da03916c"
Nov 8 00:06:21.741998 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2c3bb45ee9f12b87e3a609ea8b6541c8cd9a72a4111e59abc3de86b887d9062d-shm.mount: Deactivated successfully.
Nov 8 00:06:21.752935 containerd[2134]: time="2025-11-08T00:06:21.752750390Z" level=error msg="Failed to destroy network for sandbox \"b1436a69cc8d94184f44bb494c6c75fe99b97fcdf613cba5677f0de6616139b7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 8 00:06:21.759054 kubelet[3406]: E1108 00:06:21.756012 3406 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-8d85fcbf5-tlmjx_calico-system(68944657-8bd2-4013-b4a4-b0605b236b8d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-8d85fcbf5-tlmjx_calico-system(68944657-8bd2-4013-b4a4-b0605b236b8d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b1436a69cc8d94184f44bb494c6c75fe99b97fcdf613cba5677f0de6616139b7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-8d85fcbf5-tlmjx" podUID="68944657-8bd2-4013-b4a4-b0605b236b8d"
Nov 8 00:06:21.765617 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b1436a69cc8d94184f44bb494c6c75fe99b97fcdf613cba5677f0de6616139b7-shm.mount: Deactivated successfully.
Nov 8 00:06:21.788698 containerd[2134]: time="2025-11-08T00:06:21.788515119Z" level=error msg="Failed to destroy network for sandbox \"05b8c3f4fa6288d8a44812e6cc300644f76c40c449c49ab4197cdf6f8cfd1298\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 8 00:06:21.796185 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-05b8c3f4fa6288d8a44812e6cc300644f76c40c449c49ab4197cdf6f8cfd1298-shm.mount: Deactivated successfully.
Nov 8 00:06:21.801792 kubelet[3406]: E1108 00:06:21.795167 3406 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-tw22z_calico-system(5962793e-cd47-45ea-84d0-190de5cbdb54)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-tw22z_calico-system(5962793e-cd47-45ea-84d0-190de5cbdb54)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"05b8c3f4fa6288d8a44812e6cc300644f76c40c449c49ab4197cdf6f8cfd1298\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-tw22z" podUID="5962793e-cd47-45ea-84d0-190de5cbdb54"
Nov 8 00:06:21.802244 containerd[2134]: time="2025-11-08T00:06:21.802189323Z" level=error msg="Failed to destroy network for sandbox \"14a1598011a94b1ca81f2a89c3a015fe1c892256ce8b0e3bd0ecd88942207366\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 8 00:06:21.807317 kubelet[3406]: E1108 00:06:21.805911 3406 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-qnsdl_calico-system(19e663c5-ada4-41f4-b329-6d803ea3d32d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-qnsdl_calico-system(19e663c5-ada4-41f4-b329-6d803ea3d32d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"14a1598011a94b1ca81f2a89c3a015fe1c892256ce8b0e3bd0ecd88942207366\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-qnsdl" podUID="19e663c5-ada4-41f4-b329-6d803ea3d32d"
Nov 8 00:06:21.870174 containerd[2134]: time="2025-11-08T00:06:21.870088923Z" level=error msg="Failed to destroy network for sandbox \"4b1d3f1eb4c314c313a97353128b7cd1a9a7cb700dc5ae795475b9e1570e82ea\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 8 00:06:21.871436 kubelet[3406]: E1108 00:06:21.871279 3406 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7cd4d69d7c-ptmh4_calico-system(a118c8b1-dc8a-49b1-956e-fabb0c90510f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7cd4d69d7c-ptmh4_calico-system(a118c8b1-dc8a-49b1-956e-fabb0c90510f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4b1d3f1eb4c314c313a97353128b7cd1a9a7cb700dc5ae795475b9e1570e82ea\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7cd4d69d7c-ptmh4" podUID="a118c8b1-dc8a-49b1-956e-fabb0c90510f"
Nov 8 00:06:21.874269 containerd[2134]: time="2025-11-08T00:06:21.872047191Z" level=error msg="Failed to destroy network for sandbox \"5c8f13802683e1366d5ec57dcff6cd78805ca2ccf14c168de9eb3512026d9ce0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 8 00:06:21.874697 kubelet[3406]: E1108 00:06:21.873206 3406 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-bc8bf555f-2vp5h_calico-apiserver(e2b4786b-bdcd-41e2-8651-d03da4e624c0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-bc8bf555f-2vp5h_calico-apiserver(e2b4786b-bdcd-41e2-8651-d03da4e624c0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5c8f13802683e1366d5ec57dcff6cd78805ca2ccf14c168de9eb3512026d9ce0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-bc8bf555f-2vp5h" podUID="e2b4786b-bdcd-41e2-8651-d03da4e624c0"
Nov 8 00:06:21.885473 containerd[2134]: time="2025-11-08T00:06:21.885403395Z" level=error msg="Failed to destroy network for sandbox \"f14c4386ca3b9137b29637bbb04d949311f1670355fa150a3588c2739cf0e57f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 8 00:06:21.886506 kubelet[3406]: E1108 00:06:21.886483 3406 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f14c4386ca3b9137b29637bbb04d949311f1670355fa150a3588c2739cf0e57f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or
directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-dl6ch" Nov 8 00:06:21.886715 kubelet[3406]: E1108 00:06:21.886516 3406 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f14c4386ca3b9137b29637bbb04d949311f1670355fa150a3588c2739cf0e57f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-dl6ch" Nov 8 00:06:21.886715 kubelet[3406]: E1108 00:06:21.886618 3406 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-dl6ch_kube-system(6a53e75e-3508-43e1-9046-febaec8a3194)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-dl6ch_kube-system(6a53e75e-3508-43e1-9046-febaec8a3194)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f14c4386ca3b9137b29637bbb04d949311f1670355fa150a3588c2739cf0e57f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-dl6ch" podUID="6a53e75e-3508-43e1-9046-febaec8a3194" Nov 8 00:06:22.440933 kubelet[3406]: I1108 00:06:22.440871 3406 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f14c4386ca3b9137b29637bbb04d949311f1670355fa150a3588c2739cf0e57f" Nov 8 00:06:22.442676 containerd[2134]: time="2025-11-08T00:06:22.442021370Z" level=info msg="StopPodSandbox for \"f14c4386ca3b9137b29637bbb04d949311f1670355fa150a3588c2739cf0e57f\"" Nov 8 00:06:22.442676 containerd[2134]: time="2025-11-08T00:06:22.442299686Z" level=info msg="Ensure that sandbox f14c4386ca3b9137b29637bbb04d949311f1670355fa150a3588c2739cf0e57f in task-service has been cleanup successfully" Nov 8 00:06:22.448419 kubelet[3406]: I1108 00:06:22.447471 3406 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="05b8c3f4fa6288d8a44812e6cc300644f76c40c449c49ab4197cdf6f8cfd1298" Nov 8 00:06:22.450250 kubelet[3406]: I1108 00:06:22.450215 3406 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4b1d3f1eb4c314c313a97353128b7cd1a9a7cb700dc5ae795475b9e1570e82ea" Nov 8 00:06:22.452867 containerd[2134]: time="2025-11-08T00:06:22.449804426Z" level=info msg="StopPodSandbox for \"05b8c3f4fa6288d8a44812e6cc300644f76c40c449c49ab4197cdf6f8cfd1298\"" Nov 8 00:06:22.454193 containerd[2134]: time="2025-11-08T00:06:22.453606038Z" level=info msg="Ensure that sandbox 05b8c3f4fa6288d8a44812e6cc300644f76c40c449c49ab4197cdf6f8cfd1298 in task-service has been cleanup successfully" Nov 8 00:06:22.454193 containerd[2134]: time="2025-11-08T00:06:22.453985490Z" level=info msg="StopPodSandbox for \"4b1d3f1eb4c314c313a97353128b7cd1a9a7cb700dc5ae795475b9e1570e82ea\"" Nov 8 00:06:22.455149 containerd[2134]: time="2025-11-08T00:06:22.455079530Z" level=info msg="Ensure that sandbox 4b1d3f1eb4c314c313a97353128b7cd1a9a7cb700dc5ae795475b9e1570e82ea in task-service has been cleanup successfully" Nov 8 00:06:22.462388 kubelet[3406]: I1108 00:06:22.462351 3406 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b1436a69cc8d94184f44bb494c6c75fe99b97fcdf613cba5677f0de6616139b7" Nov 8 00:06:22.470165 containerd[2134]: 
time="2025-11-08T00:06:22.470090750Z" level=info msg="StopPodSandbox for \"b1436a69cc8d94184f44bb494c6c75fe99b97fcdf613cba5677f0de6616139b7\"" Nov 8 00:06:22.470895 containerd[2134]: time="2025-11-08T00:06:22.470379098Z" level=info msg="Ensure that sandbox b1436a69cc8d94184f44bb494c6c75fe99b97fcdf613cba5677f0de6616139b7 in task-service has been cleanup successfully" Nov 8 00:06:22.474523 kubelet[3406]: I1108 00:06:22.474345 3406 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="14a1598011a94b1ca81f2a89c3a015fe1c892256ce8b0e3bd0ecd88942207366" Nov 8 00:06:22.477972 containerd[2134]: time="2025-11-08T00:06:22.477199118Z" level=info msg="StopPodSandbox for \"14a1598011a94b1ca81f2a89c3a015fe1c892256ce8b0e3bd0ecd88942207366\"" Nov 8 00:06:22.477972 containerd[2134]: time="2025-11-08T00:06:22.477491438Z" level=info msg="Ensure that sandbox 14a1598011a94b1ca81f2a89c3a015fe1c892256ce8b0e3bd0ecd88942207366 in task-service has been cleanup successfully" Nov 8 00:06:22.488614 kubelet[3406]: I1108 00:06:22.487762 3406 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4835cf21f53358b2b2d9fab993d2d36eb58a01accb047d1779856c2347e825e0" Nov 8 00:06:22.489910 containerd[2134]: time="2025-11-08T00:06:22.489845978Z" level=info msg="StopPodSandbox for \"4835cf21f53358b2b2d9fab993d2d36eb58a01accb047d1779856c2347e825e0\"" Nov 8 00:06:22.490171 containerd[2134]: time="2025-11-08T00:06:22.490127438Z" level=info msg="Ensure that sandbox 4835cf21f53358b2b2d9fab993d2d36eb58a01accb047d1779856c2347e825e0 in task-service has been cleanup successfully" Nov 8 00:06:22.502091 kubelet[3406]: I1108 00:06:22.502047 3406 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5c8f13802683e1366d5ec57dcff6cd78805ca2ccf14c168de9eb3512026d9ce0" Nov 8 00:06:22.504391 containerd[2134]: time="2025-11-08T00:06:22.504313478Z" level=info msg="StopPodSandbox for \"5c8f13802683e1366d5ec57dcff6cd78805ca2ccf14c168de9eb3512026d9ce0\"" Nov 8 00:06:22.505122 containerd[2134]: time="2025-11-08T00:06:22.505076270Z" level=info msg="Ensure that sandbox 5c8f13802683e1366d5ec57dcff6cd78805ca2ccf14c168de9eb3512026d9ce0 in task-service has been cleanup successfully" Nov 8 00:06:22.517011 kubelet[3406]: I1108 00:06:22.516971 3406 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2c3bb45ee9f12b87e3a609ea8b6541c8cd9a72a4111e59abc3de86b887d9062d" Nov 8 00:06:22.522853 containerd[2134]: time="2025-11-08T00:06:22.522780818Z" level=info msg="StopPodSandbox for \"2c3bb45ee9f12b87e3a609ea8b6541c8cd9a72a4111e59abc3de86b887d9062d\"" Nov 8 00:06:22.523581 containerd[2134]: time="2025-11-08T00:06:22.523117514Z" level=info msg="Ensure that sandbox 2c3bb45ee9f12b87e3a609ea8b6541c8cd9a72a4111e59abc3de86b887d9062d in task-service has been cleanup successfully" Nov 8 00:06:22.543079 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f14c4386ca3b9137b29637bbb04d949311f1670355fa150a3588c2739cf0e57f-shm.mount: Deactivated successfully. Nov 8 00:06:22.543394 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4b1d3f1eb4c314c313a97353128b7cd1a9a7cb700dc5ae795475b9e1570e82ea-shm.mount: Deactivated successfully. Nov 8 00:06:22.543702 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5c8f13802683e1366d5ec57dcff6cd78805ca2ccf14c168de9eb3512026d9ce0-shm.mount: Deactivated successfully. 
Nov 8 00:06:22.543936 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-14a1598011a94b1ca81f2a89c3a015fe1c892256ce8b0e3bd0ecd88942207366-shm.mount: Deactivated successfully. Nov 8 00:06:22.711707 containerd[2134]: time="2025-11-08T00:06:22.711074415Z" level=error msg="StopPodSandbox for \"4b1d3f1eb4c314c313a97353128b7cd1a9a7cb700dc5ae795475b9e1570e82ea\" failed" error="failed to destroy network for sandbox \"4b1d3f1eb4c314c313a97353128b7cd1a9a7cb700dc5ae795475b9e1570e82ea\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:06:22.711861 kubelet[3406]: E1108 00:06:22.711382 3406 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"4b1d3f1eb4c314c313a97353128b7cd1a9a7cb700dc5ae795475b9e1570e82ea\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="4b1d3f1eb4c314c313a97353128b7cd1a9a7cb700dc5ae795475b9e1570e82ea" Nov 8 00:06:22.711861 kubelet[3406]: E1108 00:06:22.711467 3406 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"4b1d3f1eb4c314c313a97353128b7cd1a9a7cb700dc5ae795475b9e1570e82ea"} Nov 8 00:06:22.713581 kubelet[3406]: E1108 00:06:22.711547 3406 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a118c8b1-dc8a-49b1-956e-fabb0c90510f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4b1d3f1eb4c314c313a97353128b7cd1a9a7cb700dc5ae795475b9e1570e82ea\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 8 00:06:22.713581 kubelet[3406]: E1108 00:06:22.713036 3406 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a118c8b1-dc8a-49b1-956e-fabb0c90510f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4b1d3f1eb4c314c313a97353128b7cd1a9a7cb700dc5ae795475b9e1570e82ea\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7cd4d69d7c-ptmh4" podUID="a118c8b1-dc8a-49b1-956e-fabb0c90510f" Nov 8 00:06:22.717864 containerd[2134]: time="2025-11-08T00:06:22.717074559Z" level=error msg="StopPodSandbox for \"f14c4386ca3b9137b29637bbb04d949311f1670355fa150a3588c2739cf0e57f\" failed" error="failed to destroy network for sandbox \"f14c4386ca3b9137b29637bbb04d949311f1670355fa150a3588c2739cf0e57f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:06:22.718037 kubelet[3406]: E1108 00:06:22.717374 3406 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f14c4386ca3b9137b29637bbb04d949311f1670355fa150a3588c2739cf0e57f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" podSandboxID="f14c4386ca3b9137b29637bbb04d949311f1670355fa150a3588c2739cf0e57f" Nov 8 00:06:22.718037 kubelet[3406]: E1108 00:06:22.717706 3406 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"f14c4386ca3b9137b29637bbb04d949311f1670355fa150a3588c2739cf0e57f"} Nov 8 00:06:22.718037 kubelet[3406]: E1108 00:06:22.717770 3406 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"6a53e75e-3508-43e1-9046-febaec8a3194\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f14c4386ca3b9137b29637bbb04d949311f1670355fa150a3588c2739cf0e57f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 8 00:06:22.718037 kubelet[3406]: E1108 00:06:22.717811 3406 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"6a53e75e-3508-43e1-9046-febaec8a3194\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f14c4386ca3b9137b29637bbb04d949311f1670355fa150a3588c2739cf0e57f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-dl6ch" podUID="6a53e75e-3508-43e1-9046-febaec8a3194" Nov 8 00:06:22.729271 containerd[2134]: time="2025-11-08T00:06:22.728693871Z" level=error msg="StopPodSandbox for \"14a1598011a94b1ca81f2a89c3a015fe1c892256ce8b0e3bd0ecd88942207366\" failed" error="failed to destroy network for sandbox \"14a1598011a94b1ca81f2a89c3a015fe1c892256ce8b0e3bd0ecd88942207366\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:06:22.729431 kubelet[3406]: E1108 00:06:22.729025 3406 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"14a1598011a94b1ca81f2a89c3a015fe1c892256ce8b0e3bd0ecd88942207366\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="14a1598011a94b1ca81f2a89c3a015fe1c892256ce8b0e3bd0ecd88942207366" Nov 8 00:06:22.729431 kubelet[3406]: E1108 00:06:22.729099 3406 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"14a1598011a94b1ca81f2a89c3a015fe1c892256ce8b0e3bd0ecd88942207366"} Nov 8 00:06:22.729431 kubelet[3406]: E1108 00:06:22.729154 3406 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"19e663c5-ada4-41f4-b329-6d803ea3d32d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"14a1598011a94b1ca81f2a89c3a015fe1c892256ce8b0e3bd0ecd88942207366\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 8 00:06:22.729431 kubelet[3406]: E1108 00:06:22.729199 3406 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"19e663c5-ada4-41f4-b329-6d803ea3d32d\" with KillPodSandboxError: 
\"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"14a1598011a94b1ca81f2a89c3a015fe1c892256ce8b0e3bd0ecd88942207366\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-qnsdl" podUID="19e663c5-ada4-41f4-b329-6d803ea3d32d" Nov 8 00:06:22.741930 containerd[2134]: time="2025-11-08T00:06:22.740081031Z" level=error msg="StopPodSandbox for \"2c3bb45ee9f12b87e3a609ea8b6541c8cd9a72a4111e59abc3de86b887d9062d\" failed" error="failed to destroy network for sandbox \"2c3bb45ee9f12b87e3a609ea8b6541c8cd9a72a4111e59abc3de86b887d9062d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:06:22.742087 kubelet[3406]: E1108 00:06:22.740637 3406 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"2c3bb45ee9f12b87e3a609ea8b6541c8cd9a72a4111e59abc3de86b887d9062d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="2c3bb45ee9f12b87e3a609ea8b6541c8cd9a72a4111e59abc3de86b887d9062d" Nov 8 00:06:22.742087 kubelet[3406]: E1108 00:06:22.740709 3406 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"2c3bb45ee9f12b87e3a609ea8b6541c8cd9a72a4111e59abc3de86b887d9062d"} Nov 8 00:06:22.742087 kubelet[3406]: E1108 00:06:22.740771 3406 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"5611c66d-4585-41a1-9c50-eb23da03916c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2c3bb45ee9f12b87e3a609ea8b6541c8cd9a72a4111e59abc3de86b887d9062d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 8 00:06:22.742087 kubelet[3406]: E1108 00:06:22.740810 3406 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"5611c66d-4585-41a1-9c50-eb23da03916c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2c3bb45ee9f12b87e3a609ea8b6541c8cd9a72a4111e59abc3de86b887d9062d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-bc8bf555f-bhc54" podUID="5611c66d-4585-41a1-9c50-eb23da03916c" Nov 8 00:06:22.753468 containerd[2134]: time="2025-11-08T00:06:22.753386595Z" level=error msg="StopPodSandbox for \"05b8c3f4fa6288d8a44812e6cc300644f76c40c449c49ab4197cdf6f8cfd1298\" failed" error="failed to destroy network for sandbox \"05b8c3f4fa6288d8a44812e6cc300644f76c40c449c49ab4197cdf6f8cfd1298\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:06:22.754620 kubelet[3406]: E1108 00:06:22.754503 3406 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for 
sandbox \"05b8c3f4fa6288d8a44812e6cc300644f76c40c449c49ab4197cdf6f8cfd1298\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="05b8c3f4fa6288d8a44812e6cc300644f76c40c449c49ab4197cdf6f8cfd1298" Nov 8 00:06:22.755077 containerd[2134]: time="2025-11-08T00:06:22.754923399Z" level=error msg="StopPodSandbox for \"b1436a69cc8d94184f44bb494c6c75fe99b97fcdf613cba5677f0de6616139b7\" failed" error="failed to destroy network for sandbox \"b1436a69cc8d94184f44bb494c6c75fe99b97fcdf613cba5677f0de6616139b7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:06:22.755466 kubelet[3406]: E1108 00:06:22.755429 3406 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"05b8c3f4fa6288d8a44812e6cc300644f76c40c449c49ab4197cdf6f8cfd1298"} Nov 8 00:06:22.756954 kubelet[3406]: E1108 00:06:22.756690 3406 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"5962793e-cd47-45ea-84d0-190de5cbdb54\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"05b8c3f4fa6288d8a44812e6cc300644f76c40c449c49ab4197cdf6f8cfd1298\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 8 00:06:22.756954 kubelet[3406]: E1108 00:06:22.756754 3406 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"5962793e-cd47-45ea-84d0-190de5cbdb54\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"05b8c3f4fa6288d8a44812e6cc300644f76c40c449c49ab4197cdf6f8cfd1298\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-tw22z" podUID="5962793e-cd47-45ea-84d0-190de5cbdb54" Nov 8 00:06:22.756954 kubelet[3406]: E1108 00:06:22.755362 3406 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b1436a69cc8d94184f44bb494c6c75fe99b97fcdf613cba5677f0de6616139b7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b1436a69cc8d94184f44bb494c6c75fe99b97fcdf613cba5677f0de6616139b7" Nov 8 00:06:22.756954 kubelet[3406]: E1108 00:06:22.756823 3406 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b1436a69cc8d94184f44bb494c6c75fe99b97fcdf613cba5677f0de6616139b7"} Nov 8 00:06:22.757395 kubelet[3406]: E1108 00:06:22.756871 3406 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"68944657-8bd2-4013-b4a4-b0605b236b8d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b1436a69cc8d94184f44bb494c6c75fe99b97fcdf613cba5677f0de6616139b7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" Nov 8 00:06:22.757395 kubelet[3406]: E1108 00:06:22.756903 3406 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"68944657-8bd2-4013-b4a4-b0605b236b8d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b1436a69cc8d94184f44bb494c6c75fe99b97fcdf613cba5677f0de6616139b7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-8d85fcbf5-tlmjx" podUID="68944657-8bd2-4013-b4a4-b0605b236b8d" Nov 8 00:06:22.763843 containerd[2134]: time="2025-11-08T00:06:22.763770339Z" level=error msg="StopPodSandbox for \"4835cf21f53358b2b2d9fab993d2d36eb58a01accb047d1779856c2347e825e0\" failed" error="failed to destroy network for sandbox \"4835cf21f53358b2b2d9fab993d2d36eb58a01accb047d1779856c2347e825e0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:06:22.764502 kubelet[3406]: E1108 00:06:22.764267 3406 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"4835cf21f53358b2b2d9fab993d2d36eb58a01accb047d1779856c2347e825e0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="4835cf21f53358b2b2d9fab993d2d36eb58a01accb047d1779856c2347e825e0" Nov 8 00:06:22.764502 kubelet[3406]: E1108 00:06:22.764338 3406 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"4835cf21f53358b2b2d9fab993d2d36eb58a01accb047d1779856c2347e825e0"} Nov 8 00:06:22.764502 kubelet[3406]: E1108 00:06:22.764391 3406 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b6eae301-8fc0-4763-acc1-9e144d4c979d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4835cf21f53358b2b2d9fab993d2d36eb58a01accb047d1779856c2347e825e0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 8 00:06:22.764502 kubelet[3406]: E1108 00:06:22.764439 3406 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b6eae301-8fc0-4763-acc1-9e144d4c979d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4835cf21f53358b2b2d9fab993d2d36eb58a01accb047d1779856c2347e825e0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-z6wmz" podUID="b6eae301-8fc0-4763-acc1-9e144d4c979d" Nov 8 00:06:22.766183 containerd[2134]: time="2025-11-08T00:06:22.766113723Z" level=error msg="StopPodSandbox for \"5c8f13802683e1366d5ec57dcff6cd78805ca2ccf14c168de9eb3512026d9ce0\" failed" error="failed to destroy network for sandbox \"5c8f13802683e1366d5ec57dcff6cd78805ca2ccf14c168de9eb3512026d9ce0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" Nov 8 00:06:22.766731 kubelet[3406]: E1108 00:06:22.766413 3406 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"5c8f13802683e1366d5ec57dcff6cd78805ca2ccf14c168de9eb3512026d9ce0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="5c8f13802683e1366d5ec57dcff6cd78805ca2ccf14c168de9eb3512026d9ce0" Nov 8 00:06:22.766731 kubelet[3406]: E1108 00:06:22.766482 3406 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"5c8f13802683e1366d5ec57dcff6cd78805ca2ccf14c168de9eb3512026d9ce0"} Nov 8 00:06:22.767276 kubelet[3406]: E1108 00:06:22.766537 3406 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e2b4786b-bdcd-41e2-8651-d03da4e624c0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5c8f13802683e1366d5ec57dcff6cd78805ca2ccf14c168de9eb3512026d9ce0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 8 00:06:22.767276 kubelet[3406]: E1108 00:06:22.767185 3406 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e2b4786b-bdcd-41e2-8651-d03da4e624c0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5c8f13802683e1366d5ec57dcff6cd78805ca2ccf14c168de9eb3512026d9ce0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-bc8bf555f-2vp5h" podUID="e2b4786b-bdcd-41e2-8651-d03da4e624c0" Nov 8 00:06:27.723335 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3783407722.mount: Deactivated successfully. 
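Each of the failures above ends with pod_workers.go logging "Error syncing pod, skipping": the kubelet abandons that sync pass and retries the whole StopPodSandbox/CreatePodSandbox sequence on a later pass. A toy Go stand-in for that retry-with-backoff behavior; the error string is lifted from the log, while the attempt count and backoff values are invented for illustration and do not match kubelet's real policy:

package main

import (
	"errors"
	"fmt"
	"time"
)

// syncPod stands in for one pod-worker pass; in the log every pass
// fails inside CNI DEL with the same missing-nodename error.
func syncPod() error {
	return errors.New(`plugin type="calico" failed (delete): stat /var/lib/calico/nodename: no such file or directory`)
}

func main() {
	backoff := time.Second // hypothetical starting backoff
	for attempt := 1; attempt <= 4; attempt++ {
		if err := syncPod(); err != nil {
			fmt.Printf("attempt %d: error syncing pod, skipping: %v\n", attempt, err)
			time.Sleep(backoff)
			backoff *= 2 // doubles each retry in this sketch
			continue
		}
		fmt.Println("pod synced")
		return
	}
}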
Nov 8 00:06:27.775048 containerd[2134]: time="2025-11-08T00:06:27.774970856Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:06:27.777237 containerd[2134]: time="2025-11-08T00:06:27.777180140Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=150934562" Nov 8 00:06:27.778751 containerd[2134]: time="2025-11-08T00:06:27.778691132Z" level=info msg="ImageCreate event name:\"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:06:27.783202 containerd[2134]: time="2025-11-08T00:06:27.783125744Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:06:27.784685 containerd[2134]: time="2025-11-08T00:06:27.784614860Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"150934424\" in 6.333514711s" Nov 8 00:06:27.784685 containerd[2134]: time="2025-11-08T00:06:27.784680092Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\"" Nov 8 00:06:27.828504 containerd[2134]: time="2025-11-08T00:06:27.828436737Z" level=info msg="CreateContainer within sandbox \"2847a2f8347481018f5ed6ddba3be185e94342f904d6f9e59d811ccacf08808b\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Nov 8 00:06:27.853986 containerd[2134]: time="2025-11-08T00:06:27.853906113Z" level=info msg="CreateContainer within sandbox \"2847a2f8347481018f5ed6ddba3be185e94342f904d6f9e59d811ccacf08808b\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"e0cc79d0c2417b00fecddbce3f550edfd7ebd043b293dffaba1a9d541f2612eb\"" Nov 8 00:06:27.859807 containerd[2134]: time="2025-11-08T00:06:27.856504077Z" level=info msg="StartContainer for \"e0cc79d0c2417b00fecddbce3f550edfd7ebd043b293dffaba1a9d541f2612eb\"" Nov 8 00:06:27.972839 containerd[2134]: time="2025-11-08T00:06:27.972729225Z" level=info msg="StartContainer for \"e0cc79d0c2417b00fecddbce3f550edfd7ebd043b293dffaba1a9d541f2612eb\" returns successfully" Nov 8 00:06:28.504959 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Nov 8 00:06:28.505147 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
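The "Pulled image" entry above reports both a size ("150934424" bytes) and a duration (6.333514711s) for calico/node v3.30.4, which works out to roughly 23.8 MB/s of effective pull throughput; a one-liner to reproduce the arithmetic:

package main

import "fmt"

func main() {
	// Both figures are taken directly from the PullImage log entries above.
	const bytes = 150934424     // reported image size
	const seconds = 6.333514711 // reported pull duration
	fmt.Printf("effective pull rate: %.1f MB/s\n", bytes/seconds/1e6) // ≈ 23.8 MB/s
}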
Nov 8 00:06:28.769873 kubelet[3406]: I1108 00:06:28.767798 3406 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-wkt44" podStartSLOduration=2.670575041 podStartE2EDuration="18.767767461s" podCreationTimestamp="2025-11-08 00:06:10 +0000 UTC" firstStartedPulling="2025-11-08 00:06:11.689924008 +0000 UTC m=+33.815789569" lastFinishedPulling="2025-11-08 00:06:27.787116416 +0000 UTC m=+49.912981989" observedRunningTime="2025-11-08 00:06:28.589629524 +0000 UTC m=+50.715495181" watchObservedRunningTime="2025-11-08 00:06:28.767767461 +0000 UTC m=+50.893633034" Nov 8 00:06:28.778206 containerd[2134]: time="2025-11-08T00:06:28.777247989Z" level=info msg="StopPodSandbox for \"b1436a69cc8d94184f44bb494c6c75fe99b97fcdf613cba5677f0de6616139b7\"" Nov 8 00:06:29.347250 containerd[2134]: 2025-11-08 00:06:29.043 [INFO][4773] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b1436a69cc8d94184f44bb494c6c75fe99b97fcdf613cba5677f0de6616139b7" Nov 8 00:06:29.347250 containerd[2134]: 2025-11-08 00:06:29.047 [INFO][4773] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="b1436a69cc8d94184f44bb494c6c75fe99b97fcdf613cba5677f0de6616139b7" iface="eth0" netns="/var/run/netns/cni-a7d4902c-3d24-0f49-ca1d-388b15c83837" Nov 8 00:06:29.347250 containerd[2134]: 2025-11-08 00:06:29.048 [INFO][4773] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="b1436a69cc8d94184f44bb494c6c75fe99b97fcdf613cba5677f0de6616139b7" iface="eth0" netns="/var/run/netns/cni-a7d4902c-3d24-0f49-ca1d-388b15c83837" Nov 8 00:06:29.347250 containerd[2134]: 2025-11-08 00:06:29.051 [INFO][4773] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="b1436a69cc8d94184f44bb494c6c75fe99b97fcdf613cba5677f0de6616139b7" iface="eth0" netns="/var/run/netns/cni-a7d4902c-3d24-0f49-ca1d-388b15c83837" Nov 8 00:06:29.347250 containerd[2134]: 2025-11-08 00:06:29.051 [INFO][4773] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b1436a69cc8d94184f44bb494c6c75fe99b97fcdf613cba5677f0de6616139b7" Nov 8 00:06:29.347250 containerd[2134]: 2025-11-08 00:06:29.051 [INFO][4773] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b1436a69cc8d94184f44bb494c6c75fe99b97fcdf613cba5677f0de6616139b7" Nov 8 00:06:29.347250 containerd[2134]: 2025-11-08 00:06:29.291 [INFO][4786] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="b1436a69cc8d94184f44bb494c6c75fe99b97fcdf613cba5677f0de6616139b7" HandleID="k8s-pod-network.b1436a69cc8d94184f44bb494c6c75fe99b97fcdf613cba5677f0de6616139b7" Workload="ip--172--31--28--187-k8s-whisker--8d85fcbf5--tlmjx-eth0" Nov 8 00:06:29.347250 containerd[2134]: 2025-11-08 00:06:29.292 [INFO][4786] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:06:29.347250 containerd[2134]: 2025-11-08 00:06:29.292 [INFO][4786] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:06:29.347250 containerd[2134]: 2025-11-08 00:06:29.321 [WARNING][4786] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b1436a69cc8d94184f44bb494c6c75fe99b97fcdf613cba5677f0de6616139b7" HandleID="k8s-pod-network.b1436a69cc8d94184f44bb494c6c75fe99b97fcdf613cba5677f0de6616139b7" Workload="ip--172--31--28--187-k8s-whisker--8d85fcbf5--tlmjx-eth0" Nov 8 00:06:29.347250 containerd[2134]: 2025-11-08 00:06:29.321 [INFO][4786] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="b1436a69cc8d94184f44bb494c6c75fe99b97fcdf613cba5677f0de6616139b7" HandleID="k8s-pod-network.b1436a69cc8d94184f44bb494c6c75fe99b97fcdf613cba5677f0de6616139b7" Workload="ip--172--31--28--187-k8s-whisker--8d85fcbf5--tlmjx-eth0" Nov 8 00:06:29.347250 containerd[2134]: 2025-11-08 00:06:29.325 [INFO][4786] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:06:29.347250 containerd[2134]: 2025-11-08 00:06:29.338 [INFO][4773] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b1436a69cc8d94184f44bb494c6c75fe99b97fcdf613cba5677f0de6616139b7" Nov 8 00:06:29.353158 containerd[2134]: time="2025-11-08T00:06:29.352886312Z" level=info msg="TearDown network for sandbox \"b1436a69cc8d94184f44bb494c6c75fe99b97fcdf613cba5677f0de6616139b7\" successfully" Nov 8 00:06:29.353158 containerd[2134]: time="2025-11-08T00:06:29.352953680Z" level=info msg="StopPodSandbox for \"b1436a69cc8d94184f44bb494c6c75fe99b97fcdf613cba5677f0de6616139b7\" returns successfully" Nov 8 00:06:29.366524 systemd[1]: run-netns-cni\x2da7d4902c\x2d3d24\x2d0f49\x2dca1d\x2d388b15c83837.mount: Deactivated successfully. Nov 8 00:06:29.425311 kubelet[3406]: I1108 00:06:29.423730 3406 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/68944657-8bd2-4013-b4a4-b0605b236b8d-whisker-backend-key-pair\") pod \"68944657-8bd2-4013-b4a4-b0605b236b8d\" (UID: \"68944657-8bd2-4013-b4a4-b0605b236b8d\") " Nov 8 00:06:29.425311 kubelet[3406]: I1108 00:06:29.423845 3406 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/68944657-8bd2-4013-b4a4-b0605b236b8d-whisker-ca-bundle\") pod \"68944657-8bd2-4013-b4a4-b0605b236b8d\" (UID: \"68944657-8bd2-4013-b4a4-b0605b236b8d\") " Nov 8 00:06:29.425311 kubelet[3406]: I1108 00:06:29.424736 3406 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xtr6c\" (UniqueName: \"kubernetes.io/projected/68944657-8bd2-4013-b4a4-b0605b236b8d-kube-api-access-xtr6c\") pod \"68944657-8bd2-4013-b4a4-b0605b236b8d\" (UID: \"68944657-8bd2-4013-b4a4-b0605b236b8d\") " Nov 8 00:06:29.436973 kubelet[3406]: I1108 00:06:29.435602 3406 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/68944657-8bd2-4013-b4a4-b0605b236b8d-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "68944657-8bd2-4013-b4a4-b0605b236b8d" (UID: "68944657-8bd2-4013-b4a4-b0605b236b8d"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Nov 8 00:06:29.445031 kubelet[3406]: I1108 00:06:29.444370 3406 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/68944657-8bd2-4013-b4a4-b0605b236b8d-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "68944657-8bd2-4013-b4a4-b0605b236b8d" (UID: "68944657-8bd2-4013-b4a4-b0605b236b8d"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Nov 8 00:06:29.445919 systemd[1]: var-lib-kubelet-pods-68944657\x2d8bd2\x2d4013\x2db4a4\x2db0605b236b8d-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Nov 8 00:06:29.447511 kubelet[3406]: I1108 00:06:29.446154 3406 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/68944657-8bd2-4013-b4a4-b0605b236b8d-kube-api-access-xtr6c" (OuterVolumeSpecName: "kube-api-access-xtr6c") pod "68944657-8bd2-4013-b4a4-b0605b236b8d" (UID: "68944657-8bd2-4013-b4a4-b0605b236b8d"). InnerVolumeSpecName "kube-api-access-xtr6c". PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 8 00:06:29.462307 systemd[1]: var-lib-kubelet-pods-68944657\x2d8bd2\x2d4013\x2db4a4\x2db0605b236b8d-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dxtr6c.mount: Deactivated successfully. Nov 8 00:06:29.527589 kubelet[3406]: I1108 00:06:29.525761 3406 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/68944657-8bd2-4013-b4a4-b0605b236b8d-whisker-backend-key-pair\") on node \"ip-172-31-28-187\" DevicePath \"\"" Nov 8 00:06:29.527589 kubelet[3406]: I1108 00:06:29.525812 3406 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/68944657-8bd2-4013-b4a4-b0605b236b8d-whisker-ca-bundle\") on node \"ip-172-31-28-187\" DevicePath \"\"" Nov 8 00:06:29.527589 kubelet[3406]: I1108 00:06:29.525837 3406 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xtr6c\" (UniqueName: \"kubernetes.io/projected/68944657-8bd2-4013-b4a4-b0605b236b8d-kube-api-access-xtr6c\") on node \"ip-172-31-28-187\" DevicePath \"\"" Nov 8 00:06:29.687182 kubelet[3406]: I1108 00:06:29.687031 3406 status_manager.go:890] "Failed to get status for pod" podUID="b7a04fa0-10c9-4b7a-b022-1e4b716cfc44" pod="calico-system/whisker-8465bf669f-f6zwz" err="pods \"whisker-8465bf669f-f6zwz\" is forbidden: User \"system:node:ip-172-31-28-187\" cannot get resource \"pods\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ip-172-31-28-187' and this object" Nov 8 00:06:29.728790 kubelet[3406]: I1108 00:06:29.728366 3406 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dzh5w\" (UniqueName: \"kubernetes.io/projected/b7a04fa0-10c9-4b7a-b022-1e4b716cfc44-kube-api-access-dzh5w\") pod \"whisker-8465bf669f-f6zwz\" (UID: \"b7a04fa0-10c9-4b7a-b022-1e4b716cfc44\") " pod="calico-system/whisker-8465bf669f-f6zwz" Nov 8 00:06:29.728790 kubelet[3406]: I1108 00:06:29.728482 3406 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b7a04fa0-10c9-4b7a-b022-1e4b716cfc44-whisker-ca-bundle\") pod \"whisker-8465bf669f-f6zwz\" (UID: \"b7a04fa0-10c9-4b7a-b022-1e4b716cfc44\") " pod="calico-system/whisker-8465bf669f-f6zwz" Nov 8 00:06:29.728790 kubelet[3406]: I1108 00:06:29.728539 3406 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/b7a04fa0-10c9-4b7a-b022-1e4b716cfc44-whisker-backend-key-pair\") pod \"whisker-8465bf669f-f6zwz\" (UID: \"b7a04fa0-10c9-4b7a-b022-1e4b716cfc44\") " pod="calico-system/whisker-8465bf669f-f6zwz" Nov 8 00:06:30.012154 containerd[2134]: time="2025-11-08T00:06:30.011432599Z" 
level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-8465bf669f-f6zwz,Uid:b7a04fa0-10c9-4b7a-b022-1e4b716cfc44,Namespace:calico-system,Attempt:0,}" Nov 8 00:06:30.087408 kubelet[3406]: I1108 00:06:30.087255 3406 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="68944657-8bd2-4013-b4a4-b0605b236b8d" path="/var/lib/kubelet/pods/68944657-8bd2-4013-b4a4-b0605b236b8d/volumes" Nov 8 00:06:30.274183 (udev-worker)[4738]: Network interface NamePolicy= disabled on kernel command line. Nov 8 00:06:30.278356 systemd-networkd[1688]: cali9b73900fe8c: Link UP Nov 8 00:06:30.281029 systemd-networkd[1688]: cali9b73900fe8c: Gained carrier Nov 8 00:06:30.313135 containerd[2134]: 2025-11-08 00:06:30.095 [INFO][4833] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 8 00:06:30.313135 containerd[2134]: 2025-11-08 00:06:30.129 [INFO][4833] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--28--187-k8s-whisker--8465bf669f--f6zwz-eth0 whisker-8465bf669f- calico-system b7a04fa0-10c9-4b7a-b022-1e4b716cfc44 938 0 2025-11-08 00:06:29 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:8465bf669f projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ip-172-31-28-187 whisker-8465bf669f-f6zwz eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali9b73900fe8c [] [] }} ContainerID="f7e6d2716bade3da710adeb44009aaee4aec640604ede9737323a117098e5c76" Namespace="calico-system" Pod="whisker-8465bf669f-f6zwz" WorkloadEndpoint="ip--172--31--28--187-k8s-whisker--8465bf669f--f6zwz-" Nov 8 00:06:30.313135 containerd[2134]: 2025-11-08 00:06:30.129 [INFO][4833] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f7e6d2716bade3da710adeb44009aaee4aec640604ede9737323a117098e5c76" Namespace="calico-system" Pod="whisker-8465bf669f-f6zwz" WorkloadEndpoint="ip--172--31--28--187-k8s-whisker--8465bf669f--f6zwz-eth0" Nov 8 00:06:30.313135 containerd[2134]: 2025-11-08 00:06:30.187 [INFO][4845] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f7e6d2716bade3da710adeb44009aaee4aec640604ede9737323a117098e5c76" HandleID="k8s-pod-network.f7e6d2716bade3da710adeb44009aaee4aec640604ede9737323a117098e5c76" Workload="ip--172--31--28--187-k8s-whisker--8465bf669f--f6zwz-eth0" Nov 8 00:06:30.313135 containerd[2134]: 2025-11-08 00:06:30.189 [INFO][4845] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="f7e6d2716bade3da710adeb44009aaee4aec640604ede9737323a117098e5c76" HandleID="k8s-pod-network.f7e6d2716bade3da710adeb44009aaee4aec640604ede9737323a117098e5c76" Workload="ip--172--31--28--187-k8s-whisker--8465bf669f--f6zwz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002aa200), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-28-187", "pod":"whisker-8465bf669f-f6zwz", "timestamp":"2025-11-08 00:06:30.187459712 +0000 UTC"}, Hostname:"ip-172-31-28-187", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 00:06:30.313135 containerd[2134]: 2025-11-08 00:06:30.189 [INFO][4845] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:06:30.313135 containerd[2134]: 2025-11-08 00:06:30.190 [INFO][4845] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 8 00:06:30.313135 containerd[2134]: 2025-11-08 00:06:30.191 [INFO][4845] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-28-187' Nov 8 00:06:30.313135 containerd[2134]: 2025-11-08 00:06:30.212 [INFO][4845] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f7e6d2716bade3da710adeb44009aaee4aec640604ede9737323a117098e5c76" host="ip-172-31-28-187" Nov 8 00:06:30.313135 containerd[2134]: 2025-11-08 00:06:30.221 [INFO][4845] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-28-187" Nov 8 00:06:30.313135 containerd[2134]: 2025-11-08 00:06:30.229 [INFO][4845] ipam/ipam.go 511: Trying affinity for 192.168.96.64/26 host="ip-172-31-28-187" Nov 8 00:06:30.313135 containerd[2134]: 2025-11-08 00:06:30.233 [INFO][4845] ipam/ipam.go 158: Attempting to load block cidr=192.168.96.64/26 host="ip-172-31-28-187" Nov 8 00:06:30.313135 containerd[2134]: 2025-11-08 00:06:30.237 [INFO][4845] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.96.64/26 host="ip-172-31-28-187" Nov 8 00:06:30.313135 containerd[2134]: 2025-11-08 00:06:30.237 [INFO][4845] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.96.64/26 handle="k8s-pod-network.f7e6d2716bade3da710adeb44009aaee4aec640604ede9737323a117098e5c76" host="ip-172-31-28-187" Nov 8 00:06:30.313135 containerd[2134]: 2025-11-08 00:06:30.240 [INFO][4845] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.f7e6d2716bade3da710adeb44009aaee4aec640604ede9737323a117098e5c76 Nov 8 00:06:30.313135 containerd[2134]: 2025-11-08 00:06:30.247 [INFO][4845] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.96.64/26 handle="k8s-pod-network.f7e6d2716bade3da710adeb44009aaee4aec640604ede9737323a117098e5c76" host="ip-172-31-28-187" Nov 8 00:06:30.313135 containerd[2134]: 2025-11-08 00:06:30.258 [INFO][4845] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.96.65/26] block=192.168.96.64/26 handle="k8s-pod-network.f7e6d2716bade3da710adeb44009aaee4aec640604ede9737323a117098e5c76" host="ip-172-31-28-187" Nov 8 00:06:30.313135 containerd[2134]: 2025-11-08 00:06:30.259 [INFO][4845] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.96.65/26] handle="k8s-pod-network.f7e6d2716bade3da710adeb44009aaee4aec640604ede9737323a117098e5c76" host="ip-172-31-28-187" Nov 8 00:06:30.313135 containerd[2134]: 2025-11-08 00:06:30.259 [INFO][4845] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
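The IPAM trace above shows the allocator confirming this host's affinity for block 192.168.96.64/26 under the host-wide lock and then claiming the first usable address, 192.168.96.65. A toy first-free allocator over that block, in the same spirit; reserving the network address is an assumption made so the example lines up with the log, and real Calico IPAM tracks allocations in bitmapped blocks in its datastore rather than a map:

package main

import (
	"fmt"
	"net/netip"
)

// firstFree walks the block in address order and returns the first
// address not yet marked as used.
func firstFree(block netip.Prefix, used map[netip.Addr]bool) (netip.Addr, bool) {
	for a := block.Addr(); block.Contains(a); a = a.Next() {
		if !used[a] {
			return a, true
		}
	}
	return netip.Addr{}, false
}

func main() {
	block := netip.MustParsePrefix("192.168.96.64/26") // block from the log
	used := map[netip.Addr]bool{
		block.Addr(): true, // assume the network address itself is reserved
	}
	if ip, ok := firstFree(block, used); ok {
		fmt.Println("assigned", ip) // 192.168.96.65, matching the log
	}
}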
Nov 8 00:06:30.313135 containerd[2134]: 2025-11-08 00:06:30.259 [INFO][4845] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.96.65/26] IPv6=[] ContainerID="f7e6d2716bade3da710adeb44009aaee4aec640604ede9737323a117098e5c76" HandleID="k8s-pod-network.f7e6d2716bade3da710adeb44009aaee4aec640604ede9737323a117098e5c76" Workload="ip--172--31--28--187-k8s-whisker--8465bf669f--f6zwz-eth0" Nov 8 00:06:30.315093 containerd[2134]: 2025-11-08 00:06:30.263 [INFO][4833] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f7e6d2716bade3da710adeb44009aaee4aec640604ede9737323a117098e5c76" Namespace="calico-system" Pod="whisker-8465bf669f-f6zwz" WorkloadEndpoint="ip--172--31--28--187-k8s-whisker--8465bf669f--f6zwz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--187-k8s-whisker--8465bf669f--f6zwz-eth0", GenerateName:"whisker-8465bf669f-", Namespace:"calico-system", SelfLink:"", UID:"b7a04fa0-10c9-4b7a-b022-1e4b716cfc44", ResourceVersion:"938", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 6, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"8465bf669f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-187", ContainerID:"", Pod:"whisker-8465bf669f-f6zwz", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.96.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali9b73900fe8c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:06:30.315093 containerd[2134]: 2025-11-08 00:06:30.263 [INFO][4833] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.96.65/32] ContainerID="f7e6d2716bade3da710adeb44009aaee4aec640604ede9737323a117098e5c76" Namespace="calico-system" Pod="whisker-8465bf669f-f6zwz" WorkloadEndpoint="ip--172--31--28--187-k8s-whisker--8465bf669f--f6zwz-eth0" Nov 8 00:06:30.315093 containerd[2134]: 2025-11-08 00:06:30.263 [INFO][4833] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9b73900fe8c ContainerID="f7e6d2716bade3da710adeb44009aaee4aec640604ede9737323a117098e5c76" Namespace="calico-system" Pod="whisker-8465bf669f-f6zwz" WorkloadEndpoint="ip--172--31--28--187-k8s-whisker--8465bf669f--f6zwz-eth0" Nov 8 00:06:30.315093 containerd[2134]: 2025-11-08 00:06:30.282 [INFO][4833] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f7e6d2716bade3da710adeb44009aaee4aec640604ede9737323a117098e5c76" Namespace="calico-system" Pod="whisker-8465bf669f-f6zwz" WorkloadEndpoint="ip--172--31--28--187-k8s-whisker--8465bf669f--f6zwz-eth0" Nov 8 00:06:30.315093 containerd[2134]: 2025-11-08 00:06:30.283 [INFO][4833] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f7e6d2716bade3da710adeb44009aaee4aec640604ede9737323a117098e5c76" Namespace="calico-system" Pod="whisker-8465bf669f-f6zwz" 
WorkloadEndpoint="ip--172--31--28--187-k8s-whisker--8465bf669f--f6zwz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--187-k8s-whisker--8465bf669f--f6zwz-eth0", GenerateName:"whisker-8465bf669f-", Namespace:"calico-system", SelfLink:"", UID:"b7a04fa0-10c9-4b7a-b022-1e4b716cfc44", ResourceVersion:"938", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 6, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"8465bf669f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-187", ContainerID:"f7e6d2716bade3da710adeb44009aaee4aec640604ede9737323a117098e5c76", Pod:"whisker-8465bf669f-f6zwz", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.96.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali9b73900fe8c", MAC:"b6:fe:54:99:97:41", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:06:30.315093 containerd[2134]: 2025-11-08 00:06:30.305 [INFO][4833] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f7e6d2716bade3da710adeb44009aaee4aec640604ede9737323a117098e5c76" Namespace="calico-system" Pod="whisker-8465bf669f-f6zwz" WorkloadEndpoint="ip--172--31--28--187-k8s-whisker--8465bf669f--f6zwz-eth0" Nov 8 00:06:30.377937 containerd[2134]: time="2025-11-08T00:06:30.377305641Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:06:30.377937 containerd[2134]: time="2025-11-08T00:06:30.377605845Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:06:30.377937 containerd[2134]: time="2025-11-08T00:06:30.377658465Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:06:30.379381 containerd[2134]: time="2025-11-08T00:06:30.378949161Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:06:30.557861 containerd[2134]: time="2025-11-08T00:06:30.555782314Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-8465bf669f-f6zwz,Uid:b7a04fa0-10c9-4b7a-b022-1e4b716cfc44,Namespace:calico-system,Attempt:0,} returns sandbox id \"f7e6d2716bade3da710adeb44009aaee4aec640604ede9737323a117098e5c76\"" Nov 8 00:06:30.565595 containerd[2134]: time="2025-11-08T00:06:30.565438774Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 8 00:06:30.868176 containerd[2134]: time="2025-11-08T00:06:30.867409680Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:06:30.873461 containerd[2134]: time="2025-11-08T00:06:30.872371680Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 8 00:06:30.873461 containerd[2134]: time="2025-11-08T00:06:30.872602056Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 8 00:06:30.876773 kubelet[3406]: E1108 00:06:30.874517 3406 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 8 00:06:30.876773 kubelet[3406]: E1108 00:06:30.876685 3406 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 8 00:06:30.887630 kubelet[3406]: E1108 00:06:30.886334 3406 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:ad073e8bb50749a3ae91e94ed2b29ac5,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-dzh5w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-8465bf669f-f6zwz_calico-system(b7a04fa0-10c9-4b7a-b022-1e4b716cfc44): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 8 00:06:30.898638 containerd[2134]: time="2025-11-08T00:06:30.896126760Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 8 00:06:31.223801 containerd[2134]: time="2025-11-08T00:06:31.222731001Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:06:31.226071 containerd[2134]: time="2025-11-08T00:06:31.225479349Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 8 00:06:31.226255 containerd[2134]: time="2025-11-08T00:06:31.225994389Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 8 00:06:31.227625 kubelet[3406]: E1108 00:06:31.226812 3406 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 8 00:06:31.227625 kubelet[3406]: E1108 00:06:31.226892 3406 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: 
not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 8 00:06:31.228330 kubelet[3406]: E1108 00:06:31.227060 3406 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dzh5w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-8465bf669f-f6zwz_calico-system(b7a04fa0-10c9-4b7a-b022-1e4b716cfc44): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 8 00:06:31.228529 kubelet[3406]: E1108 00:06:31.228321 3406 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-8465bf669f-f6zwz" podUID="b7a04fa0-10c9-4b7a-b022-1e4b716cfc44" Nov 8 00:06:31.566730 kubelet[3406]: E1108 00:06:31.566423 3406 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound 
desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-8465bf669f-f6zwz" podUID="b7a04fa0-10c9-4b7a-b022-1e4b716cfc44" Nov 8 00:06:31.568194 systemd-networkd[1688]: cali9b73900fe8c: Gained IPv6LL Nov 8 00:06:31.588656 kernel: bpftool[5020]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Nov 8 00:06:31.991896 systemd-networkd[1688]: vxlan.calico: Link UP Nov 8 00:06:31.991912 systemd-networkd[1688]: vxlan.calico: Gained carrier Nov 8 00:06:32.053052 (udev-worker)[4739]: Network interface NamePolicy= disabled on kernel command line. Nov 8 00:06:32.590962 kubelet[3406]: E1108 00:06:32.590816 3406 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-8465bf669f-f6zwz" podUID="b7a04fa0-10c9-4b7a-b022-1e4b716cfc44" Nov 8 00:06:33.081713 containerd[2134]: time="2025-11-08T00:06:33.081644267Z" level=info msg="StopPodSandbox for \"f14c4386ca3b9137b29637bbb04d949311f1670355fa150a3588c2739cf0e57f\"" Nov 8 00:06:33.254587 containerd[2134]: 2025-11-08 00:06:33.166 [INFO][5106] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f14c4386ca3b9137b29637bbb04d949311f1670355fa150a3588c2739cf0e57f" Nov 8 00:06:33.254587 containerd[2134]: 2025-11-08 00:06:33.167 [INFO][5106] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="f14c4386ca3b9137b29637bbb04d949311f1670355fa150a3588c2739cf0e57f" iface="eth0" netns="/var/run/netns/cni-3e48c9f0-7ccd-496b-22a4-447a5462d05b" Nov 8 00:06:33.254587 containerd[2134]: 2025-11-08 00:06:33.167 [INFO][5106] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="f14c4386ca3b9137b29637bbb04d949311f1670355fa150a3588c2739cf0e57f" iface="eth0" netns="/var/run/netns/cni-3e48c9f0-7ccd-496b-22a4-447a5462d05b" Nov 8 00:06:33.254587 containerd[2134]: 2025-11-08 00:06:33.167 [INFO][5106] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="f14c4386ca3b9137b29637bbb04d949311f1670355fa150a3588c2739cf0e57f" iface="eth0" netns="/var/run/netns/cni-3e48c9f0-7ccd-496b-22a4-447a5462d05b" Nov 8 00:06:33.254587 containerd[2134]: 2025-11-08 00:06:33.168 [INFO][5106] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f14c4386ca3b9137b29637bbb04d949311f1670355fa150a3588c2739cf0e57f" Nov 8 00:06:33.254587 containerd[2134]: 2025-11-08 00:06:33.168 [INFO][5106] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f14c4386ca3b9137b29637bbb04d949311f1670355fa150a3588c2739cf0e57f" Nov 8 00:06:33.254587 containerd[2134]: 2025-11-08 00:06:33.230 [INFO][5113] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="f14c4386ca3b9137b29637bbb04d949311f1670355fa150a3588c2739cf0e57f" HandleID="k8s-pod-network.f14c4386ca3b9137b29637bbb04d949311f1670355fa150a3588c2739cf0e57f" Workload="ip--172--31--28--187-k8s-coredns--668d6bf9bc--dl6ch-eth0" Nov 8 00:06:33.254587 containerd[2134]: 2025-11-08 00:06:33.230 [INFO][5113] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:06:33.254587 containerd[2134]: 2025-11-08 00:06:33.231 [INFO][5113] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:06:33.254587 containerd[2134]: 2025-11-08 00:06:33.243 [WARNING][5113] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="f14c4386ca3b9137b29637bbb04d949311f1670355fa150a3588c2739cf0e57f" HandleID="k8s-pod-network.f14c4386ca3b9137b29637bbb04d949311f1670355fa150a3588c2739cf0e57f" Workload="ip--172--31--28--187-k8s-coredns--668d6bf9bc--dl6ch-eth0" Nov 8 00:06:33.254587 containerd[2134]: 2025-11-08 00:06:33.243 [INFO][5113] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="f14c4386ca3b9137b29637bbb04d949311f1670355fa150a3588c2739cf0e57f" HandleID="k8s-pod-network.f14c4386ca3b9137b29637bbb04d949311f1670355fa150a3588c2739cf0e57f" Workload="ip--172--31--28--187-k8s-coredns--668d6bf9bc--dl6ch-eth0" Nov 8 00:06:33.254587 containerd[2134]: 2025-11-08 00:06:33.246 [INFO][5113] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:06:33.254587 containerd[2134]: 2025-11-08 00:06:33.249 [INFO][5106] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f14c4386ca3b9137b29637bbb04d949311f1670355fa150a3588c2739cf0e57f" Nov 8 00:06:33.256023 containerd[2134]: time="2025-11-08T00:06:33.255492588Z" level=info msg="TearDown network for sandbox \"f14c4386ca3b9137b29637bbb04d949311f1670355fa150a3588c2739cf0e57f\" successfully" Nov 8 00:06:33.256023 containerd[2134]: time="2025-11-08T00:06:33.255541212Z" level=info msg="StopPodSandbox for \"f14c4386ca3b9137b29637bbb04d949311f1670355fa150a3588c2739cf0e57f\" returns successfully" Nov 8 00:06:33.257735 containerd[2134]: time="2025-11-08T00:06:33.257640504Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-dl6ch,Uid:6a53e75e-3508-43e1-9046-febaec8a3194,Namespace:kube-system,Attempt:1,}" Nov 8 00:06:33.260118 systemd[1]: run-netns-cni\x2d3e48c9f0\x2d7ccd\x2d496b\x2d22a4\x2d447a5462d05b.mount: Deactivated successfully. 
Nov 8 00:06:33.508815 systemd-networkd[1688]: cali19953ff7d94: Link UP Nov 8 00:06:33.510698 systemd-networkd[1688]: cali19953ff7d94: Gained carrier Nov 8 00:06:33.546397 containerd[2134]: 2025-11-08 00:06:33.374 [INFO][5120] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--28--187-k8s-coredns--668d6bf9bc--dl6ch-eth0 coredns-668d6bf9bc- kube-system 6a53e75e-3508-43e1-9046-febaec8a3194 975 0 2025-11-08 00:05:42 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-28-187 coredns-668d6bf9bc-dl6ch eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali19953ff7d94 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="ce76e0307a6dd9a3a7ba2d012e82fa00915def70ebc2f48222ac7f0e9cbf5f65" Namespace="kube-system" Pod="coredns-668d6bf9bc-dl6ch" WorkloadEndpoint="ip--172--31--28--187-k8s-coredns--668d6bf9bc--dl6ch-" Nov 8 00:06:33.546397 containerd[2134]: 2025-11-08 00:06:33.374 [INFO][5120] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ce76e0307a6dd9a3a7ba2d012e82fa00915def70ebc2f48222ac7f0e9cbf5f65" Namespace="kube-system" Pod="coredns-668d6bf9bc-dl6ch" WorkloadEndpoint="ip--172--31--28--187-k8s-coredns--668d6bf9bc--dl6ch-eth0" Nov 8 00:06:33.546397 containerd[2134]: 2025-11-08 00:06:33.426 [INFO][5132] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ce76e0307a6dd9a3a7ba2d012e82fa00915def70ebc2f48222ac7f0e9cbf5f65" HandleID="k8s-pod-network.ce76e0307a6dd9a3a7ba2d012e82fa00915def70ebc2f48222ac7f0e9cbf5f65" Workload="ip--172--31--28--187-k8s-coredns--668d6bf9bc--dl6ch-eth0" Nov 8 00:06:33.546397 containerd[2134]: 2025-11-08 00:06:33.426 [INFO][5132] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="ce76e0307a6dd9a3a7ba2d012e82fa00915def70ebc2f48222ac7f0e9cbf5f65" HandleID="k8s-pod-network.ce76e0307a6dd9a3a7ba2d012e82fa00915def70ebc2f48222ac7f0e9cbf5f65" Workload="ip--172--31--28--187-k8s-coredns--668d6bf9bc--dl6ch-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b590), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-28-187", "pod":"coredns-668d6bf9bc-dl6ch", "timestamp":"2025-11-08 00:06:33.426233688 +0000 UTC"}, Hostname:"ip-172-31-28-187", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 00:06:33.546397 containerd[2134]: 2025-11-08 00:06:33.426 [INFO][5132] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:06:33.546397 containerd[2134]: 2025-11-08 00:06:33.426 [INFO][5132] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 8 00:06:33.546397 containerd[2134]: 2025-11-08 00:06:33.426 [INFO][5132] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-28-187' Nov 8 00:06:33.546397 containerd[2134]: 2025-11-08 00:06:33.442 [INFO][5132] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ce76e0307a6dd9a3a7ba2d012e82fa00915def70ebc2f48222ac7f0e9cbf5f65" host="ip-172-31-28-187" Nov 8 00:06:33.546397 containerd[2134]: 2025-11-08 00:06:33.452 [INFO][5132] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-28-187" Nov 8 00:06:33.546397 containerd[2134]: 2025-11-08 00:06:33.462 [INFO][5132] ipam/ipam.go 511: Trying affinity for 192.168.96.64/26 host="ip-172-31-28-187" Nov 8 00:06:33.546397 containerd[2134]: 2025-11-08 00:06:33.465 [INFO][5132] ipam/ipam.go 158: Attempting to load block cidr=192.168.96.64/26 host="ip-172-31-28-187" Nov 8 00:06:33.546397 containerd[2134]: 2025-11-08 00:06:33.469 [INFO][5132] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.96.64/26 host="ip-172-31-28-187" Nov 8 00:06:33.546397 containerd[2134]: 2025-11-08 00:06:33.470 [INFO][5132] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.96.64/26 handle="k8s-pod-network.ce76e0307a6dd9a3a7ba2d012e82fa00915def70ebc2f48222ac7f0e9cbf5f65" host="ip-172-31-28-187" Nov 8 00:06:33.546397 containerd[2134]: 2025-11-08 00:06:33.473 [INFO][5132] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.ce76e0307a6dd9a3a7ba2d012e82fa00915def70ebc2f48222ac7f0e9cbf5f65 Nov 8 00:06:33.546397 containerd[2134]: 2025-11-08 00:06:33.483 [INFO][5132] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.96.64/26 handle="k8s-pod-network.ce76e0307a6dd9a3a7ba2d012e82fa00915def70ebc2f48222ac7f0e9cbf5f65" host="ip-172-31-28-187" Nov 8 00:06:33.546397 containerd[2134]: 2025-11-08 00:06:33.496 [INFO][5132] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.96.66/26] block=192.168.96.64/26 handle="k8s-pod-network.ce76e0307a6dd9a3a7ba2d012e82fa00915def70ebc2f48222ac7f0e9cbf5f65" host="ip-172-31-28-187" Nov 8 00:06:33.546397 containerd[2134]: 2025-11-08 00:06:33.496 [INFO][5132] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.96.66/26] handle="k8s-pod-network.ce76e0307a6dd9a3a7ba2d012e82fa00915def70ebc2f48222ac7f0e9cbf5f65" host="ip-172-31-28-187" Nov 8 00:06:33.546397 containerd[2134]: 2025-11-08 00:06:33.496 [INFO][5132] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 8 00:06:33.546397 containerd[2134]: 2025-11-08 00:06:33.497 [INFO][5132] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.96.66/26] IPv6=[] ContainerID="ce76e0307a6dd9a3a7ba2d012e82fa00915def70ebc2f48222ac7f0e9cbf5f65" HandleID="k8s-pod-network.ce76e0307a6dd9a3a7ba2d012e82fa00915def70ebc2f48222ac7f0e9cbf5f65" Workload="ip--172--31--28--187-k8s-coredns--668d6bf9bc--dl6ch-eth0" Nov 8 00:06:33.549195 containerd[2134]: 2025-11-08 00:06:33.501 [INFO][5120] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ce76e0307a6dd9a3a7ba2d012e82fa00915def70ebc2f48222ac7f0e9cbf5f65" Namespace="kube-system" Pod="coredns-668d6bf9bc-dl6ch" WorkloadEndpoint="ip--172--31--28--187-k8s-coredns--668d6bf9bc--dl6ch-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--187-k8s-coredns--668d6bf9bc--dl6ch-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"6a53e75e-3508-43e1-9046-febaec8a3194", ResourceVersion:"975", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 5, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-187", ContainerID:"", Pod:"coredns-668d6bf9bc-dl6ch", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.96.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali19953ff7d94", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:06:33.549195 containerd[2134]: 2025-11-08 00:06:33.502 [INFO][5120] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.96.66/32] ContainerID="ce76e0307a6dd9a3a7ba2d012e82fa00915def70ebc2f48222ac7f0e9cbf5f65" Namespace="kube-system" Pod="coredns-668d6bf9bc-dl6ch" WorkloadEndpoint="ip--172--31--28--187-k8s-coredns--668d6bf9bc--dl6ch-eth0" Nov 8 00:06:33.549195 containerd[2134]: 2025-11-08 00:06:33.503 [INFO][5120] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali19953ff7d94 ContainerID="ce76e0307a6dd9a3a7ba2d012e82fa00915def70ebc2f48222ac7f0e9cbf5f65" Namespace="kube-system" Pod="coredns-668d6bf9bc-dl6ch" WorkloadEndpoint="ip--172--31--28--187-k8s-coredns--668d6bf9bc--dl6ch-eth0" Nov 8 00:06:33.549195 containerd[2134]: 2025-11-08 00:06:33.509 [INFO][5120] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ce76e0307a6dd9a3a7ba2d012e82fa00915def70ebc2f48222ac7f0e9cbf5f65" Namespace="kube-system" Pod="coredns-668d6bf9bc-dl6ch" 
WorkloadEndpoint="ip--172--31--28--187-k8s-coredns--668d6bf9bc--dl6ch-eth0" Nov 8 00:06:33.549195 containerd[2134]: 2025-11-08 00:06:33.512 [INFO][5120] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ce76e0307a6dd9a3a7ba2d012e82fa00915def70ebc2f48222ac7f0e9cbf5f65" Namespace="kube-system" Pod="coredns-668d6bf9bc-dl6ch" WorkloadEndpoint="ip--172--31--28--187-k8s-coredns--668d6bf9bc--dl6ch-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--187-k8s-coredns--668d6bf9bc--dl6ch-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"6a53e75e-3508-43e1-9046-febaec8a3194", ResourceVersion:"975", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 5, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-187", ContainerID:"ce76e0307a6dd9a3a7ba2d012e82fa00915def70ebc2f48222ac7f0e9cbf5f65", Pod:"coredns-668d6bf9bc-dl6ch", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.96.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali19953ff7d94", MAC:"e6:16:9a:9a:f9:a6", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:06:33.549195 containerd[2134]: 2025-11-08 00:06:33.538 [INFO][5120] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ce76e0307a6dd9a3a7ba2d012e82fa00915def70ebc2f48222ac7f0e9cbf5f65" Namespace="kube-system" Pod="coredns-668d6bf9bc-dl6ch" WorkloadEndpoint="ip--172--31--28--187-k8s-coredns--668d6bf9bc--dl6ch-eth0" Nov 8 00:06:33.550877 systemd-networkd[1688]: vxlan.calico: Gained IPv6LL Nov 8 00:06:33.610572 containerd[2134]: time="2025-11-08T00:06:33.610040653Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:06:33.611512 containerd[2134]: time="2025-11-08T00:06:33.611078737Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:06:33.611512 containerd[2134]: time="2025-11-08T00:06:33.611193625Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:06:33.611512 containerd[2134]: time="2025-11-08T00:06:33.611401801Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:06:33.723887 containerd[2134]: time="2025-11-08T00:06:33.723832526Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-dl6ch,Uid:6a53e75e-3508-43e1-9046-febaec8a3194,Namespace:kube-system,Attempt:1,} returns sandbox id \"ce76e0307a6dd9a3a7ba2d012e82fa00915def70ebc2f48222ac7f0e9cbf5f65\"" Nov 8 00:06:33.732988 containerd[2134]: time="2025-11-08T00:06:33.732887462Z" level=info msg="CreateContainer within sandbox \"ce76e0307a6dd9a3a7ba2d012e82fa00915def70ebc2f48222ac7f0e9cbf5f65\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 8 00:06:33.761610 containerd[2134]: time="2025-11-08T00:06:33.761271710Z" level=info msg="CreateContainer within sandbox \"ce76e0307a6dd9a3a7ba2d012e82fa00915def70ebc2f48222ac7f0e9cbf5f65\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c99a7c795f8567286500ad04adb1c3cc493d57b69e0fe58f6542aa8a6b010bb8\"" Nov 8 00:06:33.762833 containerd[2134]: time="2025-11-08T00:06:33.762639734Z" level=info msg="StartContainer for \"c99a7c795f8567286500ad04adb1c3cc493d57b69e0fe58f6542aa8a6b010bb8\"" Nov 8 00:06:33.932103 containerd[2134]: time="2025-11-08T00:06:33.931082655Z" level=info msg="StartContainer for \"c99a7c795f8567286500ad04adb1c3cc493d57b69e0fe58f6542aa8a6b010bb8\" returns successfully" Nov 8 00:06:34.084826 containerd[2134]: time="2025-11-08T00:06:34.082840236Z" level=info msg="StopPodSandbox for \"5c8f13802683e1366d5ec57dcff6cd78805ca2ccf14c168de9eb3512026d9ce0\"" Nov 8 00:06:34.269368 systemd[1]: run-containerd-runc-k8s.io-ce76e0307a6dd9a3a7ba2d012e82fa00915def70ebc2f48222ac7f0e9cbf5f65-runc.TKU1Ar.mount: Deactivated successfully. Nov 8 00:06:34.408058 containerd[2134]: 2025-11-08 00:06:34.301 [INFO][5235] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5c8f13802683e1366d5ec57dcff6cd78805ca2ccf14c168de9eb3512026d9ce0" Nov 8 00:06:34.408058 containerd[2134]: 2025-11-08 00:06:34.302 [INFO][5235] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="5c8f13802683e1366d5ec57dcff6cd78805ca2ccf14c168de9eb3512026d9ce0" iface="eth0" netns="/var/run/netns/cni-a14cddd7-a9d0-740c-dcba-4cdb5d459f26" Nov 8 00:06:34.408058 containerd[2134]: 2025-11-08 00:06:34.304 [INFO][5235] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="5c8f13802683e1366d5ec57dcff6cd78805ca2ccf14c168de9eb3512026d9ce0" iface="eth0" netns="/var/run/netns/cni-a14cddd7-a9d0-740c-dcba-4cdb5d459f26" Nov 8 00:06:34.408058 containerd[2134]: 2025-11-08 00:06:34.305 [INFO][5235] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="5c8f13802683e1366d5ec57dcff6cd78805ca2ccf14c168de9eb3512026d9ce0" iface="eth0" netns="/var/run/netns/cni-a14cddd7-a9d0-740c-dcba-4cdb5d459f26" Nov 8 00:06:34.408058 containerd[2134]: 2025-11-08 00:06:34.306 [INFO][5235] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5c8f13802683e1366d5ec57dcff6cd78805ca2ccf14c168de9eb3512026d9ce0" Nov 8 00:06:34.408058 containerd[2134]: 2025-11-08 00:06:34.306 [INFO][5235] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5c8f13802683e1366d5ec57dcff6cd78805ca2ccf14c168de9eb3512026d9ce0" Nov 8 00:06:34.408058 containerd[2134]: 2025-11-08 00:06:34.371 [INFO][5244] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="5c8f13802683e1366d5ec57dcff6cd78805ca2ccf14c168de9eb3512026d9ce0" HandleID="k8s-pod-network.5c8f13802683e1366d5ec57dcff6cd78805ca2ccf14c168de9eb3512026d9ce0" Workload="ip--172--31--28--187-k8s-calico--apiserver--bc8bf555f--2vp5h-eth0" Nov 8 00:06:34.408058 containerd[2134]: 2025-11-08 00:06:34.371 [INFO][5244] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:06:34.408058 containerd[2134]: 2025-11-08 00:06:34.371 [INFO][5244] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:06:34.408058 containerd[2134]: 2025-11-08 00:06:34.393 [WARNING][5244] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="5c8f13802683e1366d5ec57dcff6cd78805ca2ccf14c168de9eb3512026d9ce0" HandleID="k8s-pod-network.5c8f13802683e1366d5ec57dcff6cd78805ca2ccf14c168de9eb3512026d9ce0" Workload="ip--172--31--28--187-k8s-calico--apiserver--bc8bf555f--2vp5h-eth0" Nov 8 00:06:34.408058 containerd[2134]: 2025-11-08 00:06:34.394 [INFO][5244] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="5c8f13802683e1366d5ec57dcff6cd78805ca2ccf14c168de9eb3512026d9ce0" HandleID="k8s-pod-network.5c8f13802683e1366d5ec57dcff6cd78805ca2ccf14c168de9eb3512026d9ce0" Workload="ip--172--31--28--187-k8s-calico--apiserver--bc8bf555f--2vp5h-eth0" Nov 8 00:06:34.408058 containerd[2134]: 2025-11-08 00:06:34.397 [INFO][5244] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:06:34.408058 containerd[2134]: 2025-11-08 00:06:34.402 [INFO][5235] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5c8f13802683e1366d5ec57dcff6cd78805ca2ccf14c168de9eb3512026d9ce0" Nov 8 00:06:34.411536 containerd[2134]: time="2025-11-08T00:06:34.411462109Z" level=info msg="TearDown network for sandbox \"5c8f13802683e1366d5ec57dcff6cd78805ca2ccf14c168de9eb3512026d9ce0\" successfully" Nov 8 00:06:34.411536 containerd[2134]: time="2025-11-08T00:06:34.411526597Z" level=info msg="StopPodSandbox for \"5c8f13802683e1366d5ec57dcff6cd78805ca2ccf14c168de9eb3512026d9ce0\" returns successfully" Nov 8 00:06:34.413926 containerd[2134]: time="2025-11-08T00:06:34.413861485Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-bc8bf555f-2vp5h,Uid:e2b4786b-bdcd-41e2-8651-d03da4e624c0,Namespace:calico-apiserver,Attempt:1,}" Nov 8 00:06:34.416258 systemd[1]: run-netns-cni\x2da14cddd7\x2da9d0\x2d740c\x2ddcba\x2d4cdb5d459f26.mount: Deactivated successfully. 
Nov 8 00:06:34.639949 systemd-networkd[1688]: cali19953ff7d94: Gained IPv6LL Nov 8 00:06:34.679218 kubelet[3406]: I1108 00:06:34.672616 3406 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-dl6ch" podStartSLOduration=52.672543699 podStartE2EDuration="52.672543699s" podCreationTimestamp="2025-11-08 00:05:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 00:06:34.63633569 +0000 UTC m=+56.762201299" watchObservedRunningTime="2025-11-08 00:06:34.672543699 +0000 UTC m=+56.798409272" Nov 8 00:06:34.790138 systemd-networkd[1688]: cali3cda410b653: Link UP Nov 8 00:06:34.798712 systemd-networkd[1688]: cali3cda410b653: Gained carrier Nov 8 00:06:34.833448 containerd[2134]: 2025-11-08 00:06:34.560 [INFO][5251] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--28--187-k8s-calico--apiserver--bc8bf555f--2vp5h-eth0 calico-apiserver-bc8bf555f- calico-apiserver e2b4786b-bdcd-41e2-8651-d03da4e624c0 985 0 2025-11-08 00:05:55 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:bc8bf555f projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-28-187 calico-apiserver-bc8bf555f-2vp5h eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali3cda410b653 [] [] }} ContainerID="0a8de072feeeb250f677c6b30f3c70e05670d24378e675420ff2f79c4133b246" Namespace="calico-apiserver" Pod="calico-apiserver-bc8bf555f-2vp5h" WorkloadEndpoint="ip--172--31--28--187-k8s-calico--apiserver--bc8bf555f--2vp5h-" Nov 8 00:06:34.833448 containerd[2134]: 2025-11-08 00:06:34.560 [INFO][5251] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0a8de072feeeb250f677c6b30f3c70e05670d24378e675420ff2f79c4133b246" Namespace="calico-apiserver" Pod="calico-apiserver-bc8bf555f-2vp5h" WorkloadEndpoint="ip--172--31--28--187-k8s-calico--apiserver--bc8bf555f--2vp5h-eth0" Nov 8 00:06:34.833448 containerd[2134]: 2025-11-08 00:06:34.658 [INFO][5263] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0a8de072feeeb250f677c6b30f3c70e05670d24378e675420ff2f79c4133b246" HandleID="k8s-pod-network.0a8de072feeeb250f677c6b30f3c70e05670d24378e675420ff2f79c4133b246" Workload="ip--172--31--28--187-k8s-calico--apiserver--bc8bf555f--2vp5h-eth0" Nov 8 00:06:34.833448 containerd[2134]: 2025-11-08 00:06:34.660 [INFO][5263] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="0a8de072feeeb250f677c6b30f3c70e05670d24378e675420ff2f79c4133b246" HandleID="k8s-pod-network.0a8de072feeeb250f677c6b30f3c70e05670d24378e675420ff2f79c4133b246" Workload="ip--172--31--28--187-k8s-calico--apiserver--bc8bf555f--2vp5h-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400030b0d0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-28-187", "pod":"calico-apiserver-bc8bf555f-2vp5h", "timestamp":"2025-11-08 00:06:34.658868678 +0000 UTC"}, Hostname:"ip-172-31-28-187", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 00:06:34.833448 containerd[2134]: 2025-11-08 00:06:34.661 [INFO][5263] ipam/ipam_plugin.go 377: About to acquire 
host-wide IPAM lock. Nov 8 00:06:34.833448 containerd[2134]: 2025-11-08 00:06:34.661 [INFO][5263] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:06:34.833448 containerd[2134]: 2025-11-08 00:06:34.661 [INFO][5263] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-28-187' Nov 8 00:06:34.833448 containerd[2134]: 2025-11-08 00:06:34.704 [INFO][5263] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0a8de072feeeb250f677c6b30f3c70e05670d24378e675420ff2f79c4133b246" host="ip-172-31-28-187" Nov 8 00:06:34.833448 containerd[2134]: 2025-11-08 00:06:34.724 [INFO][5263] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-28-187" Nov 8 00:06:34.833448 containerd[2134]: 2025-11-08 00:06:34.733 [INFO][5263] ipam/ipam.go 511: Trying affinity for 192.168.96.64/26 host="ip-172-31-28-187" Nov 8 00:06:34.833448 containerd[2134]: 2025-11-08 00:06:34.738 [INFO][5263] ipam/ipam.go 158: Attempting to load block cidr=192.168.96.64/26 host="ip-172-31-28-187" Nov 8 00:06:34.833448 containerd[2134]: 2025-11-08 00:06:34.743 [INFO][5263] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.96.64/26 host="ip-172-31-28-187" Nov 8 00:06:34.833448 containerd[2134]: 2025-11-08 00:06:34.743 [INFO][5263] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.96.64/26 handle="k8s-pod-network.0a8de072feeeb250f677c6b30f3c70e05670d24378e675420ff2f79c4133b246" host="ip-172-31-28-187" Nov 8 00:06:34.833448 containerd[2134]: 2025-11-08 00:06:34.747 [INFO][5263] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.0a8de072feeeb250f677c6b30f3c70e05670d24378e675420ff2f79c4133b246 Nov 8 00:06:34.833448 containerd[2134]: 2025-11-08 00:06:34.757 [INFO][5263] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.96.64/26 handle="k8s-pod-network.0a8de072feeeb250f677c6b30f3c70e05670d24378e675420ff2f79c4133b246" host="ip-172-31-28-187" Nov 8 00:06:34.833448 containerd[2134]: 2025-11-08 00:06:34.772 [INFO][5263] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.96.67/26] block=192.168.96.64/26 handle="k8s-pod-network.0a8de072feeeb250f677c6b30f3c70e05670d24378e675420ff2f79c4133b246" host="ip-172-31-28-187" Nov 8 00:06:34.833448 containerd[2134]: 2025-11-08 00:06:34.773 [INFO][5263] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.96.67/26] handle="k8s-pod-network.0a8de072feeeb250f677c6b30f3c70e05670d24378e675420ff2f79c4133b246" host="ip-172-31-28-187" Nov 8 00:06:34.833448 containerd[2134]: 2025-11-08 00:06:34.773 [INFO][5263] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 8 00:06:34.833448 containerd[2134]: 2025-11-08 00:06:34.773 [INFO][5263] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.96.67/26] IPv6=[] ContainerID="0a8de072feeeb250f677c6b30f3c70e05670d24378e675420ff2f79c4133b246" HandleID="k8s-pod-network.0a8de072feeeb250f677c6b30f3c70e05670d24378e675420ff2f79c4133b246" Workload="ip--172--31--28--187-k8s-calico--apiserver--bc8bf555f--2vp5h-eth0" Nov 8 00:06:34.834671 containerd[2134]: 2025-11-08 00:06:34.784 [INFO][5251] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0a8de072feeeb250f677c6b30f3c70e05670d24378e675420ff2f79c4133b246" Namespace="calico-apiserver" Pod="calico-apiserver-bc8bf555f-2vp5h" WorkloadEndpoint="ip--172--31--28--187-k8s-calico--apiserver--bc8bf555f--2vp5h-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--187-k8s-calico--apiserver--bc8bf555f--2vp5h-eth0", GenerateName:"calico-apiserver-bc8bf555f-", Namespace:"calico-apiserver", SelfLink:"", UID:"e2b4786b-bdcd-41e2-8651-d03da4e624c0", ResourceVersion:"985", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 5, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"bc8bf555f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-187", ContainerID:"", Pod:"calico-apiserver-bc8bf555f-2vp5h", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.96.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3cda410b653", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:06:34.834671 containerd[2134]: 2025-11-08 00:06:34.784 [INFO][5251] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.96.67/32] ContainerID="0a8de072feeeb250f677c6b30f3c70e05670d24378e675420ff2f79c4133b246" Namespace="calico-apiserver" Pod="calico-apiserver-bc8bf555f-2vp5h" WorkloadEndpoint="ip--172--31--28--187-k8s-calico--apiserver--bc8bf555f--2vp5h-eth0" Nov 8 00:06:34.834671 containerd[2134]: 2025-11-08 00:06:34.784 [INFO][5251] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3cda410b653 ContainerID="0a8de072feeeb250f677c6b30f3c70e05670d24378e675420ff2f79c4133b246" Namespace="calico-apiserver" Pod="calico-apiserver-bc8bf555f-2vp5h" WorkloadEndpoint="ip--172--31--28--187-k8s-calico--apiserver--bc8bf555f--2vp5h-eth0" Nov 8 00:06:34.834671 containerd[2134]: 2025-11-08 00:06:34.790 [INFO][5251] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0a8de072feeeb250f677c6b30f3c70e05670d24378e675420ff2f79c4133b246" Namespace="calico-apiserver" Pod="calico-apiserver-bc8bf555f-2vp5h" WorkloadEndpoint="ip--172--31--28--187-k8s-calico--apiserver--bc8bf555f--2vp5h-eth0" Nov 8 00:06:34.834671 containerd[2134]: 2025-11-08 00:06:34.791 [INFO][5251] cni-plugin/k8s.go 446: Added Mac, interface name, and active 
container ID to endpoint ContainerID="0a8de072feeeb250f677c6b30f3c70e05670d24378e675420ff2f79c4133b246" Namespace="calico-apiserver" Pod="calico-apiserver-bc8bf555f-2vp5h" WorkloadEndpoint="ip--172--31--28--187-k8s-calico--apiserver--bc8bf555f--2vp5h-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--187-k8s-calico--apiserver--bc8bf555f--2vp5h-eth0", GenerateName:"calico-apiserver-bc8bf555f-", Namespace:"calico-apiserver", SelfLink:"", UID:"e2b4786b-bdcd-41e2-8651-d03da4e624c0", ResourceVersion:"985", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 5, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"bc8bf555f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-187", ContainerID:"0a8de072feeeb250f677c6b30f3c70e05670d24378e675420ff2f79c4133b246", Pod:"calico-apiserver-bc8bf555f-2vp5h", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.96.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3cda410b653", MAC:"56:b1:c5:0e:3f:ac", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:06:34.834671 containerd[2134]: 2025-11-08 00:06:34.829 [INFO][5251] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0a8de072feeeb250f677c6b30f3c70e05670d24378e675420ff2f79c4133b246" Namespace="calico-apiserver" Pod="calico-apiserver-bc8bf555f-2vp5h" WorkloadEndpoint="ip--172--31--28--187-k8s-calico--apiserver--bc8bf555f--2vp5h-eth0" Nov 8 00:06:34.887125 containerd[2134]: time="2025-11-08T00:06:34.886889452Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:06:34.888937 containerd[2134]: time="2025-11-08T00:06:34.887238808Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:06:34.889447 containerd[2134]: time="2025-11-08T00:06:34.888656536Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:06:34.891827 containerd[2134]: time="2025-11-08T00:06:34.891140500Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:06:35.023262 containerd[2134]: time="2025-11-08T00:06:35.023197476Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-bc8bf555f-2vp5h,Uid:e2b4786b-bdcd-41e2-8651-d03da4e624c0,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"0a8de072feeeb250f677c6b30f3c70e05670d24378e675420ff2f79c4133b246\"" Nov 8 00:06:35.026224 containerd[2134]: time="2025-11-08T00:06:35.026010372Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 8 00:06:35.302260 containerd[2134]: time="2025-11-08T00:06:35.302079686Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:06:35.304261 containerd[2134]: time="2025-11-08T00:06:35.304191434Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 8 00:06:35.304364 containerd[2134]: time="2025-11-08T00:06:35.304332770Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 8 00:06:35.304664 kubelet[3406]: E1108 00:06:35.304547 3406 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:06:35.305143 kubelet[3406]: E1108 00:06:35.304668 3406 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:06:35.305143 kubelet[3406]: E1108 00:06:35.304884 3406 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5fj5z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-bc8bf555f-2vp5h_calico-apiserver(e2b4786b-bdcd-41e2-8651-d03da4e624c0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 8 00:06:35.306746 kubelet[3406]: E1108 00:06:35.306665 3406 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-bc8bf555f-2vp5h" podUID="e2b4786b-bdcd-41e2-8651-d03da4e624c0" Nov 8 00:06:35.610200 kubelet[3406]: E1108 00:06:35.609688 3406 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-bc8bf555f-2vp5h" podUID="e2b4786b-bdcd-41e2-8651-d03da4e624c0" Nov 8 00:06:35.983982 systemd-networkd[1688]: cali3cda410b653: Gained IPv6LL Nov 8 00:06:36.083252 containerd[2134]: time="2025-11-08T00:06:36.082079246Z" level=info msg="StopPodSandbox for \"4835cf21f53358b2b2d9fab993d2d36eb58a01accb047d1779856c2347e825e0\"" Nov 8 00:06:36.085611 containerd[2134]: time="2025-11-08T00:06:36.085426562Z" level=info msg="StopPodSandbox for \"4b1d3f1eb4c314c313a97353128b7cd1a9a7cb700dc5ae795475b9e1570e82ea\"" Nov 8 00:06:36.328394 containerd[2134]: 2025-11-08 00:06:36.227 [INFO][5344] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4b1d3f1eb4c314c313a97353128b7cd1a9a7cb700dc5ae795475b9e1570e82ea" Nov 8 00:06:36.328394 containerd[2134]: 2025-11-08 00:06:36.228 [INFO][5344] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="4b1d3f1eb4c314c313a97353128b7cd1a9a7cb700dc5ae795475b9e1570e82ea" iface="eth0" netns="/var/run/netns/cni-34f0cc34-e4dc-e472-f46d-88636caeaf96" Nov 8 00:06:36.328394 containerd[2134]: 2025-11-08 00:06:36.229 [INFO][5344] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="4b1d3f1eb4c314c313a97353128b7cd1a9a7cb700dc5ae795475b9e1570e82ea" iface="eth0" netns="/var/run/netns/cni-34f0cc34-e4dc-e472-f46d-88636caeaf96" Nov 8 00:06:36.328394 containerd[2134]: 2025-11-08 00:06:36.230 [INFO][5344] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="4b1d3f1eb4c314c313a97353128b7cd1a9a7cb700dc5ae795475b9e1570e82ea" iface="eth0" netns="/var/run/netns/cni-34f0cc34-e4dc-e472-f46d-88636caeaf96" Nov 8 00:06:36.328394 containerd[2134]: 2025-11-08 00:06:36.230 [INFO][5344] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4b1d3f1eb4c314c313a97353128b7cd1a9a7cb700dc5ae795475b9e1570e82ea" Nov 8 00:06:36.328394 containerd[2134]: 2025-11-08 00:06:36.230 [INFO][5344] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4b1d3f1eb4c314c313a97353128b7cd1a9a7cb700dc5ae795475b9e1570e82ea" Nov 8 00:06:36.328394 containerd[2134]: 2025-11-08 00:06:36.295 [INFO][5362] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="4b1d3f1eb4c314c313a97353128b7cd1a9a7cb700dc5ae795475b9e1570e82ea" HandleID="k8s-pod-network.4b1d3f1eb4c314c313a97353128b7cd1a9a7cb700dc5ae795475b9e1570e82ea" Workload="ip--172--31--28--187-k8s-calico--kube--controllers--7cd4d69d7c--ptmh4-eth0" Nov 8 00:06:36.328394 containerd[2134]: 2025-11-08 00:06:36.296 [INFO][5362] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:06:36.328394 containerd[2134]: 2025-11-08 00:06:36.296 [INFO][5362] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:06:36.328394 containerd[2134]: 2025-11-08 00:06:36.313 [WARNING][5362] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="4b1d3f1eb4c314c313a97353128b7cd1a9a7cb700dc5ae795475b9e1570e82ea" HandleID="k8s-pod-network.4b1d3f1eb4c314c313a97353128b7cd1a9a7cb700dc5ae795475b9e1570e82ea" Workload="ip--172--31--28--187-k8s-calico--kube--controllers--7cd4d69d7c--ptmh4-eth0" Nov 8 00:06:36.328394 containerd[2134]: 2025-11-08 00:06:36.313 [INFO][5362] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="4b1d3f1eb4c314c313a97353128b7cd1a9a7cb700dc5ae795475b9e1570e82ea" HandleID="k8s-pod-network.4b1d3f1eb4c314c313a97353128b7cd1a9a7cb700dc5ae795475b9e1570e82ea" Workload="ip--172--31--28--187-k8s-calico--kube--controllers--7cd4d69d7c--ptmh4-eth0" Nov 8 00:06:36.328394 containerd[2134]: 2025-11-08 00:06:36.317 [INFO][5362] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:06:36.328394 containerd[2134]: 2025-11-08 00:06:36.321 [INFO][5344] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4b1d3f1eb4c314c313a97353128b7cd1a9a7cb700dc5ae795475b9e1570e82ea" Nov 8 00:06:36.329899 containerd[2134]: time="2025-11-08T00:06:36.329206959Z" level=info msg="TearDown network for sandbox \"4b1d3f1eb4c314c313a97353128b7cd1a9a7cb700dc5ae795475b9e1570e82ea\" successfully" Nov 8 00:06:36.335885 containerd[2134]: time="2025-11-08T00:06:36.331610355Z" level=info msg="StopPodSandbox for \"4b1d3f1eb4c314c313a97353128b7cd1a9a7cb700dc5ae795475b9e1570e82ea\" returns successfully" Nov 8 00:06:36.341617 containerd[2134]: time="2025-11-08T00:06:36.339674547Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7cd4d69d7c-ptmh4,Uid:a118c8b1-dc8a-49b1-956e-fabb0c90510f,Namespace:calico-system,Attempt:1,}" Nov 8 00:06:36.344981 systemd[1]: run-netns-cni\x2d34f0cc34\x2de4dc\x2de472\x2df46d\x2d88636caeaf96.mount: Deactivated successfully. 
Nov 8 00:06:36.374969 containerd[2134]: 2025-11-08 00:06:36.214 [INFO][5343] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4835cf21f53358b2b2d9fab993d2d36eb58a01accb047d1779856c2347e825e0" Nov 8 00:06:36.374969 containerd[2134]: 2025-11-08 00:06:36.215 [INFO][5343] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="4835cf21f53358b2b2d9fab993d2d36eb58a01accb047d1779856c2347e825e0" iface="eth0" netns="/var/run/netns/cni-51bf042c-4842-a7a7-94c0-36c7f067567f" Nov 8 00:06:36.374969 containerd[2134]: 2025-11-08 00:06:36.216 [INFO][5343] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="4835cf21f53358b2b2d9fab993d2d36eb58a01accb047d1779856c2347e825e0" iface="eth0" netns="/var/run/netns/cni-51bf042c-4842-a7a7-94c0-36c7f067567f" Nov 8 00:06:36.374969 containerd[2134]: 2025-11-08 00:06:36.216 [INFO][5343] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="4835cf21f53358b2b2d9fab993d2d36eb58a01accb047d1779856c2347e825e0" iface="eth0" netns="/var/run/netns/cni-51bf042c-4842-a7a7-94c0-36c7f067567f" Nov 8 00:06:36.374969 containerd[2134]: 2025-11-08 00:06:36.216 [INFO][5343] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4835cf21f53358b2b2d9fab993d2d36eb58a01accb047d1779856c2347e825e0" Nov 8 00:06:36.374969 containerd[2134]: 2025-11-08 00:06:36.216 [INFO][5343] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4835cf21f53358b2b2d9fab993d2d36eb58a01accb047d1779856c2347e825e0" Nov 8 00:06:36.374969 containerd[2134]: 2025-11-08 00:06:36.297 [INFO][5357] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="4835cf21f53358b2b2d9fab993d2d36eb58a01accb047d1779856c2347e825e0" HandleID="k8s-pod-network.4835cf21f53358b2b2d9fab993d2d36eb58a01accb047d1779856c2347e825e0" Workload="ip--172--31--28--187-k8s-coredns--668d6bf9bc--z6wmz-eth0" Nov 8 00:06:36.374969 containerd[2134]: 2025-11-08 00:06:36.304 [INFO][5357] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:06:36.374969 containerd[2134]: 2025-11-08 00:06:36.317 [INFO][5357] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:06:36.374969 containerd[2134]: 2025-11-08 00:06:36.359 [WARNING][5357] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="4835cf21f53358b2b2d9fab993d2d36eb58a01accb047d1779856c2347e825e0" HandleID="k8s-pod-network.4835cf21f53358b2b2d9fab993d2d36eb58a01accb047d1779856c2347e825e0" Workload="ip--172--31--28--187-k8s-coredns--668d6bf9bc--z6wmz-eth0" Nov 8 00:06:36.374969 containerd[2134]: 2025-11-08 00:06:36.359 [INFO][5357] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="4835cf21f53358b2b2d9fab993d2d36eb58a01accb047d1779856c2347e825e0" HandleID="k8s-pod-network.4835cf21f53358b2b2d9fab993d2d36eb58a01accb047d1779856c2347e825e0" Workload="ip--172--31--28--187-k8s-coredns--668d6bf9bc--z6wmz-eth0" Nov 8 00:06:36.374969 containerd[2134]: 2025-11-08 00:06:36.365 [INFO][5357] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:06:36.374969 containerd[2134]: 2025-11-08 00:06:36.369 [INFO][5343] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="4835cf21f53358b2b2d9fab993d2d36eb58a01accb047d1779856c2347e825e0" Nov 8 00:06:36.377744 containerd[2134]: time="2025-11-08T00:06:36.375209571Z" level=info msg="TearDown network for sandbox \"4835cf21f53358b2b2d9fab993d2d36eb58a01accb047d1779856c2347e825e0\" successfully" Nov 8 00:06:36.378678 containerd[2134]: time="2025-11-08T00:06:36.376649007Z" level=info msg="StopPodSandbox for \"4835cf21f53358b2b2d9fab993d2d36eb58a01accb047d1779856c2347e825e0\" returns successfully" Nov 8 00:06:36.381596 containerd[2134]: time="2025-11-08T00:06:36.381236019Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-z6wmz,Uid:b6eae301-8fc0-4763-acc1-9e144d4c979d,Namespace:kube-system,Attempt:1,}" Nov 8 00:06:36.386847 systemd[1]: run-netns-cni\x2d51bf042c\x2d4842\x2da7a7\x2d94c0\x2d36c7f067567f.mount: Deactivated successfully. Nov 8 00:06:36.639942 kubelet[3406]: E1108 00:06:36.639326 3406 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-bc8bf555f-2vp5h" podUID="e2b4786b-bdcd-41e2-8651-d03da4e624c0" Nov 8 00:06:36.920897 systemd-networkd[1688]: cali93a991e2a18: Link UP Nov 8 00:06:36.924496 systemd-networkd[1688]: cali93a991e2a18: Gained carrier Nov 8 00:06:36.988187 containerd[2134]: 2025-11-08 00:06:36.611 [INFO][5382] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--28--187-k8s-coredns--668d6bf9bc--z6wmz-eth0 coredns-668d6bf9bc- kube-system b6eae301-8fc0-4763-acc1-9e144d4c979d 1012 0 2025-11-08 00:05:42 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-28-187 coredns-668d6bf9bc-z6wmz eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali93a991e2a18 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="60398f70b9eba52b80312fc090403bc9c1f548cec29af923cf3b92af212f5f66" Namespace="kube-system" Pod="coredns-668d6bf9bc-z6wmz" WorkloadEndpoint="ip--172--31--28--187-k8s-coredns--668d6bf9bc--z6wmz-" Nov 8 00:06:36.988187 containerd[2134]: 2025-11-08 00:06:36.613 [INFO][5382] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="60398f70b9eba52b80312fc090403bc9c1f548cec29af923cf3b92af212f5f66" Namespace="kube-system" Pod="coredns-668d6bf9bc-z6wmz" WorkloadEndpoint="ip--172--31--28--187-k8s-coredns--668d6bf9bc--z6wmz-eth0" Nov 8 00:06:36.988187 containerd[2134]: 2025-11-08 00:06:36.776 [INFO][5401] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="60398f70b9eba52b80312fc090403bc9c1f548cec29af923cf3b92af212f5f66" HandleID="k8s-pod-network.60398f70b9eba52b80312fc090403bc9c1f548cec29af923cf3b92af212f5f66" Workload="ip--172--31--28--187-k8s-coredns--668d6bf9bc--z6wmz-eth0" Nov 8 00:06:36.988187 containerd[2134]: 2025-11-08 00:06:36.779 [INFO][5401] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="60398f70b9eba52b80312fc090403bc9c1f548cec29af923cf3b92af212f5f66" 
HandleID="k8s-pod-network.60398f70b9eba52b80312fc090403bc9c1f548cec29af923cf3b92af212f5f66" Workload="ip--172--31--28--187-k8s-coredns--668d6bf9bc--z6wmz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000363bd0), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-28-187", "pod":"coredns-668d6bf9bc-z6wmz", "timestamp":"2025-11-08 00:06:36.776955125 +0000 UTC"}, Hostname:"ip-172-31-28-187", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 00:06:36.988187 containerd[2134]: 2025-11-08 00:06:36.780 [INFO][5401] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:06:36.988187 containerd[2134]: 2025-11-08 00:06:36.780 [INFO][5401] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:06:36.988187 containerd[2134]: 2025-11-08 00:06:36.784 [INFO][5401] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-28-187' Nov 8 00:06:36.988187 containerd[2134]: 2025-11-08 00:06:36.803 [INFO][5401] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.60398f70b9eba52b80312fc090403bc9c1f548cec29af923cf3b92af212f5f66" host="ip-172-31-28-187" Nov 8 00:06:36.988187 containerd[2134]: 2025-11-08 00:06:36.814 [INFO][5401] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-28-187" Nov 8 00:06:36.988187 containerd[2134]: 2025-11-08 00:06:36.824 [INFO][5401] ipam/ipam.go 511: Trying affinity for 192.168.96.64/26 host="ip-172-31-28-187" Nov 8 00:06:36.988187 containerd[2134]: 2025-11-08 00:06:36.829 [INFO][5401] ipam/ipam.go 158: Attempting to load block cidr=192.168.96.64/26 host="ip-172-31-28-187" Nov 8 00:06:36.988187 containerd[2134]: 2025-11-08 00:06:36.834 [INFO][5401] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.96.64/26 host="ip-172-31-28-187" Nov 8 00:06:36.988187 containerd[2134]: 2025-11-08 00:06:36.835 [INFO][5401] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.96.64/26 handle="k8s-pod-network.60398f70b9eba52b80312fc090403bc9c1f548cec29af923cf3b92af212f5f66" host="ip-172-31-28-187" Nov 8 00:06:36.988187 containerd[2134]: 2025-11-08 00:06:36.845 [INFO][5401] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.60398f70b9eba52b80312fc090403bc9c1f548cec29af923cf3b92af212f5f66 Nov 8 00:06:36.988187 containerd[2134]: 2025-11-08 00:06:36.865 [INFO][5401] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.96.64/26 handle="k8s-pod-network.60398f70b9eba52b80312fc090403bc9c1f548cec29af923cf3b92af212f5f66" host="ip-172-31-28-187" Nov 8 00:06:36.988187 containerd[2134]: 2025-11-08 00:06:36.904 [INFO][5401] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.96.68/26] block=192.168.96.64/26 handle="k8s-pod-network.60398f70b9eba52b80312fc090403bc9c1f548cec29af923cf3b92af212f5f66" host="ip-172-31-28-187" Nov 8 00:06:36.988187 containerd[2134]: 2025-11-08 00:06:36.904 [INFO][5401] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.96.68/26] handle="k8s-pod-network.60398f70b9eba52b80312fc090403bc9c1f548cec29af923cf3b92af212f5f66" host="ip-172-31-28-187" Nov 8 00:06:36.988187 containerd[2134]: 2025-11-08 00:06:36.904 [INFO][5401] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 8 00:06:36.988187 containerd[2134]: 2025-11-08 00:06:36.904 [INFO][5401] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.96.68/26] IPv6=[] ContainerID="60398f70b9eba52b80312fc090403bc9c1f548cec29af923cf3b92af212f5f66" HandleID="k8s-pod-network.60398f70b9eba52b80312fc090403bc9c1f548cec29af923cf3b92af212f5f66" Workload="ip--172--31--28--187-k8s-coredns--668d6bf9bc--z6wmz-eth0" Nov 8 00:06:36.996842 containerd[2134]: 2025-11-08 00:06:36.910 [INFO][5382] cni-plugin/k8s.go 418: Populated endpoint ContainerID="60398f70b9eba52b80312fc090403bc9c1f548cec29af923cf3b92af212f5f66" Namespace="kube-system" Pod="coredns-668d6bf9bc-z6wmz" WorkloadEndpoint="ip--172--31--28--187-k8s-coredns--668d6bf9bc--z6wmz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--187-k8s-coredns--668d6bf9bc--z6wmz-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"b6eae301-8fc0-4763-acc1-9e144d4c979d", ResourceVersion:"1012", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 5, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-187", ContainerID:"", Pod:"coredns-668d6bf9bc-z6wmz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.96.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali93a991e2a18", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:06:36.996842 containerd[2134]: 2025-11-08 00:06:36.910 [INFO][5382] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.96.68/32] ContainerID="60398f70b9eba52b80312fc090403bc9c1f548cec29af923cf3b92af212f5f66" Namespace="kube-system" Pod="coredns-668d6bf9bc-z6wmz" WorkloadEndpoint="ip--172--31--28--187-k8s-coredns--668d6bf9bc--z6wmz-eth0" Nov 8 00:06:36.996842 containerd[2134]: 2025-11-08 00:06:36.910 [INFO][5382] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali93a991e2a18 ContainerID="60398f70b9eba52b80312fc090403bc9c1f548cec29af923cf3b92af212f5f66" Namespace="kube-system" Pod="coredns-668d6bf9bc-z6wmz" WorkloadEndpoint="ip--172--31--28--187-k8s-coredns--668d6bf9bc--z6wmz-eth0" Nov 8 00:06:36.996842 containerd[2134]: 2025-11-08 00:06:36.928 [INFO][5382] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="60398f70b9eba52b80312fc090403bc9c1f548cec29af923cf3b92af212f5f66" Namespace="kube-system" Pod="coredns-668d6bf9bc-z6wmz" 
WorkloadEndpoint="ip--172--31--28--187-k8s-coredns--668d6bf9bc--z6wmz-eth0" Nov 8 00:06:36.996842 containerd[2134]: 2025-11-08 00:06:36.929 [INFO][5382] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="60398f70b9eba52b80312fc090403bc9c1f548cec29af923cf3b92af212f5f66" Namespace="kube-system" Pod="coredns-668d6bf9bc-z6wmz" WorkloadEndpoint="ip--172--31--28--187-k8s-coredns--668d6bf9bc--z6wmz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--187-k8s-coredns--668d6bf9bc--z6wmz-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"b6eae301-8fc0-4763-acc1-9e144d4c979d", ResourceVersion:"1012", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 5, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-187", ContainerID:"60398f70b9eba52b80312fc090403bc9c1f548cec29af923cf3b92af212f5f66", Pod:"coredns-668d6bf9bc-z6wmz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.96.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali93a991e2a18", MAC:"ca:70:37:cd:52:37", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:06:36.996842 containerd[2134]: 2025-11-08 00:06:36.965 [INFO][5382] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="60398f70b9eba52b80312fc090403bc9c1f548cec29af923cf3b92af212f5f66" Namespace="kube-system" Pod="coredns-668d6bf9bc-z6wmz" WorkloadEndpoint="ip--172--31--28--187-k8s-coredns--668d6bf9bc--z6wmz-eth0" Nov 8 00:06:37.030501 systemd[1]: Started sshd@7-172.31.28.187:22-139.178.89.65:59336.service - OpenSSH per-connection server daemon (139.178.89.65:59336). Nov 8 00:06:37.093721 containerd[2134]: time="2025-11-08T00:06:37.092861475Z" level=info msg="StopPodSandbox for \"14a1598011a94b1ca81f2a89c3a015fe1c892256ce8b0e3bd0ecd88942207366\"" Nov 8 00:06:37.149901 systemd-networkd[1688]: calicc988add759: Link UP Nov 8 00:06:37.152956 systemd-networkd[1688]: calicc988add759: Gained carrier Nov 8 00:06:37.173620 containerd[2134]: time="2025-11-08T00:06:37.173316903Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:06:37.175375 containerd[2134]: time="2025-11-08T00:06:37.174167679Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:06:37.175375 containerd[2134]: time="2025-11-08T00:06:37.174262863Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:06:37.175375 containerd[2134]: time="2025-11-08T00:06:37.174482379Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:06:37.235406 containerd[2134]: 2025-11-08 00:06:36.580 [INFO][5373] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--28--187-k8s-calico--kube--controllers--7cd4d69d7c--ptmh4-eth0 calico-kube-controllers-7cd4d69d7c- calico-system a118c8b1-dc8a-49b1-956e-fabb0c90510f 1013 0 2025-11-08 00:06:11 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7cd4d69d7c projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ip-172-31-28-187 calico-kube-controllers-7cd4d69d7c-ptmh4 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calicc988add759 [] [] }} ContainerID="91923fc64ec71725d40e47042e59a7a0503fc5993259bb5c12c37cc4261c84f6" Namespace="calico-system" Pod="calico-kube-controllers-7cd4d69d7c-ptmh4" WorkloadEndpoint="ip--172--31--28--187-k8s-calico--kube--controllers--7cd4d69d7c--ptmh4-" Nov 8 00:06:37.235406 containerd[2134]: 2025-11-08 00:06:36.581 [INFO][5373] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="91923fc64ec71725d40e47042e59a7a0503fc5993259bb5c12c37cc4261c84f6" Namespace="calico-system" Pod="calico-kube-controllers-7cd4d69d7c-ptmh4" WorkloadEndpoint="ip--172--31--28--187-k8s-calico--kube--controllers--7cd4d69d7c--ptmh4-eth0" Nov 8 00:06:37.235406 containerd[2134]: 2025-11-08 00:06:36.790 [INFO][5396] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="91923fc64ec71725d40e47042e59a7a0503fc5993259bb5c12c37cc4261c84f6" HandleID="k8s-pod-network.91923fc64ec71725d40e47042e59a7a0503fc5993259bb5c12c37cc4261c84f6" Workload="ip--172--31--28--187-k8s-calico--kube--controllers--7cd4d69d7c--ptmh4-eth0" Nov 8 00:06:37.235406 containerd[2134]: 2025-11-08 00:06:36.792 [INFO][5396] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="91923fc64ec71725d40e47042e59a7a0503fc5993259bb5c12c37cc4261c84f6" HandleID="k8s-pod-network.91923fc64ec71725d40e47042e59a7a0503fc5993259bb5c12c37cc4261c84f6" Workload="ip--172--31--28--187-k8s-calico--kube--controllers--7cd4d69d7c--ptmh4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40001238b0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-28-187", "pod":"calico-kube-controllers-7cd4d69d7c-ptmh4", "timestamp":"2025-11-08 00:06:36.790023101 +0000 UTC"}, Hostname:"ip-172-31-28-187", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 00:06:37.235406 containerd[2134]: 2025-11-08 00:06:36.792 [INFO][5396] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:06:37.235406 containerd[2134]: 2025-11-08 00:06:36.905 [INFO][5396] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 8 00:06:37.235406 containerd[2134]: 2025-11-08 00:06:36.905 [INFO][5396] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-28-187' Nov 8 00:06:37.235406 containerd[2134]: 2025-11-08 00:06:36.955 [INFO][5396] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.91923fc64ec71725d40e47042e59a7a0503fc5993259bb5c12c37cc4261c84f6" host="ip-172-31-28-187" Nov 8 00:06:37.235406 containerd[2134]: 2025-11-08 00:06:37.004 [INFO][5396] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-28-187" Nov 8 00:06:37.235406 containerd[2134]: 2025-11-08 00:06:37.032 [INFO][5396] ipam/ipam.go 511: Trying affinity for 192.168.96.64/26 host="ip-172-31-28-187" Nov 8 00:06:37.235406 containerd[2134]: 2025-11-08 00:06:37.046 [INFO][5396] ipam/ipam.go 158: Attempting to load block cidr=192.168.96.64/26 host="ip-172-31-28-187" Nov 8 00:06:37.235406 containerd[2134]: 2025-11-08 00:06:37.062 [INFO][5396] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.96.64/26 host="ip-172-31-28-187" Nov 8 00:06:37.235406 containerd[2134]: 2025-11-08 00:06:37.062 [INFO][5396] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.96.64/26 handle="k8s-pod-network.91923fc64ec71725d40e47042e59a7a0503fc5993259bb5c12c37cc4261c84f6" host="ip-172-31-28-187" Nov 8 00:06:37.235406 containerd[2134]: 2025-11-08 00:06:37.073 [INFO][5396] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.91923fc64ec71725d40e47042e59a7a0503fc5993259bb5c12c37cc4261c84f6 Nov 8 00:06:37.235406 containerd[2134]: 2025-11-08 00:06:37.091 [INFO][5396] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.96.64/26 handle="k8s-pod-network.91923fc64ec71725d40e47042e59a7a0503fc5993259bb5c12c37cc4261c84f6" host="ip-172-31-28-187" Nov 8 00:06:37.235406 containerd[2134]: 2025-11-08 00:06:37.127 [INFO][5396] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.96.69/26] block=192.168.96.64/26 handle="k8s-pod-network.91923fc64ec71725d40e47042e59a7a0503fc5993259bb5c12c37cc4261c84f6" host="ip-172-31-28-187" Nov 8 00:06:37.235406 containerd[2134]: 2025-11-08 00:06:37.127 [INFO][5396] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.96.69/26] handle="k8s-pod-network.91923fc64ec71725d40e47042e59a7a0503fc5993259bb5c12c37cc4261c84f6" host="ip-172-31-28-187" Nov 8 00:06:37.235406 containerd[2134]: 2025-11-08 00:06:37.127 [INFO][5396] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 8 00:06:37.235406 containerd[2134]: 2025-11-08 00:06:37.127 [INFO][5396] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.96.69/26] IPv6=[] ContainerID="91923fc64ec71725d40e47042e59a7a0503fc5993259bb5c12c37cc4261c84f6" HandleID="k8s-pod-network.91923fc64ec71725d40e47042e59a7a0503fc5993259bb5c12c37cc4261c84f6" Workload="ip--172--31--28--187-k8s-calico--kube--controllers--7cd4d69d7c--ptmh4-eth0" Nov 8 00:06:37.238736 containerd[2134]: 2025-11-08 00:06:37.139 [INFO][5373] cni-plugin/k8s.go 418: Populated endpoint ContainerID="91923fc64ec71725d40e47042e59a7a0503fc5993259bb5c12c37cc4261c84f6" Namespace="calico-system" Pod="calico-kube-controllers-7cd4d69d7c-ptmh4" WorkloadEndpoint="ip--172--31--28--187-k8s-calico--kube--controllers--7cd4d69d7c--ptmh4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--187-k8s-calico--kube--controllers--7cd4d69d7c--ptmh4-eth0", GenerateName:"calico-kube-controllers-7cd4d69d7c-", Namespace:"calico-system", SelfLink:"", UID:"a118c8b1-dc8a-49b1-956e-fabb0c90510f", ResourceVersion:"1013", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 6, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7cd4d69d7c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-187", ContainerID:"", Pod:"calico-kube-controllers-7cd4d69d7c-ptmh4", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.96.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calicc988add759", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:06:37.238736 containerd[2134]: 2025-11-08 00:06:37.140 [INFO][5373] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.96.69/32] ContainerID="91923fc64ec71725d40e47042e59a7a0503fc5993259bb5c12c37cc4261c84f6" Namespace="calico-system" Pod="calico-kube-controllers-7cd4d69d7c-ptmh4" WorkloadEndpoint="ip--172--31--28--187-k8s-calico--kube--controllers--7cd4d69d7c--ptmh4-eth0" Nov 8 00:06:37.238736 containerd[2134]: 2025-11-08 00:06:37.141 [INFO][5373] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calicc988add759 ContainerID="91923fc64ec71725d40e47042e59a7a0503fc5993259bb5c12c37cc4261c84f6" Namespace="calico-system" Pod="calico-kube-controllers-7cd4d69d7c-ptmh4" WorkloadEndpoint="ip--172--31--28--187-k8s-calico--kube--controllers--7cd4d69d7c--ptmh4-eth0" Nov 8 00:06:37.238736 containerd[2134]: 2025-11-08 00:06:37.153 [INFO][5373] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="91923fc64ec71725d40e47042e59a7a0503fc5993259bb5c12c37cc4261c84f6" Namespace="calico-system" Pod="calico-kube-controllers-7cd4d69d7c-ptmh4" WorkloadEndpoint="ip--172--31--28--187-k8s-calico--kube--controllers--7cd4d69d7c--ptmh4-eth0" Nov 8 00:06:37.238736 containerd[2134]: 
2025-11-08 00:06:37.156 [INFO][5373] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="91923fc64ec71725d40e47042e59a7a0503fc5993259bb5c12c37cc4261c84f6" Namespace="calico-system" Pod="calico-kube-controllers-7cd4d69d7c-ptmh4" WorkloadEndpoint="ip--172--31--28--187-k8s-calico--kube--controllers--7cd4d69d7c--ptmh4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--187-k8s-calico--kube--controllers--7cd4d69d7c--ptmh4-eth0", GenerateName:"calico-kube-controllers-7cd4d69d7c-", Namespace:"calico-system", SelfLink:"", UID:"a118c8b1-dc8a-49b1-956e-fabb0c90510f", ResourceVersion:"1013", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 6, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7cd4d69d7c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-187", ContainerID:"91923fc64ec71725d40e47042e59a7a0503fc5993259bb5c12c37cc4261c84f6", Pod:"calico-kube-controllers-7cd4d69d7c-ptmh4", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.96.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calicc988add759", MAC:"ae:ec:5d:6b:30:63", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:06:37.238736 containerd[2134]: 2025-11-08 00:06:37.190 [INFO][5373] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="91923fc64ec71725d40e47042e59a7a0503fc5993259bb5c12c37cc4261c84f6" Namespace="calico-system" Pod="calico-kube-controllers-7cd4d69d7c-ptmh4" WorkloadEndpoint="ip--172--31--28--187-k8s-calico--kube--controllers--7cd4d69d7c--ptmh4-eth0" Nov 8 00:06:37.354398 sshd[5421]: Accepted publickey for core from 139.178.89.65 port 59336 ssh2: RSA SHA256:tnEXpnDY8gLTej7GJ+T99WI4otIwvlI9IcMNDF42aqw Nov 8 00:06:37.363184 sshd[5421]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:06:37.382920 systemd-logind[2107]: New session 8 of user core. Nov 8 00:06:37.391174 systemd[1]: Started session-8.scope - Session 8 of User core. Nov 8 00:06:37.401437 containerd[2134]: time="2025-11-08T00:06:37.391666732Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:06:37.401437 containerd[2134]: time="2025-11-08T00:06:37.392229148Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:06:37.401437 containerd[2134]: time="2025-11-08T00:06:37.393717616Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:06:37.411523 containerd[2134]: time="2025-11-08T00:06:37.410642440Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:06:37.563791 containerd[2134]: time="2025-11-08T00:06:37.562077833Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-z6wmz,Uid:b6eae301-8fc0-4763-acc1-9e144d4c979d,Namespace:kube-system,Attempt:1,} returns sandbox id \"60398f70b9eba52b80312fc090403bc9c1f548cec29af923cf3b92af212f5f66\"" Nov 8 00:06:37.604878 systemd[1]: run-containerd-runc-k8s.io-91923fc64ec71725d40e47042e59a7a0503fc5993259bb5c12c37cc4261c84f6-runc.DhrDFV.mount: Deactivated successfully. Nov 8 00:06:37.612518 containerd[2134]: time="2025-11-08T00:06:37.609975689Z" level=info msg="CreateContainer within sandbox \"60398f70b9eba52b80312fc090403bc9c1f548cec29af923cf3b92af212f5f66\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 8 00:06:37.755800 containerd[2134]: time="2025-11-08T00:06:37.755710878Z" level=info msg="CreateContainer within sandbox \"60398f70b9eba52b80312fc090403bc9c1f548cec29af923cf3b92af212f5f66\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"3e0392ee764a9892c72b4733f8035275b9d128a9136f7a5db1104d9d7aad6874\"" Nov 8 00:06:37.758464 containerd[2134]: time="2025-11-08T00:06:37.758067702Z" level=info msg="StartContainer for \"3e0392ee764a9892c72b4733f8035275b9d128a9136f7a5db1104d9d7aad6874\"" Nov 8 00:06:37.902589 containerd[2134]: 2025-11-08 00:06:37.501 [INFO][5447] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="14a1598011a94b1ca81f2a89c3a015fe1c892256ce8b0e3bd0ecd88942207366" Nov 8 00:06:37.902589 containerd[2134]: 2025-11-08 00:06:37.504 [INFO][5447] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="14a1598011a94b1ca81f2a89c3a015fe1c892256ce8b0e3bd0ecd88942207366" iface="eth0" netns="/var/run/netns/cni-04b35940-8f03-e0e2-d78d-54281384df8d" Nov 8 00:06:37.902589 containerd[2134]: 2025-11-08 00:06:37.506 [INFO][5447] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="14a1598011a94b1ca81f2a89c3a015fe1c892256ce8b0e3bd0ecd88942207366" iface="eth0" netns="/var/run/netns/cni-04b35940-8f03-e0e2-d78d-54281384df8d" Nov 8 00:06:37.902589 containerd[2134]: 2025-11-08 00:06:37.507 [INFO][5447] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="14a1598011a94b1ca81f2a89c3a015fe1c892256ce8b0e3bd0ecd88942207366" iface="eth0" netns="/var/run/netns/cni-04b35940-8f03-e0e2-d78d-54281384df8d" Nov 8 00:06:37.902589 containerd[2134]: 2025-11-08 00:06:37.507 [INFO][5447] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="14a1598011a94b1ca81f2a89c3a015fe1c892256ce8b0e3bd0ecd88942207366" Nov 8 00:06:37.902589 containerd[2134]: 2025-11-08 00:06:37.507 [INFO][5447] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="14a1598011a94b1ca81f2a89c3a015fe1c892256ce8b0e3bd0ecd88942207366" Nov 8 00:06:37.902589 containerd[2134]: 2025-11-08 00:06:37.805 [INFO][5515] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="14a1598011a94b1ca81f2a89c3a015fe1c892256ce8b0e3bd0ecd88942207366" HandleID="k8s-pod-network.14a1598011a94b1ca81f2a89c3a015fe1c892256ce8b0e3bd0ecd88942207366" Workload="ip--172--31--28--187-k8s-goldmane--666569f655--qnsdl-eth0" Nov 8 00:06:37.902589 containerd[2134]: 2025-11-08 00:06:37.807 [INFO][5515] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:06:37.902589 containerd[2134]: 2025-11-08 00:06:37.809 [INFO][5515] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:06:37.902589 containerd[2134]: 2025-11-08 00:06:37.857 [WARNING][5515] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="14a1598011a94b1ca81f2a89c3a015fe1c892256ce8b0e3bd0ecd88942207366" HandleID="k8s-pod-network.14a1598011a94b1ca81f2a89c3a015fe1c892256ce8b0e3bd0ecd88942207366" Workload="ip--172--31--28--187-k8s-goldmane--666569f655--qnsdl-eth0" Nov 8 00:06:37.902589 containerd[2134]: 2025-11-08 00:06:37.857 [INFO][5515] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="14a1598011a94b1ca81f2a89c3a015fe1c892256ce8b0e3bd0ecd88942207366" HandleID="k8s-pod-network.14a1598011a94b1ca81f2a89c3a015fe1c892256ce8b0e3bd0ecd88942207366" Workload="ip--172--31--28--187-k8s-goldmane--666569f655--qnsdl-eth0" Nov 8 00:06:37.902589 containerd[2134]: 2025-11-08 00:06:37.865 [INFO][5515] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:06:37.902589 containerd[2134]: 2025-11-08 00:06:37.877 [INFO][5447] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="14a1598011a94b1ca81f2a89c3a015fe1c892256ce8b0e3bd0ecd88942207366" Nov 8 00:06:37.902589 containerd[2134]: time="2025-11-08T00:06:37.900464263Z" level=info msg="TearDown network for sandbox \"14a1598011a94b1ca81f2a89c3a015fe1c892256ce8b0e3bd0ecd88942207366\" successfully" Nov 8 00:06:37.902589 containerd[2134]: time="2025-11-08T00:06:37.900617779Z" level=info msg="StopPodSandbox for \"14a1598011a94b1ca81f2a89c3a015fe1c892256ce8b0e3bd0ecd88942207366\" returns successfully" Nov 8 00:06:37.919244 containerd[2134]: time="2025-11-08T00:06:37.915255427Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-qnsdl,Uid:19e663c5-ada4-41f4-b329-6d803ea3d32d,Namespace:calico-system,Attempt:1,}" Nov 8 00:06:38.035231 containerd[2134]: time="2025-11-08T00:06:38.034938927Z" level=info msg="StartContainer for \"3e0392ee764a9892c72b4733f8035275b9d128a9136f7a5db1104d9d7aad6874\" returns successfully" Nov 8 00:06:38.181900 sshd[5421]: pam_unix(sshd:session): session closed for user core Nov 8 00:06:38.189071 containerd[2134]: time="2025-11-08T00:06:38.184065040Z" level=info msg="StopPodSandbox for \"05b8c3f4fa6288d8a44812e6cc300644f76c40c449c49ab4197cdf6f8cfd1298\"" Nov 8 00:06:38.213406 containerd[2134]: time="2025-11-08T00:06:38.212634220Z" level=info msg="StopPodSandbox for \"2c3bb45ee9f12b87e3a609ea8b6541c8cd9a72a4111e59abc3de86b887d9062d\"" Nov 8 00:06:38.215106 systemd[1]: sshd@7-172.31.28.187:22-139.178.89.65:59336.service: Deactivated successfully. Nov 8 00:06:38.235866 systemd[1]: session-8.scope: Deactivated successfully. Nov 8 00:06:38.243181 systemd-logind[2107]: Session 8 logged out. Waiting for processes to exit. Nov 8 00:06:38.253904 systemd-logind[2107]: Removed session 8. Nov 8 00:06:38.295531 containerd[2134]: time="2025-11-08T00:06:38.294871265Z" level=info msg="StopPodSandbox for \"b1436a69cc8d94184f44bb494c6c75fe99b97fcdf613cba5677f0de6616139b7\"" Nov 8 00:06:38.327113 containerd[2134]: time="2025-11-08T00:06:38.326893361Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7cd4d69d7c-ptmh4,Uid:a118c8b1-dc8a-49b1-956e-fabb0c90510f,Namespace:calico-system,Attempt:1,} returns sandbox id \"91923fc64ec71725d40e47042e59a7a0503fc5993259bb5c12c37cc4261c84f6\"" Nov 8 00:06:38.352632 systemd-networkd[1688]: cali93a991e2a18: Gained IPv6LL Nov 8 00:06:38.378300 systemd[1]: run-netns-cni\x2d04b35940\x2d8f03\x2de0e2\x2dd78d\x2d54281384df8d.mount: Deactivated successfully. 
Nov 8 00:06:38.393537 containerd[2134]: time="2025-11-08T00:06:38.393474425Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 8 00:06:38.728728 containerd[2134]: time="2025-11-08T00:06:38.728674855Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:06:38.750258 containerd[2134]: time="2025-11-08T00:06:38.747517399Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 8 00:06:38.750258 containerd[2134]: time="2025-11-08T00:06:38.747693883Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 8 00:06:38.778709 kubelet[3406]: E1108 00:06:38.776072 3406 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 8 00:06:38.778709 kubelet[3406]: E1108 00:06:38.777257 3406 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 8 00:06:38.795238 kubelet[3406]: E1108 00:06:38.792141 3406 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-twthk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-7cd4d69d7c-ptmh4_calico-system(a118c8b1-dc8a-49b1-956e-fabb0c90510f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 8 00:06:38.798692 kubelet[3406]: E1108 00:06:38.798162 3406 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7cd4d69d7c-ptmh4" podUID="a118c8b1-dc8a-49b1-956e-fabb0c90510f" Nov 8 00:06:38.906952 kubelet[3406]: I1108 00:06:38.906694 3406 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-z6wmz" podStartSLOduration=56.906669488 podStartE2EDuration="56.906669488s" podCreationTimestamp="2025-11-08 00:05:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 00:06:38.899358056 +0000 UTC m=+61.025223641" watchObservedRunningTime="2025-11-08 00:06:38.906669488 +0000 UTC m=+61.032535061" Nov 8 00:06:38.926838 systemd-resolved[2026]: Under memory pressure, flushing caches. Nov 8 00:06:38.931435 systemd-journald[1615]: Under memory pressure, flushing caches. Nov 8 00:06:38.926909 systemd-resolved[2026]: Flushed all caches. Nov 8 00:06:38.987221 containerd[2134]: 2025-11-08 00:06:38.751 [INFO][5618] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="05b8c3f4fa6288d8a44812e6cc300644f76c40c449c49ab4197cdf6f8cfd1298" Nov 8 00:06:38.987221 containerd[2134]: 2025-11-08 00:06:38.777 [INFO][5618] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="05b8c3f4fa6288d8a44812e6cc300644f76c40c449c49ab4197cdf6f8cfd1298" iface="eth0" netns="/var/run/netns/cni-60ab7070-3dc0-6f9b-5636-01d0c47a89b7" Nov 8 00:06:38.987221 containerd[2134]: 2025-11-08 00:06:38.779 [INFO][5618] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="05b8c3f4fa6288d8a44812e6cc300644f76c40c449c49ab4197cdf6f8cfd1298" iface="eth0" netns="/var/run/netns/cni-60ab7070-3dc0-6f9b-5636-01d0c47a89b7" Nov 8 00:06:38.987221 containerd[2134]: 2025-11-08 00:06:38.780 [INFO][5618] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="05b8c3f4fa6288d8a44812e6cc300644f76c40c449c49ab4197cdf6f8cfd1298" iface="eth0" netns="/var/run/netns/cni-60ab7070-3dc0-6f9b-5636-01d0c47a89b7" Nov 8 00:06:38.987221 containerd[2134]: 2025-11-08 00:06:38.780 [INFO][5618] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="05b8c3f4fa6288d8a44812e6cc300644f76c40c449c49ab4197cdf6f8cfd1298" Nov 8 00:06:38.987221 containerd[2134]: 2025-11-08 00:06:38.780 [INFO][5618] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="05b8c3f4fa6288d8a44812e6cc300644f76c40c449c49ab4197cdf6f8cfd1298" Nov 8 00:06:38.987221 containerd[2134]: 2025-11-08 00:06:38.886 [INFO][5657] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="05b8c3f4fa6288d8a44812e6cc300644f76c40c449c49ab4197cdf6f8cfd1298" HandleID="k8s-pod-network.05b8c3f4fa6288d8a44812e6cc300644f76c40c449c49ab4197cdf6f8cfd1298" Workload="ip--172--31--28--187-k8s-csi--node--driver--tw22z-eth0" Nov 8 00:06:38.987221 containerd[2134]: 2025-11-08 00:06:38.887 [INFO][5657] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:06:38.987221 containerd[2134]: 2025-11-08 00:06:38.887 [INFO][5657] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:06:38.987221 containerd[2134]: 2025-11-08 00:06:38.927 [WARNING][5657] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="05b8c3f4fa6288d8a44812e6cc300644f76c40c449c49ab4197cdf6f8cfd1298" HandleID="k8s-pod-network.05b8c3f4fa6288d8a44812e6cc300644f76c40c449c49ab4197cdf6f8cfd1298" Workload="ip--172--31--28--187-k8s-csi--node--driver--tw22z-eth0" Nov 8 00:06:38.987221 containerd[2134]: 2025-11-08 00:06:38.927 [INFO][5657] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="05b8c3f4fa6288d8a44812e6cc300644f76c40c449c49ab4197cdf6f8cfd1298" HandleID="k8s-pod-network.05b8c3f4fa6288d8a44812e6cc300644f76c40c449c49ab4197cdf6f8cfd1298" Workload="ip--172--31--28--187-k8s-csi--node--driver--tw22z-eth0" Nov 8 00:06:38.987221 containerd[2134]: 2025-11-08 00:06:38.944 [INFO][5657] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:06:38.987221 containerd[2134]: 2025-11-08 00:06:38.965 [INFO][5618] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="05b8c3f4fa6288d8a44812e6cc300644f76c40c449c49ab4197cdf6f8cfd1298" Nov 8 00:06:38.997224 systemd[1]: run-netns-cni\x2d60ab7070\x2d3dc0\x2d6f9b\x2d5636\x2d01d0c47a89b7.mount: Deactivated successfully. 
Nov 8 00:06:38.999223 containerd[2134]: time="2025-11-08T00:06:38.998259368Z" level=info msg="TearDown network for sandbox \"05b8c3f4fa6288d8a44812e6cc300644f76c40c449c49ab4197cdf6f8cfd1298\" successfully" Nov 8 00:06:38.999223 containerd[2134]: time="2025-11-08T00:06:38.998309756Z" level=info msg="StopPodSandbox for \"05b8c3f4fa6288d8a44812e6cc300644f76c40c449c49ab4197cdf6f8cfd1298\" returns successfully" Nov 8 00:06:39.007399 containerd[2134]: time="2025-11-08T00:06:39.006879208Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-tw22z,Uid:5962793e-cd47-45ea-84d0-190de5cbdb54,Namespace:calico-system,Attempt:1,}" Nov 8 00:06:39.121599 systemd-networkd[1688]: calicc988add759: Gained IPv6LL Nov 8 00:06:39.487579 systemd-networkd[1688]: caliadd8ceeaf7f: Link UP Nov 8 00:06:39.494538 systemd-networkd[1688]: caliadd8ceeaf7f: Gained carrier Nov 8 00:06:39.504122 containerd[2134]: 2025-11-08 00:06:38.918 [WARNING][5635] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="b1436a69cc8d94184f44bb494c6c75fe99b97fcdf613cba5677f0de6616139b7" WorkloadEndpoint="ip--172--31--28--187-k8s-whisker--8d85fcbf5--tlmjx-eth0" Nov 8 00:06:39.504122 containerd[2134]: 2025-11-08 00:06:38.929 [INFO][5635] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b1436a69cc8d94184f44bb494c6c75fe99b97fcdf613cba5677f0de6616139b7" Nov 8 00:06:39.504122 containerd[2134]: 2025-11-08 00:06:38.936 [INFO][5635] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b1436a69cc8d94184f44bb494c6c75fe99b97fcdf613cba5677f0de6616139b7" iface="eth0" netns="" Nov 8 00:06:39.504122 containerd[2134]: 2025-11-08 00:06:38.936 [INFO][5635] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b1436a69cc8d94184f44bb494c6c75fe99b97fcdf613cba5677f0de6616139b7" Nov 8 00:06:39.504122 containerd[2134]: 2025-11-08 00:06:38.936 [INFO][5635] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b1436a69cc8d94184f44bb494c6c75fe99b97fcdf613cba5677f0de6616139b7" Nov 8 00:06:39.504122 containerd[2134]: 2025-11-08 00:06:39.403 [INFO][5665] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="b1436a69cc8d94184f44bb494c6c75fe99b97fcdf613cba5677f0de6616139b7" HandleID="k8s-pod-network.b1436a69cc8d94184f44bb494c6c75fe99b97fcdf613cba5677f0de6616139b7" Workload="ip--172--31--28--187-k8s-whisker--8d85fcbf5--tlmjx-eth0" Nov 8 00:06:39.504122 containerd[2134]: 2025-11-08 00:06:39.404 [INFO][5665] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:06:39.504122 containerd[2134]: 2025-11-08 00:06:39.404 [INFO][5665] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:06:39.504122 containerd[2134]: 2025-11-08 00:06:39.436 [WARNING][5665] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b1436a69cc8d94184f44bb494c6c75fe99b97fcdf613cba5677f0de6616139b7" HandleID="k8s-pod-network.b1436a69cc8d94184f44bb494c6c75fe99b97fcdf613cba5677f0de6616139b7" Workload="ip--172--31--28--187-k8s-whisker--8d85fcbf5--tlmjx-eth0" Nov 8 00:06:39.504122 containerd[2134]: 2025-11-08 00:06:39.437 [INFO][5665] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="b1436a69cc8d94184f44bb494c6c75fe99b97fcdf613cba5677f0de6616139b7" HandleID="k8s-pod-network.b1436a69cc8d94184f44bb494c6c75fe99b97fcdf613cba5677f0de6616139b7" Workload="ip--172--31--28--187-k8s-whisker--8d85fcbf5--tlmjx-eth0" Nov 8 00:06:39.504122 containerd[2134]: 2025-11-08 00:06:39.449 [INFO][5665] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:06:39.504122 containerd[2134]: 2025-11-08 00:06:39.469 [INFO][5635] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b1436a69cc8d94184f44bb494c6c75fe99b97fcdf613cba5677f0de6616139b7" Nov 8 00:06:39.504122 containerd[2134]: time="2025-11-08T00:06:39.504010375Z" level=info msg="TearDown network for sandbox \"b1436a69cc8d94184f44bb494c6c75fe99b97fcdf613cba5677f0de6616139b7\" successfully" Nov 8 00:06:39.504122 containerd[2134]: time="2025-11-08T00:06:39.504047095Z" level=info msg="StopPodSandbox for \"b1436a69cc8d94184f44bb494c6c75fe99b97fcdf613cba5677f0de6616139b7\" returns successfully" Nov 8 00:06:39.516902 containerd[2134]: time="2025-11-08T00:06:39.516795991Z" level=info msg="RemovePodSandbox for \"b1436a69cc8d94184f44bb494c6c75fe99b97fcdf613cba5677f0de6616139b7\"" Nov 8 00:06:39.516902 containerd[2134]: time="2025-11-08T00:06:39.516873523Z" level=info msg="Forcibly stopping sandbox \"b1436a69cc8d94184f44bb494c6c75fe99b97fcdf613cba5677f0de6616139b7\"" Nov 8 00:06:39.612850 containerd[2134]: 2025-11-08 00:06:38.557 [INFO][5572] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--28--187-k8s-goldmane--666569f655--qnsdl-eth0 goldmane-666569f655- calico-system 19e663c5-ada4-41f4-b329-6d803ea3d32d 1061 0 2025-11-08 00:06:04 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ip-172-31-28-187 goldmane-666569f655-qnsdl eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] caliadd8ceeaf7f [] [] }} ContainerID="6dc996edf7fcecd63b912f8d4081452a2f029492ee964a0d4c29bdffe79b9573" Namespace="calico-system" Pod="goldmane-666569f655-qnsdl" WorkloadEndpoint="ip--172--31--28--187-k8s-goldmane--666569f655--qnsdl-" Nov 8 00:06:39.612850 containerd[2134]: 2025-11-08 00:06:38.561 [INFO][5572] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6dc996edf7fcecd63b912f8d4081452a2f029492ee964a0d4c29bdffe79b9573" Namespace="calico-system" Pod="goldmane-666569f655-qnsdl" WorkloadEndpoint="ip--172--31--28--187-k8s-goldmane--666569f655--qnsdl-eth0" Nov 8 00:06:39.612850 containerd[2134]: 2025-11-08 00:06:39.029 [INFO][5650] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6dc996edf7fcecd63b912f8d4081452a2f029492ee964a0d4c29bdffe79b9573" HandleID="k8s-pod-network.6dc996edf7fcecd63b912f8d4081452a2f029492ee964a0d4c29bdffe79b9573" Workload="ip--172--31--28--187-k8s-goldmane--666569f655--qnsdl-eth0" Nov 8 00:06:39.612850 containerd[2134]: 2025-11-08 00:06:39.037 [INFO][5650] ipam/ipam_plugin.go 275: Auto assigning IP 
ContainerID="6dc996edf7fcecd63b912f8d4081452a2f029492ee964a0d4c29bdffe79b9573" HandleID="k8s-pod-network.6dc996edf7fcecd63b912f8d4081452a2f029492ee964a0d4c29bdffe79b9573" Workload="ip--172--31--28--187-k8s-goldmane--666569f655--qnsdl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40001204e0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-28-187", "pod":"goldmane-666569f655-qnsdl", "timestamp":"2025-11-08 00:06:39.029930764 +0000 UTC"}, Hostname:"ip-172-31-28-187", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 00:06:39.612850 containerd[2134]: 2025-11-08 00:06:39.037 [INFO][5650] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:06:39.612850 containerd[2134]: 2025-11-08 00:06:39.038 [INFO][5650] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:06:39.612850 containerd[2134]: 2025-11-08 00:06:39.038 [INFO][5650] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-28-187' Nov 8 00:06:39.612850 containerd[2134]: 2025-11-08 00:06:39.102 [INFO][5650] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6dc996edf7fcecd63b912f8d4081452a2f029492ee964a0d4c29bdffe79b9573" host="ip-172-31-28-187" Nov 8 00:06:39.612850 containerd[2134]: 2025-11-08 00:06:39.232 [INFO][5650] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-28-187" Nov 8 00:06:39.612850 containerd[2134]: 2025-11-08 00:06:39.266 [INFO][5650] ipam/ipam.go 511: Trying affinity for 192.168.96.64/26 host="ip-172-31-28-187" Nov 8 00:06:39.612850 containerd[2134]: 2025-11-08 00:06:39.279 [INFO][5650] ipam/ipam.go 158: Attempting to load block cidr=192.168.96.64/26 host="ip-172-31-28-187" Nov 8 00:06:39.612850 containerd[2134]: 2025-11-08 00:06:39.290 [INFO][5650] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.96.64/26 host="ip-172-31-28-187" Nov 8 00:06:39.612850 containerd[2134]: 2025-11-08 00:06:39.291 [INFO][5650] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.96.64/26 handle="k8s-pod-network.6dc996edf7fcecd63b912f8d4081452a2f029492ee964a0d4c29bdffe79b9573" host="ip-172-31-28-187" Nov 8 00:06:39.612850 containerd[2134]: 2025-11-08 00:06:39.324 [INFO][5650] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.6dc996edf7fcecd63b912f8d4081452a2f029492ee964a0d4c29bdffe79b9573 Nov 8 00:06:39.612850 containerd[2134]: 2025-11-08 00:06:39.360 [INFO][5650] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.96.64/26 handle="k8s-pod-network.6dc996edf7fcecd63b912f8d4081452a2f029492ee964a0d4c29bdffe79b9573" host="ip-172-31-28-187" Nov 8 00:06:39.612850 containerd[2134]: 2025-11-08 00:06:39.389 [INFO][5650] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.96.70/26] block=192.168.96.64/26 handle="k8s-pod-network.6dc996edf7fcecd63b912f8d4081452a2f029492ee964a0d4c29bdffe79b9573" host="ip-172-31-28-187" Nov 8 00:06:39.612850 containerd[2134]: 2025-11-08 00:06:39.390 [INFO][5650] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.96.70/26] handle="k8s-pod-network.6dc996edf7fcecd63b912f8d4081452a2f029492ee964a0d4c29bdffe79b9573" host="ip-172-31-28-187" Nov 8 00:06:39.612850 containerd[2134]: 2025-11-08 00:06:39.390 [INFO][5650] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 8 00:06:39.612850 containerd[2134]: 2025-11-08 00:06:39.391 [INFO][5650] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.96.70/26] IPv6=[] ContainerID="6dc996edf7fcecd63b912f8d4081452a2f029492ee964a0d4c29bdffe79b9573" HandleID="k8s-pod-network.6dc996edf7fcecd63b912f8d4081452a2f029492ee964a0d4c29bdffe79b9573" Workload="ip--172--31--28--187-k8s-goldmane--666569f655--qnsdl-eth0" Nov 8 00:06:39.614088 containerd[2134]: 2025-11-08 00:06:39.422 [INFO][5572] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6dc996edf7fcecd63b912f8d4081452a2f029492ee964a0d4c29bdffe79b9573" Namespace="calico-system" Pod="goldmane-666569f655-qnsdl" WorkloadEndpoint="ip--172--31--28--187-k8s-goldmane--666569f655--qnsdl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--187-k8s-goldmane--666569f655--qnsdl-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"19e663c5-ada4-41f4-b329-6d803ea3d32d", ResourceVersion:"1061", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 6, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-187", ContainerID:"", Pod:"goldmane-666569f655-qnsdl", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.96.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"caliadd8ceeaf7f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:06:39.614088 containerd[2134]: 2025-11-08 00:06:39.429 [INFO][5572] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.96.70/32] ContainerID="6dc996edf7fcecd63b912f8d4081452a2f029492ee964a0d4c29bdffe79b9573" Namespace="calico-system" Pod="goldmane-666569f655-qnsdl" WorkloadEndpoint="ip--172--31--28--187-k8s-goldmane--666569f655--qnsdl-eth0" Nov 8 00:06:39.614088 containerd[2134]: 2025-11-08 00:06:39.432 [INFO][5572] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliadd8ceeaf7f ContainerID="6dc996edf7fcecd63b912f8d4081452a2f029492ee964a0d4c29bdffe79b9573" Namespace="calico-system" Pod="goldmane-666569f655-qnsdl" WorkloadEndpoint="ip--172--31--28--187-k8s-goldmane--666569f655--qnsdl-eth0" Nov 8 00:06:39.614088 containerd[2134]: 2025-11-08 00:06:39.503 [INFO][5572] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6dc996edf7fcecd63b912f8d4081452a2f029492ee964a0d4c29bdffe79b9573" Namespace="calico-system" Pod="goldmane-666569f655-qnsdl" WorkloadEndpoint="ip--172--31--28--187-k8s-goldmane--666569f655--qnsdl-eth0" Nov 8 00:06:39.614088 containerd[2134]: 2025-11-08 00:06:39.523 [INFO][5572] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6dc996edf7fcecd63b912f8d4081452a2f029492ee964a0d4c29bdffe79b9573" Namespace="calico-system" Pod="goldmane-666569f655-qnsdl" 
WorkloadEndpoint="ip--172--31--28--187-k8s-goldmane--666569f655--qnsdl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--187-k8s-goldmane--666569f655--qnsdl-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"19e663c5-ada4-41f4-b329-6d803ea3d32d", ResourceVersion:"1061", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 6, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-187", ContainerID:"6dc996edf7fcecd63b912f8d4081452a2f029492ee964a0d4c29bdffe79b9573", Pod:"goldmane-666569f655-qnsdl", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.96.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"caliadd8ceeaf7f", MAC:"36:a9:8f:cd:69:98", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:06:39.614088 containerd[2134]: 2025-11-08 00:06:39.571 [INFO][5572] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6dc996edf7fcecd63b912f8d4081452a2f029492ee964a0d4c29bdffe79b9573" Namespace="calico-system" Pod="goldmane-666569f655-qnsdl" WorkloadEndpoint="ip--172--31--28--187-k8s-goldmane--666569f655--qnsdl-eth0" Nov 8 00:06:39.649058 containerd[2134]: 2025-11-08 00:06:38.989 [INFO][5629] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2c3bb45ee9f12b87e3a609ea8b6541c8cd9a72a4111e59abc3de86b887d9062d" Nov 8 00:06:39.649058 containerd[2134]: 2025-11-08 00:06:38.999 [INFO][5629] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="2c3bb45ee9f12b87e3a609ea8b6541c8cd9a72a4111e59abc3de86b887d9062d" iface="eth0" netns="/var/run/netns/cni-d9b162f8-c72c-742f-c315-6b7b383e720f" Nov 8 00:06:39.649058 containerd[2134]: 2025-11-08 00:06:39.007 [INFO][5629] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="2c3bb45ee9f12b87e3a609ea8b6541c8cd9a72a4111e59abc3de86b887d9062d" iface="eth0" netns="/var/run/netns/cni-d9b162f8-c72c-742f-c315-6b7b383e720f" Nov 8 00:06:39.649058 containerd[2134]: 2025-11-08 00:06:39.014 [INFO][5629] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="2c3bb45ee9f12b87e3a609ea8b6541c8cd9a72a4111e59abc3de86b887d9062d" iface="eth0" netns="/var/run/netns/cni-d9b162f8-c72c-742f-c315-6b7b383e720f" Nov 8 00:06:39.649058 containerd[2134]: 2025-11-08 00:06:39.015 [INFO][5629] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2c3bb45ee9f12b87e3a609ea8b6541c8cd9a72a4111e59abc3de86b887d9062d" Nov 8 00:06:39.649058 containerd[2134]: 2025-11-08 00:06:39.015 [INFO][5629] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2c3bb45ee9f12b87e3a609ea8b6541c8cd9a72a4111e59abc3de86b887d9062d" Nov 8 00:06:39.649058 containerd[2134]: 2025-11-08 00:06:39.572 [INFO][5672] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="2c3bb45ee9f12b87e3a609ea8b6541c8cd9a72a4111e59abc3de86b887d9062d" HandleID="k8s-pod-network.2c3bb45ee9f12b87e3a609ea8b6541c8cd9a72a4111e59abc3de86b887d9062d" Workload="ip--172--31--28--187-k8s-calico--apiserver--bc8bf555f--bhc54-eth0" Nov 8 00:06:39.649058 containerd[2134]: 2025-11-08 00:06:39.575 [INFO][5672] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:06:39.649058 containerd[2134]: 2025-11-08 00:06:39.578 [INFO][5672] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:06:39.649058 containerd[2134]: 2025-11-08 00:06:39.609 [WARNING][5672] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="2c3bb45ee9f12b87e3a609ea8b6541c8cd9a72a4111e59abc3de86b887d9062d" HandleID="k8s-pod-network.2c3bb45ee9f12b87e3a609ea8b6541c8cd9a72a4111e59abc3de86b887d9062d" Workload="ip--172--31--28--187-k8s-calico--apiserver--bc8bf555f--bhc54-eth0" Nov 8 00:06:39.649058 containerd[2134]: 2025-11-08 00:06:39.609 [INFO][5672] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="2c3bb45ee9f12b87e3a609ea8b6541c8cd9a72a4111e59abc3de86b887d9062d" HandleID="k8s-pod-network.2c3bb45ee9f12b87e3a609ea8b6541c8cd9a72a4111e59abc3de86b887d9062d" Workload="ip--172--31--28--187-k8s-calico--apiserver--bc8bf555f--bhc54-eth0" Nov 8 00:06:39.649058 containerd[2134]: 2025-11-08 00:06:39.615 [INFO][5672] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:06:39.649058 containerd[2134]: 2025-11-08 00:06:39.638 [INFO][5629] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2c3bb45ee9f12b87e3a609ea8b6541c8cd9a72a4111e59abc3de86b887d9062d" Nov 8 00:06:39.651877 containerd[2134]: time="2025-11-08T00:06:39.650958559Z" level=info msg="TearDown network for sandbox \"2c3bb45ee9f12b87e3a609ea8b6541c8cd9a72a4111e59abc3de86b887d9062d\" successfully" Nov 8 00:06:39.658379 containerd[2134]: time="2025-11-08T00:06:39.651139819Z" level=info msg="StopPodSandbox for \"2c3bb45ee9f12b87e3a609ea8b6541c8cd9a72a4111e59abc3de86b887d9062d\" returns successfully" Nov 8 00:06:39.666056 systemd[1]: run-netns-cni\x2dd9b162f8\x2dc72c\x2d742f\x2dc315\x2d6b7b383e720f.mount: Deactivated successfully. 
Nov 8 00:06:39.692115 containerd[2134]: time="2025-11-08T00:06:39.691492087Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-bc8bf555f-bhc54,Uid:5611c66d-4585-41a1-9c50-eb23da03916c,Namespace:calico-apiserver,Attempt:1,}" Nov 8 00:06:39.846306 kubelet[3406]: E1108 00:06:39.845447 3406 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7cd4d69d7c-ptmh4" podUID="a118c8b1-dc8a-49b1-956e-fabb0c90510f" Nov 8 00:06:39.927438 containerd[2134]: time="2025-11-08T00:06:39.927244113Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:06:39.938489 containerd[2134]: time="2025-11-08T00:06:39.927387117Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:06:39.938489 containerd[2134]: time="2025-11-08T00:06:39.937456185Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:06:39.940768 containerd[2134]: time="2025-11-08T00:06:39.940511865Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:06:40.162850 systemd-networkd[1688]: cali246fe540a89: Link UP Nov 8 00:06:40.166635 systemd-networkd[1688]: cali246fe540a89: Gained carrier Nov 8 00:06:40.245862 containerd[2134]: 2025-11-08 00:06:39.555 [INFO][5678] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--28--187-k8s-csi--node--driver--tw22z-eth0 csi-node-driver- calico-system 5962793e-cd47-45ea-84d0-190de5cbdb54 1074 0 2025-11-08 00:06:11 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ip-172-31-28-187 csi-node-driver-tw22z eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali246fe540a89 [] [] }} ContainerID="177180b2d4b5b3b113291f500408b3cc4fcd06dce88068c807e7a4897033efd8" Namespace="calico-system" Pod="csi-node-driver-tw22z" WorkloadEndpoint="ip--172--31--28--187-k8s-csi--node--driver--tw22z-" Nov 8 00:06:40.245862 containerd[2134]: 2025-11-08 00:06:39.560 [INFO][5678] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="177180b2d4b5b3b113291f500408b3cc4fcd06dce88068c807e7a4897033efd8" Namespace="calico-system" Pod="csi-node-driver-tw22z" WorkloadEndpoint="ip--172--31--28--187-k8s-csi--node--driver--tw22z-eth0" Nov 8 00:06:40.245862 containerd[2134]: 2025-11-08 00:06:39.914 [INFO][5718] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="177180b2d4b5b3b113291f500408b3cc4fcd06dce88068c807e7a4897033efd8" HandleID="k8s-pod-network.177180b2d4b5b3b113291f500408b3cc4fcd06dce88068c807e7a4897033efd8" 
Workload="ip--172--31--28--187-k8s-csi--node--driver--tw22z-eth0" Nov 8 00:06:40.245862 containerd[2134]: 2025-11-08 00:06:39.914 [INFO][5718] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="177180b2d4b5b3b113291f500408b3cc4fcd06dce88068c807e7a4897033efd8" HandleID="k8s-pod-network.177180b2d4b5b3b113291f500408b3cc4fcd06dce88068c807e7a4897033efd8" Workload="ip--172--31--28--187-k8s-csi--node--driver--tw22z-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000121df0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-28-187", "pod":"csi-node-driver-tw22z", "timestamp":"2025-11-08 00:06:39.914295513 +0000 UTC"}, Hostname:"ip-172-31-28-187", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 00:06:40.245862 containerd[2134]: 2025-11-08 00:06:39.914 [INFO][5718] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:06:40.245862 containerd[2134]: 2025-11-08 00:06:39.914 [INFO][5718] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:06:40.245862 containerd[2134]: 2025-11-08 00:06:39.914 [INFO][5718] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-28-187' Nov 8 00:06:40.245862 containerd[2134]: 2025-11-08 00:06:39.946 [INFO][5718] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.177180b2d4b5b3b113291f500408b3cc4fcd06dce88068c807e7a4897033efd8" host="ip-172-31-28-187" Nov 8 00:06:40.245862 containerd[2134]: 2025-11-08 00:06:40.018 [INFO][5718] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-28-187" Nov 8 00:06:40.245862 containerd[2134]: 2025-11-08 00:06:40.040 [INFO][5718] ipam/ipam.go 511: Trying affinity for 192.168.96.64/26 host="ip-172-31-28-187" Nov 8 00:06:40.245862 containerd[2134]: 2025-11-08 00:06:40.051 [INFO][5718] ipam/ipam.go 158: Attempting to load block cidr=192.168.96.64/26 host="ip-172-31-28-187" Nov 8 00:06:40.245862 containerd[2134]: 2025-11-08 00:06:40.069 [INFO][5718] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.96.64/26 host="ip-172-31-28-187" Nov 8 00:06:40.245862 containerd[2134]: 2025-11-08 00:06:40.069 [INFO][5718] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.96.64/26 handle="k8s-pod-network.177180b2d4b5b3b113291f500408b3cc4fcd06dce88068c807e7a4897033efd8" host="ip-172-31-28-187" Nov 8 00:06:40.245862 containerd[2134]: 2025-11-08 00:06:40.099 [INFO][5718] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.177180b2d4b5b3b113291f500408b3cc4fcd06dce88068c807e7a4897033efd8 Nov 8 00:06:40.245862 containerd[2134]: 2025-11-08 00:06:40.112 [INFO][5718] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.96.64/26 handle="k8s-pod-network.177180b2d4b5b3b113291f500408b3cc4fcd06dce88068c807e7a4897033efd8" host="ip-172-31-28-187" Nov 8 00:06:40.245862 containerd[2134]: 2025-11-08 00:06:40.135 [INFO][5718] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.96.71/26] block=192.168.96.64/26 handle="k8s-pod-network.177180b2d4b5b3b113291f500408b3cc4fcd06dce88068c807e7a4897033efd8" host="ip-172-31-28-187" Nov 8 00:06:40.245862 containerd[2134]: 2025-11-08 00:06:40.139 [INFO][5718] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.96.71/26] handle="k8s-pod-network.177180b2d4b5b3b113291f500408b3cc4fcd06dce88068c807e7a4897033efd8" host="ip-172-31-28-187" Nov 8 00:06:40.245862 
containerd[2134]: 2025-11-08 00:06:40.139 [INFO][5718] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:06:40.245862 containerd[2134]: 2025-11-08 00:06:40.139 [INFO][5718] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.96.71/26] IPv6=[] ContainerID="177180b2d4b5b3b113291f500408b3cc4fcd06dce88068c807e7a4897033efd8" HandleID="k8s-pod-network.177180b2d4b5b3b113291f500408b3cc4fcd06dce88068c807e7a4897033efd8" Workload="ip--172--31--28--187-k8s-csi--node--driver--tw22z-eth0" Nov 8 00:06:40.248818 containerd[2134]: 2025-11-08 00:06:40.150 [INFO][5678] cni-plugin/k8s.go 418: Populated endpoint ContainerID="177180b2d4b5b3b113291f500408b3cc4fcd06dce88068c807e7a4897033efd8" Namespace="calico-system" Pod="csi-node-driver-tw22z" WorkloadEndpoint="ip--172--31--28--187-k8s-csi--node--driver--tw22z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--187-k8s-csi--node--driver--tw22z-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"5962793e-cd47-45ea-84d0-190de5cbdb54", ResourceVersion:"1074", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 6, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-187", ContainerID:"", Pod:"csi-node-driver-tw22z", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.96.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali246fe540a89", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:06:40.248818 containerd[2134]: 2025-11-08 00:06:40.151 [INFO][5678] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.96.71/32] ContainerID="177180b2d4b5b3b113291f500408b3cc4fcd06dce88068c807e7a4897033efd8" Namespace="calico-system" Pod="csi-node-driver-tw22z" WorkloadEndpoint="ip--172--31--28--187-k8s-csi--node--driver--tw22z-eth0" Nov 8 00:06:40.248818 containerd[2134]: 2025-11-08 00:06:40.151 [INFO][5678] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali246fe540a89 ContainerID="177180b2d4b5b3b113291f500408b3cc4fcd06dce88068c807e7a4897033efd8" Namespace="calico-system" Pod="csi-node-driver-tw22z" WorkloadEndpoint="ip--172--31--28--187-k8s-csi--node--driver--tw22z-eth0" Nov 8 00:06:40.248818 containerd[2134]: 2025-11-08 00:06:40.171 [INFO][5678] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="177180b2d4b5b3b113291f500408b3cc4fcd06dce88068c807e7a4897033efd8" Namespace="calico-system" Pod="csi-node-driver-tw22z" WorkloadEndpoint="ip--172--31--28--187-k8s-csi--node--driver--tw22z-eth0" Nov 8 00:06:40.248818 containerd[2134]: 2025-11-08 00:06:40.173 [INFO][5678] cni-plugin/k8s.go 446: Added Mac, interface name, and active container 
ID to endpoint ContainerID="177180b2d4b5b3b113291f500408b3cc4fcd06dce88068c807e7a4897033efd8" Namespace="calico-system" Pod="csi-node-driver-tw22z" WorkloadEndpoint="ip--172--31--28--187-k8s-csi--node--driver--tw22z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--187-k8s-csi--node--driver--tw22z-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"5962793e-cd47-45ea-84d0-190de5cbdb54", ResourceVersion:"1074", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 6, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-187", ContainerID:"177180b2d4b5b3b113291f500408b3cc4fcd06dce88068c807e7a4897033efd8", Pod:"csi-node-driver-tw22z", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.96.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali246fe540a89", MAC:"f6:6d:8a:e4:9d:aa", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:06:40.248818 containerd[2134]: 2025-11-08 00:06:40.221 [INFO][5678] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="177180b2d4b5b3b113291f500408b3cc4fcd06dce88068c807e7a4897033efd8" Namespace="calico-system" Pod="csi-node-driver-tw22z" WorkloadEndpoint="ip--172--31--28--187-k8s-csi--node--driver--tw22z-eth0" Nov 8 00:06:40.290252 containerd[2134]: 2025-11-08 00:06:39.910 [WARNING][5710] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="b1436a69cc8d94184f44bb494c6c75fe99b97fcdf613cba5677f0de6616139b7" WorkloadEndpoint="ip--172--31--28--187-k8s-whisker--8d85fcbf5--tlmjx-eth0" Nov 8 00:06:40.290252 containerd[2134]: 2025-11-08 00:06:39.910 [INFO][5710] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b1436a69cc8d94184f44bb494c6c75fe99b97fcdf613cba5677f0de6616139b7" Nov 8 00:06:40.290252 containerd[2134]: 2025-11-08 00:06:39.911 [INFO][5710] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="b1436a69cc8d94184f44bb494c6c75fe99b97fcdf613cba5677f0de6616139b7" iface="eth0" netns="" Nov 8 00:06:40.290252 containerd[2134]: 2025-11-08 00:06:39.911 [INFO][5710] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b1436a69cc8d94184f44bb494c6c75fe99b97fcdf613cba5677f0de6616139b7" Nov 8 00:06:40.290252 containerd[2134]: 2025-11-08 00:06:39.911 [INFO][5710] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b1436a69cc8d94184f44bb494c6c75fe99b97fcdf613cba5677f0de6616139b7" Nov 8 00:06:40.290252 containerd[2134]: 2025-11-08 00:06:40.175 [INFO][5757] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="b1436a69cc8d94184f44bb494c6c75fe99b97fcdf613cba5677f0de6616139b7" HandleID="k8s-pod-network.b1436a69cc8d94184f44bb494c6c75fe99b97fcdf613cba5677f0de6616139b7" Workload="ip--172--31--28--187-k8s-whisker--8d85fcbf5--tlmjx-eth0" Nov 8 00:06:40.290252 containerd[2134]: 2025-11-08 00:06:40.180 [INFO][5757] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:06:40.290252 containerd[2134]: 2025-11-08 00:06:40.180 [INFO][5757] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:06:40.290252 containerd[2134]: 2025-11-08 00:06:40.243 [WARNING][5757] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="b1436a69cc8d94184f44bb494c6c75fe99b97fcdf613cba5677f0de6616139b7" HandleID="k8s-pod-network.b1436a69cc8d94184f44bb494c6c75fe99b97fcdf613cba5677f0de6616139b7" Workload="ip--172--31--28--187-k8s-whisker--8d85fcbf5--tlmjx-eth0" Nov 8 00:06:40.290252 containerd[2134]: 2025-11-08 00:06:40.243 [INFO][5757] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="b1436a69cc8d94184f44bb494c6c75fe99b97fcdf613cba5677f0de6616139b7" HandleID="k8s-pod-network.b1436a69cc8d94184f44bb494c6c75fe99b97fcdf613cba5677f0de6616139b7" Workload="ip--172--31--28--187-k8s-whisker--8d85fcbf5--tlmjx-eth0" Nov 8 00:06:40.290252 containerd[2134]: 2025-11-08 00:06:40.256 [INFO][5757] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:06:40.290252 containerd[2134]: 2025-11-08 00:06:40.276 [INFO][5710] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b1436a69cc8d94184f44bb494c6c75fe99b97fcdf613cba5677f0de6616139b7" Nov 8 00:06:40.291042 containerd[2134]: time="2025-11-08T00:06:40.290333790Z" level=info msg="TearDown network for sandbox \"b1436a69cc8d94184f44bb494c6c75fe99b97fcdf613cba5677f0de6616139b7\" successfully" Nov 8 00:06:40.308775 containerd[2134]: time="2025-11-08T00:06:40.308681143Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b1436a69cc8d94184f44bb494c6c75fe99b97fcdf613cba5677f0de6616139b7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Nov 8 00:06:40.309498 containerd[2134]: time="2025-11-08T00:06:40.309344515Z" level=info msg="RemovePodSandbox \"b1436a69cc8d94184f44bb494c6c75fe99b97fcdf613cba5677f0de6616139b7\" returns successfully" Nov 8 00:06:40.315355 containerd[2134]: time="2025-11-08T00:06:40.314408923Z" level=info msg="StopPodSandbox for \"f14c4386ca3b9137b29637bbb04d949311f1670355fa150a3588c2739cf0e57f\"" Nov 8 00:06:40.386956 containerd[2134]: time="2025-11-08T00:06:40.386865859Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-qnsdl,Uid:19e663c5-ada4-41f4-b329-6d803ea3d32d,Namespace:calico-system,Attempt:1,} returns sandbox id \"6dc996edf7fcecd63b912f8d4081452a2f029492ee964a0d4c29bdffe79b9573\"" Nov 8 00:06:40.398388 containerd[2134]: time="2025-11-08T00:06:40.398167975Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:06:40.398388 containerd[2134]: time="2025-11-08T00:06:40.398285671Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:06:40.398901 containerd[2134]: time="2025-11-08T00:06:40.398325331Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:06:40.400369 containerd[2134]: time="2025-11-08T00:06:40.399885355Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:06:40.405639 containerd[2134]: time="2025-11-08T00:06:40.405530815Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 8 00:06:40.566393 containerd[2134]: time="2025-11-08T00:06:40.566309048Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-tw22z,Uid:5962793e-cd47-45ea-84d0-190de5cbdb54,Namespace:calico-system,Attempt:1,} returns sandbox id \"177180b2d4b5b3b113291f500408b3cc4fcd06dce88068c807e7a4897033efd8\"" Nov 8 00:06:40.618272 systemd-networkd[1688]: cali4f79dc05451: Link UP Nov 8 00:06:40.620846 systemd-networkd[1688]: cali4f79dc05451: Gained carrier Nov 8 00:06:40.678329 containerd[2134]: 2025-11-08 00:06:40.275 [INFO][5741] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--28--187-k8s-calico--apiserver--bc8bf555f--bhc54-eth0 calico-apiserver-bc8bf555f- calico-apiserver 5611c66d-4585-41a1-9c50-eb23da03916c 1084 0 2025-11-08 00:05:55 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:bc8bf555f projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-28-187 calico-apiserver-bc8bf555f-bhc54 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali4f79dc05451 [] [] }} ContainerID="95bb6d57cce4ac2bed04f730f38b1e0fce1a9cff429238147c31620e7677f8ed" Namespace="calico-apiserver" Pod="calico-apiserver-bc8bf555f-bhc54" WorkloadEndpoint="ip--172--31--28--187-k8s-calico--apiserver--bc8bf555f--bhc54-" Nov 8 00:06:40.678329 containerd[2134]: 2025-11-08 00:06:40.277 [INFO][5741] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="95bb6d57cce4ac2bed04f730f38b1e0fce1a9cff429238147c31620e7677f8ed" Namespace="calico-apiserver" Pod="calico-apiserver-bc8bf555f-bhc54" WorkloadEndpoint="ip--172--31--28--187-k8s-calico--apiserver--bc8bf555f--bhc54-eth0" 
Nov 8 00:06:40.678329 containerd[2134]: 2025-11-08 00:06:40.474 [INFO][5797] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="95bb6d57cce4ac2bed04f730f38b1e0fce1a9cff429238147c31620e7677f8ed" HandleID="k8s-pod-network.95bb6d57cce4ac2bed04f730f38b1e0fce1a9cff429238147c31620e7677f8ed" Workload="ip--172--31--28--187-k8s-calico--apiserver--bc8bf555f--bhc54-eth0" Nov 8 00:06:40.678329 containerd[2134]: 2025-11-08 00:06:40.475 [INFO][5797] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="95bb6d57cce4ac2bed04f730f38b1e0fce1a9cff429238147c31620e7677f8ed" HandleID="k8s-pod-network.95bb6d57cce4ac2bed04f730f38b1e0fce1a9cff429238147c31620e7677f8ed" Workload="ip--172--31--28--187-k8s-calico--apiserver--bc8bf555f--bhc54-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400030c2b0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-28-187", "pod":"calico-apiserver-bc8bf555f-bhc54", "timestamp":"2025-11-08 00:06:40.474218887 +0000 UTC"}, Hostname:"ip-172-31-28-187", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 00:06:40.678329 containerd[2134]: 2025-11-08 00:06:40.475 [INFO][5797] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:06:40.678329 containerd[2134]: 2025-11-08 00:06:40.475 [INFO][5797] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:06:40.678329 containerd[2134]: 2025-11-08 00:06:40.475 [INFO][5797] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-28-187' Nov 8 00:06:40.678329 containerd[2134]: 2025-11-08 00:06:40.499 [INFO][5797] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.95bb6d57cce4ac2bed04f730f38b1e0fce1a9cff429238147c31620e7677f8ed" host="ip-172-31-28-187" Nov 8 00:06:40.678329 containerd[2134]: 2025-11-08 00:06:40.515 [INFO][5797] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-28-187" Nov 8 00:06:40.678329 containerd[2134]: 2025-11-08 00:06:40.531 [INFO][5797] ipam/ipam.go 511: Trying affinity for 192.168.96.64/26 host="ip-172-31-28-187" Nov 8 00:06:40.678329 containerd[2134]: 2025-11-08 00:06:40.538 [INFO][5797] ipam/ipam.go 158: Attempting to load block cidr=192.168.96.64/26 host="ip-172-31-28-187" Nov 8 00:06:40.678329 containerd[2134]: 2025-11-08 00:06:40.557 [INFO][5797] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.96.64/26 host="ip-172-31-28-187" Nov 8 00:06:40.678329 containerd[2134]: 2025-11-08 00:06:40.558 [INFO][5797] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.96.64/26 handle="k8s-pod-network.95bb6d57cce4ac2bed04f730f38b1e0fce1a9cff429238147c31620e7677f8ed" host="ip-172-31-28-187" Nov 8 00:06:40.678329 containerd[2134]: 2025-11-08 00:06:40.568 [INFO][5797] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.95bb6d57cce4ac2bed04f730f38b1e0fce1a9cff429238147c31620e7677f8ed Nov 8 00:06:40.678329 containerd[2134]: 2025-11-08 00:06:40.585 [INFO][5797] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.96.64/26 handle="k8s-pod-network.95bb6d57cce4ac2bed04f730f38b1e0fce1a9cff429238147c31620e7677f8ed" host="ip-172-31-28-187" Nov 8 00:06:40.678329 containerd[2134]: 2025-11-08 00:06:40.606 [INFO][5797] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.96.72/26] block=192.168.96.64/26 
handle="k8s-pod-network.95bb6d57cce4ac2bed04f730f38b1e0fce1a9cff429238147c31620e7677f8ed" host="ip-172-31-28-187" Nov 8 00:06:40.678329 containerd[2134]: 2025-11-08 00:06:40.606 [INFO][5797] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.96.72/26] handle="k8s-pod-network.95bb6d57cce4ac2bed04f730f38b1e0fce1a9cff429238147c31620e7677f8ed" host="ip-172-31-28-187" Nov 8 00:06:40.678329 containerd[2134]: 2025-11-08 00:06:40.606 [INFO][5797] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:06:40.678329 containerd[2134]: 2025-11-08 00:06:40.606 [INFO][5797] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.96.72/26] IPv6=[] ContainerID="95bb6d57cce4ac2bed04f730f38b1e0fce1a9cff429238147c31620e7677f8ed" HandleID="k8s-pod-network.95bb6d57cce4ac2bed04f730f38b1e0fce1a9cff429238147c31620e7677f8ed" Workload="ip--172--31--28--187-k8s-calico--apiserver--bc8bf555f--bhc54-eth0" Nov 8 00:06:40.681620 containerd[2134]: 2025-11-08 00:06:40.610 [INFO][5741] cni-plugin/k8s.go 418: Populated endpoint ContainerID="95bb6d57cce4ac2bed04f730f38b1e0fce1a9cff429238147c31620e7677f8ed" Namespace="calico-apiserver" Pod="calico-apiserver-bc8bf555f-bhc54" WorkloadEndpoint="ip--172--31--28--187-k8s-calico--apiserver--bc8bf555f--bhc54-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--187-k8s-calico--apiserver--bc8bf555f--bhc54-eth0", GenerateName:"calico-apiserver-bc8bf555f-", Namespace:"calico-apiserver", SelfLink:"", UID:"5611c66d-4585-41a1-9c50-eb23da03916c", ResourceVersion:"1084", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 5, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"bc8bf555f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-187", ContainerID:"", Pod:"calico-apiserver-bc8bf555f-bhc54", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.96.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4f79dc05451", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:06:40.681620 containerd[2134]: 2025-11-08 00:06:40.611 [INFO][5741] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.96.72/32] ContainerID="95bb6d57cce4ac2bed04f730f38b1e0fce1a9cff429238147c31620e7677f8ed" Namespace="calico-apiserver" Pod="calico-apiserver-bc8bf555f-bhc54" WorkloadEndpoint="ip--172--31--28--187-k8s-calico--apiserver--bc8bf555f--bhc54-eth0" Nov 8 00:06:40.681620 containerd[2134]: 2025-11-08 00:06:40.611 [INFO][5741] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4f79dc05451 ContainerID="95bb6d57cce4ac2bed04f730f38b1e0fce1a9cff429238147c31620e7677f8ed" Namespace="calico-apiserver" Pod="calico-apiserver-bc8bf555f-bhc54" WorkloadEndpoint="ip--172--31--28--187-k8s-calico--apiserver--bc8bf555f--bhc54-eth0" Nov 8 
00:06:40.681620 containerd[2134]: 2025-11-08 00:06:40.619 [INFO][5741] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="95bb6d57cce4ac2bed04f730f38b1e0fce1a9cff429238147c31620e7677f8ed" Namespace="calico-apiserver" Pod="calico-apiserver-bc8bf555f-bhc54" WorkloadEndpoint="ip--172--31--28--187-k8s-calico--apiserver--bc8bf555f--bhc54-eth0" Nov 8 00:06:40.681620 containerd[2134]: 2025-11-08 00:06:40.622 [INFO][5741] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="95bb6d57cce4ac2bed04f730f38b1e0fce1a9cff429238147c31620e7677f8ed" Namespace="calico-apiserver" Pod="calico-apiserver-bc8bf555f-bhc54" WorkloadEndpoint="ip--172--31--28--187-k8s-calico--apiserver--bc8bf555f--bhc54-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--187-k8s-calico--apiserver--bc8bf555f--bhc54-eth0", GenerateName:"calico-apiserver-bc8bf555f-", Namespace:"calico-apiserver", SelfLink:"", UID:"5611c66d-4585-41a1-9c50-eb23da03916c", ResourceVersion:"1084", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 5, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"bc8bf555f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-187", ContainerID:"95bb6d57cce4ac2bed04f730f38b1e0fce1a9cff429238147c31620e7677f8ed", Pod:"calico-apiserver-bc8bf555f-bhc54", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.96.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4f79dc05451", MAC:"46:2c:48:87:07:6d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:06:40.681620 containerd[2134]: 2025-11-08 00:06:40.660 [INFO][5741] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="95bb6d57cce4ac2bed04f730f38b1e0fce1a9cff429238147c31620e7677f8ed" Namespace="calico-apiserver" Pod="calico-apiserver-bc8bf555f-bhc54" WorkloadEndpoint="ip--172--31--28--187-k8s-calico--apiserver--bc8bf555f--bhc54-eth0" Nov 8 00:06:40.706605 containerd[2134]: 2025-11-08 00:06:40.492 [WARNING][5817] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f14c4386ca3b9137b29637bbb04d949311f1670355fa150a3588c2739cf0e57f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--187-k8s-coredns--668d6bf9bc--dl6ch-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"6a53e75e-3508-43e1-9046-febaec8a3194", ResourceVersion:"989", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 5, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-187", ContainerID:"ce76e0307a6dd9a3a7ba2d012e82fa00915def70ebc2f48222ac7f0e9cbf5f65", Pod:"coredns-668d6bf9bc-dl6ch", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.96.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali19953ff7d94", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:06:40.706605 containerd[2134]: 2025-11-08 00:06:40.493 [INFO][5817] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f14c4386ca3b9137b29637bbb04d949311f1670355fa150a3588c2739cf0e57f" Nov 8 00:06:40.706605 containerd[2134]: 2025-11-08 00:06:40.493 [INFO][5817] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f14c4386ca3b9137b29637bbb04d949311f1670355fa150a3588c2739cf0e57f" iface="eth0" netns="" Nov 8 00:06:40.706605 containerd[2134]: 2025-11-08 00:06:40.493 [INFO][5817] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f14c4386ca3b9137b29637bbb04d949311f1670355fa150a3588c2739cf0e57f" Nov 8 00:06:40.706605 containerd[2134]: 2025-11-08 00:06:40.493 [INFO][5817] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f14c4386ca3b9137b29637bbb04d949311f1670355fa150a3588c2739cf0e57f" Nov 8 00:06:40.706605 containerd[2134]: 2025-11-08 00:06:40.638 [INFO][5858] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="f14c4386ca3b9137b29637bbb04d949311f1670355fa150a3588c2739cf0e57f" HandleID="k8s-pod-network.f14c4386ca3b9137b29637bbb04d949311f1670355fa150a3588c2739cf0e57f" Workload="ip--172--31--28--187-k8s-coredns--668d6bf9bc--dl6ch-eth0" Nov 8 00:06:40.706605 containerd[2134]: 2025-11-08 00:06:40.640 [INFO][5858] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:06:40.706605 containerd[2134]: 2025-11-08 00:06:40.640 [INFO][5858] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 8 00:06:40.706605 containerd[2134]: 2025-11-08 00:06:40.687 [WARNING][5858] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="f14c4386ca3b9137b29637bbb04d949311f1670355fa150a3588c2739cf0e57f" HandleID="k8s-pod-network.f14c4386ca3b9137b29637bbb04d949311f1670355fa150a3588c2739cf0e57f" Workload="ip--172--31--28--187-k8s-coredns--668d6bf9bc--dl6ch-eth0" Nov 8 00:06:40.706605 containerd[2134]: 2025-11-08 00:06:40.687 [INFO][5858] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="f14c4386ca3b9137b29637bbb04d949311f1670355fa150a3588c2739cf0e57f" HandleID="k8s-pod-network.f14c4386ca3b9137b29637bbb04d949311f1670355fa150a3588c2739cf0e57f" Workload="ip--172--31--28--187-k8s-coredns--668d6bf9bc--dl6ch-eth0" Nov 8 00:06:40.706605 containerd[2134]: 2025-11-08 00:06:40.691 [INFO][5858] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:06:40.706605 containerd[2134]: 2025-11-08 00:06:40.700 [INFO][5817] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f14c4386ca3b9137b29637bbb04d949311f1670355fa150a3588c2739cf0e57f" Nov 8 00:06:40.706605 containerd[2134]: time="2025-11-08T00:06:40.704871417Z" level=info msg="TearDown network for sandbox \"f14c4386ca3b9137b29637bbb04d949311f1670355fa150a3588c2739cf0e57f\" successfully" Nov 8 00:06:40.706605 containerd[2134]: time="2025-11-08T00:06:40.704913189Z" level=info msg="StopPodSandbox for \"f14c4386ca3b9137b29637bbb04d949311f1670355fa150a3588c2739cf0e57f\" returns successfully" Nov 8 00:06:40.708957 containerd[2134]: time="2025-11-08T00:06:40.708894045Z" level=info msg="RemovePodSandbox for \"f14c4386ca3b9137b29637bbb04d949311f1670355fa150a3588c2739cf0e57f\"" Nov 8 00:06:40.708957 containerd[2134]: time="2025-11-08T00:06:40.708960633Z" level=info msg="Forcibly stopping sandbox \"f14c4386ca3b9137b29637bbb04d949311f1670355fa150a3588c2739cf0e57f\"" Nov 8 00:06:40.753735 containerd[2134]: time="2025-11-08T00:06:40.752784789Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:06:40.753735 containerd[2134]: time="2025-11-08T00:06:40.752903865Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:06:40.753735 containerd[2134]: time="2025-11-08T00:06:40.752968521Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:06:40.753735 containerd[2134]: time="2025-11-08T00:06:40.753244485Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:06:40.926419 containerd[2134]: time="2025-11-08T00:06:40.926332798Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-bc8bf555f-bhc54,Uid:5611c66d-4585-41a1-9c50-eb23da03916c,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"95bb6d57cce4ac2bed04f730f38b1e0fce1a9cff429238147c31620e7677f8ed\"" Nov 8 00:06:40.977643 systemd-resolved[2026]: Under memory pressure, flushing caches. Nov 8 00:06:40.977683 systemd-resolved[2026]: Flushed all caches. Nov 8 00:06:40.979983 systemd-journald[1615]: Under memory pressure, flushing caches. Nov 8 00:06:40.984157 containerd[2134]: 2025-11-08 00:06:40.857 [WARNING][5896] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f14c4386ca3b9137b29637bbb04d949311f1670355fa150a3588c2739cf0e57f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--187-k8s-coredns--668d6bf9bc--dl6ch-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"6a53e75e-3508-43e1-9046-febaec8a3194", ResourceVersion:"989", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 5, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-187", ContainerID:"ce76e0307a6dd9a3a7ba2d012e82fa00915def70ebc2f48222ac7f0e9cbf5f65", Pod:"coredns-668d6bf9bc-dl6ch", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.96.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali19953ff7d94", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:06:40.984157 containerd[2134]: 2025-11-08 00:06:40.857 [INFO][5896] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f14c4386ca3b9137b29637bbb04d949311f1670355fa150a3588c2739cf0e57f" Nov 8 00:06:40.984157 containerd[2134]: 2025-11-08 00:06:40.857 [INFO][5896] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f14c4386ca3b9137b29637bbb04d949311f1670355fa150a3588c2739cf0e57f" iface="eth0" netns="" Nov 8 00:06:40.984157 containerd[2134]: 2025-11-08 00:06:40.857 [INFO][5896] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f14c4386ca3b9137b29637bbb04d949311f1670355fa150a3588c2739cf0e57f" Nov 8 00:06:40.984157 containerd[2134]: 2025-11-08 00:06:40.858 [INFO][5896] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f14c4386ca3b9137b29637bbb04d949311f1670355fa150a3588c2739cf0e57f" Nov 8 00:06:40.984157 containerd[2134]: 2025-11-08 00:06:40.955 [INFO][5931] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="f14c4386ca3b9137b29637bbb04d949311f1670355fa150a3588c2739cf0e57f" HandleID="k8s-pod-network.f14c4386ca3b9137b29637bbb04d949311f1670355fa150a3588c2739cf0e57f" Workload="ip--172--31--28--187-k8s-coredns--668d6bf9bc--dl6ch-eth0" Nov 8 00:06:40.984157 containerd[2134]: 2025-11-08 00:06:40.956 [INFO][5931] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:06:40.984157 containerd[2134]: 2025-11-08 00:06:40.956 [INFO][5931] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 8 00:06:40.984157 containerd[2134]: 2025-11-08 00:06:40.970 [WARNING][5931] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="f14c4386ca3b9137b29637bbb04d949311f1670355fa150a3588c2739cf0e57f" HandleID="k8s-pod-network.f14c4386ca3b9137b29637bbb04d949311f1670355fa150a3588c2739cf0e57f" Workload="ip--172--31--28--187-k8s-coredns--668d6bf9bc--dl6ch-eth0" Nov 8 00:06:40.984157 containerd[2134]: 2025-11-08 00:06:40.970 [INFO][5931] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="f14c4386ca3b9137b29637bbb04d949311f1670355fa150a3588c2739cf0e57f" HandleID="k8s-pod-network.f14c4386ca3b9137b29637bbb04d949311f1670355fa150a3588c2739cf0e57f" Workload="ip--172--31--28--187-k8s-coredns--668d6bf9bc--dl6ch-eth0" Nov 8 00:06:40.984157 containerd[2134]: 2025-11-08 00:06:40.973 [INFO][5931] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:06:40.984157 containerd[2134]: 2025-11-08 00:06:40.979 [INFO][5896] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f14c4386ca3b9137b29637bbb04d949311f1670355fa150a3588c2739cf0e57f" Nov 8 00:06:40.984157 containerd[2134]: time="2025-11-08T00:06:40.983790886Z" level=info msg="TearDown network for sandbox \"f14c4386ca3b9137b29637bbb04d949311f1670355fa150a3588c2739cf0e57f\" successfully" Nov 8 00:06:40.992669 containerd[2134]: time="2025-11-08T00:06:40.992487658Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f14c4386ca3b9137b29637bbb04d949311f1670355fa150a3588c2739cf0e57f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 8 00:06:40.993143 containerd[2134]: time="2025-11-08T00:06:40.992919322Z" level=info msg="RemovePodSandbox \"f14c4386ca3b9137b29637bbb04d949311f1670355fa150a3588c2739cf0e57f\" returns successfully" Nov 8 00:06:40.994025 containerd[2134]: time="2025-11-08T00:06:40.993947482Z" level=info msg="StopPodSandbox for \"5c8f13802683e1366d5ec57dcff6cd78805ca2ccf14c168de9eb3512026d9ce0\"" Nov 8 00:06:41.044237 containerd[2134]: time="2025-11-08T00:06:41.044023458Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:06:41.048239 containerd[2134]: time="2025-11-08T00:06:41.047021166Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 8 00:06:41.048239 containerd[2134]: time="2025-11-08T00:06:41.047190474Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 8 00:06:41.049462 kubelet[3406]: E1108 00:06:41.047400 3406 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 8 00:06:41.049462 kubelet[3406]: E1108 00:06:41.047481 3406 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 8 00:06:41.049462 kubelet[3406]: E1108 00:06:41.047806 3406 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gjq8z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-qnsdl_calico-system(19e663c5-ada4-41f4-b329-6d803ea3d32d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 8 00:06:41.051929 kubelet[3406]: E1108 00:06:41.050795 3406 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": 
ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-qnsdl" podUID="19e663c5-ada4-41f4-b329-6d803ea3d32d" Nov 8 00:06:41.052118 containerd[2134]: time="2025-11-08T00:06:41.051791910Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 8 00:06:41.154038 containerd[2134]: 2025-11-08 00:06:41.073 [WARNING][5952] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="5c8f13802683e1366d5ec57dcff6cd78805ca2ccf14c168de9eb3512026d9ce0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--187-k8s-calico--apiserver--bc8bf555f--2vp5h-eth0", GenerateName:"calico-apiserver-bc8bf555f-", Namespace:"calico-apiserver", SelfLink:"", UID:"e2b4786b-bdcd-41e2-8651-d03da4e624c0", ResourceVersion:"1021", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 5, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"bc8bf555f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-187", ContainerID:"0a8de072feeeb250f677c6b30f3c70e05670d24378e675420ff2f79c4133b246", Pod:"calico-apiserver-bc8bf555f-2vp5h", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.96.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3cda410b653", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:06:41.154038 containerd[2134]: 2025-11-08 00:06:41.073 [INFO][5952] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5c8f13802683e1366d5ec57dcff6cd78805ca2ccf14c168de9eb3512026d9ce0" Nov 8 00:06:41.154038 containerd[2134]: 2025-11-08 00:06:41.073 [INFO][5952] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5c8f13802683e1366d5ec57dcff6cd78805ca2ccf14c168de9eb3512026d9ce0" iface="eth0" netns="" Nov 8 00:06:41.154038 containerd[2134]: 2025-11-08 00:06:41.073 [INFO][5952] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5c8f13802683e1366d5ec57dcff6cd78805ca2ccf14c168de9eb3512026d9ce0" Nov 8 00:06:41.154038 containerd[2134]: 2025-11-08 00:06:41.073 [INFO][5952] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5c8f13802683e1366d5ec57dcff6cd78805ca2ccf14c168de9eb3512026d9ce0" Nov 8 00:06:41.154038 containerd[2134]: 2025-11-08 00:06:41.128 [INFO][5959] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="5c8f13802683e1366d5ec57dcff6cd78805ca2ccf14c168de9eb3512026d9ce0" HandleID="k8s-pod-network.5c8f13802683e1366d5ec57dcff6cd78805ca2ccf14c168de9eb3512026d9ce0" Workload="ip--172--31--28--187-k8s-calico--apiserver--bc8bf555f--2vp5h-eth0" Nov 8 00:06:41.154038 containerd[2134]: 2025-11-08 00:06:41.129 [INFO][5959] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
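The ErrImagePull entries above (and the csi pull that fails the same way below) show containerd resolving ghcr.io/flatcar/calico/goldmane:v3.30.4 to a registry 404 ("trying next host - response was http.StatusNotFound"). That resolution step can be reproduced by hand against the OCI distribution API; the sketch below assumes ghcr.io's anonymous token endpoint for public repositories and trims error handling to the essentials:

```go
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
)

// Reproduces the failing resolution step: fetch an anonymous pull token,
// then HEAD the manifest for the tag the kubelet could not pull. A 404
// here corresponds to the log's "failed to resolve reference ... not found".
func main() {
	const repo, tag = "flatcar/calico/goldmane", "v3.30.4"

	// 1. anonymous bearer token for a public repository (assumed endpoint)
	resp, err := http.Get("https://ghcr.io/token?service=ghcr.io&scope=repository:" + repo + ":pull")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	var tok struct {
		Token string `json:"token"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&tok); err != nil {
		panic(err)
	}

	// 2. HEAD the manifest, as a registry client does before pulling blobs
	req, _ := http.NewRequest(http.MethodHead,
		"https://ghcr.io/v2/"+repo+"/manifests/"+tag, nil)
	req.Header.Set("Authorization", "Bearer "+tok.Token)
	req.Header.Set("Accept",
		"application/vnd.oci.image.index.v1+json, application/vnd.docker.distribution.manifest.list.v2+json")
	res, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	res.Body.Close()
	fmt.Println(res.Status) // "404 Not Found" matches the NotFound in the log
}
```

The 404 confirms the tag simply does not exist under that name, which is why kubelet moves the pod to ImagePullBackOff rather than retrying immediately.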
Nov 8 00:06:41.154038 containerd[2134]: 2025-11-08 00:06:41.129 [INFO][5959] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:06:41.154038 containerd[2134]: 2025-11-08 00:06:41.143 [WARNING][5959] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="5c8f13802683e1366d5ec57dcff6cd78805ca2ccf14c168de9eb3512026d9ce0" HandleID="k8s-pod-network.5c8f13802683e1366d5ec57dcff6cd78805ca2ccf14c168de9eb3512026d9ce0" Workload="ip--172--31--28--187-k8s-calico--apiserver--bc8bf555f--2vp5h-eth0" Nov 8 00:06:41.154038 containerd[2134]: 2025-11-08 00:06:41.143 [INFO][5959] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="5c8f13802683e1366d5ec57dcff6cd78805ca2ccf14c168de9eb3512026d9ce0" HandleID="k8s-pod-network.5c8f13802683e1366d5ec57dcff6cd78805ca2ccf14c168de9eb3512026d9ce0" Workload="ip--172--31--28--187-k8s-calico--apiserver--bc8bf555f--2vp5h-eth0" Nov 8 00:06:41.154038 containerd[2134]: 2025-11-08 00:06:41.146 [INFO][5959] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:06:41.154038 containerd[2134]: 2025-11-08 00:06:41.150 [INFO][5952] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5c8f13802683e1366d5ec57dcff6cd78805ca2ccf14c168de9eb3512026d9ce0" Nov 8 00:06:41.156052 containerd[2134]: time="2025-11-08T00:06:41.154096795Z" level=info msg="TearDown network for sandbox \"5c8f13802683e1366d5ec57dcff6cd78805ca2ccf14c168de9eb3512026d9ce0\" successfully" Nov 8 00:06:41.156052 containerd[2134]: time="2025-11-08T00:06:41.154136719Z" level=info msg="StopPodSandbox for \"5c8f13802683e1366d5ec57dcff6cd78805ca2ccf14c168de9eb3512026d9ce0\" returns successfully" Nov 8 00:06:41.156052 containerd[2134]: time="2025-11-08T00:06:41.154928371Z" level=info msg="RemovePodSandbox for \"5c8f13802683e1366d5ec57dcff6cd78805ca2ccf14c168de9eb3512026d9ce0\"" Nov 8 00:06:41.156052 containerd[2134]: time="2025-11-08T00:06:41.154989139Z" level=info msg="Forcibly stopping sandbox \"5c8f13802683e1366d5ec57dcff6cd78805ca2ccf14c168de9eb3512026d9ce0\"" Nov 8 00:06:41.316781 containerd[2134]: 2025-11-08 00:06:41.248 [WARNING][5973] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5c8f13802683e1366d5ec57dcff6cd78805ca2ccf14c168de9eb3512026d9ce0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--187-k8s-calico--apiserver--bc8bf555f--2vp5h-eth0", GenerateName:"calico-apiserver-bc8bf555f-", Namespace:"calico-apiserver", SelfLink:"", UID:"e2b4786b-bdcd-41e2-8651-d03da4e624c0", ResourceVersion:"1021", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 5, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"bc8bf555f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-187", ContainerID:"0a8de072feeeb250f677c6b30f3c70e05670d24378e675420ff2f79c4133b246", Pod:"calico-apiserver-bc8bf555f-2vp5h", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.96.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3cda410b653", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:06:41.316781 containerd[2134]: 2025-11-08 00:06:41.248 [INFO][5973] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5c8f13802683e1366d5ec57dcff6cd78805ca2ccf14c168de9eb3512026d9ce0" Nov 8 00:06:41.316781 containerd[2134]: 2025-11-08 00:06:41.249 [INFO][5973] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5c8f13802683e1366d5ec57dcff6cd78805ca2ccf14c168de9eb3512026d9ce0" iface="eth0" netns="" Nov 8 00:06:41.316781 containerd[2134]: 2025-11-08 00:06:41.249 [INFO][5973] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5c8f13802683e1366d5ec57dcff6cd78805ca2ccf14c168de9eb3512026d9ce0" Nov 8 00:06:41.316781 containerd[2134]: 2025-11-08 00:06:41.249 [INFO][5973] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5c8f13802683e1366d5ec57dcff6cd78805ca2ccf14c168de9eb3512026d9ce0" Nov 8 00:06:41.316781 containerd[2134]: 2025-11-08 00:06:41.291 [INFO][5980] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="5c8f13802683e1366d5ec57dcff6cd78805ca2ccf14c168de9eb3512026d9ce0" HandleID="k8s-pod-network.5c8f13802683e1366d5ec57dcff6cd78805ca2ccf14c168de9eb3512026d9ce0" Workload="ip--172--31--28--187-k8s-calico--apiserver--bc8bf555f--2vp5h-eth0" Nov 8 00:06:41.316781 containerd[2134]: 2025-11-08 00:06:41.291 [INFO][5980] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:06:41.316781 containerd[2134]: 2025-11-08 00:06:41.291 [INFO][5980] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:06:41.316781 containerd[2134]: 2025-11-08 00:06:41.305 [WARNING][5980] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5c8f13802683e1366d5ec57dcff6cd78805ca2ccf14c168de9eb3512026d9ce0" HandleID="k8s-pod-network.5c8f13802683e1366d5ec57dcff6cd78805ca2ccf14c168de9eb3512026d9ce0" Workload="ip--172--31--28--187-k8s-calico--apiserver--bc8bf555f--2vp5h-eth0" Nov 8 00:06:41.316781 containerd[2134]: 2025-11-08 00:06:41.305 [INFO][5980] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="5c8f13802683e1366d5ec57dcff6cd78805ca2ccf14c168de9eb3512026d9ce0" HandleID="k8s-pod-network.5c8f13802683e1366d5ec57dcff6cd78805ca2ccf14c168de9eb3512026d9ce0" Workload="ip--172--31--28--187-k8s-calico--apiserver--bc8bf555f--2vp5h-eth0" Nov 8 00:06:41.316781 containerd[2134]: 2025-11-08 00:06:41.308 [INFO][5980] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:06:41.316781 containerd[2134]: 2025-11-08 00:06:41.311 [INFO][5973] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5c8f13802683e1366d5ec57dcff6cd78805ca2ccf14c168de9eb3512026d9ce0" Nov 8 00:06:41.316781 containerd[2134]: time="2025-11-08T00:06:41.315087644Z" level=info msg="TearDown network for sandbox \"5c8f13802683e1366d5ec57dcff6cd78805ca2ccf14c168de9eb3512026d9ce0\" successfully" Nov 8 00:06:41.323184 containerd[2134]: time="2025-11-08T00:06:41.323123084Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5c8f13802683e1366d5ec57dcff6cd78805ca2ccf14c168de9eb3512026d9ce0\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 8 00:06:41.323462 containerd[2134]: time="2025-11-08T00:06:41.323420900Z" level=info msg="RemovePodSandbox \"5c8f13802683e1366d5ec57dcff6cd78805ca2ccf14c168de9eb3512026d9ce0\" returns successfully" Nov 8 00:06:41.324483 containerd[2134]: time="2025-11-08T00:06:41.324422492Z" level=info msg="StopPodSandbox for \"4835cf21f53358b2b2d9fab993d2d36eb58a01accb047d1779856c2347e825e0\"" Nov 8 00:06:41.362133 containerd[2134]: time="2025-11-08T00:06:41.362054420Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:06:41.364676 containerd[2134]: time="2025-11-08T00:06:41.364423796Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 8 00:06:41.364676 containerd[2134]: time="2025-11-08T00:06:41.364520636Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 8 00:06:41.364877 kubelet[3406]: E1108 00:06:41.364777 3406 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 8 00:06:41.364877 kubelet[3406]: E1108 00:06:41.364846 3406 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 8 00:06:41.365599 kubelet[3406]: E1108 00:06:41.365146 3406 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6lp72,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-tw22z_calico-system(5962793e-cd47-45ea-84d0-190de5cbdb54): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 8 00:06:41.366768 containerd[2134]: time="2025-11-08T00:06:41.366710144Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 8 00:06:41.487159 containerd[2134]: 2025-11-08 00:06:41.412 [WARNING][5994] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4835cf21f53358b2b2d9fab993d2d36eb58a01accb047d1779856c2347e825e0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--187-k8s-coredns--668d6bf9bc--z6wmz-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"b6eae301-8fc0-4763-acc1-9e144d4c979d", ResourceVersion:"1080", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 5, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-187", ContainerID:"60398f70b9eba52b80312fc090403bc9c1f548cec29af923cf3b92af212f5f66", Pod:"coredns-668d6bf9bc-z6wmz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.96.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali93a991e2a18", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:06:41.487159 containerd[2134]: 2025-11-08 00:06:41.414 [INFO][5994] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4835cf21f53358b2b2d9fab993d2d36eb58a01accb047d1779856c2347e825e0" Nov 8 00:06:41.487159 containerd[2134]: 2025-11-08 00:06:41.414 [INFO][5994] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4835cf21f53358b2b2d9fab993d2d36eb58a01accb047d1779856c2347e825e0" iface="eth0" netns="" Nov 8 00:06:41.487159 containerd[2134]: 2025-11-08 00:06:41.414 [INFO][5994] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4835cf21f53358b2b2d9fab993d2d36eb58a01accb047d1779856c2347e825e0" Nov 8 00:06:41.487159 containerd[2134]: 2025-11-08 00:06:41.414 [INFO][5994] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4835cf21f53358b2b2d9fab993d2d36eb58a01accb047d1779856c2347e825e0" Nov 8 00:06:41.487159 containerd[2134]: 2025-11-08 00:06:41.462 [INFO][6001] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="4835cf21f53358b2b2d9fab993d2d36eb58a01accb047d1779856c2347e825e0" HandleID="k8s-pod-network.4835cf21f53358b2b2d9fab993d2d36eb58a01accb047d1779856c2347e825e0" Workload="ip--172--31--28--187-k8s-coredns--668d6bf9bc--z6wmz-eth0" Nov 8 00:06:41.487159 containerd[2134]: 2025-11-08 00:06:41.462 [INFO][6001] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:06:41.487159 containerd[2134]: 2025-11-08 00:06:41.462 [INFO][6001] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 8 00:06:41.487159 containerd[2134]: 2025-11-08 00:06:41.477 [WARNING][6001] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="4835cf21f53358b2b2d9fab993d2d36eb58a01accb047d1779856c2347e825e0" HandleID="k8s-pod-network.4835cf21f53358b2b2d9fab993d2d36eb58a01accb047d1779856c2347e825e0" Workload="ip--172--31--28--187-k8s-coredns--668d6bf9bc--z6wmz-eth0" Nov 8 00:06:41.487159 containerd[2134]: 2025-11-08 00:06:41.477 [INFO][6001] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="4835cf21f53358b2b2d9fab993d2d36eb58a01accb047d1779856c2347e825e0" HandleID="k8s-pod-network.4835cf21f53358b2b2d9fab993d2d36eb58a01accb047d1779856c2347e825e0" Workload="ip--172--31--28--187-k8s-coredns--668d6bf9bc--z6wmz-eth0" Nov 8 00:06:41.487159 containerd[2134]: 2025-11-08 00:06:41.480 [INFO][6001] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:06:41.487159 containerd[2134]: 2025-11-08 00:06:41.483 [INFO][5994] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4835cf21f53358b2b2d9fab993d2d36eb58a01accb047d1779856c2347e825e0" Nov 8 00:06:41.489418 containerd[2134]: time="2025-11-08T00:06:41.487209056Z" level=info msg="TearDown network for sandbox \"4835cf21f53358b2b2d9fab993d2d36eb58a01accb047d1779856c2347e825e0\" successfully" Nov 8 00:06:41.489418 containerd[2134]: time="2025-11-08T00:06:41.487253468Z" level=info msg="StopPodSandbox for \"4835cf21f53358b2b2d9fab993d2d36eb58a01accb047d1779856c2347e825e0\" returns successfully" Nov 8 00:06:41.487825 systemd-networkd[1688]: caliadd8ceeaf7f: Gained IPv6LL Nov 8 00:06:41.488445 systemd-networkd[1688]: cali246fe540a89: Gained IPv6LL Nov 8 00:06:41.493472 containerd[2134]: time="2025-11-08T00:06:41.491992112Z" level=info msg="RemovePodSandbox for \"4835cf21f53358b2b2d9fab993d2d36eb58a01accb047d1779856c2347e825e0\"" Nov 8 00:06:41.493472 containerd[2134]: time="2025-11-08T00:06:41.492045332Z" level=info msg="Forcibly stopping sandbox \"4835cf21f53358b2b2d9fab993d2d36eb58a01accb047d1779856c2347e825e0\"" Nov 8 00:06:41.654859 containerd[2134]: 2025-11-08 00:06:41.572 [WARNING][6015] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4835cf21f53358b2b2d9fab993d2d36eb58a01accb047d1779856c2347e825e0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--187-k8s-coredns--668d6bf9bc--z6wmz-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"b6eae301-8fc0-4763-acc1-9e144d4c979d", ResourceVersion:"1080", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 5, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-187", ContainerID:"60398f70b9eba52b80312fc090403bc9c1f548cec29af923cf3b92af212f5f66", Pod:"coredns-668d6bf9bc-z6wmz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.96.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali93a991e2a18", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:06:41.654859 containerd[2134]: 2025-11-08 00:06:41.573 [INFO][6015] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4835cf21f53358b2b2d9fab993d2d36eb58a01accb047d1779856c2347e825e0" Nov 8 00:06:41.654859 containerd[2134]: 2025-11-08 00:06:41.573 [INFO][6015] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4835cf21f53358b2b2d9fab993d2d36eb58a01accb047d1779856c2347e825e0" iface="eth0" netns="" Nov 8 00:06:41.654859 containerd[2134]: 2025-11-08 00:06:41.573 [INFO][6015] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4835cf21f53358b2b2d9fab993d2d36eb58a01accb047d1779856c2347e825e0" Nov 8 00:06:41.654859 containerd[2134]: 2025-11-08 00:06:41.574 [INFO][6015] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4835cf21f53358b2b2d9fab993d2d36eb58a01accb047d1779856c2347e825e0" Nov 8 00:06:41.654859 containerd[2134]: 2025-11-08 00:06:41.623 [INFO][6022] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="4835cf21f53358b2b2d9fab993d2d36eb58a01accb047d1779856c2347e825e0" HandleID="k8s-pod-network.4835cf21f53358b2b2d9fab993d2d36eb58a01accb047d1779856c2347e825e0" Workload="ip--172--31--28--187-k8s-coredns--668d6bf9bc--z6wmz-eth0" Nov 8 00:06:41.654859 containerd[2134]: 2025-11-08 00:06:41.623 [INFO][6022] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:06:41.654859 containerd[2134]: 2025-11-08 00:06:41.623 [INFO][6022] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 8 00:06:41.654859 containerd[2134]: 2025-11-08 00:06:41.642 [WARNING][6022] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="4835cf21f53358b2b2d9fab993d2d36eb58a01accb047d1779856c2347e825e0" HandleID="k8s-pod-network.4835cf21f53358b2b2d9fab993d2d36eb58a01accb047d1779856c2347e825e0" Workload="ip--172--31--28--187-k8s-coredns--668d6bf9bc--z6wmz-eth0" Nov 8 00:06:41.654859 containerd[2134]: 2025-11-08 00:06:41.642 [INFO][6022] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="4835cf21f53358b2b2d9fab993d2d36eb58a01accb047d1779856c2347e825e0" HandleID="k8s-pod-network.4835cf21f53358b2b2d9fab993d2d36eb58a01accb047d1779856c2347e825e0" Workload="ip--172--31--28--187-k8s-coredns--668d6bf9bc--z6wmz-eth0" Nov 8 00:06:41.654859 containerd[2134]: 2025-11-08 00:06:41.647 [INFO][6022] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:06:41.654859 containerd[2134]: 2025-11-08 00:06:41.651 [INFO][6015] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4835cf21f53358b2b2d9fab993d2d36eb58a01accb047d1779856c2347e825e0" Nov 8 00:06:41.656066 containerd[2134]: time="2025-11-08T00:06:41.654804225Z" level=info msg="TearDown network for sandbox \"4835cf21f53358b2b2d9fab993d2d36eb58a01accb047d1779856c2347e825e0\" successfully" Nov 8 00:06:41.662491 containerd[2134]: time="2025-11-08T00:06:41.662413185Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4835cf21f53358b2b2d9fab993d2d36eb58a01accb047d1779856c2347e825e0\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 8 00:06:41.662702 containerd[2134]: time="2025-11-08T00:06:41.662532153Z" level=info msg="RemovePodSandbox \"4835cf21f53358b2b2d9fab993d2d36eb58a01accb047d1779856c2347e825e0\" returns successfully" Nov 8 00:06:41.673720 containerd[2134]: time="2025-11-08T00:06:41.673406349Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:06:41.675842 containerd[2134]: time="2025-11-08T00:06:41.675679173Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 8 00:06:41.675842 containerd[2134]: time="2025-11-08T00:06:41.675774765Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 8 00:06:41.676730 kubelet[3406]: E1108 00:06:41.675971 3406 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:06:41.676730 kubelet[3406]: E1108 00:06:41.676038 3406 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:06:41.676730 kubelet[3406]: E1108 00:06:41.676394 3406 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xrpkm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-bc8bf555f-bhc54_calico-apiserver(5611c66d-4585-41a1-9c50-eb23da03916c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 8 00:06:41.680392 kubelet[3406]: E1108 00:06:41.679057 3406 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-bc8bf555f-bhc54" podUID="5611c66d-4585-41a1-9c50-eb23da03916c" Nov 8 00:06:41.680581 containerd[2134]: time="2025-11-08T00:06:41.677070885Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 8 00:06:41.849494 kubelet[3406]: E1108 00:06:41.847745 3406 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-bc8bf555f-bhc54" podUID="5611c66d-4585-41a1-9c50-eb23da03916c" Nov 8 00:06:41.867652 kubelet[3406]: E1108 00:06:41.867233 3406 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-qnsdl" podUID="19e663c5-ada4-41f4-b329-6d803ea3d32d" Nov 8 00:06:41.995600 containerd[2134]: time="2025-11-08T00:06:41.995342735Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:06:41.998370 containerd[2134]: time="2025-11-08T00:06:41.998183459Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 8 00:06:41.998370 containerd[2134]: time="2025-11-08T00:06:41.998289719Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 8 00:06:41.998768 kubelet[3406]: E1108 00:06:41.998545 3406 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 8 00:06:41.998907 kubelet[3406]: E1108 00:06:41.998842 3406 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 8 00:06:41.999428 kubelet[3406]: E1108 00:06:41.999114 3406 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6lp72,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-tw22z_calico-system(5962793e-cd47-45ea-84d0-190de5cbdb54): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 8 00:06:42.000508 kubelet[3406]: E1108 00:06:42.000415 3406 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-tw22z" podUID="5962793e-cd47-45ea-84d0-190de5cbdb54" Nov 8 00:06:42.126978 systemd-networkd[1688]: cali4f79dc05451: Gained IPv6LL Nov 8 00:06:42.872322 kubelet[3406]: E1108 00:06:42.871756 3406 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-bc8bf555f-bhc54" podUID="5611c66d-4585-41a1-9c50-eb23da03916c" Nov 8 00:06:42.874721 kubelet[3406]: E1108 00:06:42.874476 3406 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-tw22z" podUID="5962793e-cd47-45ea-84d0-190de5cbdb54" Nov 8 00:06:43.214119 systemd[1]: Started sshd@8-172.31.28.187:22-139.178.89.65:59340.service - OpenSSH per-connection server daemon (139.178.89.65:59340). Nov 8 00:06:43.413147 sshd[6039]: Accepted publickey for core from 139.178.89.65 port 59340 ssh2: RSA SHA256:tnEXpnDY8gLTej7GJ+T99WI4otIwvlI9IcMNDF42aqw Nov 8 00:06:43.416806 sshd[6039]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:06:43.429679 systemd-logind[2107]: New session 9 of user core. Nov 8 00:06:43.433694 systemd[1]: Started session-9.scope - Session 9 of User core. Nov 8 00:06:43.751126 sshd[6039]: pam_unix(sshd:session): session closed for user core Nov 8 00:06:43.760057 systemd[1]: sshd@8-172.31.28.187:22-139.178.89.65:59340.service: Deactivated successfully. Nov 8 00:06:43.767872 systemd[1]: session-9.scope: Deactivated successfully. Nov 8 00:06:43.770053 systemd-logind[2107]: Session 9 logged out. Waiting for processes to exit. Nov 8 00:06:43.772954 systemd-logind[2107]: Removed session 9. 
Nov 8 00:06:44.839936 ntpd[2088]: Listen normally on 6 vxlan.calico 192.168.96.64:123 Nov 8 00:06:44.840094 ntpd[2088]: Listen normally on 7 cali9b73900fe8c [fe80::ecee:eeff:feee:eeee%4]:123 Nov 8 00:06:44.840188 ntpd[2088]: Listen normally on 8 vxlan.calico [fe80::64d0:baff:feee:4d22%5]:123 Nov 8 00:06:44.840264 ntpd[2088]: Listen normally on 9 cali19953ff7d94 [fe80::ecee:eeff:feee:eeee%8]:123 Nov 8 00:06:44.840338 ntpd[2088]: Listen normally on 10 cali3cda410b653 [fe80::ecee:eeff:feee:eeee%9]:123 Nov 8 00:06:44.840416 ntpd[2088]: Listen normally on 11 cali93a991e2a18 [fe80::ecee:eeff:feee:eeee%10]:123 Nov 8 00:06:44.840864 ntpd[2088]: Listen normally on 12 calicc988add759 [fe80::ecee:eeff:feee:eeee%11]:123 Nov 8 00:06:44.840993 ntpd[2088]: Listen normally on 13 caliadd8ceeaf7f [fe80::ecee:eeff:feee:eeee%12]:123 Nov 8 00:06:44.841065 ntpd[2088]: Listen normally on 14 cali246fe540a89 [fe80::ecee:eeff:feee:eeee%13]:123 Nov 8 00:06:44.841146 ntpd[2088]: Listen normally on 15 cali4f79dc05451 [fe80::ecee:eeff:feee:eeee%14]:123 Nov 8 00:06:47.084372 containerd[2134]: time="2025-11-08T00:06:47.083902104Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 8 00:06:47.380697 containerd[2134]: time="2025-11-08T00:06:47.380416334Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:06:47.382972 containerd[2134]: time="2025-11-08T00:06:47.382784834Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 8 00:06:47.382972 containerd[2134]: time="2025-11-08T00:06:47.382885034Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 8 00:06:47.383229 kubelet[3406]: E1108 00:06:47.383149 3406 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\":
ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 8 00:06:47.385351 kubelet[3406]: E1108 00:06:47.383238 3406 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 8 00:06:47.385351 kubelet[3406]: E1108 00:06:47.383394 3406 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:ad073e8bb50749a3ae91e94ed2b29ac5,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-dzh5w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-8465bf669f-f6zwz_calico-system(b7a04fa0-10c9-4b7a-b022-1e4b716cfc44): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 8 00:06:47.388840 containerd[2134]: time="2025-11-08T00:06:47.387771962Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 8 00:06:47.711596 containerd[2134]: time="2025-11-08T00:06:47.711374559Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:06:47.714854 containerd[2134]: time="2025-11-08T00:06:47.714587823Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 8 00:06:47.715399 containerd[2134]: time="2025-11-08T00:06:47.714642675Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 8 00:06:47.718063 kubelet[3406]: E1108 00:06:47.716160 3406 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and 
unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 8 00:06:47.718063 kubelet[3406]: E1108 00:06:47.716235 3406 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 8 00:06:47.718063 kubelet[3406]: E1108 00:06:47.716385 3406 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dzh5w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-8465bf669f-f6zwz_calico-system(b7a04fa0-10c9-4b7a-b022-1e4b716cfc44): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 8 00:06:47.723607 kubelet[3406]: E1108 00:06:47.722786 3406 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed 
to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-8465bf669f-f6zwz" podUID="b7a04fa0-10c9-4b7a-b022-1e4b716cfc44" Nov 8 00:06:48.083286 containerd[2134]: time="2025-11-08T00:06:48.083117473Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 8 00:06:48.393353 containerd[2134]: time="2025-11-08T00:06:48.393251595Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:06:48.395753 containerd[2134]: time="2025-11-08T00:06:48.395636007Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 8 00:06:48.395867 containerd[2134]: time="2025-11-08T00:06:48.395817567Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 8 00:06:48.396065 kubelet[3406]: E1108 00:06:48.396006 3406 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:06:48.396935 kubelet[3406]: E1108 00:06:48.396077 3406 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:06:48.396935 kubelet[3406]: E1108 00:06:48.396258 3406 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5fj5z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-bc8bf555f-2vp5h_calico-apiserver(e2b4786b-bdcd-41e2-8651-d03da4e624c0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 8 00:06:48.398061 kubelet[3406]: E1108 00:06:48.397527 3406 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-bc8bf555f-2vp5h" podUID="e2b4786b-bdcd-41e2-8651-d03da4e624c0" Nov 8 00:06:48.784095 systemd[1]: Started sshd@9-172.31.28.187:22-139.178.89.65:39860.service - OpenSSH per-connection server daemon (139.178.89.65:39860). Nov 8 00:06:48.974932 sshd[6058]: Accepted publickey for core from 139.178.89.65 port 39860 ssh2: RSA SHA256:tnEXpnDY8gLTej7GJ+T99WI4otIwvlI9IcMNDF42aqw Nov 8 00:06:48.977695 sshd[6058]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:06:48.987185 systemd-logind[2107]: New session 10 of user core. Nov 8 00:06:48.994100 systemd[1]: Started session-10.scope - Session 10 of User core. Nov 8 00:06:49.283939 sshd[6058]: pam_unix(sshd:session): session closed for user core Nov 8 00:06:49.292971 systemd[1]: sshd@9-172.31.28.187:22-139.178.89.65:39860.service: Deactivated successfully. Nov 8 00:06:49.293790 systemd-logind[2107]: Session 10 logged out. Waiting for processes to exit. Nov 8 00:06:49.300547 systemd[1]: session-10.scope: Deactivated successfully. Nov 8 00:06:49.302703 systemd-logind[2107]: Removed session 10. Nov 8 00:06:49.314079 systemd[1]: Started sshd@10-172.31.28.187:22-139.178.89.65:39862.service - OpenSSH per-connection server daemon (139.178.89.65:39862). Nov 8 00:06:49.503958 sshd[6073]: Accepted publickey for core from 139.178.89.65 port 39862 ssh2: RSA SHA256:tnEXpnDY8gLTej7GJ+T99WI4otIwvlI9IcMNDF42aqw Nov 8 00:06:49.506851 sshd[6073]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:06:49.515810 systemd-logind[2107]: New session 11 of user core. Nov 8 00:06:49.524278 systemd[1]: Started session-11.scope - Session 11 of User core. Nov 8 00:06:49.874884 sshd[6073]: pam_unix(sshd:session): session closed for user core Nov 8 00:06:49.892801 systemd-logind[2107]: Session 11 logged out. 
Waiting for processes to exit.
Nov 8 00:06:49.896119 systemd[1]: sshd@10-172.31.28.187:22-139.178.89.65:39862.service: Deactivated successfully.
Nov 8 00:06:49.911550 systemd[1]: session-11.scope: Deactivated successfully.
Nov 8 00:06:49.932134 systemd[1]: Started sshd@11-172.31.28.187:22-139.178.89.65:39872.service - OpenSSH per-connection server daemon (139.178.89.65:39872).
Nov 8 00:06:49.933924 systemd-logind[2107]: Removed session 11.
Nov 8 00:06:50.128652 sshd[6086]: Accepted publickey for core from 139.178.89.65 port 39872 ssh2: RSA SHA256:tnEXpnDY8gLTej7GJ+T99WI4otIwvlI9IcMNDF42aqw
Nov 8 00:06:50.127923 sshd[6086]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 8 00:06:50.141010 systemd-logind[2107]: New session 12 of user core.
Nov 8 00:06:50.149094 systemd[1]: Started session-12.scope - Session 12 of User core.
Nov 8 00:06:50.412314 sshd[6086]: pam_unix(sshd:session): session closed for user core
Nov 8 00:06:50.418422 systemd[1]: sshd@11-172.31.28.187:22-139.178.89.65:39872.service: Deactivated successfully.
Nov 8 00:06:50.419385 systemd-logind[2107]: Session 12 logged out. Waiting for processes to exit.
Nov 8 00:06:50.426800 systemd[1]: session-12.scope: Deactivated successfully.
Nov 8 00:06:50.431346 systemd-logind[2107]: Removed session 12.
Nov 8 00:06:54.082608 containerd[2134]: time="2025-11-08T00:06:54.081822667Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Nov 8 00:06:54.355257 containerd[2134]: time="2025-11-08T00:06:54.355071884Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Nov 8 00:06:54.357335 containerd[2134]: time="2025-11-08T00:06:54.357215924Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Nov 8 00:06:54.357531 containerd[2134]: time="2025-11-08T00:06:54.357360704Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77"
Nov 8 00:06:54.357626 kubelet[3406]: E1108 00:06:54.357582 3406 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Nov 8 00:06:54.358209 kubelet[3406]: E1108 00:06:54.357648 3406 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Nov 8 00:06:54.358209 kubelet[3406]: E1108 00:06:54.357815 3406 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xrpkm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-bc8bf555f-bhc54_calico-apiserver(5611c66d-4585-41a1-9c50-eb23da03916c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Nov 8 00:06:54.359980 kubelet[3406]: E1108 00:06:54.359675 3406 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-bc8bf555f-bhc54" podUID="5611c66d-4585-41a1-9c50-eb23da03916c"
Nov 8 00:06:55.083512 containerd[2134]: time="2025-11-08T00:06:55.083436044Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\""
Nov 8 00:06:55.382917 containerd[2134]: time="2025-11-08T00:06:55.382710549Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Nov 8 00:06:55.384963 containerd[2134]: time="2025-11-08T00:06:55.384784341Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found"
Nov 8 00:06:55.384963 containerd[2134]: time="2025-11-08T00:06:55.384917877Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85"
Nov 8 00:06:55.386478 kubelet[3406]: E1108 00:06:55.385322 3406 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Nov 8 00:06:55.386478 kubelet[3406]: E1108 00:06:55.385388 3406 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Nov 8 00:06:55.386478 kubelet[3406]: E1108 00:06:55.385721 3406 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-twthk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-7cd4d69d7c-ptmh4_calico-system(a118c8b1-dc8a-49b1-956e-fabb0c90510f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError"
Nov 8 00:06:55.388519 kubelet[3406]: E1108 00:06:55.387947 3406 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7cd4d69d7c-ptmh4" podUID="a118c8b1-dc8a-49b1-956e-fabb0c90510f"
Nov 8 00:06:55.389275 containerd[2134]: time="2025-11-08T00:06:55.387911649Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\""
Nov 8 00:06:55.442121 systemd[1]: Started sshd@12-172.31.28.187:22-139.178.89.65:39874.service - OpenSSH per-connection server daemon (139.178.89.65:39874).
Nov 8 00:06:55.624498 sshd[6111]: Accepted publickey for core from 139.178.89.65 port 39874 ssh2: RSA SHA256:tnEXpnDY8gLTej7GJ+T99WI4otIwvlI9IcMNDF42aqw
Nov 8 00:06:55.627690 sshd[6111]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 8 00:06:55.636778 systemd-logind[2107]: New session 13 of user core.
Nov 8 00:06:55.645274 systemd[1]: Started session-13.scope - Session 13 of User core.
Nov 8 00:06:55.679627 containerd[2134]: time="2025-11-08T00:06:55.679530047Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Nov 8 00:06:55.681653 containerd[2134]: time="2025-11-08T00:06:55.681552383Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found"
Nov 8 00:06:55.681776 containerd[2134]: time="2025-11-08T00:06:55.681720143Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77"
Nov 8 00:06:55.681965 kubelet[3406]: E1108 00:06:55.681909 3406 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Nov 8 00:06:55.682059 kubelet[3406]: E1108 00:06:55.681983 3406 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Nov 8 00:06:55.682764 kubelet[3406]: E1108 00:06:55.682185 3406 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gjq8z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-qnsdl_calico-system(19e663c5-ada4-41f4-b329-6d803ea3d32d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError"
Nov 8 00:06:55.683698 kubelet[3406]: E1108 00:06:55.683644 3406 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-qnsdl" podUID="19e663c5-ada4-41f4-b329-6d803ea3d32d"
Nov 8 00:06:55.908470 sshd[6111]: pam_unix(sshd:session): session closed for user core
Nov 8 00:06:55.919893 systemd[1]: sshd@12-172.31.28.187:22-139.178.89.65:39874.service: Deactivated successfully.
Nov 8 00:06:55.926071 systemd[1]: session-13.scope: Deactivated successfully.
Nov 8 00:06:55.927136 systemd-logind[2107]: Session 13 logged out. Waiting for processes to exit.
Nov 8 00:06:55.930899 systemd-logind[2107]: Removed session 13.
Nov 8 00:06:58.084184 containerd[2134]: time="2025-11-08T00:06:58.083687663Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\""
Nov 8 00:06:58.397018 containerd[2134]: time="2025-11-08T00:06:58.396878472Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Nov 8 00:06:58.399115 containerd[2134]: time="2025-11-08T00:06:58.399056772Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found"
Nov 8 00:06:58.399239 containerd[2134]: time="2025-11-08T00:06:58.399192900Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69"
Nov 8 00:06:58.399433 kubelet[3406]: E1108 00:06:58.399378 3406 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Nov 8 00:06:58.400079 kubelet[3406]: E1108 00:06:58.399450 3406 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Nov 8 00:06:58.400079 kubelet[3406]: E1108 00:06:58.399645 3406 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6lp72,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-tw22z_calico-system(5962793e-cd47-45ea-84d0-190de5cbdb54): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError"
Nov 8 00:06:58.403315 containerd[2134]: time="2025-11-08T00:06:58.403210188Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\""
Nov 8 00:06:58.703698 containerd[2134]: time="2025-11-08T00:06:58.703454150Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Nov 8 00:06:58.705841 containerd[2134]: time="2025-11-08T00:06:58.705768206Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found"
Nov 8 00:06:58.705979 containerd[2134]: time="2025-11-08T00:06:58.705926498Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93"
Nov 8 00:06:58.706179 kubelet[3406]: E1108 00:06:58.706121 3406 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Nov 8 00:06:58.706339 kubelet[3406]: E1108 00:06:58.706194 3406 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Nov 8 00:06:58.706431 kubelet[3406]: E1108 00:06:58.706360 3406 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6lp72,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-tw22z_calico-system(5962793e-cd47-45ea-84d0-190de5cbdb54): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError"
Nov 8 00:06:58.708239 kubelet[3406]: E1108 00:06:58.708133 3406 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-tw22z" podUID="5962793e-cd47-45ea-84d0-190de5cbdb54"
Nov 8 00:07:00.940384 systemd[1]: Started sshd@13-172.31.28.187:22-139.178.89.65:37694.service - OpenSSH per-connection server daemon (139.178.89.65:37694).
Nov 8 00:07:01.131195 sshd[6150]: Accepted publickey for core from 139.178.89.65 port 37694 ssh2: RSA SHA256:tnEXpnDY8gLTej7GJ+T99WI4otIwvlI9IcMNDF42aqw
Nov 8 00:07:01.134543 sshd[6150]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 8 00:07:01.143734 systemd-logind[2107]: New session 14 of user core.
Nov 8 00:07:01.151741 systemd[1]: Started session-14.scope - Session 14 of User core.
Nov 8 00:07:01.442939 sshd[6150]: pam_unix(sshd:session): session closed for user core
Nov 8 00:07:01.450763 systemd-logind[2107]: Session 14 logged out. Waiting for processes to exit.
Nov 8 00:07:01.452254 systemd[1]: sshd@13-172.31.28.187:22-139.178.89.65:37694.service: Deactivated successfully.
Nov 8 00:07:01.458662 systemd[1]: session-14.scope: Deactivated successfully.
Nov 8 00:07:01.460893 systemd-logind[2107]: Removed session 14.
Nov 8 00:07:02.085489 kubelet[3406]: E1108 00:07:02.084146 3406 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-bc8bf555f-2vp5h" podUID="e2b4786b-bdcd-41e2-8651-d03da4e624c0"
Nov 8 00:07:02.088981 kubelet[3406]: E1108 00:07:02.088881 3406 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-8465bf669f-f6zwz" podUID="b7a04fa0-10c9-4b7a-b022-1e4b716cfc44"
Nov 8 00:07:06.481089 systemd[1]: Started sshd@14-172.31.28.187:22-139.178.89.65:48360.service - OpenSSH per-connection server daemon (139.178.89.65:48360).
Nov 8 00:07:06.663405 sshd[6164]: Accepted publickey for core from 139.178.89.65 port 48360 ssh2: RSA SHA256:tnEXpnDY8gLTej7GJ+T99WI4otIwvlI9IcMNDF42aqw
Nov 8 00:07:06.666217 sshd[6164]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 8 00:07:06.675873 systemd-logind[2107]: New session 15 of user core.
Nov 8 00:07:06.693676 systemd[1]: Started session-15.scope - Session 15 of User core.
Nov 8 00:07:06.958939 sshd[6164]: pam_unix(sshd:session): session closed for user core
Nov 8 00:07:06.967178 systemd[1]: sshd@14-172.31.28.187:22-139.178.89.65:48360.service: Deactivated successfully.
Nov 8 00:07:06.977931 systemd-logind[2107]: Session 15 logged out. Waiting for processes to exit.
Nov 8 00:07:06.978078 systemd[1]: session-15.scope: Deactivated successfully.
Nov 8 00:07:06.982110 systemd-logind[2107]: Removed session 15.
Nov 8 00:07:08.084156 kubelet[3406]: E1108 00:07:08.083647 3406 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-qnsdl" podUID="19e663c5-ada4-41f4-b329-6d803ea3d32d"
Nov 8 00:07:09.083691 kubelet[3406]: E1108 00:07:09.083124 3406 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-bc8bf555f-bhc54" podUID="5611c66d-4585-41a1-9c50-eb23da03916c"
Nov 8 00:07:10.085113 kubelet[3406]: E1108 00:07:10.084931 3406 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7cd4d69d7c-ptmh4" podUID="a118c8b1-dc8a-49b1-956e-fabb0c90510f"
Nov 8 00:07:11.994286 systemd[1]: Started sshd@15-172.31.28.187:22-139.178.89.65:48368.service - OpenSSH per-connection server daemon (139.178.89.65:48368).
Nov 8 00:07:12.098197 kubelet[3406]: E1108 00:07:12.095856 3406 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-tw22z" podUID="5962793e-cd47-45ea-84d0-190de5cbdb54"
Nov 8 00:07:12.230644 sshd[6179]: Accepted publickey for core from 139.178.89.65 port 48368 ssh2: RSA SHA256:tnEXpnDY8gLTej7GJ+T99WI4otIwvlI9IcMNDF42aqw
Nov 8 00:07:12.232639 sshd[6179]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 8 00:07:12.244894 systemd-logind[2107]: New session 16 of user core.
Nov 8 00:07:12.254153 systemd[1]: Started session-16.scope - Session 16 of User core.
Nov 8 00:07:12.565087 sshd[6179]: pam_unix(sshd:session): session closed for user core
Nov 8 00:07:12.577509 systemd[1]: sshd@15-172.31.28.187:22-139.178.89.65:48368.service: Deactivated successfully.
Nov 8 00:07:12.589133 systemd-logind[2107]: Session 16 logged out. Waiting for processes to exit.
Nov 8 00:07:12.609018 systemd[1]: Started sshd@16-172.31.28.187:22-139.178.89.65:48382.service - OpenSSH per-connection server daemon (139.178.89.65:48382).
Nov 8 00:07:12.611770 systemd[1]: session-16.scope: Deactivated successfully.
Nov 8 00:07:12.617457 systemd-logind[2107]: Removed session 16.
Nov 8 00:07:12.876196 sshd[6193]: Accepted publickey for core from 139.178.89.65 port 48382 ssh2: RSA SHA256:tnEXpnDY8gLTej7GJ+T99WI4otIwvlI9IcMNDF42aqw
Nov 8 00:07:12.877388 sshd[6193]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 8 00:07:12.895879 systemd-logind[2107]: New session 17 of user core.
Nov 8 00:07:12.903199 systemd[1]: Started session-17.scope - Session 17 of User core.
Nov 8 00:07:13.088594 containerd[2134]: time="2025-11-08T00:07:13.086305633Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\""
Nov 8 00:07:13.407149 containerd[2134]: time="2025-11-08T00:07:13.406926099Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Nov 8 00:07:13.408706 containerd[2134]: time="2025-11-08T00:07:13.408141459Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found"
Nov 8 00:07:13.408706 containerd[2134]: time="2025-11-08T00:07:13.408327447Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73"
Nov 8 00:07:13.409851 kubelet[3406]: E1108 00:07:13.408591 3406 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4"
Nov 8 00:07:13.409851 kubelet[3406]: E1108 00:07:13.408657 3406 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4"
Nov 8 00:07:13.409851 kubelet[3406]: E1108 00:07:13.408802 3406 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:ad073e8bb50749a3ae91e94ed2b29ac5,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-dzh5w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-8465bf669f-f6zwz_calico-system(b7a04fa0-10c9-4b7a-b022-1e4b716cfc44): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError"
Nov 8 00:07:13.415289 containerd[2134]: time="2025-11-08T00:07:13.415226715Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\""
Nov 8 00:07:13.588383 sshd[6193]: pam_unix(sshd:session): session closed for user core
Nov 8 00:07:13.603882 systemd[1]: sshd@16-172.31.28.187:22-139.178.89.65:48382.service: Deactivated successfully.
Nov 8 00:07:13.605767 systemd-logind[2107]: Session 17 logged out. Waiting for processes to exit.
Nov 8 00:07:13.616143 systemd[1]: session-17.scope: Deactivated successfully.
Nov 8 00:07:13.630012 systemd[1]: Started sshd@17-172.31.28.187:22-139.178.89.65:48386.service - OpenSSH per-connection server daemon (139.178.89.65:48386).
Nov 8 00:07:13.634064 systemd-logind[2107]: Removed session 17.
Nov 8 00:07:13.752110 containerd[2134]: time="2025-11-08T00:07:13.751102793Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Nov 8 00:07:13.753707 containerd[2134]: time="2025-11-08T00:07:13.753550325Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found"
Nov 8 00:07:13.753707 containerd[2134]: time="2025-11-08T00:07:13.753620453Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85"
Nov 8 00:07:13.753993 kubelet[3406]: E1108 00:07:13.753905 3406 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4"
Nov 8 00:07:13.754126 kubelet[3406]: E1108 00:07:13.753995 3406 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4"
Nov 8 00:07:13.754857 kubelet[3406]: E1108 00:07:13.754250 3406 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dzh5w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-8465bf669f-f6zwz_calico-system(b7a04fa0-10c9-4b7a-b022-1e4b716cfc44): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError"
Nov 8 00:07:13.755678 kubelet[3406]: E1108 00:07:13.755604 3406 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-8465bf669f-f6zwz" podUID="b7a04fa0-10c9-4b7a-b022-1e4b716cfc44"
Nov 8 00:07:13.827357 sshd[6211]: Accepted publickey for core from 139.178.89.65 port 48386 ssh2: RSA SHA256:tnEXpnDY8gLTej7GJ+T99WI4otIwvlI9IcMNDF42aqw
Nov 8 00:07:13.830078 sshd[6211]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 8 00:07:13.838332 systemd-logind[2107]: New session 18 of user core.
Nov 8 00:07:13.844285 systemd[1]: Started session-18.scope - Session 18 of User core.
Nov 8 00:07:14.945276 sshd[6211]: pam_unix(sshd:session): session closed for user core
Nov 8 00:07:14.962828 systemd[1]: sshd@17-172.31.28.187:22-139.178.89.65:48386.service: Deactivated successfully.
Nov 8 00:07:14.982610 systemd-logind[2107]: Session 18 logged out. Waiting for processes to exit.
Nov 8 00:07:14.990269 systemd[1]: session-18.scope: Deactivated successfully.
Nov 8 00:07:15.004414 systemd[1]: Started sshd@18-172.31.28.187:22-139.178.89.65:48402.service - OpenSSH per-connection server daemon (139.178.89.65:48402).
Nov 8 00:07:15.007314 systemd-logind[2107]: Removed session 18.
Nov 8 00:07:15.206632 sshd[6232]: Accepted publickey for core from 139.178.89.65 port 48402 ssh2: RSA SHA256:tnEXpnDY8gLTej7GJ+T99WI4otIwvlI9IcMNDF42aqw
Nov 8 00:07:15.209885 sshd[6232]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 8 00:07:15.226359 systemd-logind[2107]: New session 19 of user core.
Nov 8 00:07:15.231321 systemd[1]: Started session-19.scope - Session 19 of User core.
Nov 8 00:07:15.783819 sshd[6232]: pam_unix(sshd:session): session closed for user core
Nov 8 00:07:15.794620 systemd-logind[2107]: Session 19 logged out. Waiting for processes to exit.
Nov 8 00:07:15.796311 systemd[1]: sshd@18-172.31.28.187:22-139.178.89.65:48402.service: Deactivated successfully.
Nov 8 00:07:15.807391 systemd[1]: session-19.scope: Deactivated successfully.
Nov 8 00:07:15.814358 systemd-logind[2107]: Removed session 19.
Nov 8 00:07:15.820072 systemd[1]: Started sshd@19-172.31.28.187:22-139.178.89.65:48418.service - OpenSSH per-connection server daemon (139.178.89.65:48418).
Nov 8 00:07:16.006780 sshd[6244]: Accepted publickey for core from 139.178.89.65 port 48418 ssh2: RSA SHA256:tnEXpnDY8gLTej7GJ+T99WI4otIwvlI9IcMNDF42aqw
Nov 8 00:07:16.010121 sshd[6244]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 8 00:07:16.019378 systemd-logind[2107]: New session 20 of user core.
Nov 8 00:07:16.026975 systemd[1]: Started session-20.scope - Session 20 of User core.
Nov 8 00:07:16.306036 sshd[6244]: pam_unix(sshd:session): session closed for user core
Nov 8 00:07:16.317880 systemd[1]: sshd@19-172.31.28.187:22-139.178.89.65:48418.service: Deactivated successfully.
Nov 8 00:07:16.318801 systemd-logind[2107]: Session 20 logged out. Waiting for processes to exit.
Nov 8 00:07:16.328407 systemd[1]: session-20.scope: Deactivated successfully.
Nov 8 00:07:16.333422 systemd-logind[2107]: Removed session 20.
Nov 8 00:07:17.083687 containerd[2134]: time="2025-11-08T00:07:17.082753925Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Nov 8 00:07:17.380465 containerd[2134]: time="2025-11-08T00:07:17.380083291Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Nov 8 00:07:17.381715 containerd[2134]: time="2025-11-08T00:07:17.381489871Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Nov 8 00:07:17.381921 containerd[2134]: time="2025-11-08T00:07:17.381607351Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77"
Nov 8 00:07:17.382120 kubelet[3406]: E1108 00:07:17.382058 3406 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Nov 8 00:07:17.383621 kubelet[3406]: E1108 00:07:17.382135 3406 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Nov 8 00:07:17.383621 kubelet[3406]: E1108 00:07:17.382313 3406 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5fj5z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-bc8bf555f-2vp5h_calico-apiserver(e2b4786b-bdcd-41e2-8651-d03da4e624c0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Nov 8 00:07:17.384287 kubelet[3406]: E1108 00:07:17.384118 3406 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-bc8bf555f-2vp5h" podUID="e2b4786b-bdcd-41e2-8651-d03da4e624c0"
Nov 8 00:07:20.085461 containerd[2134]: time="2025-11-08T00:07:20.085394624Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\""
Nov 8 00:07:20.378898 containerd[2134]: time="2025-11-08T00:07:20.378813718Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Nov 8 00:07:20.380117 containerd[2134]: time="2025-11-08T00:07:20.380041234Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found"
Nov 8 00:07:20.380309 containerd[2134]: time="2025-11-08T00:07:20.380200582Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77"
Nov 8 00:07:20.380586 kubelet[3406]: E1108 00:07:20.380492 3406 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Nov 8 00:07:20.381535 kubelet[3406]: E1108 00:07:20.380601 3406 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Nov 8 00:07:20.381535 kubelet[3406]: E1108 00:07:20.380804 3406 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gjq8z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-qnsdl_calico-system(19e663c5-ada4-41f4-b329-6d803ea3d32d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError"
Nov 8 00:07:20.382457 kubelet[3406]: E1108 00:07:20.382375 3406 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-qnsdl" podUID="19e663c5-ada4-41f4-b329-6d803ea3d32d"
Nov 8 00:07:21.083136 containerd[2134]: time="2025-11-08T00:07:21.082847961Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Nov 8 00:07:21.341468 systemd[1]: Started sshd@20-172.31.28.187:22-139.178.89.65:40648.service - OpenSSH per-connection server daemon (139.178.89.65:40648).
Nov 8 00:07:21.372228 containerd[2134]: time="2025-11-08T00:07:21.371988574Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Nov 8 00:07:21.373885 containerd[2134]: time="2025-11-08T00:07:21.373528259Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Nov 8 00:07:21.374543 containerd[2134]: time="2025-11-08T00:07:21.374151707Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77"
Nov 8 00:07:21.375455 kubelet[3406]: E1108 00:07:21.374732 3406 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Nov 8 00:07:21.375455 kubelet[3406]: E1108 00:07:21.374810 3406 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Nov 8 00:07:21.375455 kubelet[3406]: E1108 00:07:21.375013 3406 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xrpkm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-bc8bf555f-bhc54_calico-apiserver(5611c66d-4585-41a1-9c50-eb23da03916c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Nov 8 00:07:21.376613 kubelet[3406]: E1108 00:07:21.376307 3406 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-bc8bf555f-bhc54" podUID="5611c66d-4585-41a1-9c50-eb23da03916c"
Nov 8 00:07:21.533442 sshd[6260]: Accepted publickey for core from 139.178.89.65 port 40648 ssh2: RSA SHA256:tnEXpnDY8gLTej7GJ+T99WI4otIwvlI9IcMNDF42aqw
Nov 8 00:07:21.537319 sshd[6260]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 8 00:07:21.546829 systemd-logind[2107]: New session 21 of user core.
Nov 8 00:07:21.558297 systemd[1]: Started session-21.scope - Session 21 of User core.
Nov 8 00:07:21.820235 sshd[6260]: pam_unix(sshd:session): session closed for user core
Nov 8 00:07:21.833287 systemd-logind[2107]: Session 21 logged out. Waiting for processes to exit.
Nov 8 00:07:21.834218 systemd[1]: sshd@20-172.31.28.187:22-139.178.89.65:40648.service: Deactivated successfully.
Nov 8 00:07:21.848138 systemd[1]: session-21.scope: Deactivated successfully.
Nov 8 00:07:21.859665 systemd-logind[2107]: Removed session 21.
Nov 8 00:07:22.086047 containerd[2134]: time="2025-11-08T00:07:22.085739014Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\""
Nov 8 00:07:22.375274 containerd[2134]: time="2025-11-08T00:07:22.375189719Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Nov 8 00:07:22.376483 containerd[2134]: time="2025-11-08T00:07:22.376403507Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found"
Nov 8 00:07:22.376671 containerd[2134]: time="2025-11-08T00:07:22.376550987Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85"
Nov 8 00:07:22.376874 kubelet[3406]: E1108 00:07:22.376800 3406 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Nov 8 00:07:22.377631 kubelet[3406]: E1108 00:07:22.376875 3406 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Nov 8 00:07:22.377631 kubelet[3406]: E1108 00:07:22.377068 3406 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-twthk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-7cd4d69d7c-ptmh4_calico-system(a118c8b1-dc8a-49b1-956e-fabb0c90510f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError"
Nov 8 00:07:22.378898 kubelet[3406]: E1108 00:07:22.378813 3406 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7cd4d69d7c-ptmh4" podUID="a118c8b1-dc8a-49b1-956e-fabb0c90510f"
Nov 8 00:07:24.087037 kubelet[3406]: E1108 00:07:24.086918 3406 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-8465bf669f-f6zwz" podUID="b7a04fa0-10c9-4b7a-b022-1e4b716cfc44"
Nov 8 00:07:25.083483 containerd[2134]: time="2025-11-08T00:07:25.083399005Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\""
Nov 8 00:07:25.405389 containerd[2134]: time="2025-11-08T00:07:25.405292455Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Nov 8 00:07:25.406630 containerd[2134]: time="2025-11-08T00:07:25.406543035Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found"
Nov 8 00:07:25.406630 containerd[2134]: time="2025-11-08T00:07:25.406598223Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69"
Nov 8 00:07:25.406960 kubelet[3406]: E1108 00:07:25.406832 3406 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Nov 8 00:07:25.406960 kubelet[3406]: E1108 00:07:25.406891 3406 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Nov 8 00:07:25.408016 kubelet[3406]: E1108 00:07:25.407052 3406 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6lp72,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-tw22z_calico-system(5962793e-cd47-45ea-84d0-190de5cbdb54): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError"
Nov 8 00:07:25.411469 containerd[2134]:
time="2025-11-08T00:07:25.411083367Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 8 00:07:25.730671 containerd[2134]: time="2025-11-08T00:07:25.730499272Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:07:25.731757 containerd[2134]: time="2025-11-08T00:07:25.731684356Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 8 00:07:25.731882 containerd[2134]: time="2025-11-08T00:07:25.731827744Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 8 00:07:25.732137 kubelet[3406]: E1108 00:07:25.732081 3406 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 8 00:07:25.732224 kubelet[3406]: E1108 00:07:25.732153 3406 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 8 00:07:25.732392 kubelet[3406]: E1108 00:07:25.732317 3406 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6lp72,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-tw22z_calico-system(5962793e-cd47-45ea-84d0-190de5cbdb54): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 8 00:07:25.734138 kubelet[3406]: E1108 00:07:25.734065 3406 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-tw22z" podUID="5962793e-cd47-45ea-84d0-190de5cbdb54" Nov 8 00:07:26.859929 systemd[1]: Started sshd@21-172.31.28.187:22-139.178.89.65:49896.service - OpenSSH per-connection server daemon (139.178.89.65:49896). Nov 8 00:07:27.051969 sshd[6275]: Accepted publickey for core from 139.178.89.65 port 49896 ssh2: RSA SHA256:tnEXpnDY8gLTej7GJ+T99WI4otIwvlI9IcMNDF42aqw Nov 8 00:07:27.054956 sshd[6275]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:07:27.064872 systemd-logind[2107]: New session 22 of user core. 
Nov 8 00:07:27.070241 systemd[1]: Started session-22.scope - Session 22 of User core. Nov 8 00:07:27.341918 sshd[6275]: pam_unix(sshd:session): session closed for user core Nov 8 00:07:27.350074 systemd[1]: sshd@21-172.31.28.187:22-139.178.89.65:49896.service: Deactivated successfully. Nov 8 00:07:27.358657 systemd[1]: session-22.scope: Deactivated successfully. Nov 8 00:07:27.362110 systemd-logind[2107]: Session 22 logged out. Waiting for processes to exit. Nov 8 00:07:27.365311 systemd-logind[2107]: Removed session 22. Nov 8 00:07:31.084017 kubelet[3406]: E1108 00:07:31.083915 3406 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-bc8bf555f-2vp5h" podUID="e2b4786b-bdcd-41e2-8651-d03da4e624c0" Nov 8 00:07:32.088352 kubelet[3406]: E1108 00:07:32.088071 3406 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-qnsdl" podUID="19e663c5-ada4-41f4-b329-6d803ea3d32d" Nov 8 00:07:32.381734 systemd[1]: Started sshd@22-172.31.28.187:22-139.178.89.65:49898.service - OpenSSH per-connection server daemon (139.178.89.65:49898). Nov 8 00:07:32.609756 sshd[6311]: Accepted publickey for core from 139.178.89.65 port 49898 ssh2: RSA SHA256:tnEXpnDY8gLTej7GJ+T99WI4otIwvlI9IcMNDF42aqw Nov 8 00:07:32.614862 sshd[6311]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:07:32.637048 systemd-logind[2107]: New session 23 of user core. Nov 8 00:07:32.645170 systemd[1]: Started session-23.scope - Session 23 of User core. Nov 8 00:07:32.949189 sshd[6311]: pam_unix(sshd:session): session closed for user core Nov 8 00:07:32.963803 systemd[1]: sshd@22-172.31.28.187:22-139.178.89.65:49898.service: Deactivated successfully. Nov 8 00:07:32.971037 systemd[1]: session-23.scope: Deactivated successfully. Nov 8 00:07:32.973547 systemd-logind[2107]: Session 23 logged out. Waiting for processes to exit. Nov 8 00:07:32.978410 systemd-logind[2107]: Removed session 23. 
Nov 8 00:07:33.083833 kubelet[3406]: E1108 00:07:33.083762 3406 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-bc8bf555f-bhc54" podUID="5611c66d-4585-41a1-9c50-eb23da03916c" Nov 8 00:07:35.087335 kubelet[3406]: E1108 00:07:35.086811 3406 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7cd4d69d7c-ptmh4" podUID="a118c8b1-dc8a-49b1-956e-fabb0c90510f" Nov 8 00:07:36.089676 kubelet[3406]: E1108 00:07:36.089588 3406 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-8465bf669f-f6zwz" podUID="b7a04fa0-10c9-4b7a-b022-1e4b716cfc44" Nov 8 00:07:37.992001 systemd[1]: Started sshd@23-172.31.28.187:22-139.178.89.65:42758.service - OpenSSH per-connection server daemon (139.178.89.65:42758). 
Nov 8 00:07:38.120551 kubelet[3406]: E1108 00:07:38.119053 3406 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-tw22z" podUID="5962793e-cd47-45ea-84d0-190de5cbdb54" Nov 8 00:07:38.209621 sshd[6325]: Accepted publickey for core from 139.178.89.65 port 42758 ssh2: RSA SHA256:tnEXpnDY8gLTej7GJ+T99WI4otIwvlI9IcMNDF42aqw Nov 8 00:07:38.218031 sshd[6325]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:07:38.240904 systemd-logind[2107]: New session 24 of user core. Nov 8 00:07:38.249601 systemd[1]: Started session-24.scope - Session 24 of User core. Nov 8 00:07:38.578620 sshd[6325]: pam_unix(sshd:session): session closed for user core Nov 8 00:07:38.594962 systemd[1]: sshd@23-172.31.28.187:22-139.178.89.65:42758.service: Deactivated successfully. Nov 8 00:07:38.603366 systemd[1]: session-24.scope: Deactivated successfully. Nov 8 00:07:38.606145 systemd-logind[2107]: Session 24 logged out. Waiting for processes to exit. Nov 8 00:07:38.612496 systemd-logind[2107]: Removed session 24. Nov 8 00:07:41.673488 containerd[2134]: time="2025-11-08T00:07:41.673407895Z" level=info msg="StopPodSandbox for \"2c3bb45ee9f12b87e3a609ea8b6541c8cd9a72a4111e59abc3de86b887d9062d\"" Nov 8 00:07:41.856285 containerd[2134]: 2025-11-08 00:07:41.766 [WARNING][6352] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2c3bb45ee9f12b87e3a609ea8b6541c8cd9a72a4111e59abc3de86b887d9062d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--187-k8s-calico--apiserver--bc8bf555f--bhc54-eth0", GenerateName:"calico-apiserver-bc8bf555f-", Namespace:"calico-apiserver", SelfLink:"", UID:"5611c66d-4585-41a1-9c50-eb23da03916c", ResourceVersion:"1482", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 5, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"bc8bf555f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-187", ContainerID:"95bb6d57cce4ac2bed04f730f38b1e0fce1a9cff429238147c31620e7677f8ed", Pod:"calico-apiserver-bc8bf555f-bhc54", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.96.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4f79dc05451", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:07:41.856285 containerd[2134]: 2025-11-08 00:07:41.769 [INFO][6352] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2c3bb45ee9f12b87e3a609ea8b6541c8cd9a72a4111e59abc3de86b887d9062d" Nov 8 00:07:41.856285 containerd[2134]: 2025-11-08 00:07:41.770 [INFO][6352] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2c3bb45ee9f12b87e3a609ea8b6541c8cd9a72a4111e59abc3de86b887d9062d" iface="eth0" netns="" Nov 8 00:07:41.856285 containerd[2134]: 2025-11-08 00:07:41.771 [INFO][6352] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2c3bb45ee9f12b87e3a609ea8b6541c8cd9a72a4111e59abc3de86b887d9062d" Nov 8 00:07:41.856285 containerd[2134]: 2025-11-08 00:07:41.771 [INFO][6352] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2c3bb45ee9f12b87e3a609ea8b6541c8cd9a72a4111e59abc3de86b887d9062d" Nov 8 00:07:41.856285 containerd[2134]: 2025-11-08 00:07:41.822 [INFO][6359] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="2c3bb45ee9f12b87e3a609ea8b6541c8cd9a72a4111e59abc3de86b887d9062d" HandleID="k8s-pod-network.2c3bb45ee9f12b87e3a609ea8b6541c8cd9a72a4111e59abc3de86b887d9062d" Workload="ip--172--31--28--187-k8s-calico--apiserver--bc8bf555f--bhc54-eth0" Nov 8 00:07:41.856285 containerd[2134]: 2025-11-08 00:07:41.822 [INFO][6359] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:07:41.856285 containerd[2134]: 2025-11-08 00:07:41.822 [INFO][6359] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:07:41.856285 containerd[2134]: 2025-11-08 00:07:41.838 [WARNING][6359] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2c3bb45ee9f12b87e3a609ea8b6541c8cd9a72a4111e59abc3de86b887d9062d" HandleID="k8s-pod-network.2c3bb45ee9f12b87e3a609ea8b6541c8cd9a72a4111e59abc3de86b887d9062d" Workload="ip--172--31--28--187-k8s-calico--apiserver--bc8bf555f--bhc54-eth0" Nov 8 00:07:41.856285 containerd[2134]: 2025-11-08 00:07:41.838 [INFO][6359] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="2c3bb45ee9f12b87e3a609ea8b6541c8cd9a72a4111e59abc3de86b887d9062d" HandleID="k8s-pod-network.2c3bb45ee9f12b87e3a609ea8b6541c8cd9a72a4111e59abc3de86b887d9062d" Workload="ip--172--31--28--187-k8s-calico--apiserver--bc8bf555f--bhc54-eth0" Nov 8 00:07:41.856285 containerd[2134]: 2025-11-08 00:07:41.842 [INFO][6359] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:07:41.856285 containerd[2134]: 2025-11-08 00:07:41.851 [INFO][6352] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2c3bb45ee9f12b87e3a609ea8b6541c8cd9a72a4111e59abc3de86b887d9062d" Nov 8 00:07:41.857753 containerd[2134]: time="2025-11-08T00:07:41.856317632Z" level=info msg="TearDown network for sandbox \"2c3bb45ee9f12b87e3a609ea8b6541c8cd9a72a4111e59abc3de86b887d9062d\" successfully" Nov 8 00:07:41.857753 containerd[2134]: time="2025-11-08T00:07:41.856359284Z" level=info msg="StopPodSandbox for \"2c3bb45ee9f12b87e3a609ea8b6541c8cd9a72a4111e59abc3de86b887d9062d\" returns successfully" Nov 8 00:07:41.860059 containerd[2134]: time="2025-11-08T00:07:41.858966872Z" level=info msg="RemovePodSandbox for \"2c3bb45ee9f12b87e3a609ea8b6541c8cd9a72a4111e59abc3de86b887d9062d\"" Nov 8 00:07:41.860059 containerd[2134]: time="2025-11-08T00:07:41.859039892Z" level=info msg="Forcibly stopping sandbox \"2c3bb45ee9f12b87e3a609ea8b6541c8cd9a72a4111e59abc3de86b887d9062d\"" Nov 8 00:07:42.089702 containerd[2134]: 2025-11-08 00:07:41.989 [WARNING][6373] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2c3bb45ee9f12b87e3a609ea8b6541c8cd9a72a4111e59abc3de86b887d9062d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--187-k8s-calico--apiserver--bc8bf555f--bhc54-eth0", GenerateName:"calico-apiserver-bc8bf555f-", Namespace:"calico-apiserver", SelfLink:"", UID:"5611c66d-4585-41a1-9c50-eb23da03916c", ResourceVersion:"1482", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 5, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"bc8bf555f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-187", ContainerID:"95bb6d57cce4ac2bed04f730f38b1e0fce1a9cff429238147c31620e7677f8ed", Pod:"calico-apiserver-bc8bf555f-bhc54", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.96.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4f79dc05451", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:07:42.089702 containerd[2134]: 2025-11-08 00:07:41.992 [INFO][6373] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2c3bb45ee9f12b87e3a609ea8b6541c8cd9a72a4111e59abc3de86b887d9062d" Nov 8 00:07:42.089702 containerd[2134]: 2025-11-08 00:07:41.992 [INFO][6373] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2c3bb45ee9f12b87e3a609ea8b6541c8cd9a72a4111e59abc3de86b887d9062d" iface="eth0" netns="" Nov 8 00:07:42.089702 containerd[2134]: 2025-11-08 00:07:41.992 [INFO][6373] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2c3bb45ee9f12b87e3a609ea8b6541c8cd9a72a4111e59abc3de86b887d9062d" Nov 8 00:07:42.089702 containerd[2134]: 2025-11-08 00:07:41.992 [INFO][6373] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2c3bb45ee9f12b87e3a609ea8b6541c8cd9a72a4111e59abc3de86b887d9062d" Nov 8 00:07:42.089702 containerd[2134]: 2025-11-08 00:07:42.046 [INFO][6380] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="2c3bb45ee9f12b87e3a609ea8b6541c8cd9a72a4111e59abc3de86b887d9062d" HandleID="k8s-pod-network.2c3bb45ee9f12b87e3a609ea8b6541c8cd9a72a4111e59abc3de86b887d9062d" Workload="ip--172--31--28--187-k8s-calico--apiserver--bc8bf555f--bhc54-eth0" Nov 8 00:07:42.089702 containerd[2134]: 2025-11-08 00:07:42.047 [INFO][6380] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:07:42.089702 containerd[2134]: 2025-11-08 00:07:42.048 [INFO][6380] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:07:42.089702 containerd[2134]: 2025-11-08 00:07:42.064 [WARNING][6380] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2c3bb45ee9f12b87e3a609ea8b6541c8cd9a72a4111e59abc3de86b887d9062d" HandleID="k8s-pod-network.2c3bb45ee9f12b87e3a609ea8b6541c8cd9a72a4111e59abc3de86b887d9062d" Workload="ip--172--31--28--187-k8s-calico--apiserver--bc8bf555f--bhc54-eth0" Nov 8 00:07:42.089702 containerd[2134]: 2025-11-08 00:07:42.064 [INFO][6380] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="2c3bb45ee9f12b87e3a609ea8b6541c8cd9a72a4111e59abc3de86b887d9062d" HandleID="k8s-pod-network.2c3bb45ee9f12b87e3a609ea8b6541c8cd9a72a4111e59abc3de86b887d9062d" Workload="ip--172--31--28--187-k8s-calico--apiserver--bc8bf555f--bhc54-eth0" Nov 8 00:07:42.089702 containerd[2134]: 2025-11-08 00:07:42.069 [INFO][6380] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:07:42.089702 containerd[2134]: 2025-11-08 00:07:42.076 [INFO][6373] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2c3bb45ee9f12b87e3a609ea8b6541c8cd9a72a4111e59abc3de86b887d9062d" Nov 8 00:07:42.089702 containerd[2134]: time="2025-11-08T00:07:42.086125145Z" level=info msg="TearDown network for sandbox \"2c3bb45ee9f12b87e3a609ea8b6541c8cd9a72a4111e59abc3de86b887d9062d\" successfully" Nov 8 00:07:42.103621 containerd[2134]: time="2025-11-08T00:07:42.099644741Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2c3bb45ee9f12b87e3a609ea8b6541c8cd9a72a4111e59abc3de86b887d9062d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 8 00:07:42.103621 containerd[2134]: time="2025-11-08T00:07:42.099751433Z" level=info msg="RemovePodSandbox \"2c3bb45ee9f12b87e3a609ea8b6541c8cd9a72a4111e59abc3de86b887d9062d\" returns successfully" Nov 8 00:07:42.103621 containerd[2134]: time="2025-11-08T00:07:42.101105321Z" level=info msg="StopPodSandbox for \"05b8c3f4fa6288d8a44812e6cc300644f76c40c449c49ab4197cdf6f8cfd1298\"" Nov 8 00:07:42.413437 containerd[2134]: 2025-11-08 00:07:42.303 [WARNING][6394] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="05b8c3f4fa6288d8a44812e6cc300644f76c40c449c49ab4197cdf6f8cfd1298" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--187-k8s-csi--node--driver--tw22z-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"5962793e-cd47-45ea-84d0-190de5cbdb54", ResourceVersion:"1512", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 6, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-187", ContainerID:"177180b2d4b5b3b113291f500408b3cc4fcd06dce88068c807e7a4897033efd8", Pod:"csi-node-driver-tw22z", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.96.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali246fe540a89", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:07:42.413437 containerd[2134]: 2025-11-08 00:07:42.304 [INFO][6394] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="05b8c3f4fa6288d8a44812e6cc300644f76c40c449c49ab4197cdf6f8cfd1298" Nov 8 00:07:42.413437 containerd[2134]: 2025-11-08 00:07:42.305 [INFO][6394] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="05b8c3f4fa6288d8a44812e6cc300644f76c40c449c49ab4197cdf6f8cfd1298" iface="eth0" netns="" Nov 8 00:07:42.413437 containerd[2134]: 2025-11-08 00:07:42.305 [INFO][6394] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="05b8c3f4fa6288d8a44812e6cc300644f76c40c449c49ab4197cdf6f8cfd1298" Nov 8 00:07:42.413437 containerd[2134]: 2025-11-08 00:07:42.305 [INFO][6394] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="05b8c3f4fa6288d8a44812e6cc300644f76c40c449c49ab4197cdf6f8cfd1298" Nov 8 00:07:42.413437 containerd[2134]: 2025-11-08 00:07:42.388 [INFO][6402] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="05b8c3f4fa6288d8a44812e6cc300644f76c40c449c49ab4197cdf6f8cfd1298" HandleID="k8s-pod-network.05b8c3f4fa6288d8a44812e6cc300644f76c40c449c49ab4197cdf6f8cfd1298" Workload="ip--172--31--28--187-k8s-csi--node--driver--tw22z-eth0" Nov 8 00:07:42.413437 containerd[2134]: 2025-11-08 00:07:42.388 [INFO][6402] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:07:42.413437 containerd[2134]: 2025-11-08 00:07:42.389 [INFO][6402] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:07:42.413437 containerd[2134]: 2025-11-08 00:07:42.403 [WARNING][6402] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="05b8c3f4fa6288d8a44812e6cc300644f76c40c449c49ab4197cdf6f8cfd1298" HandleID="k8s-pod-network.05b8c3f4fa6288d8a44812e6cc300644f76c40c449c49ab4197cdf6f8cfd1298" Workload="ip--172--31--28--187-k8s-csi--node--driver--tw22z-eth0" Nov 8 00:07:42.413437 containerd[2134]: 2025-11-08 00:07:42.403 [INFO][6402] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="05b8c3f4fa6288d8a44812e6cc300644f76c40c449c49ab4197cdf6f8cfd1298" HandleID="k8s-pod-network.05b8c3f4fa6288d8a44812e6cc300644f76c40c449c49ab4197cdf6f8cfd1298" Workload="ip--172--31--28--187-k8s-csi--node--driver--tw22z-eth0" Nov 8 00:07:42.413437 containerd[2134]: 2025-11-08 00:07:42.405 [INFO][6402] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:07:42.413437 containerd[2134]: 2025-11-08 00:07:42.410 [INFO][6394] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="05b8c3f4fa6288d8a44812e6cc300644f76c40c449c49ab4197cdf6f8cfd1298" Nov 8 00:07:42.416126 containerd[2134]: time="2025-11-08T00:07:42.414550315Z" level=info msg="TearDown network for sandbox \"05b8c3f4fa6288d8a44812e6cc300644f76c40c449c49ab4197cdf6f8cfd1298\" successfully" Nov 8 00:07:42.416126 containerd[2134]: time="2025-11-08T00:07:42.414704995Z" level=info msg="StopPodSandbox for \"05b8c3f4fa6288d8a44812e6cc300644f76c40c449c49ab4197cdf6f8cfd1298\" returns successfully" Nov 8 00:07:42.416126 containerd[2134]: time="2025-11-08T00:07:42.415640131Z" level=info msg="RemovePodSandbox for \"05b8c3f4fa6288d8a44812e6cc300644f76c40c449c49ab4197cdf6f8cfd1298\"" Nov 8 00:07:42.416126 containerd[2134]: time="2025-11-08T00:07:42.415687147Z" level=info msg="Forcibly stopping sandbox \"05b8c3f4fa6288d8a44812e6cc300644f76c40c449c49ab4197cdf6f8cfd1298\"" Nov 8 00:07:42.574613 containerd[2134]: 2025-11-08 00:07:42.505 [WARNING][6417] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="05b8c3f4fa6288d8a44812e6cc300644f76c40c449c49ab4197cdf6f8cfd1298" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--187-k8s-csi--node--driver--tw22z-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"5962793e-cd47-45ea-84d0-190de5cbdb54", ResourceVersion:"1512", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 6, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-187", ContainerID:"177180b2d4b5b3b113291f500408b3cc4fcd06dce88068c807e7a4897033efd8", Pod:"csi-node-driver-tw22z", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.96.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali246fe540a89", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:07:42.574613 containerd[2134]: 2025-11-08 00:07:42.506 [INFO][6417] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="05b8c3f4fa6288d8a44812e6cc300644f76c40c449c49ab4197cdf6f8cfd1298" Nov 8 00:07:42.574613 containerd[2134]: 2025-11-08 00:07:42.506 [INFO][6417] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="05b8c3f4fa6288d8a44812e6cc300644f76c40c449c49ab4197cdf6f8cfd1298" iface="eth0" netns="" Nov 8 00:07:42.574613 containerd[2134]: 2025-11-08 00:07:42.506 [INFO][6417] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="05b8c3f4fa6288d8a44812e6cc300644f76c40c449c49ab4197cdf6f8cfd1298" Nov 8 00:07:42.574613 containerd[2134]: 2025-11-08 00:07:42.506 [INFO][6417] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="05b8c3f4fa6288d8a44812e6cc300644f76c40c449c49ab4197cdf6f8cfd1298" Nov 8 00:07:42.574613 containerd[2134]: 2025-11-08 00:07:42.551 [INFO][6425] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="05b8c3f4fa6288d8a44812e6cc300644f76c40c449c49ab4197cdf6f8cfd1298" HandleID="k8s-pod-network.05b8c3f4fa6288d8a44812e6cc300644f76c40c449c49ab4197cdf6f8cfd1298" Workload="ip--172--31--28--187-k8s-csi--node--driver--tw22z-eth0" Nov 8 00:07:42.574613 containerd[2134]: 2025-11-08 00:07:42.551 [INFO][6425] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:07:42.574613 containerd[2134]: 2025-11-08 00:07:42.551 [INFO][6425] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:07:42.574613 containerd[2134]: 2025-11-08 00:07:42.564 [WARNING][6425] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="05b8c3f4fa6288d8a44812e6cc300644f76c40c449c49ab4197cdf6f8cfd1298" HandleID="k8s-pod-network.05b8c3f4fa6288d8a44812e6cc300644f76c40c449c49ab4197cdf6f8cfd1298" Workload="ip--172--31--28--187-k8s-csi--node--driver--tw22z-eth0" Nov 8 00:07:42.574613 containerd[2134]: 2025-11-08 00:07:42.564 [INFO][6425] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="05b8c3f4fa6288d8a44812e6cc300644f76c40c449c49ab4197cdf6f8cfd1298" HandleID="k8s-pod-network.05b8c3f4fa6288d8a44812e6cc300644f76c40c449c49ab4197cdf6f8cfd1298" Workload="ip--172--31--28--187-k8s-csi--node--driver--tw22z-eth0" Nov 8 00:07:42.574613 containerd[2134]: 2025-11-08 00:07:42.567 [INFO][6425] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:07:42.574613 containerd[2134]: 2025-11-08 00:07:42.570 [INFO][6417] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="05b8c3f4fa6288d8a44812e6cc300644f76c40c449c49ab4197cdf6f8cfd1298" Nov 8 00:07:42.574613 containerd[2134]: time="2025-11-08T00:07:42.573909308Z" level=info msg="TearDown network for sandbox \"05b8c3f4fa6288d8a44812e6cc300644f76c40c449c49ab4197cdf6f8cfd1298\" successfully" Nov 8 00:07:42.580797 containerd[2134]: time="2025-11-08T00:07:42.580546988Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"05b8c3f4fa6288d8a44812e6cc300644f76c40c449c49ab4197cdf6f8cfd1298\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 8 00:07:42.580797 containerd[2134]: time="2025-11-08T00:07:42.580733816Z" level=info msg="RemovePodSandbox \"05b8c3f4fa6288d8a44812e6cc300644f76c40c449c49ab4197cdf6f8cfd1298\" returns successfully" Nov 8 00:07:42.582177 containerd[2134]: time="2025-11-08T00:07:42.581928056Z" level=info msg="StopPodSandbox for \"4b1d3f1eb4c314c313a97353128b7cd1a9a7cb700dc5ae795475b9e1570e82ea\"" Nov 8 00:07:42.749839 containerd[2134]: 2025-11-08 00:07:42.658 [WARNING][6441] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4b1d3f1eb4c314c313a97353128b7cd1a9a7cb700dc5ae795475b9e1570e82ea" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--187-k8s-calico--kube--controllers--7cd4d69d7c--ptmh4-eth0", GenerateName:"calico-kube-controllers-7cd4d69d7c-", Namespace:"calico-system", SelfLink:"", UID:"a118c8b1-dc8a-49b1-956e-fabb0c90510f", ResourceVersion:"1494", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 6, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7cd4d69d7c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-187", ContainerID:"91923fc64ec71725d40e47042e59a7a0503fc5993259bb5c12c37cc4261c84f6", Pod:"calico-kube-controllers-7cd4d69d7c-ptmh4", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.96.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calicc988add759", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:07:42.749839 containerd[2134]: 2025-11-08 00:07:42.658 [INFO][6441] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4b1d3f1eb4c314c313a97353128b7cd1a9a7cb700dc5ae795475b9e1570e82ea" Nov 8 00:07:42.749839 containerd[2134]: 2025-11-08 00:07:42.658 [INFO][6441] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4b1d3f1eb4c314c313a97353128b7cd1a9a7cb700dc5ae795475b9e1570e82ea" iface="eth0" netns="" Nov 8 00:07:42.749839 containerd[2134]: 2025-11-08 00:07:42.659 [INFO][6441] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4b1d3f1eb4c314c313a97353128b7cd1a9a7cb700dc5ae795475b9e1570e82ea" Nov 8 00:07:42.749839 containerd[2134]: 2025-11-08 00:07:42.659 [INFO][6441] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4b1d3f1eb4c314c313a97353128b7cd1a9a7cb700dc5ae795475b9e1570e82ea" Nov 8 00:07:42.749839 containerd[2134]: 2025-11-08 00:07:42.721 [INFO][6448] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="4b1d3f1eb4c314c313a97353128b7cd1a9a7cb700dc5ae795475b9e1570e82ea" HandleID="k8s-pod-network.4b1d3f1eb4c314c313a97353128b7cd1a9a7cb700dc5ae795475b9e1570e82ea" Workload="ip--172--31--28--187-k8s-calico--kube--controllers--7cd4d69d7c--ptmh4-eth0" Nov 8 00:07:42.749839 containerd[2134]: 2025-11-08 00:07:42.721 [INFO][6448] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:07:42.749839 containerd[2134]: 2025-11-08 00:07:42.722 [INFO][6448] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:07:42.749839 containerd[2134]: 2025-11-08 00:07:42.735 [WARNING][6448] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4b1d3f1eb4c314c313a97353128b7cd1a9a7cb700dc5ae795475b9e1570e82ea" HandleID="k8s-pod-network.4b1d3f1eb4c314c313a97353128b7cd1a9a7cb700dc5ae795475b9e1570e82ea" Workload="ip--172--31--28--187-k8s-calico--kube--controllers--7cd4d69d7c--ptmh4-eth0" Nov 8 00:07:42.749839 containerd[2134]: 2025-11-08 00:07:42.736 [INFO][6448] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="4b1d3f1eb4c314c313a97353128b7cd1a9a7cb700dc5ae795475b9e1570e82ea" HandleID="k8s-pod-network.4b1d3f1eb4c314c313a97353128b7cd1a9a7cb700dc5ae795475b9e1570e82ea" Workload="ip--172--31--28--187-k8s-calico--kube--controllers--7cd4d69d7c--ptmh4-eth0" Nov 8 00:07:42.749839 containerd[2134]: 2025-11-08 00:07:42.739 [INFO][6448] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:07:42.749839 containerd[2134]: 2025-11-08 00:07:42.743 [INFO][6441] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4b1d3f1eb4c314c313a97353128b7cd1a9a7cb700dc5ae795475b9e1570e82ea" Nov 8 00:07:42.749839 containerd[2134]: time="2025-11-08T00:07:42.747914925Z" level=info msg="TearDown network for sandbox \"4b1d3f1eb4c314c313a97353128b7cd1a9a7cb700dc5ae795475b9e1570e82ea\" successfully" Nov 8 00:07:42.749839 containerd[2134]: time="2025-11-08T00:07:42.747953553Z" level=info msg="StopPodSandbox for \"4b1d3f1eb4c314c313a97353128b7cd1a9a7cb700dc5ae795475b9e1570e82ea\" returns successfully" Nov 8 00:07:42.754731 containerd[2134]: time="2025-11-08T00:07:42.753191145Z" level=info msg="RemovePodSandbox for \"4b1d3f1eb4c314c313a97353128b7cd1a9a7cb700dc5ae795475b9e1570e82ea\"" Nov 8 00:07:42.754731 containerd[2134]: time="2025-11-08T00:07:42.753272841Z" level=info msg="Forcibly stopping sandbox \"4b1d3f1eb4c314c313a97353128b7cd1a9a7cb700dc5ae795475b9e1570e82ea\"" Nov 8 00:07:42.945701 containerd[2134]: 2025-11-08 00:07:42.872 [WARNING][6462] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4b1d3f1eb4c314c313a97353128b7cd1a9a7cb700dc5ae795475b9e1570e82ea" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--187-k8s-calico--kube--controllers--7cd4d69d7c--ptmh4-eth0", GenerateName:"calico-kube-controllers-7cd4d69d7c-", Namespace:"calico-system", SelfLink:"", UID:"a118c8b1-dc8a-49b1-956e-fabb0c90510f", ResourceVersion:"1494", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 6, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7cd4d69d7c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-187", ContainerID:"91923fc64ec71725d40e47042e59a7a0503fc5993259bb5c12c37cc4261c84f6", Pod:"calico-kube-controllers-7cd4d69d7c-ptmh4", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.96.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calicc988add759", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:07:42.945701 containerd[2134]: 2025-11-08 00:07:42.873 [INFO][6462] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4b1d3f1eb4c314c313a97353128b7cd1a9a7cb700dc5ae795475b9e1570e82ea" Nov 8 00:07:42.945701 containerd[2134]: 2025-11-08 00:07:42.873 [INFO][6462] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4b1d3f1eb4c314c313a97353128b7cd1a9a7cb700dc5ae795475b9e1570e82ea" iface="eth0" netns="" Nov 8 00:07:42.945701 containerd[2134]: 2025-11-08 00:07:42.873 [INFO][6462] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4b1d3f1eb4c314c313a97353128b7cd1a9a7cb700dc5ae795475b9e1570e82ea" Nov 8 00:07:42.945701 containerd[2134]: 2025-11-08 00:07:42.873 [INFO][6462] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4b1d3f1eb4c314c313a97353128b7cd1a9a7cb700dc5ae795475b9e1570e82ea" Nov 8 00:07:42.945701 containerd[2134]: 2025-11-08 00:07:42.919 [INFO][6470] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="4b1d3f1eb4c314c313a97353128b7cd1a9a7cb700dc5ae795475b9e1570e82ea" HandleID="k8s-pod-network.4b1d3f1eb4c314c313a97353128b7cd1a9a7cb700dc5ae795475b9e1570e82ea" Workload="ip--172--31--28--187-k8s-calico--kube--controllers--7cd4d69d7c--ptmh4-eth0" Nov 8 00:07:42.945701 containerd[2134]: 2025-11-08 00:07:42.920 [INFO][6470] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:07:42.945701 containerd[2134]: 2025-11-08 00:07:42.920 [INFO][6470] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:07:42.945701 containerd[2134]: 2025-11-08 00:07:42.933 [WARNING][6470] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4b1d3f1eb4c314c313a97353128b7cd1a9a7cb700dc5ae795475b9e1570e82ea" HandleID="k8s-pod-network.4b1d3f1eb4c314c313a97353128b7cd1a9a7cb700dc5ae795475b9e1570e82ea" Workload="ip--172--31--28--187-k8s-calico--kube--controllers--7cd4d69d7c--ptmh4-eth0" Nov 8 00:07:42.945701 containerd[2134]: 2025-11-08 00:07:42.933 [INFO][6470] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="4b1d3f1eb4c314c313a97353128b7cd1a9a7cb700dc5ae795475b9e1570e82ea" HandleID="k8s-pod-network.4b1d3f1eb4c314c313a97353128b7cd1a9a7cb700dc5ae795475b9e1570e82ea" Workload="ip--172--31--28--187-k8s-calico--kube--controllers--7cd4d69d7c--ptmh4-eth0" Nov 8 00:07:42.945701 containerd[2134]: 2025-11-08 00:07:42.937 [INFO][6470] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:07:42.945701 containerd[2134]: 2025-11-08 00:07:42.941 [INFO][6462] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4b1d3f1eb4c314c313a97353128b7cd1a9a7cb700dc5ae795475b9e1570e82ea" Nov 8 00:07:42.945701 containerd[2134]: time="2025-11-08T00:07:42.945368890Z" level=info msg="TearDown network for sandbox \"4b1d3f1eb4c314c313a97353128b7cd1a9a7cb700dc5ae795475b9e1570e82ea\" successfully" Nov 8 00:07:42.954322 containerd[2134]: time="2025-11-08T00:07:42.953498770Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4b1d3f1eb4c314c313a97353128b7cd1a9a7cb700dc5ae795475b9e1570e82ea\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 8 00:07:42.954322 containerd[2134]: time="2025-11-08T00:07:42.953630266Z" level=info msg="RemovePodSandbox \"4b1d3f1eb4c314c313a97353128b7cd1a9a7cb700dc5ae795475b9e1570e82ea\" returns successfully" Nov 8 00:07:42.954548 containerd[2134]: time="2025-11-08T00:07:42.954382162Z" level=info msg="StopPodSandbox for \"14a1598011a94b1ca81f2a89c3a015fe1c892256ce8b0e3bd0ecd88942207366\"" Nov 8 00:07:43.143141 containerd[2134]: 2025-11-08 00:07:43.062 [WARNING][6484] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="14a1598011a94b1ca81f2a89c3a015fe1c892256ce8b0e3bd0ecd88942207366" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--187-k8s-goldmane--666569f655--qnsdl-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"19e663c5-ada4-41f4-b329-6d803ea3d32d", ResourceVersion:"1477", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 6, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-187", ContainerID:"6dc996edf7fcecd63b912f8d4081452a2f029492ee964a0d4c29bdffe79b9573", Pod:"goldmane-666569f655-qnsdl", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.96.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"caliadd8ceeaf7f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:07:43.143141 containerd[2134]: 2025-11-08 00:07:43.062 [INFO][6484] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="14a1598011a94b1ca81f2a89c3a015fe1c892256ce8b0e3bd0ecd88942207366" Nov 8 00:07:43.143141 containerd[2134]: 2025-11-08 00:07:43.062 [INFO][6484] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="14a1598011a94b1ca81f2a89c3a015fe1c892256ce8b0e3bd0ecd88942207366" iface="eth0" netns="" Nov 8 00:07:43.143141 containerd[2134]: 2025-11-08 00:07:43.062 [INFO][6484] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="14a1598011a94b1ca81f2a89c3a015fe1c892256ce8b0e3bd0ecd88942207366" Nov 8 00:07:43.143141 containerd[2134]: 2025-11-08 00:07:43.063 [INFO][6484] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="14a1598011a94b1ca81f2a89c3a015fe1c892256ce8b0e3bd0ecd88942207366" Nov 8 00:07:43.143141 containerd[2134]: 2025-11-08 00:07:43.115 [INFO][6492] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="14a1598011a94b1ca81f2a89c3a015fe1c892256ce8b0e3bd0ecd88942207366" HandleID="k8s-pod-network.14a1598011a94b1ca81f2a89c3a015fe1c892256ce8b0e3bd0ecd88942207366" Workload="ip--172--31--28--187-k8s-goldmane--666569f655--qnsdl-eth0" Nov 8 00:07:43.143141 containerd[2134]: 2025-11-08 00:07:43.116 [INFO][6492] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:07:43.143141 containerd[2134]: 2025-11-08 00:07:43.116 [INFO][6492] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:07:43.143141 containerd[2134]: 2025-11-08 00:07:43.132 [WARNING][6492] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="14a1598011a94b1ca81f2a89c3a015fe1c892256ce8b0e3bd0ecd88942207366" HandleID="k8s-pod-network.14a1598011a94b1ca81f2a89c3a015fe1c892256ce8b0e3bd0ecd88942207366" Workload="ip--172--31--28--187-k8s-goldmane--666569f655--qnsdl-eth0" Nov 8 00:07:43.143141 containerd[2134]: 2025-11-08 00:07:43.132 [INFO][6492] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="14a1598011a94b1ca81f2a89c3a015fe1c892256ce8b0e3bd0ecd88942207366" HandleID="k8s-pod-network.14a1598011a94b1ca81f2a89c3a015fe1c892256ce8b0e3bd0ecd88942207366" Workload="ip--172--31--28--187-k8s-goldmane--666569f655--qnsdl-eth0" Nov 8 00:07:43.143141 containerd[2134]: 2025-11-08 00:07:43.134 [INFO][6492] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:07:43.143141 containerd[2134]: 2025-11-08 00:07:43.137 [INFO][6484] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="14a1598011a94b1ca81f2a89c3a015fe1c892256ce8b0e3bd0ecd88942207366" Nov 8 00:07:43.144460 containerd[2134]: time="2025-11-08T00:07:43.144186847Z" level=info msg="TearDown network for sandbox \"14a1598011a94b1ca81f2a89c3a015fe1c892256ce8b0e3bd0ecd88942207366\" successfully" Nov 8 00:07:43.144460 containerd[2134]: time="2025-11-08T00:07:43.144238783Z" level=info msg="StopPodSandbox for \"14a1598011a94b1ca81f2a89c3a015fe1c892256ce8b0e3bd0ecd88942207366\" returns successfully" Nov 8 00:07:43.146373 containerd[2134]: time="2025-11-08T00:07:43.146233591Z" level=info msg="RemovePodSandbox for \"14a1598011a94b1ca81f2a89c3a015fe1c892256ce8b0e3bd0ecd88942207366\"" Nov 8 00:07:43.146820 containerd[2134]: time="2025-11-08T00:07:43.146338339Z" level=info msg="Forcibly stopping sandbox \"14a1598011a94b1ca81f2a89c3a015fe1c892256ce8b0e3bd0ecd88942207366\"" Nov 8 00:07:43.374346 containerd[2134]: 2025-11-08 00:07:43.254 [WARNING][6506] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="14a1598011a94b1ca81f2a89c3a015fe1c892256ce8b0e3bd0ecd88942207366" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--187-k8s-goldmane--666569f655--qnsdl-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"19e663c5-ada4-41f4-b329-6d803ea3d32d", ResourceVersion:"1477", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 6, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-187", ContainerID:"6dc996edf7fcecd63b912f8d4081452a2f029492ee964a0d4c29bdffe79b9573", Pod:"goldmane-666569f655-qnsdl", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.96.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"caliadd8ceeaf7f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:07:43.374346 containerd[2134]: 2025-11-08 00:07:43.255 [INFO][6506] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="14a1598011a94b1ca81f2a89c3a015fe1c892256ce8b0e3bd0ecd88942207366" Nov 8 00:07:43.374346 containerd[2134]: 2025-11-08 00:07:43.255 [INFO][6506] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="14a1598011a94b1ca81f2a89c3a015fe1c892256ce8b0e3bd0ecd88942207366" iface="eth0" netns="" Nov 8 00:07:43.374346 containerd[2134]: 2025-11-08 00:07:43.255 [INFO][6506] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="14a1598011a94b1ca81f2a89c3a015fe1c892256ce8b0e3bd0ecd88942207366" Nov 8 00:07:43.374346 containerd[2134]: 2025-11-08 00:07:43.256 [INFO][6506] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="14a1598011a94b1ca81f2a89c3a015fe1c892256ce8b0e3bd0ecd88942207366" Nov 8 00:07:43.374346 containerd[2134]: 2025-11-08 00:07:43.333 [INFO][6514] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="14a1598011a94b1ca81f2a89c3a015fe1c892256ce8b0e3bd0ecd88942207366" HandleID="k8s-pod-network.14a1598011a94b1ca81f2a89c3a015fe1c892256ce8b0e3bd0ecd88942207366" Workload="ip--172--31--28--187-k8s-goldmane--666569f655--qnsdl-eth0" Nov 8 00:07:43.374346 containerd[2134]: 2025-11-08 00:07:43.333 [INFO][6514] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:07:43.374346 containerd[2134]: 2025-11-08 00:07:43.335 [INFO][6514] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:07:43.374346 containerd[2134]: 2025-11-08 00:07:43.359 [WARNING][6514] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="14a1598011a94b1ca81f2a89c3a015fe1c892256ce8b0e3bd0ecd88942207366" HandleID="k8s-pod-network.14a1598011a94b1ca81f2a89c3a015fe1c892256ce8b0e3bd0ecd88942207366" Workload="ip--172--31--28--187-k8s-goldmane--666569f655--qnsdl-eth0" Nov 8 00:07:43.374346 containerd[2134]: 2025-11-08 00:07:43.359 [INFO][6514] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="14a1598011a94b1ca81f2a89c3a015fe1c892256ce8b0e3bd0ecd88942207366" HandleID="k8s-pod-network.14a1598011a94b1ca81f2a89c3a015fe1c892256ce8b0e3bd0ecd88942207366" Workload="ip--172--31--28--187-k8s-goldmane--666569f655--qnsdl-eth0" Nov 8 00:07:43.374346 containerd[2134]: 2025-11-08 00:07:43.363 [INFO][6514] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:07:43.374346 containerd[2134]: 2025-11-08 00:07:43.368 [INFO][6506] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="14a1598011a94b1ca81f2a89c3a015fe1c892256ce8b0e3bd0ecd88942207366" Nov 8 00:07:43.378402 containerd[2134]: time="2025-11-08T00:07:43.375824576Z" level=info msg="TearDown network for sandbox \"14a1598011a94b1ca81f2a89c3a015fe1c892256ce8b0e3bd0ecd88942207366\" successfully" Nov 8 00:07:43.389004 containerd[2134]: time="2025-11-08T00:07:43.388702688Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"14a1598011a94b1ca81f2a89c3a015fe1c892256ce8b0e3bd0ecd88942207366\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 8 00:07:43.389004 containerd[2134]: time="2025-11-08T00:07:43.388794464Z" level=info msg="RemovePodSandbox \"14a1598011a94b1ca81f2a89c3a015fe1c892256ce8b0e3bd0ecd88942207366\" returns successfully" Nov 8 00:07:43.622193 systemd[1]: Started sshd@24-172.31.28.187:22-139.178.89.65:42772.service - OpenSSH per-connection server daemon (139.178.89.65:42772). Nov 8 00:07:43.837988 sshd[6520]: Accepted publickey for core from 139.178.89.65 port 42772 ssh2: RSA SHA256:tnEXpnDY8gLTej7GJ+T99WI4otIwvlI9IcMNDF42aqw Nov 8 00:07:43.841726 sshd[6520]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:07:43.857502 systemd-logind[2107]: New session 25 of user core. Nov 8 00:07:43.865814 systemd[1]: Started session-25.scope - Session 25 of User core. Nov 8 00:07:44.185517 sshd[6520]: pam_unix(sshd:session): session closed for user core Nov 8 00:07:44.209046 systemd[1]: sshd@24-172.31.28.187:22-139.178.89.65:42772.service: Deactivated successfully. Nov 8 00:07:44.220161 systemd[1]: session-25.scope: Deactivated successfully. Nov 8 00:07:44.223681 systemd-logind[2107]: Session 25 logged out. Waiting for processes to exit. Nov 8 00:07:44.227536 systemd-logind[2107]: Removed session 25. 
Nov 8 00:07:46.085076 kubelet[3406]: E1108 00:07:46.084364 3406 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-bc8bf555f-2vp5h" podUID="e2b4786b-bdcd-41e2-8651-d03da4e624c0" Nov 8 00:07:46.089609 kubelet[3406]: E1108 00:07:46.089157 3406 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-qnsdl" podUID="19e663c5-ada4-41f4-b329-6d803ea3d32d" Nov 8 00:07:47.083354 kubelet[3406]: E1108 00:07:47.082828 3406 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-bc8bf555f-bhc54" podUID="5611c66d-4585-41a1-9c50-eb23da03916c" Nov 8 00:07:49.086329 kubelet[3406]: E1108 00:07:49.086096 3406 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-tw22z" podUID="5962793e-cd47-45ea-84d0-190de5cbdb54" Nov 8 00:07:49.216542 systemd[1]: Started sshd@25-172.31.28.187:22-139.178.89.65:42792.service - OpenSSH per-connection server daemon (139.178.89.65:42792). Nov 8 00:07:49.416921 sshd[6536]: Accepted publickey for core from 139.178.89.65 port 42792 ssh2: RSA SHA256:tnEXpnDY8gLTej7GJ+T99WI4otIwvlI9IcMNDF42aqw Nov 8 00:07:49.419904 sshd[6536]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:07:49.431671 systemd-logind[2107]: New session 26 of user core. Nov 8 00:07:49.442942 systemd[1]: Started session-26.scope - Session 26 of User core. 
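
The pod_workers records above are kubelet declining to retry the pulls immediately: after each ErrImagePull, the next attempt is delayed with exponential back-off (ImagePullBackOff), which is why the same "not found" error recurs at widening intervals through the rest of this log. A rough sketch of that schedule, assuming kubelet's usual 10s initial delay and 5m cap; pullImage here is a hypothetical stand-in for the CRI PullImage call, not kubelet's real code:

package main

import (
	"errors"
	"fmt"
	"time"
)

func pullImage(ref string) error {
	// Stand-in for the CRI PullImage RPC that keeps returning
	// "NotFound: failed to resolve reference".
	return errors.New("rpc error: code = NotFound")
}

func pullWithBackoff(ref string, attempts int) {
	delay := 10 * time.Second
	const maxDelay = 5 * time.Minute
	for i := 0; i < attempts; i++ {
		if err := pullImage(ref); err == nil {
			return
		}
		// A real kubelet waits out the delay; the sketch only reports it.
		fmt.Printf("attempt %d failed; next retry of %s in %s\n", i+1, ref, delay)
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay
		}
	}
}

func main() {
	pullWithBackoff("ghcr.io/flatcar/calico/apiserver:v3.30.4", 6)
}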
Nov 8 00:07:49.933892 sshd[6536]: pam_unix(sshd:session): session closed for user core Nov 8 00:07:49.944304 systemd-logind[2107]: Session 26 logged out. Waiting for processes to exit. Nov 8 00:07:49.950436 systemd[1]: sshd@25-172.31.28.187:22-139.178.89.65:42792.service: Deactivated successfully. Nov 8 00:07:49.959702 systemd[1]: session-26.scope: Deactivated successfully. Nov 8 00:07:49.964310 systemd-logind[2107]: Removed session 26. Nov 8 00:07:50.087391 kubelet[3406]: E1108 00:07:50.087200 3406 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7cd4d69d7c-ptmh4" podUID="a118c8b1-dc8a-49b1-956e-fabb0c90510f" Nov 8 00:07:50.091926 kubelet[3406]: E1108 00:07:50.091801 3406 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-8465bf669f-f6zwz" podUID="b7a04fa0-10c9-4b7a-b022-1e4b716cfc44" Nov 8 00:08:00.083832 kubelet[3406]: E1108 00:08:00.083699 3406 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-bc8bf555f-bhc54" podUID="5611c66d-4585-41a1-9c50-eb23da03916c" Nov 8 00:08:01.083450 containerd[2134]: time="2025-11-08T00:08:01.083135988Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 8 00:08:01.352370 containerd[2134]: time="2025-11-08T00:08:01.352190653Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:08:01.354030 containerd[2134]: time="2025-11-08T00:08:01.353972617Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 8 00:08:01.354432 
containerd[2134]: time="2025-11-08T00:08:01.354083953Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 8 00:08:01.354730 kubelet[3406]: E1108 00:08:01.354609 3406 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:08:01.354730 kubelet[3406]: E1108 00:08:01.354671 3406 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:08:01.356137 kubelet[3406]: E1108 00:08:01.355047 3406 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5fj5z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-bc8bf555f-2vp5h_calico-apiserver(e2b4786b-bdcd-41e2-8651-d03da4e624c0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 8 00:08:01.356355 containerd[2134]: 
time="2025-11-08T00:08:01.355109353Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 8 00:08:01.356707 kubelet[3406]: E1108 00:08:01.356643 3406 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-bc8bf555f-2vp5h" podUID="e2b4786b-bdcd-41e2-8651-d03da4e624c0" Nov 8 00:08:01.711269 containerd[2134]: time="2025-11-08T00:08:01.711137055Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:08:01.712963 containerd[2134]: time="2025-11-08T00:08:01.712809195Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 8 00:08:01.712963 containerd[2134]: time="2025-11-08T00:08:01.712894875Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 8 00:08:01.713305 kubelet[3406]: E1108 00:08:01.713112 3406 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 8 00:08:01.713305 kubelet[3406]: E1108 00:08:01.713175 3406 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 8 00:08:01.713523 kubelet[3406]: E1108 00:08:01.713381 3406 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gjq8z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-qnsdl_calico-system(19e663c5-ada4-41f4-b329-6d803ea3d32d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 8 00:08:01.715330 kubelet[3406]: E1108 00:08:01.715278 3406 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-qnsdl" podUID="19e663c5-ada4-41f4-b329-6d803ea3d32d" Nov 8 00:08:02.085462 kubelet[3406]: E1108 
00:08:02.085290 3406 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-tw22z" podUID="5962793e-cd47-45ea-84d0-190de5cbdb54" Nov 8 00:08:03.083133 containerd[2134]: time="2025-11-08T00:08:03.082792466Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 8 00:08:03.348468 containerd[2134]: time="2025-11-08T00:08:03.348290451Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:08:03.350219 containerd[2134]: time="2025-11-08T00:08:03.350152815Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 8 00:08:03.350381 containerd[2134]: time="2025-11-08T00:08:03.350302131Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 8 00:08:03.350635 kubelet[3406]: E1108 00:08:03.350544 3406 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 8 00:08:03.351262 kubelet[3406]: E1108 00:08:03.350643 3406 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 8 00:08:03.351262 kubelet[3406]: E1108 00:08:03.350909 3406 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:ad073e8bb50749a3ae91e94ed2b29ac5,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-dzh5w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-8465bf669f-f6zwz_calico-system(b7a04fa0-10c9-4b7a-b022-1e4b716cfc44): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 8 00:08:03.351508 containerd[2134]: time="2025-11-08T00:08:03.351292611Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 8 00:08:03.639891 containerd[2134]: time="2025-11-08T00:08:03.639781840Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:08:03.641917 containerd[2134]: time="2025-11-08T00:08:03.641838760Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 8 00:08:03.642023 containerd[2134]: time="2025-11-08T00:08:03.641987728Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 8 00:08:03.642361 kubelet[3406]: E1108 00:08:03.642292 3406 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 8 00:08:03.642456 kubelet[3406]: E1108 00:08:03.642365 3406 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": 
ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 8 00:08:03.643067 containerd[2134]: time="2025-11-08T00:08:03.643007524Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 8 00:08:03.643516 kubelet[3406]: E1108 00:08:03.643417 3406 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-twthk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-7cd4d69d7c-ptmh4_calico-system(a118c8b1-dc8a-49b1-956e-fabb0c90510f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 8 00:08:03.645629 kubelet[3406]: E1108 00:08:03.644862 3406 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7cd4d69d7c-ptmh4" podUID="a118c8b1-dc8a-49b1-956e-fabb0c90510f" Nov 8 00:08:03.875251 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-96a78d9ff47bc26dfe16225ec7f7ca53c44f28713bebdc2dbc8495a94f897dc1-rootfs.mount: Deactivated successfully. Nov 8 00:08:03.889400 containerd[2134]: time="2025-11-08T00:08:03.889044630Z" level=info msg="shim disconnected" id=96a78d9ff47bc26dfe16225ec7f7ca53c44f28713bebdc2dbc8495a94f897dc1 namespace=k8s.io Nov 8 00:08:03.889400 containerd[2134]: time="2025-11-08T00:08:03.889228518Z" level=warning msg="cleaning up after shim disconnected" id=96a78d9ff47bc26dfe16225ec7f7ca53c44f28713bebdc2dbc8495a94f897dc1 namespace=k8s.io Nov 8 00:08:03.889400 containerd[2134]: time="2025-11-08T00:08:03.889250610Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 8 00:08:03.899523 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0f51fe1f4aca9cf135085e8172204f5d0ac21a91b3a75b91a52e5fc2989fc44a-rootfs.mount: Deactivated successfully. Nov 8 00:08:03.905796 containerd[2134]: time="2025-11-08T00:08:03.904947198Z" level=info msg="shim disconnected" id=0f51fe1f4aca9cf135085e8172204f5d0ac21a91b3a75b91a52e5fc2989fc44a namespace=k8s.io Nov 8 00:08:03.905796 containerd[2134]: time="2025-11-08T00:08:03.905029854Z" level=warning msg="cleaning up after shim disconnected" id=0f51fe1f4aca9cf135085e8172204f5d0ac21a91b3a75b91a52e5fc2989fc44a namespace=k8s.io Nov 8 00:08:03.905796 containerd[2134]: time="2025-11-08T00:08:03.905053710Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 8 00:08:03.918193 containerd[2134]: time="2025-11-08T00:08:03.918013290Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:08:03.921078 containerd[2134]: time="2025-11-08T00:08:03.920844966Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 8 00:08:03.921078 containerd[2134]: time="2025-11-08T00:08:03.921004086Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 8 00:08:03.922832 kubelet[3406]: E1108 00:08:03.921461 3406 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 8 00:08:03.922832 kubelet[3406]: E1108 00:08:03.921722 3406 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 8 00:08:03.923180 kubelet[3406]: E1108 00:08:03.922628 3406 kuberuntime_manager.go:1341] "Unhandled Error" 
err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dzh5w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-8465bf669f-f6zwz_calico-system(b7a04fa0-10c9-4b7a-b022-1e4b716cfc44): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 8 00:08:03.924394 kubelet[3406]: E1108 00:08:03.924328 3406 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-8465bf669f-f6zwz" podUID="b7a04fa0-10c9-4b7a-b022-1e4b716cfc44" Nov 8 00:08:03.936622 containerd[2134]: time="2025-11-08T00:08:03.936417774Z" level=warning msg="cleanup warnings time=\"2025-11-08T00:08:03Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Nov 8 00:08:04.198497 kubelet[3406]: I1108 00:08:04.196910 3406 scope.go:117] "RemoveContainer" 
containerID="96a78d9ff47bc26dfe16225ec7f7ca53c44f28713bebdc2dbc8495a94f897dc1" Nov 8 00:08:04.203246 kubelet[3406]: I1108 00:08:04.203198 3406 scope.go:117] "RemoveContainer" containerID="0f51fe1f4aca9cf135085e8172204f5d0ac21a91b3a75b91a52e5fc2989fc44a" Nov 8 00:08:04.204254 containerd[2134]: time="2025-11-08T00:08:04.204197907Z" level=info msg="CreateContainer within sandbox \"d8f297df3162929d321336c0655350ba31279e7fb58916bf3ca1ba271c6549a8\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}" Nov 8 00:08:04.209011 containerd[2134]: time="2025-11-08T00:08:04.208777395Z" level=info msg="CreateContainer within sandbox \"ab37493301114debfa5c6983e69a998cb114b993b2820dafbf04c9184ed722ee\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Nov 8 00:08:04.238450 containerd[2134]: time="2025-11-08T00:08:04.235163739Z" level=info msg="CreateContainer within sandbox \"d8f297df3162929d321336c0655350ba31279e7fb58916bf3ca1ba271c6549a8\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"4012f037c38e0c1d570fe6273bcedd5ad49c8258570575314578b8bb3f459e08\"" Nov 8 00:08:04.240978 containerd[2134]: time="2025-11-08T00:08:04.240808827Z" level=info msg="StartContainer for \"4012f037c38e0c1d570fe6273bcedd5ad49c8258570575314578b8bb3f459e08\"" Nov 8 00:08:04.252310 containerd[2134]: time="2025-11-08T00:08:04.252228495Z" level=info msg="CreateContainer within sandbox \"ab37493301114debfa5c6983e69a998cb114b993b2820dafbf04c9184ed722ee\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"a601e534d70f45e7bfe68404e4d7cd909631e5d862980625ed1fdf85921ebf8e\"" Nov 8 00:08:04.254498 containerd[2134]: time="2025-11-08T00:08:04.254445063Z" level=info msg="StartContainer for \"a601e534d70f45e7bfe68404e4d7cd909631e5d862980625ed1fdf85921ebf8e\"" Nov 8 00:08:04.389353 containerd[2134]: time="2025-11-08T00:08:04.389280292Z" level=info msg="StartContainer for \"4012f037c38e0c1d570fe6273bcedd5ad49c8258570575314578b8bb3f459e08\" returns successfully" Nov 8 00:08:04.427597 containerd[2134]: time="2025-11-08T00:08:04.426641440Z" level=info msg="StartContainer for \"a601e534d70f45e7bfe68404e4d7cd909631e5d862980625ed1fdf85921ebf8e\" returns successfully" Nov 8 00:08:08.422709 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-773419e36f5d69778e680842e338e39b2bd7ba797cb58354c330f1a400f82e3b-rootfs.mount: Deactivated successfully. 
Nov 8 00:08:08.435528 containerd[2134]: time="2025-11-08T00:08:08.435229664Z" level=info msg="shim disconnected" id=773419e36f5d69778e680842e338e39b2bd7ba797cb58354c330f1a400f82e3b namespace=k8s.io Nov 8 00:08:08.435528 containerd[2134]: time="2025-11-08T00:08:08.435332648Z" level=warning msg="cleaning up after shim disconnected" id=773419e36f5d69778e680842e338e39b2bd7ba797cb58354c330f1a400f82e3b namespace=k8s.io Nov 8 00:08:08.435528 containerd[2134]: time="2025-11-08T00:08:08.435354572Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 8 00:08:09.234584 kubelet[3406]: I1108 00:08:09.234521 3406 scope.go:117] "RemoveContainer" containerID="773419e36f5d69778e680842e338e39b2bd7ba797cb58354c330f1a400f82e3b" Nov 8 00:08:09.238210 containerd[2134]: time="2025-11-08T00:08:09.238150280Z" level=info msg="CreateContainer within sandbox \"145a444fd072d39f3615a6bc889f37ea2026788e24ac650b346375114033e3bb\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Nov 8 00:08:09.266976 containerd[2134]: time="2025-11-08T00:08:09.266802128Z" level=info msg="CreateContainer within sandbox \"145a444fd072d39f3615a6bc889f37ea2026788e24ac650b346375114033e3bb\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"e05e5f1a92f7cd4515ed87faf36435d08274193c73f32044c549350bb0d17982\"" Nov 8 00:08:09.267584 containerd[2134]: time="2025-11-08T00:08:09.267498932Z" level=info msg="StartContainer for \"e05e5f1a92f7cd4515ed87faf36435d08274193c73f32044c549350bb0d17982\"" Nov 8 00:08:09.393979 containerd[2134]: time="2025-11-08T00:08:09.393879381Z" level=info msg="StartContainer for \"e05e5f1a92f7cd4515ed87faf36435d08274193c73f32044c549350bb0d17982\" returns successfully" Nov 8 00:08:11.082959 kubelet[3406]: E1108 00:08:11.082893 3406 controller.go:195] "Failed to update lease" err="Put \"https://172.31.28.187:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-28-187?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 8 00:08:13.082776 containerd[2134]: time="2025-11-08T00:08:13.082699907Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 8 00:08:13.350187 containerd[2134]: time="2025-11-08T00:08:13.349886053Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:08:13.352329 containerd[2134]: time="2025-11-08T00:08:13.352179253Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 8 00:08:13.352329 containerd[2134]: time="2025-11-08T00:08:13.352263169Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 8 00:08:13.352644 kubelet[3406]: E1108 00:08:13.352453 3406 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:08:13.352644 kubelet[3406]: E1108 00:08:13.352518 3406 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:08:13.353260 kubelet[3406]: E1108 00:08:13.352740 3406 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xrpkm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-bc8bf555f-bhc54_calico-apiserver(5611c66d-4585-41a1-9c50-eb23da03916c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 8 00:08:13.354113 kubelet[3406]: E1108 00:08:13.354034 3406 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-bc8bf555f-bhc54" podUID="5611c66d-4585-41a1-9c50-eb23da03916c"