Nov 8 00:03:53.276601 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083]
Nov 8 00:03:53.276660 kernel: Linux version 6.6.113-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Fri Nov 7 22:41:39 -00 2025
Nov 8 00:03:53.276691 kernel: KASLR disabled due to lack of seed
Nov 8 00:03:53.276710 kernel: efi: EFI v2.7 by EDK II
Nov 8 00:03:53.276726 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7affea98 MEMRESERVE=0x7852ee18
Nov 8 00:03:53.276742 kernel: ACPI: Early table checksum verification disabled
Nov 8 00:03:53.276762 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON)
Nov 8 00:03:53.276780 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013)
Nov 8 00:03:53.276798 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001)
Nov 8 00:03:53.276814 kernel: ACPI: DSDT 0x0000000078640000 00159D (v02 AMAZON AMZNDSDT 00000001 INTL 20160527)
Nov 8 00:03:53.276837 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Nov 8 00:03:53.276854 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001)
Nov 8 00:03:53.276871 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001)
Nov 8 00:03:53.276887 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001)
Nov 8 00:03:53.276907 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Nov 8 00:03:53.276929 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001)
Nov 8 00:03:53.276947 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001)
Nov 8 00:03:53.276966 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200
Nov 8 00:03:53.276984 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200')
Nov 8 00:03:53.277002 kernel: printk: bootconsole [uart0] enabled
Nov 8 00:03:53.277019 kernel: NUMA: Failed to initialise from firmware
Nov 8 00:03:53.277038 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff]
Nov 8 00:03:53.277055 kernel: NUMA: NODE_DATA [mem 0x4b583f800-0x4b5844fff]
Nov 8 00:03:53.277073 kernel: Zone ranges:
Nov 8 00:03:53.277091 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Nov 8 00:03:53.277108 kernel: DMA32 empty
Nov 8 00:03:53.277133 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff]
Nov 8 00:03:53.277151 kernel: Movable zone start for each node
Nov 8 00:03:53.277168 kernel: Early memory node ranges
Nov 8 00:03:53.277185 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff]
Nov 8 00:03:53.277202 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff]
Nov 8 00:03:53.277219 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff]
Nov 8 00:03:53.277236 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff]
Nov 8 00:03:53.277253 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff]
Nov 8 00:03:53.277270 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff]
Nov 8 00:03:53.277286 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff]
Nov 8 00:03:53.277304 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff]
Nov 8 00:03:53.277321 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff]
Nov 8 00:03:53.277344 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges
Nov 8 00:03:53.277362 kernel: psci: probing for conduit method from ACPI.
Nov 8 00:03:53.277386 kernel: psci: PSCIv1.0 detected in firmware.
Nov 8 00:03:53.277404 kernel: psci: Using standard PSCI v0.2 function IDs
Nov 8 00:03:53.277422 kernel: psci: Trusted OS migration not required
Nov 8 00:03:53.277445 kernel: psci: SMC Calling Convention v1.1
Nov 8 00:03:53.277464 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000001)
Nov 8 00:03:53.277482 kernel: percpu: Embedded 31 pages/cpu s86120 r8192 d32664 u126976
Nov 8 00:03:53.277501 kernel: pcpu-alloc: s86120 r8192 d32664 u126976 alloc=31*4096
Nov 8 00:03:53.277519 kernel: pcpu-alloc: [0] 0 [0] 1
Nov 8 00:03:53.277537 kernel: Detected PIPT I-cache on CPU0
Nov 8 00:03:53.277556 kernel: CPU features: detected: GIC system register CPU interface
Nov 8 00:03:53.277966 kernel: CPU features: detected: Spectre-v2
Nov 8 00:03:53.277993 kernel: CPU features: detected: Spectre-v3a
Nov 8 00:03:53.278012 kernel: CPU features: detected: Spectre-BHB
Nov 8 00:03:53.278031 kernel: CPU features: detected: ARM erratum 1742098
Nov 8 00:03:53.278061 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923
Nov 8 00:03:53.278080 kernel: alternatives: applying boot alternatives
Nov 8 00:03:53.278102 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=653fdcb8a67e255793a721f32d76976d3ed6223b235b7c618cf75e5edffbdb68
Nov 8 00:03:53.278121 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Nov 8 00:03:53.278140 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Nov 8 00:03:53.278158 kernel: Fallback order for Node 0: 0
Nov 8 00:03:53.278176 kernel: Built 1 zonelists, mobility grouping on. Total pages: 991872
Nov 8 00:03:53.278194 kernel: Policy zone: Normal
Nov 8 00:03:53.278212 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Nov 8 00:03:53.278230 kernel: software IO TLB: area num 2.
Nov 8 00:03:53.278248 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB)
Nov 8 00:03:53.278275 kernel: Memory: 3820088K/4030464K available (10304K kernel code, 2180K rwdata, 8112K rodata, 39424K init, 897K bss, 210376K reserved, 0K cma-reserved)
Nov 8 00:03:53.278294 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Nov 8 00:03:53.278312 kernel: rcu: Preemptible hierarchical RCU implementation.
Nov 8 00:03:53.278331 kernel: rcu: RCU event tracing is enabled.
Nov 8 00:03:53.278351 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Nov 8 00:03:53.278369 kernel: Trampoline variant of Tasks RCU enabled.
Nov 8 00:03:53.278388 kernel: Tracing variant of Tasks RCU enabled.
Nov 8 00:03:53.278406 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Nov 8 00:03:53.278424 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Nov 8 00:03:53.278443 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Nov 8 00:03:53.278462 kernel: GICv3: 96 SPIs implemented
Nov 8 00:03:53.278487 kernel: GICv3: 0 Extended SPIs implemented
Nov 8 00:03:53.278506 kernel: Root IRQ handler: gic_handle_irq
Nov 8 00:03:53.278524 kernel: GICv3: GICv3 features: 16 PPIs
Nov 8 00:03:53.278543 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000
Nov 8 00:03:53.278562 kernel: ITS [mem 0x10080000-0x1009ffff]
Nov 8 00:03:53.279685 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000b0000 (indirect, esz 8, psz 64K, shr 1)
Nov 8 00:03:53.279709 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000c0000 (flat, esz 8, psz 64K, shr 1)
Nov 8 00:03:53.279730 kernel: GICv3: using LPI property table @0x00000004000d0000
Nov 8 00:03:53.279748 kernel: ITS: Using hypervisor restricted LPI range [128]
Nov 8 00:03:53.279767 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000e0000
Nov 8 00:03:53.279786 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Nov 8 00:03:53.279804 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt).
Nov 8 00:03:53.279834 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns
Nov 8 00:03:53.279853 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns
Nov 8 00:03:53.279872 kernel: Console: colour dummy device 80x25
Nov 8 00:03:53.279890 kernel: printk: console [tty1] enabled
Nov 8 00:03:53.279909 kernel: ACPI: Core revision 20230628
Nov 8 00:03:53.279927 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333)
Nov 8 00:03:53.279946 kernel: pid_max: default: 32768 minimum: 301
Nov 8 00:03:53.279964 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Nov 8 00:03:53.279982 kernel: landlock: Up and running.
Nov 8 00:03:53.280005 kernel: SELinux: Initializing.
Nov 8 00:03:53.280024 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Nov 8 00:03:53.280042 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Nov 8 00:03:53.280061 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Nov 8 00:03:53.280080 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Nov 8 00:03:53.280100 kernel: rcu: Hierarchical SRCU implementation.
Nov 8 00:03:53.280119 kernel: rcu: Max phase no-delay instances is 400.
Nov 8 00:03:53.280138 kernel: Platform MSI: ITS@0x10080000 domain created
Nov 8 00:03:53.280156 kernel: PCI/MSI: ITS@0x10080000 domain created
Nov 8 00:03:53.280181 kernel: Remapping and enabling EFI services.
Nov 8 00:03:53.280199 kernel: smp: Bringing up secondary CPUs ...
Nov 8 00:03:53.280217 kernel: Detected PIPT I-cache on CPU1
Nov 8 00:03:53.280236 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000
Nov 8 00:03:53.280254 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000004000f0000
Nov 8 00:03:53.280273 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083]
Nov 8 00:03:53.280291 kernel: smp: Brought up 1 node, 2 CPUs
Nov 8 00:03:53.280310 kernel: SMP: Total of 2 processors activated.
Nov 8 00:03:53.280328 kernel: CPU features: detected: 32-bit EL0 Support
Nov 8 00:03:53.280352 kernel: CPU features: detected: 32-bit EL1 Support
Nov 8 00:03:53.280370 kernel: CPU features: detected: CRC32 instructions
Nov 8 00:03:53.280389 kernel: CPU: All CPU(s) started at EL1
Nov 8 00:03:53.280420 kernel: alternatives: applying system-wide alternatives
Nov 8 00:03:53.280443 kernel: devtmpfs: initialized
Nov 8 00:03:53.280462 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Nov 8 00:03:53.280509 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Nov 8 00:03:53.280529 kernel: pinctrl core: initialized pinctrl subsystem
Nov 8 00:03:53.280549 kernel: SMBIOS 3.0.0 present.
Nov 8 00:03:53.280613 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018
Nov 8 00:03:53.280634 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Nov 8 00:03:53.280653 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Nov 8 00:03:53.280672 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Nov 8 00:03:53.280691 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Nov 8 00:03:53.280710 kernel: audit: initializing netlink subsys (disabled)
Nov 8 00:03:53.280730 kernel: audit: type=2000 audit(0.308:1): state=initialized audit_enabled=0 res=1
Nov 8 00:03:53.280748 kernel: thermal_sys: Registered thermal governor 'step_wise'
Nov 8 00:03:53.280774 kernel: cpuidle: using governor menu
Nov 8 00:03:53.280793 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Nov 8 00:03:53.280812 kernel: ASID allocator initialised with 65536 entries
Nov 8 00:03:53.280831 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Nov 8 00:03:53.280850 kernel: Serial: AMBA PL011 UART driver
Nov 8 00:03:53.280869 kernel: Modules: 17488 pages in range for non-PLT usage
Nov 8 00:03:53.280888 kernel: Modules: 509008 pages in range for PLT usage
Nov 8 00:03:53.280907 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Nov 8 00:03:53.280926 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Nov 8 00:03:53.280950 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Nov 8 00:03:53.280969 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Nov 8 00:03:53.280988 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Nov 8 00:03:53.281007 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Nov 8 00:03:53.281026 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Nov 8 00:03:53.281045 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Nov 8 00:03:53.281065 kernel: ACPI: Added _OSI(Module Device)
Nov 8 00:03:53.281084 kernel: ACPI: Added _OSI(Processor Device)
Nov 8 00:03:53.281103 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Nov 8 00:03:53.281127 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Nov 8 00:03:53.281146 kernel: ACPI: Interpreter enabled
Nov 8 00:03:53.281164 kernel: ACPI: Using GIC for interrupt routing
Nov 8 00:03:53.281183 kernel: ACPI: MCFG table detected, 1 entries
Nov 8 00:03:53.281202 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-0f])
Nov 8 00:03:53.281546 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Nov 8 00:03:53.281793 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Nov 8 00:03:53.282010 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Nov 8 00:03:53.282224 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x20ffffff] reserved by PNP0C02:00
Nov 8 00:03:53.282445 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x20ffffff] for [bus 00-0f]
Nov 8 00:03:53.282474 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window]
Nov 8 00:03:53.282495 kernel: acpiphp: Slot [1] registered
Nov 8 00:03:53.282515 kernel: acpiphp: Slot [2] registered
Nov 8 00:03:53.282534 kernel: acpiphp: Slot [3] registered
Nov 8 00:03:53.282554 kernel: acpiphp: Slot [4] registered
Nov 8 00:03:53.282615 kernel: acpiphp: Slot [5] registered
Nov 8 00:03:53.282647 kernel: acpiphp: Slot [6] registered
Nov 8 00:03:53.282666 kernel: acpiphp: Slot [7] registered
Nov 8 00:03:53.282685 kernel: acpiphp: Slot [8] registered
Nov 8 00:03:53.282704 kernel: acpiphp: Slot [9] registered
Nov 8 00:03:53.282723 kernel: acpiphp: Slot [10] registered
Nov 8 00:03:53.282742 kernel: acpiphp: Slot [11] registered
Nov 8 00:03:53.282761 kernel: acpiphp: Slot [12] registered
Nov 8 00:03:53.282781 kernel: acpiphp: Slot [13] registered
Nov 8 00:03:53.282800 kernel: acpiphp: Slot [14] registered
Nov 8 00:03:53.282819 kernel: acpiphp: Slot [15] registered
Nov 8 00:03:53.282842 kernel: acpiphp: Slot [16] registered
Nov 8 00:03:53.282861 kernel: acpiphp: Slot [17] registered
Nov 8 00:03:53.282880 kernel: acpiphp: Slot [18] registered
Nov 8 00:03:53.282899 kernel: acpiphp: Slot [19] registered
Nov 8 00:03:53.282918 kernel: acpiphp: Slot [20] registered
Nov 8 00:03:53.282936 kernel: acpiphp: Slot [21] registered
Nov 8 00:03:53.282955 kernel: acpiphp: Slot [22] registered
Nov 8 00:03:53.282974 kernel: acpiphp: Slot [23] registered
Nov 8 00:03:53.282993 kernel: acpiphp: Slot [24] registered
Nov 8 00:03:53.283016 kernel: acpiphp: Slot [25] registered
Nov 8 00:03:53.283035 kernel: acpiphp: Slot [26] registered
Nov 8 00:03:53.283055 kernel: acpiphp: Slot [27] registered
Nov 8 00:03:53.283074 kernel: acpiphp: Slot [28] registered
Nov 8 00:03:53.283092 kernel: acpiphp: Slot [29] registered
Nov 8 00:03:53.283111 kernel: acpiphp: Slot [30] registered
Nov 8 00:03:53.283131 kernel: acpiphp: Slot [31] registered
Nov 8 00:03:53.283150 kernel: PCI host bridge to bus 0000:00
Nov 8 00:03:53.283417 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window]
Nov 8 00:03:53.284812 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Nov 8 00:03:53.285041 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window]
Nov 8 00:03:53.285241 kernel: pci_bus 0000:00: root bus resource [bus 00-0f]
Nov 8 00:03:53.285496 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000
Nov 8 00:03:53.285818 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003
Nov 8 00:03:53.286068 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80118000-0x80118fff]
Nov 8 00:03:53.286392 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Nov 8 00:03:53.288969 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80114000-0x80117fff]
Nov 8 00:03:53.289255 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold
Nov 8 00:03:53.289484 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Nov 8 00:03:53.289744 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80110000-0x80113fff]
Nov 8 00:03:53.289952 kernel: pci 0000:00:05.0: reg 0x18: [mem 0x80000000-0x800fffff pref]
Nov 8 00:03:53.290154 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x80100000-0x8010ffff]
Nov 8 00:03:53.290366 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold
Nov 8 00:03:53.299159 kernel: pci 0000:00:05.0: BAR 2: assigned [mem 0x80000000-0x800fffff pref]
Nov 8 00:03:53.299454 kernel: pci 0000:00:05.0: BAR 4: assigned [mem 0x80100000-0x8010ffff]
Nov 8 00:03:53.303757 kernel: pci 0000:00:04.0: BAR 0: assigned [mem 0x80110000-0x80113fff]
Nov 8 00:03:53.304000 kernel: pci 0000:00:05.0: BAR 0: assigned [mem 0x80114000-0x80117fff]
Nov 8 00:03:53.304229 kernel: pci 0000:00:01.0: BAR 0: assigned [mem 0x80118000-0x80118fff]
Nov 8 00:03:53.304429 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window]
Nov 8 00:03:53.306474 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Nov 8 00:03:53.306784 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window]
Nov 8 00:03:53.306820 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Nov 8 00:03:53.306842 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Nov 8 00:03:53.306867 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Nov 8 00:03:53.306887 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Nov 8 00:03:53.306907 kernel: iommu: Default domain type: Translated
Nov 8 00:03:53.306927 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Nov 8 00:03:53.306961 kernel: efivars: Registered efivars operations
Nov 8 00:03:53.306982 kernel: vgaarb: loaded
Nov 8 00:03:53.307001 kernel: clocksource: Switched to clocksource arch_sys_counter
Nov 8 00:03:53.307021 kernel: VFS: Disk quotas dquot_6.6.0
Nov 8 00:03:53.307041 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Nov 8 00:03:53.307061 kernel: pnp: PnP ACPI init
Nov 8 00:03:53.307337 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved
Nov 8 00:03:53.307373 kernel: pnp: PnP ACPI: found 1 devices
Nov 8 00:03:53.307393 kernel: NET: Registered PF_INET protocol family
Nov 8 00:03:53.307423 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Nov 8 00:03:53.307444 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Nov 8 00:03:53.307464 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Nov 8 00:03:53.307483 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Nov 8 00:03:53.307503 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Nov 8 00:03:53.307523 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Nov 8 00:03:53.307542 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Nov 8 00:03:53.307562 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Nov 8 00:03:53.307640 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Nov 8 00:03:53.307673 kernel: PCI: CLS 0 bytes, default 64
Nov 8 00:03:53.307696 kernel: kvm [1]: HYP mode not available
Nov 8 00:03:53.307716 kernel: Initialise system trusted keyrings
Nov 8 00:03:53.307738 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Nov 8 00:03:53.307759 kernel: Key type asymmetric registered
Nov 8 00:03:53.307779 kernel: Asymmetric key parser 'x509' registered
Nov 8 00:03:53.307801 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Nov 8 00:03:53.307821 kernel: io scheduler mq-deadline registered
Nov 8 00:03:53.307843 kernel: io scheduler kyber registered
Nov 8 00:03:53.307873 kernel: io scheduler bfq registered
Nov 8 00:03:53.308225 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered
Nov 8 00:03:53.308269 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Nov 8 00:03:53.308291 kernel: ACPI: button: Power Button [PWRB]
Nov 8 00:03:53.308312 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1
Nov 8 00:03:53.308333 kernel: ACPI: button: Sleep Button [SLPB]
Nov 8 00:03:53.308353 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Nov 8 00:03:53.308376 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Nov 8 00:03:53.310807 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012)
Nov 8 00:03:53.310855 kernel: printk: console [ttyS0] disabled
Nov 8 00:03:53.310876 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A
Nov 8 00:03:53.310896 kernel: printk: console [ttyS0] enabled
Nov 8 00:03:53.310916 kernel: printk: bootconsole [uart0] disabled
Nov 8 00:03:53.310934 kernel: thunder_xcv, ver 1.0
Nov 8 00:03:53.310953 kernel: thunder_bgx, ver 1.0
Nov 8 00:03:53.310972 kernel: nicpf, ver 1.0
Nov 8 00:03:53.310991 kernel: nicvf, ver 1.0
Nov 8 00:03:53.311232 kernel: rtc-efi rtc-efi.0: registered as rtc0
Nov 8 00:03:53.311431 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-11-08T00:03:52 UTC (1762560232)
Nov 8 00:03:53.311457 kernel: hid: raw HID events driver (C) Jiri Kosina
Nov 8 00:03:53.311477 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 counters available
Nov 8 00:03:53.311496 kernel: watchdog: Delayed init of the lockup detector failed: -19
Nov 8 00:03:53.311515 kernel: watchdog: Hard watchdog permanently disabled
Nov 8 00:03:53.311534 kernel: NET: Registered PF_INET6 protocol family
Nov 8 00:03:53.311553 kernel: Segment Routing with IPv6
Nov 8 00:03:53.311602 kernel: In-situ OAM (IOAM) with IPv6
Nov 8 00:03:53.311624 kernel: NET: Registered PF_PACKET protocol family
Nov 8 00:03:53.311643 kernel: Key type dns_resolver registered
Nov 8 00:03:53.311662 kernel: registered taskstats version 1
Nov 8 00:03:53.311682 kernel: Loading compiled-in X.509 certificates
Nov 8 00:03:53.311701 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.113-flatcar: e35af6a719ba4c60f9d6788b11f5e5836ebf73b5'
Nov 8 00:03:53.311720 kernel: Key type .fscrypt registered
Nov 8 00:03:53.311738 kernel: Key type fscrypt-provisioning registered
Nov 8 00:03:53.311757 kernel: ima: No TPM chip found, activating TPM-bypass!
Nov 8 00:03:53.311781 kernel: ima: Allocated hash algorithm: sha1
Nov 8 00:03:53.311800 kernel: ima: No architecture policies found
Nov 8 00:03:53.311819 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Nov 8 00:03:53.311838 kernel: clk: Disabling unused clocks
Nov 8 00:03:53.311856 kernel: Freeing unused kernel memory: 39424K
Nov 8 00:03:53.311875 kernel: Run /init as init process
Nov 8 00:03:53.311894 kernel: with arguments:
Nov 8 00:03:53.311912 kernel: /init
Nov 8 00:03:53.311930 kernel: with environment:
Nov 8 00:03:53.311949 kernel: HOME=/
Nov 8 00:03:53.311973 kernel: TERM=linux
Nov 8 00:03:53.311996 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Nov 8 00:03:53.312020 systemd[1]: Detected virtualization amazon.
Nov 8 00:03:53.312041 systemd[1]: Detected architecture arm64.
Nov 8 00:03:53.312061 systemd[1]: Running in initrd.
Nov 8 00:03:53.312081 systemd[1]: No hostname configured, using default hostname.
Nov 8 00:03:53.312100 systemd[1]: Hostname set to .
Nov 8 00:03:53.312126 systemd[1]: Initializing machine ID from VM UUID.
Nov 8 00:03:53.312147 systemd[1]: Queued start job for default target initrd.target.
Nov 8 00:03:53.312167 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 8 00:03:53.312188 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 8 00:03:53.312210 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Nov 8 00:03:53.312231 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Nov 8 00:03:53.312253 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Nov 8 00:03:53.312274 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Nov 8 00:03:53.312302 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Nov 8 00:03:53.312323 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Nov 8 00:03:53.312344 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 8 00:03:53.312365 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Nov 8 00:03:53.312386 systemd[1]: Reached target paths.target - Path Units.
Nov 8 00:03:53.312406 systemd[1]: Reached target slices.target - Slice Units.
Nov 8 00:03:53.312427 systemd[1]: Reached target swap.target - Swaps.
Nov 8 00:03:53.312452 systemd[1]: Reached target timers.target - Timer Units.
Nov 8 00:03:53.312494 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Nov 8 00:03:53.312518 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Nov 8 00:03:53.312540 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Nov 8 00:03:53.312561 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Nov 8 00:03:53.314811 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Nov 8 00:03:53.314836 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Nov 8 00:03:53.314858 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 8 00:03:53.314890 systemd[1]: Reached target sockets.target - Socket Units.
Nov 8 00:03:53.314912 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Nov 8 00:03:53.314933 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Nov 8 00:03:53.314954 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Nov 8 00:03:53.314975 systemd[1]: Starting systemd-fsck-usr.service...
Nov 8 00:03:53.314996 systemd[1]: Starting systemd-journald.service - Journal Service...
Nov 8 00:03:53.315016 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Nov 8 00:03:53.315038 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 8 00:03:53.315123 systemd-journald[252]: Collecting audit messages is disabled.
Nov 8 00:03:53.315175 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Nov 8 00:03:53.315197 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 8 00:03:53.315218 systemd[1]: Finished systemd-fsck-usr.service.
Nov 8 00:03:53.315246 systemd-journald[252]: Journal started
Nov 8 00:03:53.315286 systemd-journald[252]: Runtime Journal (/run/log/journal/ec2e4fdbb5130f65b892fd04b34f7aa5) is 8.0M, max 75.3M, 67.3M free.
Nov 8 00:03:53.318059 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Nov 8 00:03:53.289190 systemd-modules-load[253]: Inserted module 'overlay'
Nov 8 00:03:53.330627 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Nov 8 00:03:53.337649 kernel: Bridge firewalling registered
Nov 8 00:03:53.337067 systemd-modules-load[253]: Inserted module 'br_netfilter'
Nov 8 00:03:53.350597 systemd[1]: Started systemd-journald.service - Journal Service.
Nov 8 00:03:53.356333 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Nov 8 00:03:53.368633 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 8 00:03:53.374783 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Nov 8 00:03:53.389133 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 8 00:03:53.393816 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Nov 8 00:03:53.405560 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Nov 8 00:03:53.419038 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Nov 8 00:03:53.449676 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 8 00:03:53.454234 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Nov 8 00:03:53.474921 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 8 00:03:53.486096 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Nov 8 00:03:53.493093 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 8 00:03:53.504916 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Nov 8 00:03:53.532024 dracut-cmdline[287]: dracut-dracut-053
Nov 8 00:03:53.538648 dracut-cmdline[287]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=653fdcb8a67e255793a721f32d76976d3ed6223b235b7c618cf75e5edffbdb68
Nov 8 00:03:53.592773 systemd-resolved[290]: Positive Trust Anchors:
Nov 8 00:03:53.592810 systemd-resolved[290]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Nov 8 00:03:53.592873 systemd-resolved[290]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Nov 8 00:03:53.695590 kernel: SCSI subsystem initialized
Nov 8 00:03:53.701609 kernel: Loading iSCSI transport class v2.0-870.
Nov 8 00:03:53.713604 kernel: iscsi: registered transport (tcp)
Nov 8 00:03:53.736291 kernel: iscsi: registered transport (qla4xxx)
Nov 8 00:03:53.736377 kernel: QLogic iSCSI HBA Driver
Nov 8 00:03:53.825600 kernel: random: crng init done
Nov 8 00:03:53.826046 systemd-resolved[290]: Defaulting to hostname 'linux'.
Nov 8 00:03:53.830316 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Nov 8 00:03:53.839352 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Nov 8 00:03:53.864692 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Nov 8 00:03:53.876065 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Nov 8 00:03:53.913617 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Nov 8 00:03:53.916823 kernel: device-mapper: uevent: version 1.0.3
Nov 8 00:03:53.916902 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Nov 8 00:03:53.986658 kernel: raid6: neonx8 gen() 6599 MB/s
Nov 8 00:03:54.003621 kernel: raid6: neonx4 gen() 6437 MB/s
Nov 8 00:03:54.020613 kernel: raid6: neonx2 gen() 5390 MB/s
Nov 8 00:03:54.037602 kernel: raid6: neonx1 gen() 3936 MB/s
Nov 8 00:03:54.054601 kernel: raid6: int64x8 gen() 3799 MB/s
Nov 8 00:03:54.071600 kernel: raid6: int64x4 gen() 3688 MB/s
Nov 8 00:03:54.088600 kernel: raid6: int64x2 gen() 3563 MB/s
Nov 8 00:03:54.106605 kernel: raid6: int64x1 gen() 2771 MB/s
Nov 8 00:03:54.106649 kernel: raid6: using algorithm neonx8 gen() 6599 MB/s
Nov 8 00:03:54.125612 kernel: raid6: .... xor() 4850 MB/s, rmw enabled
Nov 8 00:03:54.125649 kernel: raid6: using neon recovery algorithm
Nov 8 00:03:54.133604 kernel: xor: measuring software checksum speed
Nov 8 00:03:54.135831 kernel: 8regs : 10280 MB/sec
Nov 8 00:03:54.135863 kernel: 32regs : 11913 MB/sec
Nov 8 00:03:54.137110 kernel: arm64_neon : 9568 MB/sec
Nov 8 00:03:54.137142 kernel: xor: using function: 32regs (11913 MB/sec)
Nov 8 00:03:54.221990 kernel: Btrfs loaded, zoned=no, fsverity=no
Nov 8 00:03:54.240610 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Nov 8 00:03:54.250906 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 8 00:03:54.295160 systemd-udevd[472]: Using default interface naming scheme 'v255'.
Nov 8 00:03:54.303971 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 8 00:03:54.326963 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Nov 8 00:03:54.355624 dracut-pre-trigger[484]: rd.md=0: removing MD RAID activation
Nov 8 00:03:54.413771 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Nov 8 00:03:54.424992 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Nov 8 00:03:54.542622 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 8 00:03:54.557190 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Nov 8 00:03:54.602935 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Nov 8 00:03:54.607811 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Nov 8 00:03:54.608343 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 8 00:03:54.611032 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Nov 8 00:03:54.626606 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Nov 8 00:03:54.660908 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Nov 8 00:03:54.733627 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Nov 8 00:03:54.733722 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012)
Nov 8 00:03:54.743143 kernel: ena 0000:00:05.0: ENA device version: 0.10
Nov 8 00:03:54.743530 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Nov 8 00:03:54.780106 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80114000, mac addr 06:c3:11:9c:85:6d
Nov 8 00:03:54.773775 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Nov 8 00:03:54.774036 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 8 00:03:54.777310 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 8 00:03:54.779911 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 8 00:03:54.780209 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 8 00:03:54.783109 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Nov 8 00:03:54.794206 (udev-worker)[546]: Network interface NamePolicy= disabled on kernel command line.
Nov 8 00:03:54.803321 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 8 00:03:54.853986 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35
Nov 8 00:03:54.856899 kernel: nvme nvme0: pci function 0000:00:04.0
Nov 8 00:03:54.859726 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 8 00:03:54.871015 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 8 00:03:54.878234 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Nov 8 00:03:54.887557 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Nov 8 00:03:54.887663 kernel: GPT:9289727 != 33554431
Nov 8 00:03:54.887690 kernel: GPT:Alternate GPT header not at the end of the disk.
Nov 8 00:03:54.887716 kernel: GPT:9289727 != 33554431
Nov 8 00:03:54.887755 kernel: GPT: Use GNU Parted to correct GPT errors.
Nov 8 00:03:54.888778 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Nov 8 00:03:54.917817 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 8 00:03:54.997617 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/nvme0n1p6 scanned by (udev-worker) (540)
Nov 8 00:03:55.016607 kernel: BTRFS: device fsid 55a292e1-3824-4229-a9ae-952140d2698c devid 1 transid 37 /dev/nvme0n1p3 scanned by (udev-worker) (533)
Nov 8 00:03:55.079947 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Nov 8 00:03:55.150650 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Nov 8 00:03:55.170494 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Nov 8 00:03:55.170863 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Nov 8 00:03:55.192329 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Nov 8 00:03:55.203937 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Nov 8 00:03:55.218101 disk-uuid[663]: Primary Header is updated.
Nov 8 00:03:55.218101 disk-uuid[663]: Secondary Entries is updated.
Nov 8 00:03:55.218101 disk-uuid[663]: Secondary Header is updated.
Nov 8 00:03:55.227615 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Nov 8 00:03:55.235651 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Nov 8 00:03:55.253649 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Nov 8 00:03:56.261608 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Nov 8 00:03:56.263420 disk-uuid[664]: The operation has completed successfully.
Nov 8 00:03:56.447186 systemd[1]: disk-uuid.service: Deactivated successfully.
Nov 8 00:03:56.447402 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Nov 8 00:03:56.510871 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Nov 8 00:03:56.529432 sh[1008]: Success
Nov 8 00:03:56.555809 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Nov 8 00:03:56.667092 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Nov 8 00:03:56.679849 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Nov 8 00:03:56.682140 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Nov 8 00:03:56.719473 kernel: BTRFS info (device dm-0): first mount of filesystem 55a292e1-3824-4229-a9ae-952140d2698c
Nov 8 00:03:56.719534 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Nov 8 00:03:56.719562 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Nov 8 00:03:56.722753 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Nov 8 00:03:56.722787 kernel: BTRFS info (device dm-0): using free space tree
Nov 8 00:03:56.824621 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Nov 8 00:03:56.850413 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Nov 8 00:03:56.854476 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Nov 8 00:03:56.864919 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Nov 8 00:03:56.871857 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Nov 8 00:03:56.898236 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 7afafbf9-edbd-49b5-ac90-6fc331f667e9
Nov 8 00:03:56.898308 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Nov 8 00:03:56.900550 kernel: BTRFS info (device nvme0n1p6): using free space tree
Nov 8 00:03:56.915613 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Nov 8 00:03:56.934777 systemd[1]: mnt-oem.mount: Deactivated successfully.
Nov 8 00:03:56.939198 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 7afafbf9-edbd-49b5-ac90-6fc331f667e9
Nov 8 00:03:56.949083 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Nov 8 00:03:56.961994 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Nov 8 00:03:57.072983 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Nov 8 00:03:57.088990 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Nov 8 00:03:57.152308 systemd-networkd[1201]: lo: Link UP
Nov 8 00:03:57.152800 systemd-networkd[1201]: lo: Gained carrier
Nov 8 00:03:57.155203 systemd-networkd[1201]: Enumeration completed
Nov 8 00:03:57.155964 systemd[1]: Started systemd-networkd.service - Network Configuration.
Nov 8 00:03:57.157481 systemd-networkd[1201]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 8 00:03:57.157488 systemd-networkd[1201]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Nov 8 00:03:57.170826 systemd[1]: Reached target network.target - Network.
Nov 8 00:03:57.175235 systemd-networkd[1201]: eth0: Link UP
Nov 8 00:03:57.175243 systemd-networkd[1201]: eth0: Gained carrier
Nov 8 00:03:57.175263 systemd-networkd[1201]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 8 00:03:57.197688 systemd-networkd[1201]: eth0: DHCPv4 address 172.31.26.1/20, gateway 172.31.16.1 acquired from 172.31.16.1
Nov 8 00:03:57.380077 ignition[1114]: Ignition 2.19.0
Nov 8 00:03:57.380109 ignition[1114]: Stage: fetch-offline
Nov 8 00:03:57.384921 ignition[1114]: no configs at "/usr/lib/ignition/base.d"
Nov 8 00:03:57.384971 ignition[1114]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Nov 8 00:03:57.390434 ignition[1114]: Ignition finished successfully
Nov 8 00:03:57.394914 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Nov 8 00:03:57.405965 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Nov 8 00:03:57.442053 ignition[1210]: Ignition 2.19.0
Nov 8 00:03:57.442667 ignition[1210]: Stage: fetch
Nov 8 00:03:57.444776 ignition[1210]: no configs at "/usr/lib/ignition/base.d"
Nov 8 00:03:57.444805 ignition[1210]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Nov 8 00:03:57.444995 ignition[1210]: PUT http://169.254.169.254/latest/api/token: attempt #1
Nov 8 00:03:57.463671 ignition[1210]: PUT result: OK
Nov 8 00:03:57.466842 ignition[1210]: parsed url from cmdline: ""
Nov 8 00:03:57.466865 ignition[1210]: no config URL provided
Nov 8 00:03:57.466882 ignition[1210]: reading system config file "/usr/lib/ignition/user.ign"
Nov 8 00:03:57.466909 ignition[1210]: no config at "/usr/lib/ignition/user.ign"
Nov 8 00:03:57.466945 ignition[1210]: PUT http://169.254.169.254/latest/api/token: attempt #1
Nov 8 00:03:57.469020 ignition[1210]: PUT result: OK
Nov 8 00:03:57.469218 ignition[1210]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Nov 8 00:03:57.480701 ignition[1210]: GET result: OK
Nov 8 00:03:57.480936 ignition[1210]: parsing config with SHA512: 87144edefcfa5776ff4f9a38d468659577387dd03701c680fc8fc2c822ddfdfddca4242ff37e5a3077fe3e2fc903f0fcf5bf754befd9e56ca664d7b051a006c3
Nov 8 00:03:57.491069 unknown[1210]: fetched base config from "system"
Nov 8 00:03:57.491126 unknown[1210]: fetched base config from "system"
Nov 8 00:03:57.491143 unknown[1210]: fetched user config from "aws"
Nov 8 00:03:57.495789 ignition[1210]: fetch: fetch complete
Nov 8 00:03:57.500661 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Nov 8 00:03:57.495887 ignition[1210]: fetch: fetch passed
Nov 8 00:03:57.496142 ignition[1210]: Ignition finished successfully
Nov 8 00:03:57.514025 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Nov 8 00:03:57.546977 ignition[1216]: Ignition 2.19.0
Nov 8 00:03:57.547006 ignition[1216]: Stage: kargs
Nov 8 00:03:57.547744 ignition[1216]: no configs at "/usr/lib/ignition/base.d"
Nov 8 00:03:57.547770 ignition[1216]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Nov 8 00:03:57.547925 ignition[1216]: PUT http://169.254.169.254/latest/api/token: attempt #1
Nov 8 00:03:57.555583 ignition[1216]: PUT result: OK
Nov 8 00:03:57.562954 ignition[1216]: kargs: kargs passed
Nov 8 00:03:57.563342 ignition[1216]: Ignition finished successfully
Nov 8 00:03:57.569688 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Nov 8 00:03:57.578033 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Nov 8 00:03:57.621363 ignition[1222]: Ignition 2.19.0
Nov 8 00:03:57.621393 ignition[1222]: Stage: disks
Nov 8 00:03:57.623265 ignition[1222]: no configs at "/usr/lib/ignition/base.d"
Nov 8 00:03:57.623335 ignition[1222]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Nov 8 00:03:57.623640 ignition[1222]: PUT http://169.254.169.254/latest/api/token: attempt #1
Nov 8 00:03:57.626840 ignition[1222]: PUT result: OK
Nov 8 00:03:57.634061 ignition[1222]: disks: disks passed
Nov 8 00:03:57.634170 ignition[1222]: Ignition finished successfully
Nov 8 00:03:57.639191 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Nov 8 00:03:57.640070 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Nov 8 00:03:57.644716 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Nov 8 00:03:57.644790 systemd[1]: Reached target local-fs.target - Local File Systems.
Nov 8 00:03:57.644853 systemd[1]: Reached target sysinit.target - System Initialization.
Nov 8 00:03:57.644911 systemd[1]: Reached target basic.target - Basic System.
Nov 8 00:03:57.675848 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Nov 8 00:03:57.719490 systemd-fsck[1230]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Nov 8 00:03:57.726357 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Nov 8 00:03:57.737925 systemd[1]: Mounting sysroot.mount - /sysroot...
Nov 8 00:03:57.828721 kernel: EXT4-fs (nvme0n1p9): mounted filesystem ba97f76e-2e9b-450a-8320-3c4b94a19632 r/w with ordered data mode. Quota mode: none.
Nov 8 00:03:57.830325 systemd[1]: Mounted sysroot.mount - /sysroot.
Nov 8 00:03:57.834766 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Nov 8 00:03:57.852769 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Nov 8 00:03:57.860929 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Nov 8 00:03:57.868293 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Nov 8 00:03:57.871720 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Nov 8 00:03:57.871774 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Nov 8 00:03:57.891616 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/nvme0n1p6 scanned by mount (1249)
Nov 8 00:03:57.896306 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 7afafbf9-edbd-49b5-ac90-6fc331f667e9
Nov 8 00:03:57.896386 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Nov 8 00:03:57.897982 kernel: BTRFS info (device nvme0n1p6): using free space tree
Nov 8 00:03:57.902129 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Nov 8 00:03:57.912862 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Nov 8 00:03:57.925636 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Nov 8 00:03:57.928226 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Nov 8 00:03:58.278417 initrd-setup-root[1273]: cut: /sysroot/etc/passwd: No such file or directory
Nov 8 00:03:58.299623 initrd-setup-root[1280]: cut: /sysroot/etc/group: No such file or directory
Nov 8 00:03:58.321007 initrd-setup-root[1287]: cut: /sysroot/etc/shadow: No such file or directory
Nov 8 00:03:58.331891 initrd-setup-root[1294]: cut: /sysroot/etc/gshadow: No such file or directory
Nov 8 00:03:58.625718 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Nov 8 00:03:58.636698 systemd-networkd[1201]: eth0: Gained IPv6LL
Nov 8 00:03:58.640864 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Nov 8 00:03:58.653326 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Nov 8 00:03:58.666608 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 7afafbf9-edbd-49b5-ac90-6fc331f667e9
Nov 8 00:03:58.667363 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Nov 8 00:03:58.715161 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Nov 8 00:03:58.722687 ignition[1362]: INFO : Ignition 2.19.0 Nov 8 00:03:58.722687 ignition[1362]: INFO : Stage: mount Nov 8 00:03:58.726952 ignition[1362]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 8 00:03:58.726952 ignition[1362]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Nov 8 00:03:58.726952 ignition[1362]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Nov 8 00:03:58.726952 ignition[1362]: INFO : PUT result: OK Nov 8 00:03:58.739580 ignition[1362]: INFO : mount: mount passed Nov 8 00:03:58.742059 ignition[1362]: INFO : Ignition finished successfully Nov 8 00:03:58.746467 systemd[1]: Finished ignition-mount.service - Ignition (mount). Nov 8 00:03:58.757748 systemd[1]: Starting ignition-files.service - Ignition (files)... Nov 8 00:03:58.837903 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Nov 8 00:03:58.868612 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 scanned by mount (1374) Nov 8 00:03:58.872406 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 7afafbf9-edbd-49b5-ac90-6fc331f667e9 Nov 8 00:03:58.872444 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Nov 8 00:03:58.872489 kernel: BTRFS info (device nvme0n1p6): using free space tree Nov 8 00:03:58.880625 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Nov 8 00:03:58.882763 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Nov 8 00:03:58.920443 ignition[1391]: INFO : Ignition 2.19.0 Nov 8 00:03:58.920443 ignition[1391]: INFO : Stage: files Nov 8 00:03:58.924137 ignition[1391]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 8 00:03:58.924137 ignition[1391]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Nov 8 00:03:58.924137 ignition[1391]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Nov 8 00:03:58.934593 ignition[1391]: INFO : PUT result: OK Nov 8 00:03:58.938993 ignition[1391]: DEBUG : files: compiled without relabeling support, skipping Nov 8 00:03:58.953519 ignition[1391]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Nov 8 00:03:58.956840 ignition[1391]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Nov 8 00:03:58.999766 ignition[1391]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Nov 8 00:03:59.003368 ignition[1391]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Nov 8 00:03:59.007071 ignition[1391]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Nov 8 00:03:59.005213 unknown[1391]: wrote ssh authorized keys file for user: core Nov 8 00:03:59.016971 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Nov 8 00:03:59.021544 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1 Nov 8 00:03:59.283140 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Nov 8 00:03:59.562324 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Nov 8 00:03:59.562324 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Nov 8 00:03:59.562324 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] 
writing file "/sysroot/home/core/install.sh" Nov 8 00:03:59.562324 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Nov 8 00:03:59.562324 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Nov 8 00:03:59.562324 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 8 00:03:59.562324 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 8 00:03:59.562324 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 8 00:03:59.562324 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 8 00:03:59.597464 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Nov 8 00:03:59.597464 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Nov 8 00:03:59.597464 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw" Nov 8 00:03:59.597464 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw" Nov 8 00:03:59.597464 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw" Nov 8 00:03:59.597464 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.1-arm64.raw: attempt #1 Nov 8 00:04:00.084095 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Nov 8 00:04:00.485336 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw" Nov 8 00:04:00.485336 ignition[1391]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Nov 8 00:04:00.492718 ignition[1391]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 8 00:04:00.492718 ignition[1391]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 8 00:04:00.492718 ignition[1391]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Nov 8 00:04:00.492718 ignition[1391]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Nov 8 00:04:00.492718 ignition[1391]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Nov 8 00:04:00.492718 ignition[1391]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Nov 8 00:04:00.492718 ignition[1391]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Nov 8 00:04:00.492718 ignition[1391]: INFO : files: files passed Nov 8 
00:04:00.492718 ignition[1391]: INFO : Ignition finished successfully Nov 8 00:04:00.501610 systemd[1]: Finished ignition-files.service - Ignition (files). Nov 8 00:04:00.516939 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Nov 8 00:04:00.529853 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Nov 8 00:04:00.539400 systemd[1]: ignition-quench.service: Deactivated successfully. Nov 8 00:04:00.539633 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Nov 8 00:04:00.586649 initrd-setup-root-after-ignition[1420]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 8 00:04:00.586649 initrd-setup-root-after-ignition[1420]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Nov 8 00:04:00.595096 initrd-setup-root-after-ignition[1424]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 8 00:04:00.601955 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 8 00:04:00.605614 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Nov 8 00:04:00.617860 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Nov 8 00:04:00.674889 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Nov 8 00:04:00.675298 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Nov 8 00:04:00.685038 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Nov 8 00:04:00.687329 systemd[1]: Reached target initrd.target - Initrd Default Target. Nov 8 00:04:00.689989 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Nov 8 00:04:00.700840 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Nov 8 00:04:00.730302 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 8 00:04:00.742966 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Nov 8 00:04:00.771342 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Nov 8 00:04:00.771745 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 8 00:04:00.779189 systemd[1]: Stopped target timers.target - Timer Units. Nov 8 00:04:00.781546 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Nov 8 00:04:00.781818 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 8 00:04:00.794660 systemd[1]: Stopped target initrd.target - Initrd Default Target. Nov 8 00:04:00.797063 systemd[1]: Stopped target basic.target - Basic System. Nov 8 00:04:00.799625 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Nov 8 00:04:00.808073 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Nov 8 00:04:00.810745 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Nov 8 00:04:00.813873 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Nov 8 00:04:00.816786 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Nov 8 00:04:00.822242 systemd[1]: Stopped target sysinit.target - System Initialization. Nov 8 00:04:00.833470 systemd[1]: Stopped target local-fs.target - Local File Systems. Nov 8 00:04:00.836395 systemd[1]: Stopped target swap.target - Swaps. 
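
The files stage that just finished executed a fixed sequence of operations: fetch-and-write files (the helm tarball, the kubernetes sysext image), write local files (install.sh, the YAML manifests, update.conf), create a symlink under /etc/extensions, install the prepare-helm.service unit, and set its preset to enabled. A sketch of the kind of Ignition spec-3.x config that would drive this sequence follows; the actual config consumed here is fetched from the platform and never printed in the log, so every field below is illustrative, not a reconstruction.

    import json

    # Hypothetical Ignition (spec 3.x) config mirroring the op(3)..op(d)
    # sequence above; contents/modes are placeholders.
    config = {
        "ignition": {"version": "3.4.0"},
        "storage": {
            "files": [
                {
                    "path": "/opt/helm-v3.17.3-linux-arm64.tar.gz",
                    "contents": {"source": "https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz"},
                },
                {"path": "/home/core/install.sh", "mode": 0o755},  # contents omitted
            ],
            "links": [
                {
                    "path": "/etc/extensions/kubernetes.raw",
                    "target": "/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw",
                }
            ],
        },
        "systemd": {
            "units": [
                {"name": "prepare-helm.service", "enabled": True, "contents": "[Unit]\n..."}
            ]
        },
    }

    print(json.dumps(config, indent=2))
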
Nov 8 00:04:00.841939 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Nov 8 00:04:00.842174 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Nov 8 00:04:00.845015 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Nov 8 00:04:00.847728 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 8 00:04:00.850631 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Nov 8 00:04:00.857618 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 8 00:04:00.869296 systemd[1]: dracut-initqueue.service: Deactivated successfully. Nov 8 00:04:00.869535 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Nov 8 00:04:00.872437 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Nov 8 00:04:00.872702 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 8 00:04:00.876084 systemd[1]: ignition-files.service: Deactivated successfully. Nov 8 00:04:00.876282 systemd[1]: Stopped ignition-files.service - Ignition (files). Nov 8 00:04:00.892467 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Nov 8 00:04:00.897324 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Nov 8 00:04:00.897808 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Nov 8 00:04:00.917034 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Nov 8 00:04:00.923811 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Nov 8 00:04:00.927943 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Nov 8 00:04:00.934895 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Nov 8 00:04:00.937380 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Nov 8 00:04:00.957485 systemd[1]: initrd-cleanup.service: Deactivated successfully. Nov 8 00:04:00.959867 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Nov 8 00:04:00.967907 ignition[1444]: INFO : Ignition 2.19.0 Nov 8 00:04:00.969962 ignition[1444]: INFO : Stage: umount Nov 8 00:04:00.969962 ignition[1444]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 8 00:04:00.969962 ignition[1444]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Nov 8 00:04:00.969962 ignition[1444]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Nov 8 00:04:00.983611 ignition[1444]: INFO : PUT result: OK Nov 8 00:04:00.991263 ignition[1444]: INFO : umount: umount passed Nov 8 00:04:01.000037 ignition[1444]: INFO : Ignition finished successfully Nov 8 00:04:00.993595 systemd[1]: sysroot-boot.mount: Deactivated successfully. Nov 8 00:04:00.995731 systemd[1]: ignition-mount.service: Deactivated successfully. Nov 8 00:04:00.995991 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Nov 8 00:04:01.001399 systemd[1]: ignition-disks.service: Deactivated successfully. Nov 8 00:04:01.001607 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Nov 8 00:04:01.004353 systemd[1]: ignition-kargs.service: Deactivated successfully. Nov 8 00:04:01.004493 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Nov 8 00:04:01.008829 systemd[1]: ignition-fetch.service: Deactivated successfully. Nov 8 00:04:01.008926 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Nov 8 00:04:01.014178 systemd[1]: Stopped target network.target - Network. 
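
Every Ignition stage above (mount, files, and now umount) opens with "PUT http://169.254.169.254/latest/api/token: attempt #1" followed by "PUT result: OK". That is the IMDSv2 handshake: a session token is obtained with a PUT, then presented as a header on subsequent metadata GETs. A minimal sketch of the same exchange, using the documented EC2 IMDSv2 endpoint and header names:

    import urllib.request

    IMDS = "http://169.254.169.254"

    # Step 1: PUT a session token (the "PUT .../latest/api/token" line).
    req = urllib.request.Request(
        f"{IMDS}/latest/api/token",
        method="PUT",
        headers={"X-aws-ec2-metadata-token-ttl-seconds": "21600"},
    )
    token = urllib.request.urlopen(req, timeout=2).read().decode()

    # Step 2: GET metadata with the token attached, as coreos-metadata
    # does later in this boot with the same /2021-01-03/ paths.
    req = urllib.request.Request(
        f"{IMDS}/2021-01-03/meta-data/instance-id",
        headers={"X-aws-ec2-metadata-token": token},
    )
    print(urllib.request.urlopen(req, timeout=2).read().decode())
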
Nov 8 00:04:01.017828 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Nov 8 00:04:01.018556 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Nov 8 00:04:01.022618 systemd[1]: Stopped target paths.target - Path Units. Nov 8 00:04:01.026624 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Nov 8 00:04:01.028767 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 8 00:04:01.028888 systemd[1]: Stopped target slices.target - Slice Units. Nov 8 00:04:01.033629 systemd[1]: Stopped target sockets.target - Socket Units. Nov 8 00:04:01.037686 systemd[1]: iscsid.socket: Deactivated successfully. Nov 8 00:04:01.037764 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Nov 8 00:04:01.044008 systemd[1]: iscsiuio.socket: Deactivated successfully. Nov 8 00:04:01.044078 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 8 00:04:01.046420 systemd[1]: ignition-setup.service: Deactivated successfully. Nov 8 00:04:01.046510 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Nov 8 00:04:01.051073 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Nov 8 00:04:01.051156 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Nov 8 00:04:01.052993 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Nov 8 00:04:01.062296 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Nov 8 00:04:01.064686 systemd-networkd[1201]: eth0: DHCPv6 lease lost Nov 8 00:04:01.074284 systemd[1]: systemd-networkd.service: Deactivated successfully. Nov 8 00:04:01.074529 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Nov 8 00:04:01.082991 systemd[1]: sysroot-boot.service: Deactivated successfully. Nov 8 00:04:01.083185 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Nov 8 00:04:01.092626 systemd[1]: systemd-resolved.service: Deactivated successfully. Nov 8 00:04:01.093743 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Nov 8 00:04:01.107549 systemd[1]: systemd-networkd.socket: Deactivated successfully. Nov 8 00:04:01.108724 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Nov 8 00:04:01.115960 systemd[1]: initrd-setup-root.service: Deactivated successfully. Nov 8 00:04:01.116068 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Nov 8 00:04:01.134724 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Nov 8 00:04:01.147465 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Nov 8 00:04:01.147592 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 8 00:04:01.150767 systemd[1]: systemd-sysctl.service: Deactivated successfully. Nov 8 00:04:01.150860 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Nov 8 00:04:01.153343 systemd[1]: systemd-modules-load.service: Deactivated successfully. Nov 8 00:04:01.153432 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Nov 8 00:04:01.156416 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Nov 8 00:04:01.156524 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 8 00:04:01.159520 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 8 00:04:01.196133 systemd[1]: systemd-udevd.service: Deactivated successfully. 
Nov 8 00:04:01.196646 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 8 00:04:01.206169 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Nov 8 00:04:01.206309 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Nov 8 00:04:01.210801 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Nov 8 00:04:01.210879 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Nov 8 00:04:01.211489 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Nov 8 00:04:01.211593 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Nov 8 00:04:01.216194 systemd[1]: dracut-cmdline.service: Deactivated successfully. Nov 8 00:04:01.216286 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Nov 8 00:04:01.220029 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Nov 8 00:04:01.220136 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 8 00:04:01.252974 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Nov 8 00:04:01.255687 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Nov 8 00:04:01.255804 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 8 00:04:01.262024 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 8 00:04:01.262138 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 8 00:04:01.274251 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Nov 8 00:04:01.274427 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Nov 8 00:04:01.280192 systemd[1]: network-cleanup.service: Deactivated successfully. Nov 8 00:04:01.280371 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Nov 8 00:04:01.287176 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Nov 8 00:04:01.304270 systemd[1]: Starting initrd-switch-root.service - Switch Root... Nov 8 00:04:01.318777 systemd[1]: Switching root. Nov 8 00:04:01.363903 systemd-journald[252]: Journal stopped Nov 8 00:04:03.729632 systemd-journald[252]: Received SIGTERM from PID 1 (systemd). Nov 8 00:04:03.729769 kernel: SELinux: policy capability network_peer_controls=1 Nov 8 00:04:03.729815 kernel: SELinux: policy capability open_perms=1 Nov 8 00:04:03.729847 kernel: SELinux: policy capability extended_socket_class=1 Nov 8 00:04:03.729885 kernel: SELinux: policy capability always_check_network=0 Nov 8 00:04:03.729916 kernel: SELinux: policy capability cgroup_seclabel=1 Nov 8 00:04:03.729947 kernel: SELinux: policy capability nnp_nosuid_transition=1 Nov 8 00:04:03.729978 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Nov 8 00:04:03.730009 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Nov 8 00:04:03.730041 kernel: audit: type=1403 audit(1762560241.869:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Nov 8 00:04:03.730073 systemd[1]: Successfully loaded SELinux policy in 71.585ms. Nov 8 00:04:03.730120 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 22.991ms. 
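
The "SELinux: policy capability ...=1/0" lines above enumerate optional features compiled into the policy that was just loaded (in 71.585ms). On a running SELinux system the same flags are exposed through selinuxfs; a small sketch, assuming selinuxfs is mounted at the usual /sys/fs/selinux:

    import os

    CAPS_DIR = "/sys/fs/selinux/policy_capabilities"

    # Each file holds "0" or "1", mirroring the
    # "SELinux: policy capability <name>=<N>" boot lines.
    for name in sorted(os.listdir(CAPS_DIR)):
        with open(os.path.join(CAPS_DIR, name)) as f:
            print(f"{name}={f.read().strip()}")
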
Nov 8 00:04:03.730154 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Nov 8 00:04:03.730190 systemd[1]: Detected virtualization amazon. Nov 8 00:04:03.730222 systemd[1]: Detected architecture arm64. Nov 8 00:04:03.730253 systemd[1]: Detected first boot. Nov 8 00:04:03.730286 systemd[1]: Initializing machine ID from VM UUID. Nov 8 00:04:03.730319 zram_generator::config[1486]: No configuration found. Nov 8 00:04:03.730353 systemd[1]: Populated /etc with preset unit settings. Nov 8 00:04:03.730385 systemd[1]: initrd-switch-root.service: Deactivated successfully. Nov 8 00:04:03.730414 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Nov 8 00:04:03.730450 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Nov 8 00:04:03.730486 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Nov 8 00:04:03.730520 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Nov 8 00:04:03.730553 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Nov 8 00:04:03.736360 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Nov 8 00:04:03.736408 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Nov 8 00:04:03.736460 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Nov 8 00:04:03.736502 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Nov 8 00:04:03.736534 systemd[1]: Created slice user.slice - User and Session Slice. Nov 8 00:04:03.736661 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 8 00:04:03.736700 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 8 00:04:03.736768 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Nov 8 00:04:03.736805 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Nov 8 00:04:03.736838 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Nov 8 00:04:03.736870 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Nov 8 00:04:03.736900 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Nov 8 00:04:03.736934 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 8 00:04:03.736965 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Nov 8 00:04:03.737001 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Nov 8 00:04:03.737035 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Nov 8 00:04:03.737067 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Nov 8 00:04:03.737098 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 8 00:04:03.737130 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 8 00:04:03.737161 systemd[1]: Reached target slices.target - Slice Units. Nov 8 00:04:03.737191 systemd[1]: Reached target swap.target - Swaps. 
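
The long "systemd 255 running in system mode (...)" banner above lists compile-time options: "+" marks a feature built in, "-" one compiled out (the trailing "default-hierarchy=unified" is a setting, not a toggle). A throwaway parser for that banner, with the token list copied from the line above:

    banner = ("+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS "
              "+OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD "
              "+LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 "
              "+BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT")

    enabled  = {t[1:] for t in banner.split() if t.startswith("+")}
    disabled = {t[1:] for t in banner.split() if t.startswith("-")}
    print("SECCOMP" in enabled, "APPARMOR" in disabled)   # True True
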
Nov 8 00:04:03.737224 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Nov 8 00:04:03.737258 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Nov 8 00:04:03.737288 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Nov 8 00:04:03.737318 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 8 00:04:03.737348 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Nov 8 00:04:03.737388 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Nov 8 00:04:03.737418 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Nov 8 00:04:03.737448 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Nov 8 00:04:03.737478 systemd[1]: Mounting media.mount - External Media Directory... Nov 8 00:04:03.737510 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Nov 8 00:04:03.737544 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Nov 8 00:04:03.737593 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Nov 8 00:04:03.737628 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Nov 8 00:04:03.737660 systemd[1]: Reached target machines.target - Containers. Nov 8 00:04:03.737691 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Nov 8 00:04:03.737721 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 8 00:04:03.737755 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Nov 8 00:04:03.737787 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Nov 8 00:04:03.737824 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 8 00:04:03.737854 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 8 00:04:03.737886 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 8 00:04:03.737929 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Nov 8 00:04:03.737960 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 8 00:04:03.737992 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Nov 8 00:04:03.738025 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Nov 8 00:04:03.738055 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Nov 8 00:04:03.738090 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Nov 8 00:04:03.738119 systemd[1]: Stopped systemd-fsck-usr.service. Nov 8 00:04:03.738149 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 8 00:04:03.738179 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Nov 8 00:04:03.738208 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Nov 8 00:04:03.738241 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Nov 8 00:04:03.738270 kernel: fuse: init (API version 7.39) Nov 8 00:04:03.738301 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... 
Nov 8 00:04:03.738342 systemd[1]: verity-setup.service: Deactivated successfully. Nov 8 00:04:03.738377 systemd[1]: Stopped verity-setup.service. Nov 8 00:04:03.738411 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Nov 8 00:04:03.738445 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Nov 8 00:04:03.738474 systemd[1]: Mounted media.mount - External Media Directory. Nov 8 00:04:03.738504 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Nov 8 00:04:03.738532 kernel: loop: module loaded Nov 8 00:04:03.738561 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Nov 8 00:04:03.746684 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Nov 8 00:04:03.746721 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Nov 8 00:04:03.746760 systemd[1]: modprobe@configfs.service: Deactivated successfully. Nov 8 00:04:03.746836 systemd-journald[1571]: Collecting audit messages is disabled. Nov 8 00:04:03.746890 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Nov 8 00:04:03.746923 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 8 00:04:03.746959 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 8 00:04:03.746993 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 8 00:04:03.747026 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 8 00:04:03.747056 systemd[1]: modprobe@fuse.service: Deactivated successfully. Nov 8 00:04:03.747085 systemd-journald[1571]: Journal started Nov 8 00:04:03.747136 systemd-journald[1571]: Runtime Journal (/run/log/journal/ec2e4fdbb5130f65b892fd04b34f7aa5) is 8.0M, max 75.3M, 67.3M free. Nov 8 00:04:03.080628 systemd[1]: Queued start job for default target multi-user.target. Nov 8 00:04:03.169843 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Nov 8 00:04:03.170652 systemd[1]: systemd-journald.service: Deactivated successfully. Nov 8 00:04:03.751517 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Nov 8 00:04:03.759925 systemd[1]: Started systemd-journald.service - Journal Service. Nov 8 00:04:03.761973 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 8 00:04:03.762882 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 8 00:04:03.769793 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 8 00:04:03.775667 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Nov 8 00:04:03.779139 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Nov 8 00:04:03.782348 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Nov 8 00:04:03.791637 kernel: ACPI: bus type drm_connector registered Nov 8 00:04:03.796327 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 8 00:04:03.796746 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 8 00:04:03.817062 systemd[1]: Reached target network-pre.target - Preparation for Network. Nov 8 00:04:03.830552 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Nov 8 00:04:03.837784 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Nov 8 00:04:03.840368 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). 
Nov 8 00:04:03.840438 systemd[1]: Reached target local-fs.target - Local File Systems. Nov 8 00:04:03.846855 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Nov 8 00:04:03.859514 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Nov 8 00:04:03.867921 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Nov 8 00:04:03.871073 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 8 00:04:03.880000 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Nov 8 00:04:03.886905 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Nov 8 00:04:03.889748 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 8 00:04:03.895069 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Nov 8 00:04:03.898188 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 8 00:04:03.903879 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 8 00:04:03.920046 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Nov 8 00:04:03.934019 systemd[1]: Starting systemd-sysusers.service - Create System Users... Nov 8 00:04:03.942510 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Nov 8 00:04:03.945443 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Nov 8 00:04:03.950242 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Nov 8 00:04:04.023721 systemd-journald[1571]: Time spent on flushing to /var/log/journal/ec2e4fdbb5130f65b892fd04b34f7aa5 is 174.135ms for 904 entries. Nov 8 00:04:04.023721 systemd-journald[1571]: System Journal (/var/log/journal/ec2e4fdbb5130f65b892fd04b34f7aa5) is 8.0M, max 195.6M, 187.6M free. Nov 8 00:04:04.223295 kernel: loop0: detected capacity change from 0 to 52536 Nov 8 00:04:04.223360 systemd-journald[1571]: Received client request to flush runtime journal. Nov 8 00:04:04.223424 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Nov 8 00:04:04.025940 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Nov 8 00:04:04.029031 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Nov 8 00:04:04.039583 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Nov 8 00:04:04.126629 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 8 00:04:04.143928 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Nov 8 00:04:04.149151 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Nov 8 00:04:04.153051 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 8 00:04:04.161068 systemd[1]: Finished systemd-sysusers.service - Create System Users. Nov 8 00:04:04.185041 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 8 00:04:04.197011 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Nov 8 00:04:04.233185 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. 
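
The flush report above (174.135ms for 904 entries moved from the runtime journal to /var/log/journal) works out to roughly 0.19 ms per entry:

    ms, entries = 174.135, 904
    print(f"{ms / entries:.3f} ms per entry")   # ~0.193 ms
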
Nov 8 00:04:04.259365 udevadm[1632]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Nov 8 00:04:04.262597 kernel: loop1: detected capacity change from 0 to 114432 Nov 8 00:04:04.299103 systemd-tmpfiles[1631]: ACLs are not supported, ignoring. Nov 8 00:04:04.299142 systemd-tmpfiles[1631]: ACLs are not supported, ignoring. Nov 8 00:04:04.308090 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 8 00:04:04.382624 kernel: loop2: detected capacity change from 0 to 200800 Nov 8 00:04:04.549706 kernel: loop3: detected capacity change from 0 to 114328 Nov 8 00:04:04.663692 kernel: loop4: detected capacity change from 0 to 52536 Nov 8 00:04:04.683609 kernel: loop5: detected capacity change from 0 to 114432 Nov 8 00:04:04.701608 kernel: loop6: detected capacity change from 0 to 200800 Nov 8 00:04:04.731816 kernel: loop7: detected capacity change from 0 to 114328 Nov 8 00:04:04.742834 (sd-merge)[1641]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'. Nov 8 00:04:04.744438 (sd-merge)[1641]: Merged extensions into '/usr'. Nov 8 00:04:04.753884 systemd[1]: Reloading requested from client PID 1615 ('systemd-sysext') (unit systemd-sysext.service)... Nov 8 00:04:04.753908 systemd[1]: Reloading... Nov 8 00:04:05.000618 zram_generator::config[1667]: No configuration found. Nov 8 00:04:05.304809 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 8 00:04:05.417446 systemd[1]: Reloading finished in 661 ms. Nov 8 00:04:05.452874 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Nov 8 00:04:05.461293 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Nov 8 00:04:05.476944 systemd[1]: Starting ensure-sysext.service... Nov 8 00:04:05.486965 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 8 00:04:05.496936 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 8 00:04:05.533777 systemd[1]: Reloading requested from client PID 1719 ('systemctl') (unit ensure-sysext.service)... Nov 8 00:04:05.533815 systemd[1]: Reloading... Nov 8 00:04:05.548052 systemd-tmpfiles[1720]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Nov 8 00:04:05.548807 systemd-tmpfiles[1720]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Nov 8 00:04:05.557699 systemd-tmpfiles[1720]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Nov 8 00:04:05.562719 systemd-tmpfiles[1720]: ACLs are not supported, ignoring. Nov 8 00:04:05.562891 systemd-tmpfiles[1720]: ACLs are not supported, ignoring. Nov 8 00:04:05.579069 ldconfig[1610]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Nov 8 00:04:05.579497 systemd-tmpfiles[1720]: Detected autofs mount point /boot during canonicalization of boot. Nov 8 00:04:05.579510 systemd-tmpfiles[1720]: Skipping /boot Nov 8 00:04:05.617813 systemd-tmpfiles[1720]: Detected autofs mount point /boot during canonicalization of boot. 
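
The "(sd-merge)" lines above show systemd-sysext stacking the four extension images (each attached as one of the loopN devices detected just before) into an overlay on /usr, after which systemd reloads its unit set. An extension is only merged when the extension-release file inside the image matches the host's os-release; a rough sketch of that ID check, assuming the documented "ID must match, or be _any" rule (real systemd-sysext also checks VERSION_ID/SYSEXT_LEVEL):

    def parse_release(text: str) -> dict:
        # Minimal os-release/extension-release parser: KEY=value, quotes stripped.
        out = {}
        for line in text.splitlines():
            if "=" in line and not line.startswith("#"):
                key, val = line.split("=", 1)
                out[key] = val.strip().strip('"')
        return out

    def sysext_matches(host: dict, ext: dict) -> bool:
        # Sketch of the matching rule; illustrative, not systemd's full logic.
        return ext.get("ID") in ("_any", host.get("ID"))

    host = parse_release('ID=flatcar\nVERSION_ID=4459.0.0\n')   # values illustrative
    ext  = parse_release('ID=flatcar\nSYSEXT_ID=kubernetes\n')
    print(sysext_matches(host, ext))   # True
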
Nov 8 00:04:05.617843 systemd-tmpfiles[1720]: Skipping /boot Nov 8 00:04:05.650956 systemd-udevd[1721]: Using default interface naming scheme 'v255'. Nov 8 00:04:05.741613 zram_generator::config[1751]: No configuration found. Nov 8 00:04:05.901412 (udev-worker)[1763]: Network interface NamePolicy= disabled on kernel command line. Nov 8 00:04:06.140490 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 8 00:04:06.159613 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 37 scanned by (udev-worker) (1796) Nov 8 00:04:06.318716 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Nov 8 00:04:06.321145 systemd[1]: Reloading finished in 786 ms. Nov 8 00:04:06.365534 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 8 00:04:06.376410 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Nov 8 00:04:06.393679 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 8 00:04:06.434439 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Nov 8 00:04:06.460618 systemd[1]: Finished ensure-sysext.service. Nov 8 00:04:06.482507 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Nov 8 00:04:06.508679 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Nov 8 00:04:06.515889 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Nov 8 00:04:06.521451 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 8 00:04:06.531990 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Nov 8 00:04:06.547863 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 8 00:04:06.565946 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 8 00:04:06.577537 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 8 00:04:06.586065 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 8 00:04:06.596112 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 8 00:04:06.604624 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Nov 8 00:04:06.618681 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Nov 8 00:04:06.627109 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 8 00:04:06.641644 lvm[1921]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Nov 8 00:04:06.640887 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Nov 8 00:04:06.647839 systemd[1]: Reached target time-set.target - System Time Set. Nov 8 00:04:06.662102 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Nov 8 00:04:06.670168 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 8 00:04:06.678047 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 8 00:04:06.679063 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. 
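
Unit names like dev-disk-by\x2dlabel-OEM.device (found a few entries below) come from systemd's path escaping: the leading "/" is dropped, remaining "/" become "-", and characters such as "-" itself are hex-escaped as \x2d. A simplified re-implementation of that mapping (ASCII-only sketch; the real systemd-escape handles more corner cases):

    def systemd_escape_path(path: str) -> str:
        # Strip the leading "/", turn "/" into "-", hex-escape anything
        # outside [a-zA-Z0-9:_.] as \xNN.
        out = []
        for ch in path.strip("/"):
            if ch == "/":
                out.append("-")
            elif ch.isalnum() or ch in ":_.":
                out.append(ch)
            else:
                out.append(f"\\x{ord(ch):02x}")
        return "".join(out)

    print(systemd_escape_path("/dev/disk/by-label/OEM") + ".device")
    # dev-disk-by\x2dlabel-OEM.device
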
Nov 8 00:04:06.683353 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 8 00:04:06.685729 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 8 00:04:06.694241 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 8 00:04:06.712587 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Nov 8 00:04:06.751675 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Nov 8 00:04:06.752230 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Nov 8 00:04:06.763242 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 8 00:04:06.763591 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 8 00:04:06.768926 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 8 00:04:06.770809 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 8 00:04:06.777368 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Nov 8 00:04:06.798632 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Nov 8 00:04:06.807698 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Nov 8 00:04:06.819096 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 8 00:04:06.830765 augenrules[1954]: No rules Nov 8 00:04:06.830899 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Nov 8 00:04:06.836716 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 8 00:04:06.846841 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Nov 8 00:04:06.849731 lvm[1956]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Nov 8 00:04:06.860136 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Nov 8 00:04:06.878041 systemd[1]: Starting systemd-update-done.service - Update is Completed... Nov 8 00:04:06.912177 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Nov 8 00:04:06.915758 systemd[1]: Finished systemd-update-done.service - Update is Completed. Nov 8 00:04:06.919038 systemd[1]: Started systemd-userdbd.service - User Database Manager. Nov 8 00:04:07.044556 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 8 00:04:07.055528 systemd-networkd[1937]: lo: Link UP Nov 8 00:04:07.056128 systemd-networkd[1937]: lo: Gained carrier Nov 8 00:04:07.058959 systemd-networkd[1937]: Enumeration completed Nov 8 00:04:07.059361 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 8 00:04:07.062546 systemd-networkd[1937]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 8 00:04:07.062736 systemd-networkd[1937]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Nov 8 00:04:07.065239 systemd-networkd[1937]: eth0: Link UP Nov 8 00:04:07.065829 systemd-networkd[1937]: eth0: Gained carrier Nov 8 00:04:07.066000 systemd-networkd[1937]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 8 00:04:07.076923 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Nov 8 00:04:07.081725 systemd-networkd[1937]: eth0: DHCPv4 address 172.31.26.1/20, gateway 172.31.16.1 acquired from 172.31.16.1 Nov 8 00:04:07.086672 systemd-resolved[1938]: Positive Trust Anchors: Nov 8 00:04:07.086713 systemd-resolved[1938]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 8 00:04:07.086783 systemd-resolved[1938]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 8 00:04:07.102557 systemd-resolved[1938]: Defaulting to hostname 'linux'. Nov 8 00:04:07.105952 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 8 00:04:07.108662 systemd[1]: Reached target network.target - Network. Nov 8 00:04:07.110710 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Nov 8 00:04:07.113322 systemd[1]: Reached target sysinit.target - System Initialization. Nov 8 00:04:07.115949 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Nov 8 00:04:07.118776 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Nov 8 00:04:07.122481 systemd[1]: Started logrotate.timer - Daily rotation of log files. Nov 8 00:04:07.125239 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Nov 8 00:04:07.128095 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Nov 8 00:04:07.131017 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Nov 8 00:04:07.131189 systemd[1]: Reached target paths.target - Path Units. Nov 8 00:04:07.133247 systemd[1]: Reached target timers.target - Timer Units. Nov 8 00:04:07.136895 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Nov 8 00:04:07.142305 systemd[1]: Starting docker.socket - Docker Socket for the API... Nov 8 00:04:07.152066 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Nov 8 00:04:07.155525 systemd[1]: Listening on docker.socket - Docker Socket for the API. Nov 8 00:04:07.158198 systemd[1]: Reached target sockets.target - Socket Units. Nov 8 00:04:07.160458 systemd[1]: Reached target basic.target - Basic System. Nov 8 00:04:07.162737 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Nov 8 00:04:07.162801 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Nov 8 00:04:07.170759 systemd[1]: Starting containerd.service - containerd container runtime... 
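
The DHCPv4 lease above puts eth0 at 172.31.26.1/20 with gateway 172.31.16.1; both addresses sit in the same /20 (172.31.16.0/20, 4096 addresses), which a quick stdlib check confirms:

    import ipaddress

    iface = ipaddress.ip_interface("172.31.26.1/20")
    gw = ipaddress.ip_address("172.31.16.1")

    print(iface.network)                 # 172.31.16.0/20
    print(gw in iface.network)           # True
    print(iface.network.num_addresses)   # 4096
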
Nov 8 00:04:07.177676 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Nov 8 00:04:07.185915 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Nov 8 00:04:07.192840 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Nov 8 00:04:07.198471 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Nov 8 00:04:07.201774 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Nov 8 00:04:07.205877 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Nov 8 00:04:07.221116 systemd[1]: Started ntpd.service - Network Time Service. Nov 8 00:04:07.232454 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Nov 8 00:04:07.255942 systemd[1]: Starting setup-oem.service - Setup OEM... Nov 8 00:04:07.263475 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Nov 8 00:04:07.270207 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Nov 8 00:04:07.282635 systemd[1]: Starting systemd-logind.service - User Login Management... Nov 8 00:04:07.287285 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Nov 8 00:04:07.290044 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Nov 8 00:04:07.292135 systemd[1]: Starting update-engine.service - Update Engine... Nov 8 00:04:07.299955 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Nov 8 00:04:07.322123 jq[1984]: false Nov 8 00:04:07.326625 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Nov 8 00:04:07.327641 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Nov 8 00:04:07.383009 extend-filesystems[1985]: Found loop4 Nov 8 00:04:07.415497 extend-filesystems[1985]: Found loop5 Nov 8 00:04:07.415497 extend-filesystems[1985]: Found loop6 Nov 8 00:04:07.415497 extend-filesystems[1985]: Found loop7 Nov 8 00:04:07.415497 extend-filesystems[1985]: Found nvme0n1 Nov 8 00:04:07.415497 extend-filesystems[1985]: Found nvme0n1p1 Nov 8 00:04:07.415497 extend-filesystems[1985]: Found nvme0n1p2 Nov 8 00:04:07.415497 extend-filesystems[1985]: Found nvme0n1p3 Nov 8 00:04:07.415497 extend-filesystems[1985]: Found usr Nov 8 00:04:07.415497 extend-filesystems[1985]: Found nvme0n1p4 Nov 8 00:04:07.415497 extend-filesystems[1985]: Found nvme0n1p6 Nov 8 00:04:07.415497 extend-filesystems[1985]: Found nvme0n1p7 Nov 8 00:04:07.415497 extend-filesystems[1985]: Found nvme0n1p9 Nov 8 00:04:07.415497 extend-filesystems[1985]: Checking size of /dev/nvme0n1p9 Nov 8 00:04:07.401413 systemd[1]: motdgen.service: Deactivated successfully. 
Nov 8 00:04:07.469751 ntpd[1987]: ntpd 4.2.8p17@1.4004-o Fri Nov 7 22:04:46 UTC 2025 (1): Starting Nov 8 00:04:07.496875 ntpd[1987]: 8 Nov 00:04:07 ntpd[1987]: ntpd 4.2.8p17@1.4004-o Fri Nov 7 22:04:46 UTC 2025 (1): Starting Nov 8 00:04:07.496875 ntpd[1987]: 8 Nov 00:04:07 ntpd[1987]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Nov 8 00:04:07.496875 ntpd[1987]: 8 Nov 00:04:07 ntpd[1987]: ---------------------------------------------------- Nov 8 00:04:07.496875 ntpd[1987]: 8 Nov 00:04:07 ntpd[1987]: ntp-4 is maintained by Network Time Foundation, Nov 8 00:04:07.496875 ntpd[1987]: 8 Nov 00:04:07 ntpd[1987]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Nov 8 00:04:07.496875 ntpd[1987]: 8 Nov 00:04:07 ntpd[1987]: corporation. Support and training for ntp-4 are Nov 8 00:04:07.496875 ntpd[1987]: 8 Nov 00:04:07 ntpd[1987]: available at https://www.nwtime.org/support Nov 8 00:04:07.496875 ntpd[1987]: 8 Nov 00:04:07 ntpd[1987]: ---------------------------------------------------- Nov 8 00:04:07.496875 ntpd[1987]: 8 Nov 00:04:07 ntpd[1987]: proto: precision = 0.096 usec (-23) Nov 8 00:04:07.496875 ntpd[1987]: 8 Nov 00:04:07 ntpd[1987]: basedate set to 2025-10-26 Nov 8 00:04:07.496875 ntpd[1987]: 8 Nov 00:04:07 ntpd[1987]: gps base set to 2025-10-26 (week 2390) Nov 8 00:04:07.496875 ntpd[1987]: 8 Nov 00:04:07 ntpd[1987]: Listen and drop on 0 v6wildcard [::]:123 Nov 8 00:04:07.496875 ntpd[1987]: 8 Nov 00:04:07 ntpd[1987]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Nov 8 00:04:07.496875 ntpd[1987]: 8 Nov 00:04:07 ntpd[1987]: Listen normally on 2 lo 127.0.0.1:123 Nov 8 00:04:07.496875 ntpd[1987]: 8 Nov 00:04:07 ntpd[1987]: Listen normally on 3 eth0 172.31.26.1:123 Nov 8 00:04:07.496875 ntpd[1987]: 8 Nov 00:04:07 ntpd[1987]: Listen normally on 4 lo [::1]:123 Nov 8 00:04:07.496875 ntpd[1987]: 8 Nov 00:04:07 ntpd[1987]: bind(21) AF_INET6 fe80::4c3:11ff:fe9c:856d%2#123 flags 0x11 failed: Cannot assign requested address Nov 8 00:04:07.496875 ntpd[1987]: 8 Nov 00:04:07 ntpd[1987]: unable to create socket on eth0 (5) for fe80::4c3:11ff:fe9c:856d%2#123 Nov 8 00:04:07.496875 ntpd[1987]: 8 Nov 00:04:07 ntpd[1987]: failed to init interface for address fe80::4c3:11ff:fe9c:856d%2 Nov 8 00:04:07.496875 ntpd[1987]: 8 Nov 00:04:07 ntpd[1987]: Listening on routing socket on fd #21 for interface updates Nov 8 00:04:07.500110 update_engine[1993]: I20251108 00:04:07.416410 1993 main.cc:92] Flatcar Update Engine starting Nov 8 00:04:07.401786 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Nov 8 00:04:07.469800 ntpd[1987]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Nov 8 00:04:07.549923 ntpd[1987]: 8 Nov 00:04:07 ntpd[1987]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Nov 8 00:04:07.549923 ntpd[1987]: 8 Nov 00:04:07 ntpd[1987]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Nov 8 00:04:07.427388 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Nov 8 00:04:07.550107 jq[1996]: true Nov 8 00:04:07.469820 ntpd[1987]: ---------------------------------------------------- Nov 8 00:04:07.553888 extend-filesystems[1985]: Resized partition /dev/nvme0n1p9 Nov 8 00:04:07.563255 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 3587067 blocks Nov 8 00:04:07.427766 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
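
ntpd's "proto: precision = 0.096 usec (-23)" line above reports the measured clock-read precision alongside its base-2 exponent: ntpd stores precision as a power of two, and log2(0.096 µs) is about -23.3, which it reports as -23:

    import math

    precision_s = 0.096e-6                  # 0.096 usec, from the line above
    print(math.log2(precision_s))           # ~ -23.3
    print(round(math.log2(precision_s)))    # -23, the exponent ntpd prints
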
Nov 8 00:04:07.469840 ntpd[1987]: ntp-4 is maintained by Network Time Foundation, Nov 8 00:04:07.563794 extend-filesystems[2028]: resize2fs 1.47.1 (20-May-2024) Nov 8 00:04:07.522965 (ntainerd)[2017]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Nov 8 00:04:07.469859 ntpd[1987]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Nov 8 00:04:07.547311 systemd[1]: Started dbus.service - D-Bus System Message Bus. Nov 8 00:04:07.469880 ntpd[1987]: corporation. Support and training for ntp-4 are Nov 8 00:04:07.561321 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Nov 8 00:04:07.578484 tar[2013]: linux-arm64/LICENSE Nov 8 00:04:07.578484 tar[2013]: linux-arm64/helm Nov 8 00:04:07.471354 ntpd[1987]: available at https://www.nwtime.org/support Nov 8 00:04:07.561382 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Nov 8 00:04:07.599742 jq[2020]: true Nov 8 00:04:07.471374 ntpd[1987]: ---------------------------------------------------- Nov 8 00:04:07.581683 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Nov 8 00:04:07.479649 ntpd[1987]: proto: precision = 0.096 usec (-23) Nov 8 00:04:07.581728 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Nov 8 00:04:07.481369 ntpd[1987]: basedate set to 2025-10-26 Nov 8 00:04:07.481415 ntpd[1987]: gps base set to 2025-10-26 (week 2390) Nov 8 00:04:07.613747 systemd[1]: Started update-engine.service - Update Engine. Nov 8 00:04:07.617882 update_engine[1993]: I20251108 00:04:07.612761 1993 update_check_scheduler.cc:74] Next update check in 2m54s Nov 8 00:04:07.488164 ntpd[1987]: Listen and drop on 0 v6wildcard [::]:123 Nov 8 00:04:07.488245 ntpd[1987]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Nov 8 00:04:07.488528 ntpd[1987]: Listen normally on 2 lo 127.0.0.1:123 Nov 8 00:04:07.489505 ntpd[1987]: Listen normally on 3 eth0 172.31.26.1:123 Nov 8 00:04:07.489616 ntpd[1987]: Listen normally on 4 lo [::1]:123 Nov 8 00:04:07.489732 ntpd[1987]: bind(21) AF_INET6 fe80::4c3:11ff:fe9c:856d%2#123 flags 0x11 failed: Cannot assign requested address Nov 8 00:04:07.489779 ntpd[1987]: unable to create socket on eth0 (5) for fe80::4c3:11ff:fe9c:856d%2#123 Nov 8 00:04:07.489809 ntpd[1987]: failed to init interface for address fe80::4c3:11ff:fe9c:856d%2 Nov 8 00:04:07.489866 ntpd[1987]: Listening on routing socket on fd #21 for interface updates Nov 8 00:04:07.508359 ntpd[1987]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Nov 8 00:04:07.508410 ntpd[1987]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Nov 8 00:04:07.547017 dbus-daemon[1983]: [system] SELinux support is enabled Nov 8 00:04:07.591082 dbus-daemon[1983]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1937 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Nov 8 00:04:07.626305 dbus-daemon[1983]: [system] Successfully activated service 'org.freedesktop.systemd1' Nov 8 00:04:07.621832 systemd[1]: Started locksmithd.service - Cluster reboot manager. 
Nov 8 00:04:07.625518 systemd[1]: Finished setup-oem.service - Setup OEM. Nov 8 00:04:07.655304 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Nov 8 00:04:07.722196 coreos-metadata[1982]: Nov 08 00:04:07.722 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Nov 8 00:04:07.732714 coreos-metadata[1982]: Nov 08 00:04:07.732 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Nov 8 00:04:07.734769 coreos-metadata[1982]: Nov 08 00:04:07.734 INFO Fetch successful Nov 8 00:04:07.734769 coreos-metadata[1982]: Nov 08 00:04:07.734 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Nov 8 00:04:07.739007 coreos-metadata[1982]: Nov 08 00:04:07.738 INFO Fetch successful Nov 8 00:04:07.739007 coreos-metadata[1982]: Nov 08 00:04:07.738 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Nov 8 00:04:07.746845 coreos-metadata[1982]: Nov 08 00:04:07.746 INFO Fetch successful Nov 8 00:04:07.746845 coreos-metadata[1982]: Nov 08 00:04:07.746 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Nov 8 00:04:07.748667 coreos-metadata[1982]: Nov 08 00:04:07.748 INFO Fetch successful Nov 8 00:04:07.748935 coreos-metadata[1982]: Nov 08 00:04:07.748 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Nov 8 00:04:07.750667 coreos-metadata[1982]: Nov 08 00:04:07.749 INFO Fetch failed with 404: resource not found Nov 8 00:04:07.750667 coreos-metadata[1982]: Nov 08 00:04:07.750 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Nov 8 00:04:07.751779 coreos-metadata[1982]: Nov 08 00:04:07.751 INFO Fetch successful Nov 8 00:04:07.752041 coreos-metadata[1982]: Nov 08 00:04:07.751 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Nov 8 00:04:07.760032 coreos-metadata[1982]: Nov 08 00:04:07.759 INFO Fetch successful Nov 8 00:04:07.760032 coreos-metadata[1982]: Nov 08 00:04:07.759 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Nov 8 00:04:07.761881 coreos-metadata[1982]: Nov 08 00:04:07.761 INFO Fetch successful Nov 8 00:04:07.762160 coreos-metadata[1982]: Nov 08 00:04:07.761 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Nov 8 00:04:07.763623 coreos-metadata[1982]: Nov 08 00:04:07.763 INFO Fetch successful Nov 8 00:04:07.765637 coreos-metadata[1982]: Nov 08 00:04:07.763 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Nov 8 00:04:07.772831 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 3587067 Nov 8 00:04:07.774795 coreos-metadata[1982]: Nov 08 00:04:07.774 INFO Fetch successful Nov 8 00:04:07.817458 extend-filesystems[2028]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Nov 8 00:04:07.817458 extend-filesystems[2028]: old_desc_blocks = 1, new_desc_blocks = 2 Nov 8 00:04:07.817458 extend-filesystems[2028]: The filesystem on /dev/nvme0n1p9 is now 3587067 (4k) blocks long. Nov 8 00:04:07.825697 extend-filesystems[1985]: Resized filesystem in /dev/nvme0n1p9 Nov 8 00:04:07.831215 systemd[1]: extend-filesystems.service: Deactivated successfully. Nov 8 00:04:07.831619 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. 
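
The online resize above grows the ext4 root (nvme0n1p9) from 553472 to 3587067 blocks of 4 KiB, i.e. from roughly 2.1 GiB to 13.7 GiB, which is why resize2fs also has to add a second block-group descriptor block:

    BLOCK = 4096
    before, after = 553472, 3587067
    gib = lambda blocks: blocks * BLOCK / 2**30
    print(f"{gib(before):.1f} GiB -> {gib(after):.1f} GiB")   # 2.1 GiB -> 13.7 GiB
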
Nov 8 00:04:07.893036 bash[2062]: Updated "/home/core/.ssh/authorized_keys" Nov 8 00:04:07.904496 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Nov 8 00:04:07.921033 systemd-logind[1992]: Watching system buttons on /dev/input/event0 (Power Button) Nov 8 00:04:07.927711 systemd-logind[1992]: Watching system buttons on /dev/input/event1 (Sleep Button) Nov 8 00:04:07.928067 systemd-logind[1992]: New seat seat0. Nov 8 00:04:07.931078 systemd[1]: Starting sshkeys.service... Nov 8 00:04:07.969516 systemd[1]: Started systemd-logind.service - User Login Management. Nov 8 00:04:07.989936 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Nov 8 00:04:07.993488 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Nov 8 00:04:08.010960 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Nov 8 00:04:08.046442 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Nov 8 00:04:08.143615 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 37 scanned by (udev-worker) (1756) Nov 8 00:04:08.213319 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Nov 8 00:04:08.305835 dbus-daemon[1983]: [system] Successfully activated service 'org.freedesktop.hostname1' Nov 8 00:04:08.306082 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Nov 8 00:04:08.312328 dbus-daemon[1983]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=2038 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Nov 8 00:04:08.336777 systemd[1]: Starting polkit.service - Authorization Manager... Nov 8 00:04:08.432377 systemd-networkd[1937]: eth0: Gained IPv6LL Nov 8 00:04:08.441405 polkitd[2135]: Started polkitd version 121 Nov 8 00:04:08.447688 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Nov 8 00:04:08.454102 systemd[1]: Reached target network-online.target - Network is Online. Nov 8 00:04:08.485913 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Nov 8 00:04:08.494993 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 8 00:04:08.503702 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... 
Nov 8 00:04:08.519853 polkitd[2135]: Loading rules from directory /etc/polkit-1/rules.d Nov 8 00:04:08.519991 polkitd[2135]: Loading rules from directory /usr/share/polkit-1/rules.d Nov 8 00:04:08.528718 coreos-metadata[2086]: Nov 08 00:04:08.528 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Nov 8 00:04:08.528718 coreos-metadata[2086]: Nov 08 00:04:08.528 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Nov 8 00:04:08.529468 polkitd[2135]: Finished loading, compiling and executing 2 rules Nov 8 00:04:08.536724 coreos-metadata[2086]: Nov 08 00:04:08.534 INFO Fetch successful Nov 8 00:04:08.536724 coreos-metadata[2086]: Nov 08 00:04:08.534 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Nov 8 00:04:08.536955 coreos-metadata[2086]: Nov 08 00:04:08.536 INFO Fetch successful Nov 8 00:04:08.545119 dbus-daemon[1983]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Nov 8 00:04:08.546091 unknown[2086]: wrote ssh authorized keys file for user: core Nov 8 00:04:08.549333 systemd[1]: Started polkit.service - Authorization Manager. Nov 8 00:04:08.556873 polkitd[2135]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Nov 8 00:04:08.599939 locksmithd[2036]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Nov 8 00:04:08.687781 systemd-hostnamed[2038]: Hostname set to (transient) Nov 8 00:04:08.691726 systemd-resolved[1938]: System hostname changed to 'ip-172-31-26-1'. Nov 8 00:04:08.708687 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Nov 8 00:04:08.719919 update-ssh-keys[2183]: Updated "/home/core/.ssh/authorized_keys" Nov 8 00:04:08.722864 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Nov 8 00:04:08.737657 systemd[1]: Finished sshkeys.service. Nov 8 00:04:08.742186 containerd[2017]: time="2025-11-08T00:04:08.737586360Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Nov 8 00:04:08.809062 amazon-ssm-agent[2165]: Initializing new seelog logger Nov 8 00:04:08.812154 amazon-ssm-agent[2165]: New Seelog Logger Creation Complete Nov 8 00:04:08.812154 amazon-ssm-agent[2165]: 2025/11/08 00:04:08 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Nov 8 00:04:08.812154 amazon-ssm-agent[2165]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Nov 8 00:04:08.812154 amazon-ssm-agent[2165]: 2025/11/08 00:04:08 processing appconfig overrides Nov 8 00:04:08.813870 amazon-ssm-agent[2165]: 2025/11/08 00:04:08 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Nov 8 00:04:08.813990 amazon-ssm-agent[2165]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Nov 8 00:04:08.814202 amazon-ssm-agent[2165]: 2025/11/08 00:04:08 processing appconfig overrides Nov 8 00:04:08.816170 amazon-ssm-agent[2165]: 2025/11/08 00:04:08 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Nov 8 00:04:08.816170 amazon-ssm-agent[2165]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Nov 8 00:04:08.816170 amazon-ssm-agent[2165]: 2025/11/08 00:04:08 processing appconfig overrides Nov 8 00:04:08.817716 amazon-ssm-agent[2165]: 2025-11-08 00:04:08 INFO Proxy environment variables: Nov 8 00:04:08.821808 amazon-ssm-agent[2165]: 2025/11/08 00:04:08 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. 
Nov 8 00:04:08.822491 amazon-ssm-agent[2165]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Nov 8 00:04:08.822862 amazon-ssm-agent[2165]: 2025/11/08 00:04:08 processing appconfig overrides Nov 8 00:04:08.907279 containerd[2017]: time="2025-11-08T00:04:08.907192465Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Nov 8 00:04:08.913647 containerd[2017]: time="2025-11-08T00:04:08.913511353Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.113-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Nov 8 00:04:08.913647 containerd[2017]: time="2025-11-08T00:04:08.913600309Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Nov 8 00:04:08.913829 containerd[2017]: time="2025-11-08T00:04:08.913665373Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Nov 8 00:04:08.914551 containerd[2017]: time="2025-11-08T00:04:08.913970149Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Nov 8 00:04:08.914551 containerd[2017]: time="2025-11-08T00:04:08.914017885Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Nov 8 00:04:08.914551 containerd[2017]: time="2025-11-08T00:04:08.914142805Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Nov 8 00:04:08.914551 containerd[2017]: time="2025-11-08T00:04:08.914173873Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Nov 8 00:04:08.914551 containerd[2017]: time="2025-11-08T00:04:08.914462761Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Nov 8 00:04:08.914551 containerd[2017]: time="2025-11-08T00:04:08.914494741Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Nov 8 00:04:08.914551 containerd[2017]: time="2025-11-08T00:04:08.914526709Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Nov 8 00:04:08.917170 containerd[2017]: time="2025-11-08T00:04:08.914554105Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Nov 8 00:04:08.917170 containerd[2017]: time="2025-11-08T00:04:08.916806169Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Nov 8 00:04:08.917370 containerd[2017]: time="2025-11-08T00:04:08.917235985Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Nov 8 00:04:08.917536 containerd[2017]: time="2025-11-08T00:04:08.917466745Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Nov 8 00:04:08.917623 containerd[2017]: time="2025-11-08T00:04:08.917527369Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Nov 8 00:04:08.917963 amazon-ssm-agent[2165]: 2025-11-08 00:04:08 INFO http_proxy: Nov 8 00:04:08.921040 containerd[2017]: time="2025-11-08T00:04:08.920803321Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Nov 8 00:04:08.921040 containerd[2017]: time="2025-11-08T00:04:08.920936845Z" level=info msg="metadata content store policy set" policy=shared Nov 8 00:04:08.937141 containerd[2017]: time="2025-11-08T00:04:08.935671297Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Nov 8 00:04:08.937141 containerd[2017]: time="2025-11-08T00:04:08.935794885Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Nov 8 00:04:08.937141 containerd[2017]: time="2025-11-08T00:04:08.935919001Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Nov 8 00:04:08.937141 containerd[2017]: time="2025-11-08T00:04:08.935958889Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Nov 8 00:04:08.937141 containerd[2017]: time="2025-11-08T00:04:08.936036565Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Nov 8 00:04:08.937141 containerd[2017]: time="2025-11-08T00:04:08.936332125Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Nov 8 00:04:08.937141 containerd[2017]: time="2025-11-08T00:04:08.937074541Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Nov 8 00:04:08.937515 containerd[2017]: time="2025-11-08T00:04:08.937322341Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Nov 8 00:04:08.937515 containerd[2017]: time="2025-11-08T00:04:08.937356961Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Nov 8 00:04:08.937515 containerd[2017]: time="2025-11-08T00:04:08.937386853Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Nov 8 00:04:08.937515 containerd[2017]: time="2025-11-08T00:04:08.937418653Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Nov 8 00:04:08.937515 containerd[2017]: time="2025-11-08T00:04:08.937449649Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Nov 8 00:04:08.937515 containerd[2017]: time="2025-11-08T00:04:08.937479553Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Nov 8 00:04:08.937806 containerd[2017]: time="2025-11-08T00:04:08.937510609Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Nov 8 00:04:08.937806 containerd[2017]: time="2025-11-08T00:04:08.937546669Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." 
type=io.containerd.service.v1 Nov 8 00:04:08.937806 containerd[2017]: time="2025-11-08T00:04:08.937627369Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Nov 8 00:04:08.937806 containerd[2017]: time="2025-11-08T00:04:08.937667281Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Nov 8 00:04:08.937806 containerd[2017]: time="2025-11-08T00:04:08.937699141Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Nov 8 00:04:08.938022 containerd[2017]: time="2025-11-08T00:04:08.937918897Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Nov 8 00:04:08.938022 containerd[2017]: time="2025-11-08T00:04:08.937956229Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Nov 8 00:04:08.938022 containerd[2017]: time="2025-11-08T00:04:08.937986469Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Nov 8 00:04:08.938163 containerd[2017]: time="2025-11-08T00:04:08.938017909Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Nov 8 00:04:08.938163 containerd[2017]: time="2025-11-08T00:04:08.938047957Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Nov 8 00:04:08.938163 containerd[2017]: time="2025-11-08T00:04:08.938078773Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Nov 8 00:04:08.938163 containerd[2017]: time="2025-11-08T00:04:08.938106337Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Nov 8 00:04:08.938163 containerd[2017]: time="2025-11-08T00:04:08.938137117Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Nov 8 00:04:08.938356 containerd[2017]: time="2025-11-08T00:04:08.938167393Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Nov 8 00:04:08.938356 containerd[2017]: time="2025-11-08T00:04:08.938200765Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Nov 8 00:04:08.938356 containerd[2017]: time="2025-11-08T00:04:08.938230021Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Nov 8 00:04:08.938356 containerd[2017]: time="2025-11-08T00:04:08.938263573Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Nov 8 00:04:08.938356 containerd[2017]: time="2025-11-08T00:04:08.938316361Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Nov 8 00:04:08.944153 containerd[2017]: time="2025-11-08T00:04:08.938351437Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Nov 8 00:04:08.944153 containerd[2017]: time="2025-11-08T00:04:08.938396221Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Nov 8 00:04:08.944153 containerd[2017]: time="2025-11-08T00:04:08.938426269Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." 
type=io.containerd.grpc.v1 Nov 8 00:04:08.944153 containerd[2017]: time="2025-11-08T00:04:08.938461237Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Nov 8 00:04:08.944153 containerd[2017]: time="2025-11-08T00:04:08.941410777Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Nov 8 00:04:08.944153 containerd[2017]: time="2025-11-08T00:04:08.941502421Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Nov 8 00:04:08.944153 containerd[2017]: time="2025-11-08T00:04:08.941531713Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Nov 8 00:04:08.944153 containerd[2017]: time="2025-11-08T00:04:08.941561569Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Nov 8 00:04:08.944153 containerd[2017]: time="2025-11-08T00:04:08.941912737Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Nov 8 00:04:08.944153 containerd[2017]: time="2025-11-08T00:04:08.941945797Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Nov 8 00:04:08.944153 containerd[2017]: time="2025-11-08T00:04:08.941970685Z" level=info msg="NRI interface is disabled by configuration." Nov 8 00:04:08.944153 containerd[2017]: time="2025-11-08T00:04:08.941997373Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Nov 8 00:04:08.944796 containerd[2017]: time="2025-11-08T00:04:08.942658681Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: 
TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Nov 8 00:04:08.944796 containerd[2017]: time="2025-11-08T00:04:08.942769909Z" level=info msg="Connect containerd service" Nov 8 00:04:08.944796 containerd[2017]: time="2025-11-08T00:04:08.942827053Z" level=info msg="using legacy CRI server" Nov 8 00:04:08.944796 containerd[2017]: time="2025-11-08T00:04:08.942844909Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Nov 8 00:04:08.944796 containerd[2017]: time="2025-11-08T00:04:08.942992329Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Nov 8 00:04:08.944796 containerd[2017]: time="2025-11-08T00:04:08.944294989Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 8 00:04:08.944796 containerd[2017]: time="2025-11-08T00:04:08.944660833Z" level=info msg="Start subscribing containerd event" Nov 8 00:04:08.944796 containerd[2017]: time="2025-11-08T00:04:08.944739085Z" level=info msg="Start recovering state" Nov 8 00:04:08.945314 containerd[2017]: time="2025-11-08T00:04:08.944870929Z" level=info msg="Start event monitor" Nov 8 00:04:08.945314 containerd[2017]: time="2025-11-08T00:04:08.944894857Z" level=info msg="Start snapshots syncer" Nov 8 00:04:08.945314 containerd[2017]: time="2025-11-08T00:04:08.944915977Z" level=info msg="Start cni network conf syncer for default" Nov 8 00:04:08.945314 containerd[2017]: time="2025-11-08T00:04:08.944935321Z" level=info msg="Start streaming server" Nov 8 00:04:08.952654 containerd[2017]: time="2025-11-08T00:04:08.947064673Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Nov 8 00:04:08.952654 containerd[2017]: time="2025-11-08T00:04:08.947190889Z" level=info msg=serving... address=/run/containerd/containerd.sock Nov 8 00:04:08.952654 containerd[2017]: time="2025-11-08T00:04:08.950834173Z" level=info msg="containerd successfully booted in 0.220291s" Nov 8 00:04:08.947744 systemd[1]: Started containerd.service - containerd container runtime. 
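[Editor's note] The one error in the containerd startup above is expected on first boot: per the dumped CRI config (NetworkPluginConfDir:/etc/cni/net.d, NetworkPluginMaxConfNum:1), the plugin looks for a CNI config and finds none until a network add-on installs one; with MaxConfNum 1 only the lexically first file is loaded. A hedged sketch of a minimal conflist of the standard shape ("mynet", the bridge name, and the subnet are placeholders; a real cluster's network add-on writes its own file):

    import json

    # Minimal CNI .conflist: a bridge plugin with host-local IPAM plus
    # portmap. Values here are illustrative, not a recommended network.
    conflist = {
        "cniVersion": "0.4.0",
        "name": "mynet",
        "plugins": [
            {
                "type": "bridge",
                "bridge": "cni0",
                "isGateway": True,
                "ipMasq": True,
                "ipam": {
                    "type": "host-local",
                    "subnet": "10.88.0.0/16",
                    "routes": [{"dst": "0.0.0.0/0"}],
                },
            },
            {"type": "portmap", "capabilities": {"portMappings": True}},
        ],
    }

    with open("/etc/cni/net.d/10-mynet.conflist", "w") as f:
        json.dump(conflist, f, indent=2)

Once such a file exists, the "cni plugin not initialized" error clears on the CRI plugin's next config sync.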
Nov 8 00:04:09.019044 amazon-ssm-agent[2165]: 2025-11-08 00:04:08 INFO no_proxy: Nov 8 00:04:09.120300 amazon-ssm-agent[2165]: 2025-11-08 00:04:08 INFO https_proxy: Nov 8 00:04:09.223518 amazon-ssm-agent[2165]: 2025-11-08 00:04:08 INFO Checking if agent identity type OnPrem can be assumed Nov 8 00:04:09.325591 amazon-ssm-agent[2165]: 2025-11-08 00:04:08 INFO Checking if agent identity type EC2 can be assumed Nov 8 00:04:09.424789 amazon-ssm-agent[2165]: 2025-11-08 00:04:09 INFO Agent will take identity from EC2 Nov 8 00:04:09.525644 amazon-ssm-agent[2165]: 2025-11-08 00:04:09 INFO [amazon-ssm-agent] using named pipe channel for IPC Nov 8 00:04:09.615281 tar[2013]: linux-arm64/README.md Nov 8 00:04:09.624683 amazon-ssm-agent[2165]: 2025-11-08 00:04:09 INFO [amazon-ssm-agent] using named pipe channel for IPC Nov 8 00:04:09.654532 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Nov 8 00:04:09.724046 amazon-ssm-agent[2165]: 2025-11-08 00:04:09 INFO [amazon-ssm-agent] using named pipe channel for IPC Nov 8 00:04:09.823123 amazon-ssm-agent[2165]: 2025-11-08 00:04:09 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Nov 8 00:04:09.841942 amazon-ssm-agent[2165]: 2025-11-08 00:04:09 INFO [amazon-ssm-agent] OS: linux, Arch: arm64 Nov 8 00:04:09.841942 amazon-ssm-agent[2165]: 2025-11-08 00:04:09 INFO [amazon-ssm-agent] Starting Core Agent Nov 8 00:04:09.841942 amazon-ssm-agent[2165]: 2025-11-08 00:04:09 INFO [amazon-ssm-agent] registrar detected. Attempting registration Nov 8 00:04:09.841942 amazon-ssm-agent[2165]: 2025-11-08 00:04:09 INFO [Registrar] Starting registrar module Nov 8 00:04:09.841942 amazon-ssm-agent[2165]: 2025-11-08 00:04:09 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Nov 8 00:04:09.841942 amazon-ssm-agent[2165]: 2025-11-08 00:04:09 INFO [EC2Identity] EC2 registration was successful. Nov 8 00:04:09.841942 amazon-ssm-agent[2165]: 2025-11-08 00:04:09 INFO [CredentialRefresher] credentialRefresher has started Nov 8 00:04:09.841942 amazon-ssm-agent[2165]: 2025-11-08 00:04:09 INFO [CredentialRefresher] Starting credentials refresher loop Nov 8 00:04:09.841942 amazon-ssm-agent[2165]: 2025-11-08 00:04:09 INFO EC2RoleProvider Successfully connected with instance profile role credentials Nov 8 00:04:09.922916 amazon-ssm-agent[2165]: 2025-11-08 00:04:09 INFO [CredentialRefresher] Next credential rotation will be in 31.3249667074 minutes Nov 8 00:04:10.175307 sshd_keygen[2029]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Nov 8 00:04:10.215760 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Nov 8 00:04:10.229828 systemd[1]: Starting issuegen.service - Generate /run/issue... Nov 8 00:04:10.241084 systemd[1]: Started sshd@0-172.31.26.1:22-139.178.89.65:37410.service - OpenSSH per-connection server daemon (139.178.89.65:37410). Nov 8 00:04:10.259846 systemd[1]: issuegen.service: Deactivated successfully. Nov 8 00:04:10.260330 systemd[1]: Finished issuegen.service - Generate /run/issue. Nov 8 00:04:10.273236 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Nov 8 00:04:10.310414 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Nov 8 00:04:10.326750 systemd[1]: Started getty@tty1.service - Getty on tty1. Nov 8 00:04:10.340307 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Nov 8 00:04:10.343453 systemd[1]: Reached target getty.target - Login Prompts. 
Nov 8 00:04:10.473653 ntpd[1987]: Listen normally on 6 eth0 [fe80::4c3:11ff:fe9c:856d%2]:123 Nov 8 00:04:10.474297 ntpd[1987]: 8 Nov 00:04:10 ntpd[1987]: Listen normally on 6 eth0 [fe80::4c3:11ff:fe9c:856d%2]:123 Nov 8 00:04:10.491705 sshd[2218]: Accepted publickey for core from 139.178.89.65 port 37410 ssh2: RSA SHA256:tnEXpnDY8gLTej7GJ+T99WI4otIwvlI9IcMNDF42aqw Nov 8 00:04:10.494437 sshd[2218]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:04:10.515755 systemd-logind[1992]: New session 1 of user core. Nov 8 00:04:10.520270 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Nov 8 00:04:10.531063 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Nov 8 00:04:10.559000 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Nov 8 00:04:10.576827 systemd[1]: Starting user@500.service - User Manager for UID 500... Nov 8 00:04:10.598024 (systemd)[2229]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Nov 8 00:04:10.826768 systemd[2229]: Queued start job for default target default.target. Nov 8 00:04:10.835857 systemd[2229]: Created slice app.slice - User Application Slice. Nov 8 00:04:10.835924 systemd[2229]: Reached target paths.target - Paths. Nov 8 00:04:10.835958 systemd[2229]: Reached target timers.target - Timers. Nov 8 00:04:10.838756 systemd[2229]: Starting dbus.socket - D-Bus User Message Bus Socket... Nov 8 00:04:10.867900 systemd[2229]: Listening on dbus.socket - D-Bus User Message Bus Socket. Nov 8 00:04:10.868158 systemd[2229]: Reached target sockets.target - Sockets. Nov 8 00:04:10.868206 systemd[2229]: Reached target basic.target - Basic System. Nov 8 00:04:10.868303 systemd[2229]: Reached target default.target - Main User Target. Nov 8 00:04:10.868371 systemd[2229]: Startup finished in 257ms. Nov 8 00:04:10.868543 systemd[1]: Started user@500.service - User Manager for UID 500. Nov 8 00:04:10.879899 systemd[1]: Started session-1.scope - Session 1 of User core. Nov 8 00:04:10.909263 amazon-ssm-agent[2165]: 2025-11-08 00:04:10 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Nov 8 00:04:11.012506 amazon-ssm-agent[2165]: 2025-11-08 00:04:10 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2240) started Nov 8 00:04:11.055460 systemd[1]: Started sshd@1-172.31.26.1:22-139.178.89.65:37426.service - OpenSSH per-connection server daemon (139.178.89.65:37426). Nov 8 00:04:11.112747 amazon-ssm-agent[2165]: 2025-11-08 00:04:10 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Nov 8 00:04:11.274044 sshd[2248]: Accepted publickey for core from 139.178.89.65 port 37426 ssh2: RSA SHA256:tnEXpnDY8gLTej7GJ+T99WI4otIwvlI9IcMNDF42aqw Nov 8 00:04:11.276855 sshd[2248]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:04:11.286997 systemd-logind[1992]: New session 2 of user core. Nov 8 00:04:11.292874 systemd[1]: Started session-2.scope - Session 2 of User core. Nov 8 00:04:11.387891 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 00:04:11.392558 systemd[1]: Reached target multi-user.target - Multi-User System. Nov 8 00:04:11.398738 systemd[1]: Startup finished in 1.273s (kernel) + 9.024s (initrd) + 9.600s (userspace) = 19.898s. 
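[Editor's note] The three phases in the "Startup finished" line almost, but not quite, sum to the printed total: 1.273 + 9.024 + 9.600 = 19.897, against the logged 19.898s. The likeliest explanation is that systemd computes the total from the unrounded microsecond timestamps and rounds each figure independently for display, so the printed addends can miss the printed total by a millisecond. A trivial check:

    # Displayed phase times from the "Startup finished" entry above.
    kernel, initrd, userspace = 1.273, 9.024, 9.600
    print(round(kernel + initrd + userspace, 3))  # -> 19.897, vs logged 19.898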
Nov 8 00:04:11.400246 (kubelet)[2259]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 8 00:04:11.440881 sshd[2248]: pam_unix(sshd:session): session closed for user core Nov 8 00:04:11.446999 systemd[1]: sshd@1-172.31.26.1:22-139.178.89.65:37426.service: Deactivated successfully. Nov 8 00:04:11.451224 systemd[1]: session-2.scope: Deactivated successfully. Nov 8 00:04:11.452488 systemd-logind[1992]: Session 2 logged out. Waiting for processes to exit. Nov 8 00:04:11.454635 systemd-logind[1992]: Removed session 2. Nov 8 00:04:11.472690 systemd[1]: Started sshd@2-172.31.26.1:22-139.178.89.65:37436.service - OpenSSH per-connection server daemon (139.178.89.65:37436). Nov 8 00:04:11.653640 sshd[2268]: Accepted publickey for core from 139.178.89.65 port 37436 ssh2: RSA SHA256:tnEXpnDY8gLTej7GJ+T99WI4otIwvlI9IcMNDF42aqw Nov 8 00:04:11.656862 sshd[2268]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:04:11.667677 systemd-logind[1992]: New session 3 of user core. Nov 8 00:04:11.672883 systemd[1]: Started session-3.scope - Session 3 of User core. Nov 8 00:04:11.794940 sshd[2268]: pam_unix(sshd:session): session closed for user core Nov 8 00:04:11.802252 systemd[1]: sshd@2-172.31.26.1:22-139.178.89.65:37436.service: Deactivated successfully. Nov 8 00:04:11.806134 systemd[1]: session-3.scope: Deactivated successfully. Nov 8 00:04:11.807816 systemd-logind[1992]: Session 3 logged out. Waiting for processes to exit. Nov 8 00:04:11.810106 systemd-logind[1992]: Removed session 3. Nov 8 00:04:11.834074 systemd[1]: Started sshd@3-172.31.26.1:22-139.178.89.65:37438.service - OpenSSH per-connection server daemon (139.178.89.65:37438). Nov 8 00:04:12.017720 sshd[2279]: Accepted publickey for core from 139.178.89.65 port 37438 ssh2: RSA SHA256:tnEXpnDY8gLTej7GJ+T99WI4otIwvlI9IcMNDF42aqw Nov 8 00:04:12.020856 sshd[2279]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:04:12.030663 systemd-logind[1992]: New session 4 of user core. Nov 8 00:04:12.035847 systemd[1]: Started session-4.scope - Session 4 of User core. Nov 8 00:04:12.170214 sshd[2279]: pam_unix(sshd:session): session closed for user core Nov 8 00:04:12.178268 systemd[1]: session-4.scope: Deactivated successfully. Nov 8 00:04:12.180386 systemd[1]: sshd@3-172.31.26.1:22-139.178.89.65:37438.service: Deactivated successfully. Nov 8 00:04:12.185761 systemd-logind[1992]: Session 4 logged out. Waiting for processes to exit. Nov 8 00:04:12.188521 systemd-logind[1992]: Removed session 4. Nov 8 00:04:12.209152 systemd[1]: Started sshd@4-172.31.26.1:22-139.178.89.65:37452.service - OpenSSH per-connection server daemon (139.178.89.65:37452). Nov 8 00:04:12.397820 sshd[2286]: Accepted publickey for core from 139.178.89.65 port 37452 ssh2: RSA SHA256:tnEXpnDY8gLTej7GJ+T99WI4otIwvlI9IcMNDF42aqw Nov 8 00:04:12.400689 sshd[2286]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:04:12.410122 systemd-logind[1992]: New session 5 of user core. Nov 8 00:04:12.418867 systemd[1]: Started session-5.scope - Session 5 of User core. 
Nov 8 00:04:12.531650 kubelet[2259]: E1108 00:04:12.531043 2259 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 8 00:04:12.536810 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 8 00:04:12.537182 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 8 00:04:12.538799 systemd[1]: kubelet.service: Consumed 1.290s CPU time. Nov 8 00:04:12.575870 sudo[2290]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Nov 8 00:04:12.576547 sudo[2290]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 8 00:04:12.591802 sudo[2290]: pam_unix(sudo:session): session closed for user root Nov 8 00:04:12.616881 sshd[2286]: pam_unix(sshd:session): session closed for user core Nov 8 00:04:12.622165 systemd[1]: sshd@4-172.31.26.1:22-139.178.89.65:37452.service: Deactivated successfully. Nov 8 00:04:12.626206 systemd[1]: session-5.scope: Deactivated successfully. Nov 8 00:04:12.629673 systemd-logind[1992]: Session 5 logged out. Waiting for processes to exit. Nov 8 00:04:12.631559 systemd-logind[1992]: Removed session 5. Nov 8 00:04:12.662126 systemd[1]: Started sshd@5-172.31.26.1:22-139.178.89.65:37458.service - OpenSSH per-connection server daemon (139.178.89.65:37458). Nov 8 00:04:12.850494 sshd[2296]: Accepted publickey for core from 139.178.89.65 port 37458 ssh2: RSA SHA256:tnEXpnDY8gLTej7GJ+T99WI4otIwvlI9IcMNDF42aqw Nov 8 00:04:12.853157 sshd[2296]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:04:12.860467 systemd-logind[1992]: New session 6 of user core. Nov 8 00:04:12.868828 systemd[1]: Started session-6.scope - Session 6 of User core. Nov 8 00:04:12.975944 sudo[2300]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Nov 8 00:04:12.976689 sudo[2300]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 8 00:04:12.983322 sudo[2300]: pam_unix(sudo:session): session closed for user root Nov 8 00:04:12.993813 sudo[2299]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Nov 8 00:04:12.994438 sudo[2299]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 8 00:04:13.016210 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Nov 8 00:04:13.032535 auditctl[2303]: No rules Nov 8 00:04:13.033358 systemd[1]: audit-rules.service: Deactivated successfully. Nov 8 00:04:13.033767 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Nov 8 00:04:13.040343 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Nov 8 00:04:13.103626 augenrules[2321]: No rules Nov 8 00:04:13.106376 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Nov 8 00:04:13.109079 sudo[2299]: pam_unix(sudo:session): session closed for user root Nov 8 00:04:13.133520 sshd[2296]: pam_unix(sshd:session): session closed for user core Nov 8 00:04:13.139703 systemd[1]: sshd@5-172.31.26.1:22-139.178.89.65:37458.service: Deactivated successfully. Nov 8 00:04:13.143111 systemd[1]: session-6.scope: Deactivated successfully. Nov 8 00:04:13.145992 systemd-logind[1992]: Session 6 logged out. 
Waiting for processes to exit. Nov 8 00:04:13.148016 systemd-logind[1992]: Removed session 6. Nov 8 00:04:13.173086 systemd[1]: Started sshd@6-172.31.26.1:22-139.178.89.65:37466.service - OpenSSH per-connection server daemon (139.178.89.65:37466). Nov 8 00:04:13.354369 sshd[2329]: Accepted publickey for core from 139.178.89.65 port 37466 ssh2: RSA SHA256:tnEXpnDY8gLTej7GJ+T99WI4otIwvlI9IcMNDF42aqw Nov 8 00:04:13.357297 sshd[2329]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:04:13.365775 systemd-logind[1992]: New session 7 of user core. Nov 8 00:04:13.375820 systemd[1]: Started session-7.scope - Session 7 of User core. Nov 8 00:04:13.480708 sudo[2333]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Nov 8 00:04:13.481835 sudo[2333]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 8 00:04:14.166621 systemd[1]: Starting docker.service - Docker Application Container Engine... Nov 8 00:04:14.174070 (dockerd)[2348]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Nov 8 00:04:14.663102 systemd-resolved[1938]: Clock change detected. Flushing caches. Nov 8 00:04:14.916172 dockerd[2348]: time="2025-11-08T00:04:14.915844169Z" level=info msg="Starting up" Nov 8 00:04:15.178527 dockerd[2348]: time="2025-11-08T00:04:15.178069178Z" level=info msg="Loading containers: start." Nov 8 00:04:15.427054 kernel: Initializing XFRM netlink socket Nov 8 00:04:15.487549 (udev-worker)[2372]: Network interface NamePolicy= disabled on kernel command line. Nov 8 00:04:15.579135 systemd-networkd[1937]: docker0: Link UP Nov 8 00:04:15.609988 dockerd[2348]: time="2025-11-08T00:04:15.609512296Z" level=info msg="Loading containers: done." Nov 8 00:04:15.633588 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2080957856-merged.mount: Deactivated successfully. Nov 8 00:04:15.641822 dockerd[2348]: time="2025-11-08T00:04:15.641745616Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Nov 8 00:04:15.642092 dockerd[2348]: time="2025-11-08T00:04:15.641921884Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Nov 8 00:04:15.642157 dockerd[2348]: time="2025-11-08T00:04:15.642136336Z" level=info msg="Daemon has completed initialization" Nov 8 00:04:15.710800 dockerd[2348]: time="2025-11-08T00:04:15.709877429Z" level=info msg="API listen on /run/docker.sock" Nov 8 00:04:15.710191 systemd[1]: Started docker.service - Docker Application Container Engine. Nov 8 00:04:17.412703 containerd[2017]: time="2025-11-08T00:04:17.412216517Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.1\"" Nov 8 00:04:18.199447 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1339245959.mount: Deactivated successfully. 
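[Editor's note] After the "API listen on /run/docker.sock" entry above, the daemon answers plain HTTP over that unix socket, which makes it easy to health-check without the docker CLI. A stdlib-only probe of the /_ping endpoint (requires root or docker-group permissions on the socket):

    import socket

    # Docker speaks HTTP over its unix socket; /_ping returns "OK".
    s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    s.connect("/run/docker.sock")
    s.sendall(b"GET /_ping HTTP/1.0\r\nHost: docker\r\n\r\n")
    resp = b""
    while True:
        chunk = s.recv(4096)
        if not chunk:  # HTTP/1.0: server closes after the response
            break
        resp += chunk
    s.close()
    print(resp.decode(errors="replace").splitlines()[0])  # e.g. "HTTP/1.0 200 OK"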
Nov 8 00:04:19.794834 containerd[2017]: time="2025-11-08T00:04:19.794742225Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:04:19.797115 containerd[2017]: time="2025-11-08T00:04:19.797046177Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.1: active requests=0, bytes read=24574510" Nov 8 00:04:19.799265 containerd[2017]: time="2025-11-08T00:04:19.799187325Z" level=info msg="ImageCreate event name:\"sha256:43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:04:19.805300 containerd[2017]: time="2025-11-08T00:04:19.805199973Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:04:19.807996 containerd[2017]: time="2025-11-08T00:04:19.807591969Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.1\" with image id \"sha256:43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.1\", repo digest \"registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902\", size \"24571109\" in 2.395306152s" Nov 8 00:04:19.807996 containerd[2017]: time="2025-11-08T00:04:19.807652221Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.1\" returns image reference \"sha256:43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196\"" Nov 8 00:04:19.808672 containerd[2017]: time="2025-11-08T00:04:19.808630809Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.1\"" Nov 8 00:04:22.331807 containerd[2017]: time="2025-11-08T00:04:22.331734970Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:04:22.333919 containerd[2017]: time="2025-11-08T00:04:22.333864730Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.1: active requests=0, bytes read=19132143" Nov 8 00:04:22.335230 containerd[2017]: time="2025-11-08T00:04:22.334378126Z" level=info msg="ImageCreate event name:\"sha256:7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:04:22.340140 containerd[2017]: time="2025-11-08T00:04:22.340073014Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:04:22.342799 containerd[2017]: time="2025-11-08T00:04:22.342730978Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.1\" with image id \"sha256:7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.1\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89\", size \"20720058\" in 2.533932145s" Nov 8 00:04:22.342799 containerd[2017]: time="2025-11-08T00:04:22.342793618Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.1\" returns image reference \"sha256:7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a\"" Nov 8 00:04:22.343786 containerd[2017]: 
time="2025-11-08T00:04:22.343731358Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.1\"" Nov 8 00:04:22.803944 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Nov 8 00:04:22.817345 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 8 00:04:23.349727 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 00:04:23.370641 (kubelet)[2562]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 8 00:04:23.450051 kubelet[2562]: E1108 00:04:23.449938 2562 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 8 00:04:23.457219 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 8 00:04:23.457592 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 8 00:04:23.959079 containerd[2017]: time="2025-11-08T00:04:23.958452950Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:04:23.960322 containerd[2017]: time="2025-11-08T00:04:23.960192614Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.1: active requests=0, bytes read=14191884" Nov 8 00:04:23.962957 containerd[2017]: time="2025-11-08T00:04:23.961388354Z" level=info msg="ImageCreate event name:\"sha256:b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:04:23.969136 containerd[2017]: time="2025-11-08T00:04:23.969083054Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:04:23.970333 containerd[2017]: time="2025-11-08T00:04:23.970269734Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.1\" with image id \"sha256:b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.1\", repo digest \"registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500\", size \"15779817\" in 1.626478088s" Nov 8 00:04:23.970446 containerd[2017]: time="2025-11-08T00:04:23.970331870Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.1\" returns image reference \"sha256:b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0\"" Nov 8 00:04:23.971255 containerd[2017]: time="2025-11-08T00:04:23.971172098Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.1\"" Nov 8 00:04:25.368761 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3819069976.mount: Deactivated successfully. 
Nov 8 00:04:25.770698 containerd[2017]: time="2025-11-08T00:04:25.770525151Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:04:25.773052 containerd[2017]: time="2025-11-08T00:04:25.772716087Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.1: active requests=0, bytes read=22789028" Nov 8 00:04:25.774213 containerd[2017]: time="2025-11-08T00:04:25.774130875Z" level=info msg="ImageCreate event name:\"sha256:05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:04:25.778075 containerd[2017]: time="2025-11-08T00:04:25.777744867Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:04:25.779922 containerd[2017]: time="2025-11-08T00:04:25.779312931Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.1\" with image id \"sha256:05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9\", repo tag \"registry.k8s.io/kube-proxy:v1.34.1\", repo digest \"registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a\", size \"22788047\" in 1.808081685s" Nov 8 00:04:25.779922 containerd[2017]: time="2025-11-08T00:04:25.779374755Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.1\" returns image reference \"sha256:05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9\"" Nov 8 00:04:25.780316 containerd[2017]: time="2025-11-08T00:04:25.780261999Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\"" Nov 8 00:04:26.364310 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1365324817.mount: Deactivated successfully. 
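[Editor's note] Each pull pairs a "bytes read" count with a wall-clock duration in the "Pulled image ... in Ns" line, so rough registry throughput can be read straight off the log. Worked numbers from the four pulls so far (the duration covers the whole pull, unpack included, so these are lower bounds on network speed):

    # (image, bytes read, seconds), taken verbatim from the entries above.
    pulls = [
        ("kube-apiserver:v1.34.1",          24574510, 2.395306152),
        ("kube-controller-manager:v1.34.1", 19132143, 2.533932145),
        ("kube-scheduler:v1.34.1",          14191884, 1.626478088),
        ("kube-proxy:v1.34.1",              22789028, 1.808081685),
    ]
    for name, nbytes, secs in pulls:
        print("%-36s %5.1f MiB/s" % (name, nbytes / secs / 2**20))
    # kube-apiserver works out to roughly 10 MiB/s; the later pulls
    # (coredns, pause, etcd) read the same way.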
Nov 8 00:04:27.953221 containerd[2017]: time="2025-11-08T00:04:27.953152530Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:04:27.955449 containerd[2017]: time="2025-11-08T00:04:27.955380594Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=20395406" Nov 8 00:04:27.955449 containerd[2017]: time="2025-11-08T00:04:27.955771278Z" level=info msg="ImageCreate event name:\"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:04:27.964702 containerd[2017]: time="2025-11-08T00:04:27.963068502Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:04:27.964702 containerd[2017]: time="2025-11-08T00:04:27.964512366Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"20392204\" in 2.184185819s" Nov 8 00:04:27.964702 containerd[2017]: time="2025-11-08T00:04:27.964556622Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc\"" Nov 8 00:04:27.966469 containerd[2017]: time="2025-11-08T00:04:27.966196158Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Nov 8 00:04:28.516494 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount454277443.mount: Deactivated successfully. 
Nov 8 00:04:28.586087 containerd[2017]: time="2025-11-08T00:04:28.585231017Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:04:28.606464 containerd[2017]: time="2025-11-08T00:04:28.605921141Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=268709" Nov 8 00:04:28.630554 containerd[2017]: time="2025-11-08T00:04:28.630488753Z" level=info msg="ImageCreate event name:\"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:04:28.667784 containerd[2017]: time="2025-11-08T00:04:28.667696565Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:04:28.670252 containerd[2017]: time="2025-11-08T00:04:28.669504173Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"267939\" in 703.252659ms" Nov 8 00:04:28.670252 containerd[2017]: time="2025-11-08T00:04:28.669568277Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\"" Nov 8 00:04:28.671383 containerd[2017]: time="2025-11-08T00:04:28.671332949Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\"" Nov 8 00:04:33.553518 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Nov 8 00:04:33.568565 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 8 00:04:34.353939 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 00:04:34.369681 (kubelet)[2689]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 8 00:04:34.466173 kubelet[2689]: E1108 00:04:34.466094 2689 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 8 00:04:34.472434 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 8 00:04:34.472771 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
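[Editor's note] This is the third identical kubelet failure, and systemd will keep scheduling restarts: all of them trace to the same missing /var/lib/kubelet/config.yaml. On a kubeadm-provisioned node that file is generated by kubeadm init/join, which is why the unit is deliberately left to crash-loop until bootstrap runs. Purely to show the shape of the file (not the supported way to create it), a minimal KubeletConfiguration per the kubelet.config.k8s.io/v1beta1 schema, with cgroupDriver matching the SystemdCgroup=true runc option in the containerd config dumped earlier:

    # Illustrative only: kubeadm normally writes this file during init/join;
    # hand-writing it on a real node is not the supported path.
    minimal = """\
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd
    """

    with open("/var/lib/kubelet/config.yaml", "w") as f:
        f.write(minimal)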
Nov 8 00:04:34.513495 containerd[2017]: time="2025-11-08T00:04:34.513416710Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.4-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:04:34.531330 containerd[2017]: time="2025-11-08T00:04:34.531272878Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.4-0: active requests=0, bytes read=97410766" Nov 8 00:04:34.554515 containerd[2017]: time="2025-11-08T00:04:34.553975858Z" level=info msg="ImageCreate event name:\"sha256:a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:04:34.568657 containerd[2017]: time="2025-11-08T00:04:34.568574266Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:04:34.571532 containerd[2017]: time="2025-11-08T00:04:34.571062118Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.4-0\" with image id \"sha256:a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e\", repo tag \"registry.k8s.io/etcd:3.6.4-0\", repo digest \"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\", size \"98207481\" in 5.899672421s" Nov 8 00:04:34.571532 containerd[2017]: time="2025-11-08T00:04:34.571122994Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\" returns image reference \"sha256:a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e\"" Nov 8 00:04:38.913712 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Nov 8 00:04:41.364682 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 00:04:41.382779 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 8 00:04:41.445264 systemd[1]: Reloading requested from client PID 2724 ('systemctl') (unit session-7.scope)... Nov 8 00:04:41.445460 systemd[1]: Reloading... Nov 8 00:04:41.672095 zram_generator::config[2767]: No configuration found. Nov 8 00:04:41.933538 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 8 00:04:42.117107 systemd[1]: Reloading finished in 670 ms. Nov 8 00:04:42.221196 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Nov 8 00:04:42.221402 systemd[1]: kubelet.service: Failed with result 'signal'. Nov 8 00:04:42.222048 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 00:04:42.226703 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 8 00:04:42.533303 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 00:04:42.546564 (kubelet)[2827]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 8 00:04:42.621377 kubelet[2827]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 8 00:04:42.621900 kubelet[2827]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
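[Editor's note] The reload pass above flags docker.socket for a ListenStream= below the legacy /var/run and rewrites it in memory to /run/docker.sock; the permanent fix it asks for is a one-line unit change, most cleanly done as a drop-in. Per standard systemd semantics, an empty ListenStream= first clears the list inherited from the vendor unit:

    import os

    # Drop-in overriding the legacy /var/run path systemd warned about.
    dropin_dir = "/etc/systemd/system/docker.socket.d"
    os.makedirs(dropin_dir, exist_ok=True)
    with open(os.path.join(dropin_dir, "10-run-path.conf"), "w") as f:
        f.write("[Socket]\nListenStream=\nListenStream=/run/docker.sock\n")
    # Followed by a `systemctl daemon-reload` for the drop-in to take effect.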
Nov 8 00:04:42.623272 kubelet[2827]: I1108 00:04:42.623216 2827 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 8 00:04:44.515928 kubelet[2827]: I1108 00:04:44.515874 2827 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Nov 8 00:04:44.516826 kubelet[2827]: I1108 00:04:44.516662 2827 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 8 00:04:44.519273 kubelet[2827]: I1108 00:04:44.519243 2827 watchdog_linux.go:95] "Systemd watchdog is not enabled" Nov 8 00:04:44.521060 kubelet[2827]: I1108 00:04:44.519393 2827 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Nov 8 00:04:44.521060 kubelet[2827]: I1108 00:04:44.519833 2827 server.go:956] "Client rotation is on, will bootstrap in background" Nov 8 00:04:44.529790 kubelet[2827]: E1108 00:04:44.529706 2827 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.31.26.1:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.26.1:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Nov 8 00:04:44.531655 kubelet[2827]: I1108 00:04:44.531595 2827 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 8 00:04:44.539698 kubelet[2827]: E1108 00:04:44.539632 2827 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Nov 8 00:04:44.539874 kubelet[2827]: I1108 00:04:44.539747 2827 server.go:1400] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." Nov 8 00:04:44.545062 kubelet[2827]: I1108 00:04:44.544998 2827 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Nov 8 00:04:44.545724 kubelet[2827]: I1108 00:04:44.545674 2827 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 8 00:04:44.546105 kubelet[2827]: I1108 00:04:44.545830 2827 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-26-1","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 8 00:04:44.546336 kubelet[2827]: I1108 00:04:44.546315 2827 topology_manager.go:138] "Creating topology manager with none policy" Nov 8 00:04:44.546440 kubelet[2827]: I1108 00:04:44.546421 2827 container_manager_linux.go:306] "Creating device plugin manager" Nov 8 00:04:44.546688 kubelet[2827]: I1108 00:04:44.546667 2827 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Nov 8 00:04:44.551151 kubelet[2827]: I1108 00:04:44.551118 2827 state_mem.go:36] "Initialized new in-memory state store" Nov 8 00:04:44.553790 kubelet[2827]: I1108 00:04:44.553748 2827 kubelet.go:475] "Attempting to sync node with API server" Nov 8 00:04:44.553967 kubelet[2827]: I1108 00:04:44.553946 2827 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 8 00:04:44.554141 kubelet[2827]: I1108 00:04:44.554122 2827 kubelet.go:387] "Adding apiserver pod source" Nov 8 00:04:44.554258 kubelet[2827]: I1108 00:04:44.554239 2827 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 8 00:04:44.557048 kubelet[2827]: E1108 00:04:44.556978 2827 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.26.1:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-26-1&limit=500&resourceVersion=0\": dial tcp 172.31.26.1:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Nov 8 00:04:44.557412 kubelet[2827]: I1108 00:04:44.557385 2827 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Nov 8 00:04:44.558637 kubelet[2827]: I1108 00:04:44.558607 2827 kubelet.go:940] "Not 
starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Nov 8 00:04:44.558812 kubelet[2827]: I1108 00:04:44.558792 2827 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Nov 8 00:04:44.558963 kubelet[2827]: W1108 00:04:44.558943 2827 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Nov 8 00:04:44.563533 kubelet[2827]: I1108 00:04:44.563502 2827 server.go:1262] "Started kubelet" Nov 8 00:04:44.564134 kubelet[2827]: E1108 00:04:44.564089 2827 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.26.1:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.26.1:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Nov 8 00:04:44.566681 kubelet[2827]: I1108 00:04:44.566640 2827 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 8 00:04:44.572769 kubelet[2827]: I1108 00:04:44.572683 2827 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Nov 8 00:04:44.574547 kubelet[2827]: I1108 00:04:44.574480 2827 server.go:310] "Adding debug handlers to kubelet server" Nov 8 00:04:44.580197 kubelet[2827]: I1108 00:04:44.580152 2827 volume_manager.go:313] "Starting Kubelet Volume Manager" Nov 8 00:04:44.580608 kubelet[2827]: E1108 00:04:44.580552 2827 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ip-172-31-26-1\" not found" Nov 8 00:04:44.580997 kubelet[2827]: I1108 00:04:44.580965 2827 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Nov 8 00:04:44.581200 kubelet[2827]: I1108 00:04:44.581125 2827 reconciler.go:29] "Reconciler: start to sync state" Nov 8 00:04:44.584946 kubelet[2827]: I1108 00:04:44.583664 2827 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 8 00:04:44.584946 kubelet[2827]: I1108 00:04:44.583818 2827 server_v1.go:49] "podresources" method="list" useActivePods=true Nov 8 00:04:44.584946 kubelet[2827]: I1108 00:04:44.584273 2827 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 8 00:04:44.585717 kubelet[2827]: E1108 00:04:44.585680 2827 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.26.1:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.26.1:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Nov 8 00:04:44.586814 kubelet[2827]: E1108 00:04:44.586671 2827 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.26.1:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-26-1?timeout=10s\": dial tcp 172.31.26.1:6443: connect: connection refused" interval="200ms" Nov 8 00:04:44.587530 kubelet[2827]: I1108 00:04:44.587476 2827 factory.go:223] Registration of the systemd container factory successfully Nov 8 00:04:44.587654 kubelet[2827]: I1108 00:04:44.587624 2827 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or 
directory Nov 8 00:04:44.593730 kubelet[2827]: E1108 00:04:44.592263 2827 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 8 00:04:44.593730 kubelet[2827]: I1108 00:04:44.592433 2827 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 8 00:04:44.594535 kubelet[2827]: I1108 00:04:44.594481 2827 factory.go:223] Registration of the containerd container factory successfully Nov 8 00:04:44.595908 kubelet[2827]: E1108 00:04:44.592603 2827 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.26.1:6443/api/v1/namespaces/default/events\": dial tcp 172.31.26.1:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-26-1.1875df3d889dcb10 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-26-1,UID:ip-172-31-26-1,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-26-1,},FirstTimestamp:2025-11-08 00:04:44.563458832 +0000 UTC m=+2.010092027,LastTimestamp:2025-11-08 00:04:44.563458832 +0000 UTC m=+2.010092027,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-26-1,}" Nov 8 00:04:44.636181 kubelet[2827]: I1108 00:04:44.636148 2827 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 8 00:04:44.636389 kubelet[2827]: I1108 00:04:44.636368 2827 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 8 00:04:44.636525 kubelet[2827]: I1108 00:04:44.636508 2827 state_mem.go:36] "Initialized new in-memory state store" Nov 8 00:04:44.639793 kubelet[2827]: I1108 00:04:44.639638 2827 policy_none.go:49] "None policy: Start" Nov 8 00:04:44.639793 kubelet[2827]: I1108 00:04:44.639703 2827 memory_manager.go:187] "Starting memorymanager" policy="None" Nov 8 00:04:44.639793 kubelet[2827]: I1108 00:04:44.639731 2827 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Nov 8 00:04:44.642733 kubelet[2827]: I1108 00:04:44.642581 2827 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Nov 8 00:04:44.645084 kubelet[2827]: I1108 00:04:44.644974 2827 policy_none.go:47] "Start" Nov 8 00:04:44.646415 kubelet[2827]: I1108 00:04:44.646371 2827 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Nov 8 00:04:44.646415 kubelet[2827]: I1108 00:04:44.646414 2827 status_manager.go:244] "Starting to sync pod status with apiserver" Nov 8 00:04:44.646652 kubelet[2827]: I1108 00:04:44.646449 2827 kubelet.go:2427] "Starting kubelet main sync loop" Nov 8 00:04:44.646652 kubelet[2827]: E1108 00:04:44.646511 2827 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 8 00:04:44.648907 kubelet[2827]: E1108 00:04:44.648801 2827 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.26.1:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.26.1:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Nov 8 00:04:44.661176 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Nov 8 00:04:44.679077 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Nov 8 00:04:44.681118 kubelet[2827]: E1108 00:04:44.680876 2827 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ip-172-31-26-1\" not found" Nov 8 00:04:44.686460 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Nov 8 00:04:44.696110 kubelet[2827]: E1108 00:04:44.696037 2827 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Nov 8 00:04:44.696540 kubelet[2827]: I1108 00:04:44.696363 2827 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 8 00:04:44.696540 kubelet[2827]: I1108 00:04:44.696398 2827 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 8 00:04:44.697976 kubelet[2827]: I1108 00:04:44.697939 2827 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 8 00:04:44.701274 kubelet[2827]: E1108 00:04:44.701221 2827 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Nov 8 00:04:44.701388 kubelet[2827]: E1108 00:04:44.701292 2827 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-26-1\" not found" Nov 8 00:04:44.724191 kubelet[2827]: E1108 00:04:44.723991 2827 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.26.1:6443/api/v1/namespaces/default/events\": dial tcp 172.31.26.1:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-26-1.1875df3d889dcb10 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-26-1,UID:ip-172-31-26-1,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-26-1,},FirstTimestamp:2025-11-08 00:04:44.563458832 +0000 UTC m=+2.010092027,LastTimestamp:2025-11-08 00:04:44.563458832 +0000 UTC m=+2.010092027,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-26-1,}" Nov 8 00:04:44.770620 systemd[1]: Created slice kubepods-burstable-pod6b1a0310510be32ad84c1e1c2a5c30d5.slice - libcontainer container kubepods-burstable-pod6b1a0310510be32ad84c1e1c2a5c30d5.slice. 
Nov 8 00:04:44.782320 kubelet[2827]: I1108 00:04:44.781710 2827 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8086659fde75c1b2ac181384012feb5e-k8s-certs\") pod \"kube-controller-manager-ip-172-31-26-1\" (UID: \"8086659fde75c1b2ac181384012feb5e\") " pod="kube-system/kube-controller-manager-ip-172-31-26-1" Nov 8 00:04:44.782320 kubelet[2827]: I1108 00:04:44.781792 2827 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8086659fde75c1b2ac181384012feb5e-kubeconfig\") pod \"kube-controller-manager-ip-172-31-26-1\" (UID: \"8086659fde75c1b2ac181384012feb5e\") " pod="kube-system/kube-controller-manager-ip-172-31-26-1" Nov 8 00:04:44.782320 kubelet[2827]: I1108 00:04:44.781832 2827 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8086659fde75c1b2ac181384012feb5e-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-26-1\" (UID: \"8086659fde75c1b2ac181384012feb5e\") " pod="kube-system/kube-controller-manager-ip-172-31-26-1" Nov 8 00:04:44.782320 kubelet[2827]: I1108 00:04:44.781872 2827 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/26fb849cb067c21ae54d220e570b5ffb-kubeconfig\") pod \"kube-scheduler-ip-172-31-26-1\" (UID: \"26fb849cb067c21ae54d220e570b5ffb\") " pod="kube-system/kube-scheduler-ip-172-31-26-1" Nov 8 00:04:44.782320 kubelet[2827]: I1108 00:04:44.781918 2827 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6b1a0310510be32ad84c1e1c2a5c30d5-ca-certs\") pod \"kube-apiserver-ip-172-31-26-1\" (UID: \"6b1a0310510be32ad84c1e1c2a5c30d5\") " pod="kube-system/kube-apiserver-ip-172-31-26-1" Nov 8 00:04:44.782670 kubelet[2827]: I1108 00:04:44.781954 2827 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6b1a0310510be32ad84c1e1c2a5c30d5-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-26-1\" (UID: \"6b1a0310510be32ad84c1e1c2a5c30d5\") " pod="kube-system/kube-apiserver-ip-172-31-26-1" Nov 8 00:04:44.782670 kubelet[2827]: I1108 00:04:44.781989 2827 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6b1a0310510be32ad84c1e1c2a5c30d5-k8s-certs\") pod \"kube-apiserver-ip-172-31-26-1\" (UID: \"6b1a0310510be32ad84c1e1c2a5c30d5\") " pod="kube-system/kube-apiserver-ip-172-31-26-1" Nov 8 00:04:44.782670 kubelet[2827]: I1108 00:04:44.782069 2827 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8086659fde75c1b2ac181384012feb5e-ca-certs\") pod \"kube-controller-manager-ip-172-31-26-1\" (UID: \"8086659fde75c1b2ac181384012feb5e\") " pod="kube-system/kube-controller-manager-ip-172-31-26-1" Nov 8 00:04:44.782670 kubelet[2827]: I1108 00:04:44.782111 2827 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8086659fde75c1b2ac181384012feb5e-flexvolume-dir\") pod 
\"kube-controller-manager-ip-172-31-26-1\" (UID: \"8086659fde75c1b2ac181384012feb5e\") " pod="kube-system/kube-controller-manager-ip-172-31-26-1" Nov 8 00:04:44.783692 kubelet[2827]: E1108 00:04:44.783643 2827 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-26-1\" not found" node="ip-172-31-26-1" Nov 8 00:04:44.788221 kubelet[2827]: E1108 00:04:44.787860 2827 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.26.1:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-26-1?timeout=10s\": dial tcp 172.31.26.1:6443: connect: connection refused" interval="400ms" Nov 8 00:04:44.790762 systemd[1]: Created slice kubepods-burstable-pod8086659fde75c1b2ac181384012feb5e.slice - libcontainer container kubepods-burstable-pod8086659fde75c1b2ac181384012feb5e.slice. Nov 8 00:04:44.799656 kubelet[2827]: I1108 00:04:44.799054 2827 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-26-1" Nov 8 00:04:44.799860 kubelet[2827]: E1108 00:04:44.799739 2827 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.26.1:6443/api/v1/nodes\": dial tcp 172.31.26.1:6443: connect: connection refused" node="ip-172-31-26-1" Nov 8 00:04:44.802239 kubelet[2827]: E1108 00:04:44.802176 2827 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-26-1\" not found" node="ip-172-31-26-1" Nov 8 00:04:44.807719 systemd[1]: Created slice kubepods-burstable-pod26fb849cb067c21ae54d220e570b5ffb.slice - libcontainer container kubepods-burstable-pod26fb849cb067c21ae54d220e570b5ffb.slice. Nov 8 00:04:44.812223 kubelet[2827]: E1108 00:04:44.812176 2827 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-26-1\" not found" node="ip-172-31-26-1" Nov 8 00:04:45.002243 kubelet[2827]: I1108 00:04:45.002202 2827 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-26-1" Nov 8 00:04:45.002709 kubelet[2827]: E1108 00:04:45.002668 2827 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.26.1:6443/api/v1/nodes\": dial tcp 172.31.26.1:6443: connect: connection refused" node="ip-172-31-26-1" Nov 8 00:04:45.089289 containerd[2017]: time="2025-11-08T00:04:45.089226259Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-26-1,Uid:6b1a0310510be32ad84c1e1c2a5c30d5,Namespace:kube-system,Attempt:0,}" Nov 8 00:04:45.106060 containerd[2017]: time="2025-11-08T00:04:45.105910267Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-26-1,Uid:8086659fde75c1b2ac181384012feb5e,Namespace:kube-system,Attempt:0,}" Nov 8 00:04:45.115614 containerd[2017]: time="2025-11-08T00:04:45.115542631Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-26-1,Uid:26fb849cb067c21ae54d220e570b5ffb,Namespace:kube-system,Attempt:0,}" Nov 8 00:04:45.189389 kubelet[2827]: E1108 00:04:45.189330 2827 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.26.1:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-26-1?timeout=10s\": dial tcp 172.31.26.1:6443: connect: connection refused" interval="800ms" Nov 8 00:04:45.405962 kubelet[2827]: I1108 00:04:45.405809 2827 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-26-1" Nov 
8 00:04:45.406975 kubelet[2827]: E1108 00:04:45.406882 2827 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.26.1:6443/api/v1/nodes\": dial tcp 172.31.26.1:6443: connect: connection refused" node="ip-172-31-26-1" Nov 8 00:04:45.544093 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount467478395.mount: Deactivated successfully. Nov 8 00:04:45.551657 containerd[2017]: time="2025-11-08T00:04:45.551574933Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 8 00:04:45.557424 containerd[2017]: time="2025-11-08T00:04:45.557357409Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" Nov 8 00:04:45.558802 containerd[2017]: time="2025-11-08T00:04:45.558716877Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 8 00:04:45.561195 containerd[2017]: time="2025-11-08T00:04:45.561108453Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Nov 8 00:04:45.563000 containerd[2017]: time="2025-11-08T00:04:45.562923885Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 8 00:04:45.567039 containerd[2017]: time="2025-11-08T00:04:45.565418829Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 8 00:04:45.567293 containerd[2017]: time="2025-11-08T00:04:45.567254949Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Nov 8 00:04:45.572094 containerd[2017]: time="2025-11-08T00:04:45.571993737Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 482.632442ms" Nov 8 00:04:45.577872 containerd[2017]: time="2025-11-08T00:04:45.577741893Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 8 00:04:45.591865 containerd[2017]: time="2025-11-08T00:04:45.591801813Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 485.777558ms" Nov 8 00:04:45.593007 containerd[2017]: time="2025-11-08T00:04:45.592957293Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest 
\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 477.295526ms" Nov 8 00:04:45.682241 kubelet[2827]: E1108 00:04:45.682090 2827 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.26.1:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-26-1&limit=500&resourceVersion=0\": dial tcp 172.31.26.1:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Nov 8 00:04:45.861549 containerd[2017]: time="2025-11-08T00:04:45.861166066Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:04:45.861549 containerd[2017]: time="2025-11-08T00:04:45.861515962Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:04:45.861549 containerd[2017]: time="2025-11-08T00:04:45.861627994Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:04:45.864462 containerd[2017]: time="2025-11-08T00:04:45.864227206Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:04:45.870855 containerd[2017]: time="2025-11-08T00:04:45.870711251Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:04:45.872350 containerd[2017]: time="2025-11-08T00:04:45.870811631Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:04:45.872350 containerd[2017]: time="2025-11-08T00:04:45.870838727Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:04:45.872350 containerd[2017]: time="2025-11-08T00:04:45.870999311Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:04:45.880780 containerd[2017]: time="2025-11-08T00:04:45.880640123Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:04:45.881898 containerd[2017]: time="2025-11-08T00:04:45.881807183Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:04:45.882286 containerd[2017]: time="2025-11-08T00:04:45.882217775Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:04:45.882520 containerd[2017]: time="2025-11-08T00:04:45.882460307Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:04:45.896195 kubelet[2827]: E1108 00:04:45.896141 2827 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.26.1:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.26.1:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Nov 8 00:04:45.910453 systemd[1]: Started cri-containerd-fe1ba82c1e7b1f83ee5c2abdc420ed1578f4c020e9c08bf9366df8812d891622.scope - libcontainer container fe1ba82c1e7b1f83ee5c2abdc420ed1578f4c020e9c08bf9366df8812d891622. Nov 8 00:04:45.948399 systemd[1]: Started cri-containerd-23a686dd900185ad8eedb09ebbed88856a5eeb52752fa0976393417798161645.scope - libcontainer container 23a686dd900185ad8eedb09ebbed88856a5eeb52752fa0976393417798161645. Nov 8 00:04:45.964401 systemd[1]: Started cri-containerd-9d6e383b0f47bf72abc7c1d175b043077b52552c1f6f161ff250a5eb925207c8.scope - libcontainer container 9d6e383b0f47bf72abc7c1d175b043077b52552c1f6f161ff250a5eb925207c8. Nov 8 00:04:45.992608 kubelet[2827]: E1108 00:04:45.990777 2827 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.26.1:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-26-1?timeout=10s\": dial tcp 172.31.26.1:6443: connect: connection refused" interval="1.6s" Nov 8 00:04:46.034785 containerd[2017]: time="2025-11-08T00:04:46.034450267Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-26-1,Uid:6b1a0310510be32ad84c1e1c2a5c30d5,Namespace:kube-system,Attempt:0,} returns sandbox id \"fe1ba82c1e7b1f83ee5c2abdc420ed1578f4c020e9c08bf9366df8812d891622\"" Nov 8 00:04:46.047662 containerd[2017]: time="2025-11-08T00:04:46.047576671Z" level=info msg="CreateContainer within sandbox \"fe1ba82c1e7b1f83ee5c2abdc420ed1578f4c020e9c08bf9366df8812d891622\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Nov 8 00:04:46.066769 kubelet[2827]: E1108 00:04:46.066709 2827 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.26.1:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.26.1:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Nov 8 00:04:46.094616 containerd[2017]: time="2025-11-08T00:04:46.094561916Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-26-1,Uid:8086659fde75c1b2ac181384012feb5e,Namespace:kube-system,Attempt:0,} returns sandbox id \"23a686dd900185ad8eedb09ebbed88856a5eeb52752fa0976393417798161645\"" Nov 8 00:04:46.095432 containerd[2017]: time="2025-11-08T00:04:46.095319908Z" level=info msg="CreateContainer within sandbox \"fe1ba82c1e7b1f83ee5c2abdc420ed1578f4c020e9c08bf9366df8812d891622\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"dd3bd1a9831dd5de868a67e976f3563abb1a458289ee92e5650a03f0b10c09d0\"" Nov 8 00:04:46.099048 containerd[2017]: time="2025-11-08T00:04:46.096946616Z" level=info msg="StartContainer for \"dd3bd1a9831dd5de868a67e976f3563abb1a458289ee92e5650a03f0b10c09d0\"" Nov 8 00:04:46.110507 containerd[2017]: time="2025-11-08T00:04:46.110427704Z" level=info msg="CreateContainer within sandbox \"23a686dd900185ad8eedb09ebbed88856a5eeb52752fa0976393417798161645\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" 
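
Note the retry interval in the four "Failed to ensure lease exists, will retry" entries so far: 200ms, 400ms, 800ms, 1.6s. It doubles on each consecutive failure, i.e. a client-side exponential backoff. A sketch reproducing that sequence; the 7s cap is an assumption for illustration only, since this log never runs long enough to reach a cap:

# lease_backoff.py -- reproduce the doubling retry interval seen in the
# "Failed to ensure lease exists, will retry" entries (200ms -> 1.6s).
# The cap value is an assumption; the log above never reaches it.
def backoff_intervals(base: float = 0.2, factor: float = 2.0,
                      cap: float = 7.0, n: int = 6):
    interval = base
    for _ in range(n):
        yield interval
        interval = min(interval * factor, cap)

print([f"{t:g}s" for t in backoff_intervals()])
# -> ['0.2s', '0.4s', '0.8s', '1.6s', '3.2s', '6.4s']
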
Nov 8 00:04:46.120643 containerd[2017]: time="2025-11-08T00:04:46.120559376Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-26-1,Uid:26fb849cb067c21ae54d220e570b5ffb,Namespace:kube-system,Attempt:0,} returns sandbox id \"9d6e383b0f47bf72abc7c1d175b043077b52552c1f6f161ff250a5eb925207c8\"" Nov 8 00:04:46.131118 containerd[2017]: time="2025-11-08T00:04:46.131065016Z" level=info msg="CreateContainer within sandbox \"9d6e383b0f47bf72abc7c1d175b043077b52552c1f6f161ff250a5eb925207c8\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Nov 8 00:04:46.153480 containerd[2017]: time="2025-11-08T00:04:46.153426452Z" level=info msg="CreateContainer within sandbox \"23a686dd900185ad8eedb09ebbed88856a5eeb52752fa0976393417798161645\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"58dce9ffbf10092e92ff7ae8a36d9c5335b067f0d45afbf38e57500972679f13\"" Nov 8 00:04:46.156759 containerd[2017]: time="2025-11-08T00:04:46.156708668Z" level=info msg="StartContainer for \"58dce9ffbf10092e92ff7ae8a36d9c5335b067f0d45afbf38e57500972679f13\"" Nov 8 00:04:46.161365 systemd[1]: Started cri-containerd-dd3bd1a9831dd5de868a67e976f3563abb1a458289ee92e5650a03f0b10c09d0.scope - libcontainer container dd3bd1a9831dd5de868a67e976f3563abb1a458289ee92e5650a03f0b10c09d0. Nov 8 00:04:46.164961 kubelet[2827]: E1108 00:04:46.164886 2827 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.26.1:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.26.1:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Nov 8 00:04:46.187346 containerd[2017]: time="2025-11-08T00:04:46.187268756Z" level=info msg="CreateContainer within sandbox \"9d6e383b0f47bf72abc7c1d175b043077b52552c1f6f161ff250a5eb925207c8\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"96fb357ff6aada544696a79c0ddd477247eeb085ab5aed71508ab4503b2d12ac\"" Nov 8 00:04:46.188682 containerd[2017]: time="2025-11-08T00:04:46.188637872Z" level=info msg="StartContainer for \"96fb357ff6aada544696a79c0ddd477247eeb085ab5aed71508ab4503b2d12ac\"" Nov 8 00:04:46.212532 kubelet[2827]: I1108 00:04:46.211138 2827 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-26-1" Nov 8 00:04:46.215818 kubelet[2827]: E1108 00:04:46.215628 2827 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.26.1:6443/api/v1/nodes\": dial tcp 172.31.26.1:6443: connect: connection refused" node="ip-172-31-26-1" Nov 8 00:04:46.241439 systemd[1]: Started cri-containerd-58dce9ffbf10092e92ff7ae8a36d9c5335b067f0d45afbf38e57500972679f13.scope - libcontainer container 58dce9ffbf10092e92ff7ae8a36d9c5335b067f0d45afbf38e57500972679f13. Nov 8 00:04:46.291328 containerd[2017]: time="2025-11-08T00:04:46.291272517Z" level=info msg="StartContainer for \"dd3bd1a9831dd5de868a67e976f3563abb1a458289ee92e5650a03f0b10c09d0\" returns successfully" Nov 8 00:04:46.291688 systemd[1]: Started cri-containerd-96fb357ff6aada544696a79c0ddd477247eeb085ab5aed71508ab4503b2d12ac.scope - libcontainer container 96fb357ff6aada544696a79c0ddd477247eeb085ab5aed71508ab4503b2d12ac. 
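
The three pause-image pulls reported at 00:04:45.57-45.59 above each moved the same 268403-byte image in roughly 480ms; "bytes read=0" on two of them suggests the content was already local, so the duration is mostly unpack/snapshot time rather than network transfer. Back-of-envelope throughput, using only the sizes and durations as logged:

# pull_rate.py -- arithmetic on the pause-image pull figures logged above.
sizes_and_durations = [            # (bytes, seconds), copied from the log
    (268403, 0.482632442),
    (268403, 0.485777558),
    (268403, 0.477295526),
]
for size, secs in sizes_and_durations:
    print(f"{size} B / {secs:.3f} s = {size / secs / 1024:.0f} KiB/s")
# roughly 540-550 KiB/s per pull
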
Nov 8 00:04:46.382594 containerd[2017]: time="2025-11-08T00:04:46.382515957Z" level=info msg="StartContainer for \"58dce9ffbf10092e92ff7ae8a36d9c5335b067f0d45afbf38e57500972679f13\" returns successfully" Nov 8 00:04:46.419101 containerd[2017]: time="2025-11-08T00:04:46.418991985Z" level=info msg="StartContainer for \"96fb357ff6aada544696a79c0ddd477247eeb085ab5aed71508ab4503b2d12ac\" returns successfully" Nov 8 00:04:46.669879 kubelet[2827]: E1108 00:04:46.669822 2827 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-26-1\" not found" node="ip-172-31-26-1" Nov 8 00:04:46.678217 kubelet[2827]: E1108 00:04:46.678167 2827 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-26-1\" not found" node="ip-172-31-26-1" Nov 8 00:04:46.690617 kubelet[2827]: E1108 00:04:46.690187 2827 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-26-1\" not found" node="ip-172-31-26-1" Nov 8 00:04:47.692219 kubelet[2827]: E1108 00:04:47.691816 2827 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-26-1\" not found" node="ip-172-31-26-1" Nov 8 00:04:47.693003 kubelet[2827]: E1108 00:04:47.692142 2827 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-26-1\" not found" node="ip-172-31-26-1" Nov 8 00:04:47.820081 kubelet[2827]: I1108 00:04:47.819085 2827 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-26-1" Nov 8 00:04:49.826882 kubelet[2827]: E1108 00:04:49.826835 2827 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-26-1\" not found" node="ip-172-31-26-1" Nov 8 00:04:50.470411 kubelet[2827]: E1108 00:04:50.470347 2827 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-26-1\" not found" node="ip-172-31-26-1" Nov 8 00:04:50.561333 kubelet[2827]: I1108 00:04:50.560987 2827 apiserver.go:52] "Watching apiserver" Nov 8 00:04:50.582167 kubelet[2827]: I1108 00:04:50.582096 2827 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Nov 8 00:04:50.718685 kubelet[2827]: I1108 00:04:50.718601 2827 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-26-1" Nov 8 00:04:50.718685 kubelet[2827]: E1108 00:04:50.718655 2827 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"ip-172-31-26-1\": node \"ip-172-31-26-1\" not found" Nov 8 00:04:50.781960 kubelet[2827]: I1108 00:04:50.781464 2827 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-26-1" Nov 8 00:04:50.832917 kubelet[2827]: E1108 00:04:50.832583 2827 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-26-1\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ip-172-31-26-1" Nov 8 00:04:50.832917 kubelet[2827]: I1108 00:04:50.832627 2827 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-26-1" Nov 8 00:04:50.850567 kubelet[2827]: E1108 00:04:50.850512 2827 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-26-1\" is forbidden: no PriorityClass with name 
system-node-critical was found" pod="kube-system/kube-controller-manager-ip-172-31-26-1" Nov 8 00:04:50.850567 kubelet[2827]: I1108 00:04:50.850561 2827 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-26-1" Nov 8 00:04:50.859043 kubelet[2827]: E1108 00:04:50.858405 2827 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-26-1\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ip-172-31-26-1" Nov 8 00:04:50.928336 kubelet[2827]: I1108 00:04:50.928275 2827 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-26-1" Nov 8 00:04:50.936437 kubelet[2827]: E1108 00:04:50.936376 2827 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-26-1\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ip-172-31-26-1" Nov 8 00:04:52.272134 kubelet[2827]: I1108 00:04:52.271693 2827 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-26-1" Nov 8 00:04:52.696141 update_engine[1993]: I20251108 00:04:52.696069 1993 update_attempter.cc:509] Updating boot flags... Nov 8 00:04:52.830514 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 37 scanned by (udev-worker) (3124) Nov 8 00:04:53.210087 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 37 scanned by (udev-worker) (3123) Nov 8 00:04:53.635073 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 37 scanned by (udev-worker) (3123) Nov 8 00:04:54.366477 systemd[1]: Reloading requested from client PID 3378 ('systemctl') (unit session-7.scope)... Nov 8 00:04:54.366923 systemd[1]: Reloading... Nov 8 00:04:54.555226 zram_generator::config[3424]: No configuration found. Nov 8 00:04:54.684728 kubelet[2827]: I1108 00:04:54.684190 2827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-26-1" podStartSLOduration=2.684112314 podStartE2EDuration="2.684112314s" podCreationTimestamp="2025-11-08 00:04:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 00:04:54.683961366 +0000 UTC m=+12.130594549" watchObservedRunningTime="2025-11-08 00:04:54.684112314 +0000 UTC m=+12.130745497" Nov 8 00:04:54.852776 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 8 00:04:55.073218 systemd[1]: Reloading finished in 705 ms. Nov 8 00:04:55.148725 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Nov 8 00:04:55.166573 systemd[1]: kubelet.service: Deactivated successfully. Nov 8 00:04:55.167077 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 00:04:55.167171 systemd[1]: kubelet.service: Consumed 2.916s CPU time, 125.0M memory peak, 0B memory swap peak. Nov 8 00:04:55.174568 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 8 00:04:55.553352 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
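
Across the sandbox and container events above, each static pod follows the same CRI sequence: RunPodSandbox returns a sandbox id, CreateContainer returns a container id inside that sandbox, and StartContainer reports success. A minimal sketch for pairing those ids back up when reading a journal dump like this one; the regexes match the containerd message formats shown above, and the input file name is hypothetical:

# cri_trace.py -- pair sandbox ids with the containers created inside them,
# using the containerd "returns sandbox id" / "returns container id"
# messages in a journal dump such as this log.
import re
from collections import defaultdict

SANDBOX_RE = re.compile(
    r'RunPodSandbox for &PodSandboxMetadata\{Name:([^,]+),'
    r'.*?returns sandbox id \\?"([0-9a-f]{64})\\?"')
CREATE_RE = re.compile(
    r'CreateContainer within sandbox \\?"([0-9a-f]{64})\\?" '
    r'for &ContainerMetadata\{Name:([^,]+),'
    r'.*?returns container id \\?"([0-9a-f]{64})\\?"')

def trace(text: str) -> None:
    pods = {sid: name for name, sid in SANDBOX_RE.findall(text)}
    containers = defaultdict(list)
    for sid, cname, cid in CREATE_RE.findall(text):
        containers[sid].append((cname, cid[:12]))
    for sid, name in pods.items():
        print(name, sid[:12], containers.get(sid, []))

if __name__ == "__main__":
    trace(open("journal.txt").read())   # hypothetical dump of this log
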
Nov 8 00:04:55.568066 (kubelet)[3480]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 8 00:04:55.671147 kubelet[3480]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 8 00:04:55.672051 kubelet[3480]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 8 00:04:55.672051 kubelet[3480]: I1108 00:04:55.671946 3480 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 8 00:04:55.695086 kubelet[3480]: I1108 00:04:55.695043 3480 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Nov 8 00:04:55.695249 kubelet[3480]: I1108 00:04:55.695230 3480 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 8 00:04:55.696089 kubelet[3480]: I1108 00:04:55.695401 3480 watchdog_linux.go:95] "Systemd watchdog is not enabled" Nov 8 00:04:55.696089 kubelet[3480]: I1108 00:04:55.695421 3480 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Nov 8 00:04:55.696089 kubelet[3480]: I1108 00:04:55.695792 3480 server.go:956] "Client rotation is on, will bootstrap in background" Nov 8 00:04:55.700763 kubelet[3480]: I1108 00:04:55.700604 3480 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Nov 8 00:04:55.712934 kubelet[3480]: I1108 00:04:55.712872 3480 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 8 00:04:55.722436 kubelet[3480]: E1108 00:04:55.722353 3480 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Nov 8 00:04:55.722940 kubelet[3480]: I1108 00:04:55.722460 3480 server.go:1400] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." Nov 8 00:04:55.732703 kubelet[3480]: I1108 00:04:55.732615 3480 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Nov 8 00:04:55.733276 kubelet[3480]: I1108 00:04:55.733208 3480 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 8 00:04:55.733530 kubelet[3480]: I1108 00:04:55.733268 3480 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-26-1","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 8 00:04:55.733681 kubelet[3480]: I1108 00:04:55.733530 3480 topology_manager.go:138] "Creating topology manager with none policy" Nov 8 00:04:55.733681 kubelet[3480]: I1108 00:04:55.733551 3480 container_manager_linux.go:306] "Creating device plugin manager" Nov 8 00:04:55.733681 kubelet[3480]: I1108 00:04:55.733597 3480 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Nov 8 00:04:55.735719 kubelet[3480]: I1108 00:04:55.735646 3480 state_mem.go:36] "Initialized new in-memory state store" Nov 8 00:04:55.736098 kubelet[3480]: I1108 00:04:55.735989 3480 kubelet.go:475] "Attempting to sync node with API server" Nov 8 00:04:55.738393 kubelet[3480]: I1108 00:04:55.738069 3480 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 8 00:04:55.738393 kubelet[3480]: I1108 00:04:55.738153 3480 kubelet.go:387] "Adding apiserver pod source" Nov 8 00:04:55.738393 kubelet[3480]: I1108 00:04:55.738188 3480 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 8 00:04:55.763064 kubelet[3480]: I1108 00:04:55.763008 3480 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Nov 8 00:04:55.764379 kubelet[3480]: I1108 00:04:55.764344 3480 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Nov 8 00:04:55.764557 kubelet[3480]: I1108 00:04:55.764537 3480 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Nov 8 00:04:55.769841 kubelet[3480]: I1108 
00:04:55.769557 3480 server.go:1262] "Started kubelet" Nov 8 00:04:55.775433 kubelet[3480]: I1108 00:04:55.775398 3480 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 8 00:04:55.793896 kubelet[3480]: I1108 00:04:55.793841 3480 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Nov 8 00:04:55.796039 kubelet[3480]: I1108 00:04:55.795490 3480 server.go:310] "Adding debug handlers to kubelet server" Nov 8 00:04:55.803071 kubelet[3480]: I1108 00:04:55.802912 3480 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 8 00:04:55.803285 kubelet[3480]: I1108 00:04:55.803250 3480 server_v1.go:49] "podresources" method="list" useActivePods=true Nov 8 00:04:55.803639 kubelet[3480]: I1108 00:04:55.803614 3480 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 8 00:04:55.805181 kubelet[3480]: I1108 00:04:55.805144 3480 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 8 00:04:55.812525 kubelet[3480]: I1108 00:04:55.810846 3480 volume_manager.go:313] "Starting Kubelet Volume Manager" Nov 8 00:04:55.813638 kubelet[3480]: E1108 00:04:55.813536 3480 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ip-172-31-26-1\" not found" Nov 8 00:04:55.815960 kubelet[3480]: I1108 00:04:55.815893 3480 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Nov 8 00:04:55.816617 kubelet[3480]: I1108 00:04:55.816553 3480 reconciler.go:29] "Reconciler: start to sync state" Nov 8 00:04:55.818842 kubelet[3480]: I1108 00:04:55.818802 3480 factory.go:223] Registration of the systemd container factory successfully Nov 8 00:04:55.820078 kubelet[3480]: I1108 00:04:55.819303 3480 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 8 00:04:55.828640 kubelet[3480]: I1108 00:04:55.828576 3480 factory.go:223] Registration of the containerd container factory successfully Nov 8 00:04:55.832295 kubelet[3480]: E1108 00:04:55.829785 3480 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 8 00:04:55.852104 kubelet[3480]: I1108 00:04:55.852005 3480 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Nov 8 00:04:55.856702 kubelet[3480]: I1108 00:04:55.856664 3480 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Nov 8 00:04:55.856879 kubelet[3480]: I1108 00:04:55.856859 3480 status_manager.go:244] "Starting to sync pod status with apiserver" Nov 8 00:04:55.856997 kubelet[3480]: I1108 00:04:55.856978 3480 kubelet.go:2427] "Starting kubelet main sync loop" Nov 8 00:04:55.858524 kubelet[3480]: E1108 00:04:55.858443 3480 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 8 00:04:55.958867 kubelet[3480]: E1108 00:04:55.958644 3480 kubelet.go:2451] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Nov 8 00:04:55.960433 kubelet[3480]: I1108 00:04:55.960128 3480 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 8 00:04:55.960433 kubelet[3480]: I1108 00:04:55.960361 3480 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 8 00:04:55.960433 kubelet[3480]: I1108 00:04:55.960395 3480 state_mem.go:36] "Initialized new in-memory state store" Nov 8 00:04:55.962602 kubelet[3480]: I1108 00:04:55.962166 3480 state_mem.go:88] "Updated default CPUSet" cpuSet="" Nov 8 00:04:55.962602 kubelet[3480]: I1108 00:04:55.962199 3480 state_mem.go:96] "Updated CPUSet assignments" assignments={} Nov 8 00:04:55.962602 kubelet[3480]: I1108 00:04:55.962234 3480 policy_none.go:49] "None policy: Start" Nov 8 00:04:55.962602 kubelet[3480]: I1108 00:04:55.962253 3480 memory_manager.go:187] "Starting memorymanager" policy="None" Nov 8 00:04:55.962602 kubelet[3480]: I1108 00:04:55.962275 3480 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Nov 8 00:04:55.962602 kubelet[3480]: I1108 00:04:55.962463 3480 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Nov 8 00:04:55.962602 kubelet[3480]: I1108 00:04:55.962480 3480 policy_none.go:47] "Start" Nov 8 00:04:55.976243 kubelet[3480]: E1108 00:04:55.976182 3480 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Nov 8 00:04:55.976513 kubelet[3480]: I1108 00:04:55.976472 3480 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 8 00:04:55.976577 kubelet[3480]: I1108 00:04:55.976511 3480 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 8 00:04:55.978129 kubelet[3480]: I1108 00:04:55.977685 3480 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 8 00:04:55.983564 kubelet[3480]: E1108 00:04:55.981504 3480 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Nov 8 00:04:56.101451 kubelet[3480]: I1108 00:04:56.101122 3480 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-26-1" Nov 8 00:04:56.117990 kubelet[3480]: I1108 00:04:56.117923 3480 kubelet_node_status.go:124] "Node was previously registered" node="ip-172-31-26-1" Nov 8 00:04:56.118181 kubelet[3480]: I1108 00:04:56.118072 3480 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-26-1" Nov 8 00:04:56.163724 kubelet[3480]: I1108 00:04:56.160797 3480 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-26-1" Nov 8 00:04:56.163724 kubelet[3480]: I1108 00:04:56.161416 3480 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-26-1" Nov 8 00:04:56.163724 kubelet[3480]: I1108 00:04:56.161914 3480 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-26-1" Nov 8 00:04:56.174968 kubelet[3480]: E1108 00:04:56.174926 3480 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-26-1\" already exists" pod="kube-system/kube-controller-manager-ip-172-31-26-1" Nov 8 00:04:56.218640 kubelet[3480]: I1108 00:04:56.218592 3480 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8086659fde75c1b2ac181384012feb5e-k8s-certs\") pod \"kube-controller-manager-ip-172-31-26-1\" (UID: \"8086659fde75c1b2ac181384012feb5e\") " pod="kube-system/kube-controller-manager-ip-172-31-26-1" Nov 8 00:04:56.219590 kubelet[3480]: I1108 00:04:56.218900 3480 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8086659fde75c1b2ac181384012feb5e-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-26-1\" (UID: \"8086659fde75c1b2ac181384012feb5e\") " pod="kube-system/kube-controller-manager-ip-172-31-26-1" Nov 8 00:04:56.219826 kubelet[3480]: I1108 00:04:56.219780 3480 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/26fb849cb067c21ae54d220e570b5ffb-kubeconfig\") pod \"kube-scheduler-ip-172-31-26-1\" (UID: \"26fb849cb067c21ae54d220e570b5ffb\") " pod="kube-system/kube-scheduler-ip-172-31-26-1" Nov 8 00:04:56.220028 kubelet[3480]: I1108 00:04:56.219971 3480 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6b1a0310510be32ad84c1e1c2a5c30d5-ca-certs\") pod \"kube-apiserver-ip-172-31-26-1\" (UID: \"6b1a0310510be32ad84c1e1c2a5c30d5\") " pod="kube-system/kube-apiserver-ip-172-31-26-1" Nov 8 00:04:56.220845 kubelet[3480]: I1108 00:04:56.220175 3480 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6b1a0310510be32ad84c1e1c2a5c30d5-k8s-certs\") pod \"kube-apiserver-ip-172-31-26-1\" (UID: \"6b1a0310510be32ad84c1e1c2a5c30d5\") " pod="kube-system/kube-apiserver-ip-172-31-26-1" Nov 8 00:04:56.221064 kubelet[3480]: I1108 00:04:56.220993 3480 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6b1a0310510be32ad84c1e1c2a5c30d5-usr-share-ca-certificates\") pod 
\"kube-apiserver-ip-172-31-26-1\" (UID: \"6b1a0310510be32ad84c1e1c2a5c30d5\") " pod="kube-system/kube-apiserver-ip-172-31-26-1" Nov 8 00:04:56.221258 kubelet[3480]: I1108 00:04:56.221211 3480 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8086659fde75c1b2ac181384012feb5e-ca-certs\") pod \"kube-controller-manager-ip-172-31-26-1\" (UID: \"8086659fde75c1b2ac181384012feb5e\") " pod="kube-system/kube-controller-manager-ip-172-31-26-1" Nov 8 00:04:56.221478 kubelet[3480]: I1108 00:04:56.221419 3480 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8086659fde75c1b2ac181384012feb5e-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-26-1\" (UID: \"8086659fde75c1b2ac181384012feb5e\") " pod="kube-system/kube-controller-manager-ip-172-31-26-1" Nov 8 00:04:56.221690 kubelet[3480]: I1108 00:04:56.221602 3480 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8086659fde75c1b2ac181384012feb5e-kubeconfig\") pod \"kube-controller-manager-ip-172-31-26-1\" (UID: \"8086659fde75c1b2ac181384012feb5e\") " pod="kube-system/kube-controller-manager-ip-172-31-26-1" Nov 8 00:04:56.740812 kubelet[3480]: I1108 00:04:56.740751 3480 apiserver.go:52] "Watching apiserver" Nov 8 00:04:56.817223 kubelet[3480]: I1108 00:04:56.817152 3480 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Nov 8 00:04:56.961554 kubelet[3480]: I1108 00:04:56.961428 3480 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-26-1" podStartSLOduration=0.961350058 podStartE2EDuration="961.350058ms" podCreationTimestamp="2025-11-08 00:04:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 00:04:56.959615182 +0000 UTC m=+1.383305192" watchObservedRunningTime="2025-11-08 00:04:56.961350058 +0000 UTC m=+1.385040068" Nov 8 00:04:56.977581 kubelet[3480]: I1108 00:04:56.975450 3480 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-26-1" podStartSLOduration=0.97542781 podStartE2EDuration="975.42781ms" podCreationTimestamp="2025-11-08 00:04:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 00:04:56.971717938 +0000 UTC m=+1.395407984" watchObservedRunningTime="2025-11-08 00:04:56.97542781 +0000 UTC m=+1.399117820" Nov 8 00:05:00.524129 kubelet[3480]: I1108 00:05:00.524077 3480 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Nov 8 00:05:00.525096 kubelet[3480]: I1108 00:05:00.524939 3480 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Nov 8 00:05:00.525244 containerd[2017]: time="2025-11-08T00:05:00.524600291Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Nov 8 00:05:01.269235 systemd[1]: Created slice kubepods-besteffort-pod3e24c9b7_9282_4bb6_9063_a18538759df4.slice - libcontainer container kubepods-besteffort-pod3e24c9b7_9282_4bb6_9063_a18538759df4.slice. 
Nov 8 00:05:01.360156 kubelet[3480]: I1108 00:05:01.359535 3480 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3e24c9b7-9282-4bb6-9063-a18538759df4-xtables-lock\") pod \"kube-proxy-zc4kk\" (UID: \"3e24c9b7-9282-4bb6-9063-a18538759df4\") " pod="kube-system/kube-proxy-zc4kk" Nov 8 00:05:01.360156 kubelet[3480]: I1108 00:05:01.359597 3480 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3e24c9b7-9282-4bb6-9063-a18538759df4-lib-modules\") pod \"kube-proxy-zc4kk\" (UID: \"3e24c9b7-9282-4bb6-9063-a18538759df4\") " pod="kube-system/kube-proxy-zc4kk" Nov 8 00:05:01.360156 kubelet[3480]: I1108 00:05:01.359643 3480 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gndl4\" (UniqueName: \"kubernetes.io/projected/3e24c9b7-9282-4bb6-9063-a18538759df4-kube-api-access-gndl4\") pod \"kube-proxy-zc4kk\" (UID: \"3e24c9b7-9282-4bb6-9063-a18538759df4\") " pod="kube-system/kube-proxy-zc4kk" Nov 8 00:05:01.360156 kubelet[3480]: I1108 00:05:01.359693 3480 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/3e24c9b7-9282-4bb6-9063-a18538759df4-kube-proxy\") pod \"kube-proxy-zc4kk\" (UID: \"3e24c9b7-9282-4bb6-9063-a18538759df4\") " pod="kube-system/kube-proxy-zc4kk" Nov 8 00:05:01.588342 containerd[2017]: time="2025-11-08T00:05:01.588270733Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-zc4kk,Uid:3e24c9b7-9282-4bb6-9063-a18538759df4,Namespace:kube-system,Attempt:0,}" Nov 8 00:05:01.653903 containerd[2017]: time="2025-11-08T00:05:01.652548457Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:05:01.654218 containerd[2017]: time="2025-11-08T00:05:01.653974585Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:05:01.654359 containerd[2017]: time="2025-11-08T00:05:01.654270805Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:05:01.655050 containerd[2017]: time="2025-11-08T00:05:01.654730813Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:05:01.714345 systemd[1]: Started cri-containerd-2e4292a9ee93c299532a36f551792f5d991b98ada51d3d09bc33818e0474fcf4.scope - libcontainer container 2e4292a9ee93c299532a36f551792f5d991b98ada51d3d09bc33818e0474fcf4. Nov 8 00:05:01.738663 systemd[1]: Created slice kubepods-besteffort-podbb208d47_3f7e_4019_b4b2_c4fc3fc07165.slice - libcontainer container kubepods-besteffort-podbb208d47_3f7e_4019_b4b2_c4fc3fc07165.slice. 
Nov 8 00:05:01.763855 kubelet[3480]: I1108 00:05:01.763340 3480 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4snst\" (UniqueName: \"kubernetes.io/projected/bb208d47-3f7e-4019-b4b2-c4fc3fc07165-kube-api-access-4snst\") pod \"tigera-operator-65cdcdfd6d-m8dkj\" (UID: \"bb208d47-3f7e-4019-b4b2-c4fc3fc07165\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-m8dkj" Nov 8 00:05:01.763855 kubelet[3480]: I1108 00:05:01.763428 3480 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/bb208d47-3f7e-4019-b4b2-c4fc3fc07165-var-lib-calico\") pod \"tigera-operator-65cdcdfd6d-m8dkj\" (UID: \"bb208d47-3f7e-4019-b4b2-c4fc3fc07165\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-m8dkj" Nov 8 00:05:01.833194 containerd[2017]: time="2025-11-08T00:05:01.833088794Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-zc4kk,Uid:3e24c9b7-9282-4bb6-9063-a18538759df4,Namespace:kube-system,Attempt:0,} returns sandbox id \"2e4292a9ee93c299532a36f551792f5d991b98ada51d3d09bc33818e0474fcf4\"" Nov 8 00:05:01.847861 containerd[2017]: time="2025-11-08T00:05:01.847642634Z" level=info msg="CreateContainer within sandbox \"2e4292a9ee93c299532a36f551792f5d991b98ada51d3d09bc33818e0474fcf4\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Nov 8 00:05:01.887116 containerd[2017]: time="2025-11-08T00:05:01.886929194Z" level=info msg="CreateContainer within sandbox \"2e4292a9ee93c299532a36f551792f5d991b98ada51d3d09bc33818e0474fcf4\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"3ff58c619663089e486d7c832bcd641e3f0476f11df6fc7055277e97db15570c\"" Nov 8 00:05:01.893184 containerd[2017]: time="2025-11-08T00:05:01.889386710Z" level=info msg="StartContainer for \"3ff58c619663089e486d7c832bcd641e3f0476f11df6fc7055277e97db15570c\"" Nov 8 00:05:01.941347 systemd[1]: Started cri-containerd-3ff58c619663089e486d7c832bcd641e3f0476f11df6fc7055277e97db15570c.scope - libcontainer container 3ff58c619663089e486d7c832bcd641e3f0476f11df6fc7055277e97db15570c. Nov 8 00:05:01.996665 containerd[2017]: time="2025-11-08T00:05:01.996604875Z" level=info msg="StartContainer for \"3ff58c619663089e486d7c832bcd641e3f0476f11df6fc7055277e97db15570c\" returns successfully" Nov 8 00:05:02.053215 containerd[2017]: time="2025-11-08T00:05:02.053140667Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-m8dkj,Uid:bb208d47-3f7e-4019-b4b2-c4fc3fc07165,Namespace:tigera-operator,Attempt:0,}" Nov 8 00:05:02.130460 containerd[2017]: time="2025-11-08T00:05:02.129952031Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:05:02.130460 containerd[2017]: time="2025-11-08T00:05:02.130100627Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:05:02.130460 containerd[2017]: time="2025-11-08T00:05:02.130139171Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:05:02.131632 containerd[2017]: time="2025-11-08T00:05:02.131158979Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:05:02.164656 systemd[1]: Started cri-containerd-c3c2d44d17786017114d15be7cd9fb702be9f5dbb660bf8527539333bc83ffa1.scope - libcontainer container c3c2d44d17786017114d15be7cd9fb702be9f5dbb660bf8527539333bc83ffa1. Nov 8 00:05:02.250494 containerd[2017]: time="2025-11-08T00:05:02.250438692Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-m8dkj,Uid:bb208d47-3f7e-4019-b4b2-c4fc3fc07165,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"c3c2d44d17786017114d15be7cd9fb702be9f5dbb660bf8527539333bc83ffa1\"" Nov 8 00:05:02.254569 containerd[2017]: time="2025-11-08T00:05:02.254512428Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Nov 8 00:05:02.975099 kubelet[3480]: I1108 00:05:02.974453 3480 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-zc4kk" podStartSLOduration=1.974429895 podStartE2EDuration="1.974429895s" podCreationTimestamp="2025-11-08 00:05:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 00:05:02.955752291 +0000 UTC m=+7.379442337" watchObservedRunningTime="2025-11-08 00:05:02.974429895 +0000 UTC m=+7.398119917" Nov 8 00:05:04.759453 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2319196249.mount: Deactivated successfully. Nov 8 00:05:06.846216 containerd[2017]: time="2025-11-08T00:05:06.846124027Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:05:06.848675 containerd[2017]: time="2025-11-08T00:05:06.848266615Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=22152004" Nov 8 00:05:06.851540 containerd[2017]: time="2025-11-08T00:05:06.850830571Z" level=info msg="ImageCreate event name:\"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:05:06.856243 containerd[2017]: time="2025-11-08T00:05:06.856180315Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:05:06.858171 containerd[2017]: time="2025-11-08T00:05:06.857971615Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"22147999\" in 4.603392311s" Nov 8 00:05:06.858347 containerd[2017]: time="2025-11-08T00:05:06.858169675Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\"" Nov 8 00:05:06.868443 containerd[2017]: time="2025-11-08T00:05:06.868380583Z" level=info msg="CreateContainer within sandbox \"c3c2d44d17786017114d15be7cd9fb702be9f5dbb660bf8527539333bc83ffa1\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Nov 8 00:05:06.899393 containerd[2017]: time="2025-11-08T00:05:06.899180839Z" level=info msg="CreateContainer within sandbox \"c3c2d44d17786017114d15be7cd9fb702be9f5dbb660bf8527539333bc83ffa1\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns 
container id \"63a425e28bdc5f50ff3304238244bbdd478f8c342c6ae3ac8858ed4f1d40a2c3\"" Nov 8 00:05:06.904774 containerd[2017]: time="2025-11-08T00:05:06.902225191Z" level=info msg="StartContainer for \"63a425e28bdc5f50ff3304238244bbdd478f8c342c6ae3ac8858ed4f1d40a2c3\"" Nov 8 00:05:06.970379 systemd[1]: Started cri-containerd-63a425e28bdc5f50ff3304238244bbdd478f8c342c6ae3ac8858ed4f1d40a2c3.scope - libcontainer container 63a425e28bdc5f50ff3304238244bbdd478f8c342c6ae3ac8858ed4f1d40a2c3. Nov 8 00:05:07.023441 containerd[2017]: time="2025-11-08T00:05:07.023362504Z" level=info msg="StartContainer for \"63a425e28bdc5f50ff3304238244bbdd478f8c342c6ae3ac8858ed4f1d40a2c3\" returns successfully" Nov 8 00:05:07.985939 kubelet[3480]: I1108 00:05:07.985799 3480 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-65cdcdfd6d-m8dkj" podStartSLOduration=2.378116017 podStartE2EDuration="6.985774076s" podCreationTimestamp="2025-11-08 00:05:01 +0000 UTC" firstStartedPulling="2025-11-08 00:05:02.25302792 +0000 UTC m=+6.676717942" lastFinishedPulling="2025-11-08 00:05:06.860685991 +0000 UTC m=+11.284376001" observedRunningTime="2025-11-08 00:05:07.982746176 +0000 UTC m=+12.406436234" watchObservedRunningTime="2025-11-08 00:05:07.985774076 +0000 UTC m=+12.409464098" Nov 8 00:05:16.326404 sudo[2333]: pam_unix(sudo:session): session closed for user root Nov 8 00:05:16.351077 sshd[2329]: pam_unix(sshd:session): session closed for user core Nov 8 00:05:16.359719 systemd[1]: sshd@6-172.31.26.1:22-139.178.89.65:37466.service: Deactivated successfully. Nov 8 00:05:16.367864 systemd[1]: session-7.scope: Deactivated successfully. Nov 8 00:05:16.372425 systemd[1]: session-7.scope: Consumed 10.540s CPU time, 151.8M memory peak, 0B memory swap peak. Nov 8 00:05:16.377869 systemd-logind[1992]: Session 7 logged out. Waiting for processes to exit. Nov 8 00:05:16.382519 systemd-logind[1992]: Removed session 7. Nov 8 00:05:37.342750 systemd[1]: Created slice kubepods-besteffort-pod10fca90c_bce8_46fa_b9c3_56fff8824398.slice - libcontainer container kubepods-besteffort-pod10fca90c_bce8_46fa_b9c3_56fff8824398.slice. 
Nov 8 00:05:37.413717 kubelet[3480]: I1108 00:05:37.413418 3480 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/10fca90c-bce8-46fa-b9c3-56fff8824398-typha-certs\") pod \"calico-typha-548df5774-9gdmr\" (UID: \"10fca90c-bce8-46fa-b9c3-56fff8824398\") " pod="calico-system/calico-typha-548df5774-9gdmr" Nov 8 00:05:37.413717 kubelet[3480]: I1108 00:05:37.413488 3480 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-54l82\" (UniqueName: \"kubernetes.io/projected/10fca90c-bce8-46fa-b9c3-56fff8824398-kube-api-access-54l82\") pod \"calico-typha-548df5774-9gdmr\" (UID: \"10fca90c-bce8-46fa-b9c3-56fff8824398\") " pod="calico-system/calico-typha-548df5774-9gdmr" Nov 8 00:05:37.413717 kubelet[3480]: I1108 00:05:37.413556 3480 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/10fca90c-bce8-46fa-b9c3-56fff8824398-tigera-ca-bundle\") pod \"calico-typha-548df5774-9gdmr\" (UID: \"10fca90c-bce8-46fa-b9c3-56fff8824398\") " pod="calico-system/calico-typha-548df5774-9gdmr" Nov 8 00:05:37.541782 systemd[1]: Created slice kubepods-besteffort-pod4a431b9c_fa06_48d8_838d_9c2137687c30.slice - libcontainer container kubepods-besteffort-pod4a431b9c_fa06_48d8_838d_9c2137687c30.slice. Nov 8 00:05:37.615274 kubelet[3480]: I1108 00:05:37.615109 3480 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/4a431b9c-fa06-48d8-838d-9c2137687c30-flexvol-driver-host\") pod \"calico-node-7nrhr\" (UID: \"4a431b9c-fa06-48d8-838d-9c2137687c30\") " pod="calico-system/calico-node-7nrhr" Nov 8 00:05:37.615274 kubelet[3480]: I1108 00:05:37.615182 3480 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/4a431b9c-fa06-48d8-838d-9c2137687c30-var-run-calico\") pod \"calico-node-7nrhr\" (UID: \"4a431b9c-fa06-48d8-838d-9c2137687c30\") " pod="calico-system/calico-node-7nrhr" Nov 8 00:05:37.615274 kubelet[3480]: I1108 00:05:37.615223 3480 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4a431b9c-fa06-48d8-838d-9c2137687c30-xtables-lock\") pod \"calico-node-7nrhr\" (UID: \"4a431b9c-fa06-48d8-838d-9c2137687c30\") " pod="calico-system/calico-node-7nrhr" Nov 8 00:05:37.617556 kubelet[3480]: I1108 00:05:37.617384 3480 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bkkmb\" (UniqueName: \"kubernetes.io/projected/4a431b9c-fa06-48d8-838d-9c2137687c30-kube-api-access-bkkmb\") pod \"calico-node-7nrhr\" (UID: \"4a431b9c-fa06-48d8-838d-9c2137687c30\") " pod="calico-system/calico-node-7nrhr" Nov 8 00:05:37.617775 kubelet[3480]: I1108 00:05:37.617589 3480 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/4a431b9c-fa06-48d8-838d-9c2137687c30-cni-net-dir\") pod \"calico-node-7nrhr\" (UID: \"4a431b9c-fa06-48d8-838d-9c2137687c30\") " pod="calico-system/calico-node-7nrhr" Nov 8 00:05:37.618087 kubelet[3480]: I1108 00:05:37.617913 3480 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"node-certs\" (UniqueName: \"kubernetes.io/secret/4a431b9c-fa06-48d8-838d-9c2137687c30-node-certs\") pod \"calico-node-7nrhr\" (UID: \"4a431b9c-fa06-48d8-838d-9c2137687c30\") " pod="calico-system/calico-node-7nrhr" Nov 8 00:05:37.618087 kubelet[3480]: I1108 00:05:37.617963 3480 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/4a431b9c-fa06-48d8-838d-9c2137687c30-policysync\") pod \"calico-node-7nrhr\" (UID: \"4a431b9c-fa06-48d8-838d-9c2137687c30\") " pod="calico-system/calico-node-7nrhr" Nov 8 00:05:37.618087 kubelet[3480]: I1108 00:05:37.618004 3480 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4a431b9c-fa06-48d8-838d-9c2137687c30-tigera-ca-bundle\") pod \"calico-node-7nrhr\" (UID: \"4a431b9c-fa06-48d8-838d-9c2137687c30\") " pod="calico-system/calico-node-7nrhr" Nov 8 00:05:37.618585 kubelet[3480]: I1108 00:05:37.618329 3480 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/4a431b9c-fa06-48d8-838d-9c2137687c30-var-lib-calico\") pod \"calico-node-7nrhr\" (UID: \"4a431b9c-fa06-48d8-838d-9c2137687c30\") " pod="calico-system/calico-node-7nrhr" Nov 8 00:05:37.618585 kubelet[3480]: I1108 00:05:37.618412 3480 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4a431b9c-fa06-48d8-838d-9c2137687c30-lib-modules\") pod \"calico-node-7nrhr\" (UID: \"4a431b9c-fa06-48d8-838d-9c2137687c30\") " pod="calico-system/calico-node-7nrhr" Nov 8 00:05:37.618585 kubelet[3480]: I1108 00:05:37.618474 3480 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/4a431b9c-fa06-48d8-838d-9c2137687c30-cni-bin-dir\") pod \"calico-node-7nrhr\" (UID: \"4a431b9c-fa06-48d8-838d-9c2137687c30\") " pod="calico-system/calico-node-7nrhr" Nov 8 00:05:37.618585 kubelet[3480]: I1108 00:05:37.618519 3480 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/4a431b9c-fa06-48d8-838d-9c2137687c30-cni-log-dir\") pod \"calico-node-7nrhr\" (UID: \"4a431b9c-fa06-48d8-838d-9c2137687c30\") " pod="calico-system/calico-node-7nrhr" Nov 8 00:05:37.633146 kubelet[3480]: E1108 00:05:37.633068 3480 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rkzrr" podUID="105dae3d-b44c-41c4-b31a-bd1432c68a75" Nov 8 00:05:37.653898 containerd[2017]: time="2025-11-08T00:05:37.653826144Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-548df5774-9gdmr,Uid:10fca90c-bce8-46fa-b9c3-56fff8824398,Namespace:calico-system,Attempt:0,}" Nov 8 00:05:37.717112 containerd[2017]: time="2025-11-08T00:05:37.713511348Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:05:37.717112 containerd[2017]: time="2025-11-08T00:05:37.713698512Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:05:37.717112 containerd[2017]: time="2025-11-08T00:05:37.713725488Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:05:37.717112 containerd[2017]: time="2025-11-08T00:05:37.713894976Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:05:37.721058 kubelet[3480]: I1108 00:05:37.719341 3480 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zc8lb\" (UniqueName: \"kubernetes.io/projected/105dae3d-b44c-41c4-b31a-bd1432c68a75-kube-api-access-zc8lb\") pod \"csi-node-driver-rkzrr\" (UID: \"105dae3d-b44c-41c4-b31a-bd1432c68a75\") " pod="calico-system/csi-node-driver-rkzrr" Nov 8 00:05:37.721058 kubelet[3480]: I1108 00:05:37.719408 3480 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/105dae3d-b44c-41c4-b31a-bd1432c68a75-varrun\") pod \"csi-node-driver-rkzrr\" (UID: \"105dae3d-b44c-41c4-b31a-bd1432c68a75\") " pod="calico-system/csi-node-driver-rkzrr" Nov 8 00:05:37.721058 kubelet[3480]: I1108 00:05:37.719521 3480 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/105dae3d-b44c-41c4-b31a-bd1432c68a75-kubelet-dir\") pod \"csi-node-driver-rkzrr\" (UID: \"105dae3d-b44c-41c4-b31a-bd1432c68a75\") " pod="calico-system/csi-node-driver-rkzrr" Nov 8 00:05:37.721058 kubelet[3480]: I1108 00:05:37.719657 3480 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/105dae3d-b44c-41c4-b31a-bd1432c68a75-registration-dir\") pod \"csi-node-driver-rkzrr\" (UID: \"105dae3d-b44c-41c4-b31a-bd1432c68a75\") " pod="calico-system/csi-node-driver-rkzrr" Nov 8 00:05:37.721058 kubelet[3480]: I1108 00:05:37.719697 3480 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/105dae3d-b44c-41c4-b31a-bd1432c68a75-socket-dir\") pod \"csi-node-driver-rkzrr\" (UID: \"105dae3d-b44c-41c4-b31a-bd1432c68a75\") " pod="calico-system/csi-node-driver-rkzrr" Nov 8 00:05:37.736109 kubelet[3480]: E1108 00:05:37.733816 3480 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:05:37.736109 kubelet[3480]: W1108 00:05:37.733877 3480 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:05:37.736109 kubelet[3480]: E1108 00:05:37.733915 3480 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:05:37.756714 kubelet[3480]: E1108 00:05:37.756254 3480 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:05:37.756714 kubelet[3480]: W1108 00:05:37.756314 3480 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:05:37.756714 kubelet[3480]: E1108 00:05:37.756350 3480 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:05:37.808786 kubelet[3480]: E1108 00:05:37.808335 3480 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:05:37.808786 kubelet[3480]: W1108 00:05:37.808370 3480 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:05:37.808786 kubelet[3480]: E1108 00:05:37.808404 3480 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:05:37.814372 systemd[1]: Started cri-containerd-331478785150a9bcf4b1f77a5811faf0d8509ecc428834f573402715770fa538.scope - libcontainer container 331478785150a9bcf4b1f77a5811faf0d8509ecc428834f573402715770fa538. Nov 8 00:05:37.821816 kubelet[3480]: E1108 00:05:37.821758 3480 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:05:37.821816 kubelet[3480]: W1108 00:05:37.821798 3480 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:05:37.822057 kubelet[3480]: E1108 00:05:37.821833 3480 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:05:37.824203 kubelet[3480]: E1108 00:05:37.824035 3480 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:05:37.824203 kubelet[3480]: W1108 00:05:37.824195 3480 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:05:37.824574 kubelet[3480]: E1108 00:05:37.824230 3480 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:05:37.849454 kubelet[3480]: E1108 00:05:37.849420 3480 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:05:37.849454 kubelet[3480]: W1108 00:05:37.849447 3480 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:05:37.849613 kubelet[3480]: E1108 00:05:37.849473 3480 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:05:37.849991 kubelet[3480]: E1108 00:05:37.849954 3480 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:05:37.849991 kubelet[3480]: W1108 00:05:37.849984 3480 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:05:37.850164 kubelet[3480]: E1108 00:05:37.850045 3480 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:05:37.850912 kubelet[3480]: E1108 00:05:37.850865 3480 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:05:37.850912 kubelet[3480]: W1108 00:05:37.850901 3480 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:05:37.851115 kubelet[3480]: E1108 00:05:37.850929 3480 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:05:37.853927 containerd[2017]: time="2025-11-08T00:05:37.853798477Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-7nrhr,Uid:4a431b9c-fa06-48d8-838d-9c2137687c30,Namespace:calico-system,Attempt:0,}" Nov 8 00:05:37.875699 kubelet[3480]: E1108 00:05:37.874375 3480 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:05:37.875699 kubelet[3480]: W1108 00:05:37.874419 3480 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:05:37.875699 kubelet[3480]: E1108 00:05:37.874455 3480 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:05:37.909539 containerd[2017]: time="2025-11-08T00:05:37.908167525Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:05:37.909539 containerd[2017]: time="2025-11-08T00:05:37.908291761Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:05:37.909539 containerd[2017]: time="2025-11-08T00:05:37.908330533Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:05:37.909871 containerd[2017]: time="2025-11-08T00:05:37.908763685Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:05:37.960343 systemd[1]: Started cri-containerd-373542ff35f5e66d794f4f78e3bf871106b2243a7a109255cd1033434c757114.scope - libcontainer container 373542ff35f5e66d794f4f78e3bf871106b2243a7a109255cd1033434c757114. Nov 8 00:05:37.969989 containerd[2017]: time="2025-11-08T00:05:37.969523453Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-548df5774-9gdmr,Uid:10fca90c-bce8-46fa-b9c3-56fff8824398,Namespace:calico-system,Attempt:0,} returns sandbox id \"331478785150a9bcf4b1f77a5811faf0d8509ecc428834f573402715770fa538\"" Nov 8 00:05:37.974154 containerd[2017]: time="2025-11-08T00:05:37.973965601Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Nov 8 00:05:38.015970 containerd[2017]: time="2025-11-08T00:05:38.015904966Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-7nrhr,Uid:4a431b9c-fa06-48d8-838d-9c2137687c30,Namespace:calico-system,Attempt:0,} returns sandbox id \"373542ff35f5e66d794f4f78e3bf871106b2243a7a109255cd1033434c757114\"" Nov 8 00:05:38.860389 kubelet[3480]: E1108 00:05:38.859656 3480 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rkzrr" podUID="105dae3d-b44c-41c4-b31a-bd1432c68a75" Nov 8 00:05:39.550766 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3650844384.mount: Deactivated successfully. 
Nov 8 00:05:40.439153 containerd[2017]: time="2025-11-08T00:05:40.439070630Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:05:40.440951 containerd[2017]: time="2025-11-08T00:05:40.440620766Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=33090687" Nov 8 00:05:40.443122 containerd[2017]: time="2025-11-08T00:05:40.442829462Z" level=info msg="ImageCreate event name:\"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:05:40.449705 containerd[2017]: time="2025-11-08T00:05:40.449636054Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:05:40.451694 containerd[2017]: time="2025-11-08T00:05:40.451269254Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"33090541\" in 2.477179441s" Nov 8 00:05:40.451694 containerd[2017]: time="2025-11-08T00:05:40.451328714Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\"" Nov 8 00:05:40.454658 containerd[2017]: time="2025-11-08T00:05:40.454403678Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Nov 8 00:05:40.490227 containerd[2017]: time="2025-11-08T00:05:40.490165730Z" level=info msg="CreateContainer within sandbox \"331478785150a9bcf4b1f77a5811faf0d8509ecc428834f573402715770fa538\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Nov 8 00:05:40.520707 containerd[2017]: time="2025-11-08T00:05:40.520630898Z" level=info msg="CreateContainer within sandbox \"331478785150a9bcf4b1f77a5811faf0d8509ecc428834f573402715770fa538\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"1c41dcd691ccbce896e785990b3476d400ec3502528f5fba4979fc73bd5fa09f\"" Nov 8 00:05:40.522096 containerd[2017]: time="2025-11-08T00:05:40.521570474Z" level=info msg="StartContainer for \"1c41dcd691ccbce896e785990b3476d400ec3502528f5fba4979fc73bd5fa09f\"" Nov 8 00:05:40.586378 systemd[1]: Started cri-containerd-1c41dcd691ccbce896e785990b3476d400ec3502528f5fba4979fc73bd5fa09f.scope - libcontainer container 1c41dcd691ccbce896e785990b3476d400ec3502528f5fba4979fc73bd5fa09f. 
Nov 8 00:05:40.655325 containerd[2017]: time="2025-11-08T00:05:40.655241151Z" level=info msg="StartContainer for \"1c41dcd691ccbce896e785990b3476d400ec3502528f5fba4979fc73bd5fa09f\" returns successfully" Nov 8 00:05:40.859774 kubelet[3480]: E1108 00:05:40.859693 3480 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rkzrr" podUID="105dae3d-b44c-41c4-b31a-bd1432c68a75" Nov 8 00:05:41.117868 kubelet[3480]: E1108 00:05:41.117719 3480 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:05:41.117868 kubelet[3480]: W1108 00:05:41.117766 3480 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:05:41.117868 kubelet[3480]: E1108 00:05:41.117804 3480 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:05:41.118910 kubelet[3480]: E1108 00:05:41.118867 3480 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:05:41.119037 kubelet[3480]: W1108 00:05:41.118904 3480 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:05:41.119037 kubelet[3480]: E1108 00:05:41.118977 3480 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:05:41.119709 kubelet[3480]: E1108 00:05:41.119644 3480 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:05:41.119709 kubelet[3480]: W1108 00:05:41.119680 3480 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:05:41.119709 kubelet[3480]: E1108 00:05:41.119711 3480 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:05:41.120629 kubelet[3480]: E1108 00:05:41.120543 3480 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:05:41.120629 kubelet[3480]: W1108 00:05:41.120588 3480 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:05:41.120629 kubelet[3480]: E1108 00:05:41.120620 3480 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:05:41.134072 kubelet[3480]: E1108 00:05:41.133839 3480 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:05:41.134072 kubelet[3480]: W1108 00:05:41.133868 3480 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:05:41.134072 kubelet[3480]: E1108 00:05:41.133895 3480 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:05:41.144429 kubelet[3480]: I1108 00:05:41.144165 3480 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-548df5774-9gdmr" podStartSLOduration=1.66463782 podStartE2EDuration="4.144143893s" podCreationTimestamp="2025-11-08 00:05:37 +0000 UTC" firstStartedPulling="2025-11-08 00:05:37.973489513 +0000 UTC m=+42.397179511" lastFinishedPulling="2025-11-08 00:05:40.452995586 +0000 UTC m=+44.876685584" observedRunningTime="2025-11-08 00:05:41.116275069 +0000 UTC m=+45.539965079" watchObservedRunningTime="2025-11-08 00:05:41.144143893 +0000 UTC m=+45.567833915" Nov 8 00:05:41.160437 kubelet[3480]: E1108 00:05:41.160371 3480 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:05:41.160437 kubelet[3480]: W1108 00:05:41.160414 3480 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:05:41.161205 kubelet[3480]: E1108 00:05:41.160450 3480 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:05:41.162067 kubelet[3480]: E1108 00:05:41.161904 3480 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:05:41.162067 kubelet[3480]: W1108 00:05:41.162059 3480 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:05:41.163069 kubelet[3480]: E1108 00:05:41.162330 3480 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:05:41.163695 kubelet[3480]: E1108 00:05:41.163656 3480 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:05:41.164340 kubelet[3480]: W1108 00:05:41.164202 3480 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:05:41.164340 kubelet[3480]: E1108 00:05:41.164274 3480 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:05:41.165196 kubelet[3480]: E1108 00:05:41.165152 3480 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:05:41.165196 kubelet[3480]: W1108 00:05:41.165188 3480 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:05:41.165905 kubelet[3480]: E1108 00:05:41.165222 3480 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:05:41.166782 kubelet[3480]: E1108 00:05:41.166436 3480 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:05:41.166782 kubelet[3480]: W1108 00:05:41.166469 3480 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:05:41.166782 kubelet[3480]: E1108 00:05:41.166615 3480 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:05:41.168115 kubelet[3480]: E1108 00:05:41.167706 3480 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:05:41.168115 kubelet[3480]: W1108 00:05:41.167738 3480 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:05:41.168115 kubelet[3480]: E1108 00:05:41.167770 3480 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:05:41.169914 kubelet[3480]: E1108 00:05:41.169449 3480 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:05:41.169914 kubelet[3480]: W1108 00:05:41.169482 3480 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:05:41.169914 kubelet[3480]: E1108 00:05:41.169514 3480 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:05:41.171821 kubelet[3480]: E1108 00:05:41.171179 3480 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:05:41.171821 kubelet[3480]: W1108 00:05:41.171207 3480 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:05:41.171821 kubelet[3480]: E1108 00:05:41.171523 3480 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:05:41.174776 kubelet[3480]: E1108 00:05:41.174533 3480 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:05:41.174776 kubelet[3480]: W1108 00:05:41.174598 3480 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:05:41.174776 kubelet[3480]: E1108 00:05:41.174633 3480 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:05:41.176251 kubelet[3480]: E1108 00:05:41.175528 3480 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:05:41.176251 kubelet[3480]: W1108 00:05:41.175582 3480 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:05:41.176251 kubelet[3480]: E1108 00:05:41.175617 3480 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:05:41.177711 kubelet[3480]: E1108 00:05:41.176897 3480 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:05:41.177711 kubelet[3480]: W1108 00:05:41.177069 3480 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:05:41.177711 kubelet[3480]: E1108 00:05:41.177104 3480 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:05:41.178688 kubelet[3480]: E1108 00:05:41.178650 3480 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:05:41.179082 kubelet[3480]: W1108 00:05:41.178837 3480 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:05:41.179497 kubelet[3480]: E1108 00:05:41.178878 3480 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:05:41.181682 kubelet[3480]: E1108 00:05:41.181240 3480 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:05:41.181682 kubelet[3480]: W1108 00:05:41.181275 3480 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:05:41.181682 kubelet[3480]: E1108 00:05:41.181309 3480 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:05:41.183110 kubelet[3480]: E1108 00:05:41.183061 3480 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:05:41.183110 kubelet[3480]: W1108 00:05:41.183098 3480 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:05:41.183357 kubelet[3480]: E1108 00:05:41.183130 3480 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:05:41.184316 kubelet[3480]: E1108 00:05:41.184232 3480 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:05:41.184316 kubelet[3480]: W1108 00:05:41.184306 3480 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:05:41.184663 kubelet[3480]: E1108 00:05:41.184523 3480 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:05:41.185597 kubelet[3480]: E1108 00:05:41.185540 3480 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:05:41.185597 kubelet[3480]: W1108 00:05:41.185599 3480 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:05:41.185813 kubelet[3480]: E1108 00:05:41.185648 3480 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:05:41.187111 kubelet[3480]: E1108 00:05:41.186506 3480 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:05:41.187111 kubelet[3480]: W1108 00:05:41.186564 3480 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:05:41.187111 kubelet[3480]: E1108 00:05:41.186607 3480 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:05:41.188538 kubelet[3480]: E1108 00:05:41.188468 3480 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:05:41.188538 kubelet[3480]: W1108 00:05:41.188511 3480 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:05:41.188538 kubelet[3480]: E1108 00:05:41.188545 3480 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:05:42.046084 containerd[2017]: time="2025-11-08T00:05:42.045382598Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:05:42.048225 containerd[2017]: time="2025-11-08T00:05:42.047850086Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4266741" Nov 8 00:05:42.050295 containerd[2017]: time="2025-11-08T00:05:42.050238134Z" level=info msg="ImageCreate event name:\"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:05:42.055113 containerd[2017]: time="2025-11-08T00:05:42.055028174Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:05:42.057092 containerd[2017]: time="2025-11-08T00:05:42.056430422Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5636392\" in 1.60196114s" Nov 8 00:05:42.057092 containerd[2017]: time="2025-11-08T00:05:42.056491970Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\"" Nov 8 00:05:42.066913 containerd[2017]: time="2025-11-08T00:05:42.066856790Z" level=info msg="CreateContainer within sandbox \"373542ff35f5e66d794f4f78e3bf871106b2243a7a109255cd1033434c757114\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Nov 8 00:05:42.098825 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2563299100.mount: Deactivated successfully. Nov 8 00:05:42.100215 containerd[2017]: time="2025-11-08T00:05:42.100143578Z" level=info msg="CreateContainer within sandbox \"373542ff35f5e66d794f4f78e3bf871106b2243a7a109255cd1033434c757114\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"d50a610129f5fae761882e780786c9f1f353d9699ff8ac0307932ebc19b45e02\"" Nov 8 00:05:42.103299 containerd[2017]: time="2025-11-08T00:05:42.103216898Z" level=info msg="StartContainer for \"d50a610129f5fae761882e780786c9f1f353d9699ff8ac0307932ebc19b45e02\"" Nov 8 00:05:42.143724 kubelet[3480]: E1108 00:05:42.143385 3480 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:05:42.143724 kubelet[3480]: W1108 00:05:42.143443 3480 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:05:42.143724 kubelet[3480]: E1108 00:05:42.143481 3480 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:05:42.145604 kubelet[3480]: E1108 00:05:42.144475 3480 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:05:42.145604 kubelet[3480]: W1108 00:05:42.144502 3480 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:05:42.145604 kubelet[3480]: E1108 00:05:42.144534 3480 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:05:42.145604 kubelet[3480]: E1108 00:05:42.145009 3480 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:05:42.145604 kubelet[3480]: W1108 00:05:42.145079 3480 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:05:42.145604 kubelet[3480]: E1108 00:05:42.145151 3480 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:05:42.146535 kubelet[3480]: E1108 00:05:42.146054 3480 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:05:42.146535 kubelet[3480]: W1108 00:05:42.146085 3480 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:05:42.146535 kubelet[3480]: E1108 00:05:42.146206 3480 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:05:42.147228 kubelet[3480]: E1108 00:05:42.146921 3480 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:05:42.147228 kubelet[3480]: W1108 00:05:42.146949 3480 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:05:42.147228 kubelet[3480]: E1108 00:05:42.146980 3480 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:05:42.148222 kubelet[3480]: E1108 00:05:42.147882 3480 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:05:42.148222 kubelet[3480]: W1108 00:05:42.147909 3480 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:05:42.148222 kubelet[3480]: E1108 00:05:42.147964 3480 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:05:42.153055 kubelet[3480]: E1108 00:05:42.150101 3480 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:05:42.153055 kubelet[3480]: W1108 00:05:42.150132 3480 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:05:42.153055 kubelet[3480]: E1108 00:05:42.150164 3480 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:05:42.154438 kubelet[3480]: E1108 00:05:42.154351 3480 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:05:42.157925 kubelet[3480]: W1108 00:05:42.156843 3480 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:05:42.157925 kubelet[3480]: E1108 00:05:42.156898 3480 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:05:42.161079 kubelet[3480]: E1108 00:05:42.160252 3480 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:05:42.161079 kubelet[3480]: W1108 00:05:42.160289 3480 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:05:42.161079 kubelet[3480]: E1108 00:05:42.160325 3480 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:05:42.162917 kubelet[3480]: E1108 00:05:42.162725 3480 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:05:42.162917 kubelet[3480]: W1108 00:05:42.162758 3480 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:05:42.162917 kubelet[3480]: E1108 00:05:42.162792 3480 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:05:42.164850 kubelet[3480]: E1108 00:05:42.164316 3480 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:05:42.164850 kubelet[3480]: W1108 00:05:42.164346 3480 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:05:42.164850 kubelet[3480]: E1108 00:05:42.164378 3480 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:05:42.165615 kubelet[3480]: E1108 00:05:42.165453 3480 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:05:42.165615 kubelet[3480]: W1108 00:05:42.165480 3480 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:05:42.165615 kubelet[3480]: E1108 00:05:42.165510 3480 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:05:42.169098 kubelet[3480]: E1108 00:05:42.168288 3480 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:05:42.169098 kubelet[3480]: W1108 00:05:42.168333 3480 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:05:42.169098 kubelet[3480]: E1108 00:05:42.168367 3480 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:05:42.169997 kubelet[3480]: E1108 00:05:42.169401 3480 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:05:42.169997 kubelet[3480]: W1108 00:05:42.169430 3480 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:05:42.169997 kubelet[3480]: E1108 00:05:42.169463 3480 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:05:42.171195 kubelet[3480]: E1108 00:05:42.171164 3480 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:05:42.171715 kubelet[3480]: W1108 00:05:42.171308 3480 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:05:42.171715 kubelet[3480]: E1108 00:05:42.171349 3480 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:05:42.173508 kubelet[3480]: E1108 00:05:42.173261 3480 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:05:42.173508 kubelet[3480]: W1108 00:05:42.173290 3480 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:05:42.173508 kubelet[3480]: E1108 00:05:42.173319 3480 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:05:42.173879 kubelet[3480]: E1108 00:05:42.173857 3480 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:05:42.173990 kubelet[3480]: W1108 00:05:42.173966 3480 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:05:42.174313 kubelet[3480]: E1108 00:05:42.174116 3480 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:05:42.174718 kubelet[3480]: E1108 00:05:42.174490 3480 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:05:42.174718 kubelet[3480]: W1108 00:05:42.174510 3480 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:05:42.174718 kubelet[3480]: E1108 00:05:42.174531 3480 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:05:42.174985 kubelet[3480]: E1108 00:05:42.174966 3480 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:05:42.175131 kubelet[3480]: W1108 00:05:42.175109 3480 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:05:42.175247 kubelet[3480]: E1108 00:05:42.175226 3480 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:05:42.175844 kubelet[3480]: E1108 00:05:42.175676 3480 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:05:42.175844 kubelet[3480]: W1108 00:05:42.175696 3480 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:05:42.175844 kubelet[3480]: E1108 00:05:42.175718 3480 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:05:42.176195 kubelet[3480]: E1108 00:05:42.176172 3480 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:05:42.176318 systemd[1]: Started cri-containerd-d50a610129f5fae761882e780786c9f1f353d9699ff8ac0307932ebc19b45e02.scope - libcontainer container d50a610129f5fae761882e780786c9f1f353d9699ff8ac0307932ebc19b45e02. 
Error: unexpected end of JSON input" Nov 8 00:05:42.191324 kubelet[3480]: E1108 00:05:42.191208 3480 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:05:42.191324 kubelet[3480]: W1108 00:05:42.191231 3480 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:05:42.191324 kubelet[3480]: E1108 00:05:42.191257 3480 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:05:42.192262 kubelet[3480]: E1108 00:05:42.192217 3480 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:05:42.192262 kubelet[3480]: W1108 00:05:42.192253 3480 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:05:42.192683 kubelet[3480]: E1108 00:05:42.192305 3480 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:05:42.192893 kubelet[3480]: E1108 00:05:42.192856 3480 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:05:42.192980 kubelet[3480]: W1108 00:05:42.192885 3480 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:05:42.192980 kubelet[3480]: E1108 00:05:42.192936 3480 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:05:42.235918 containerd[2017]: time="2025-11-08T00:05:42.235751690Z" level=info msg="StartContainer for \"d50a610129f5fae761882e780786c9f1f353d9699ff8ac0307932ebc19b45e02\" returns successfully" Nov 8 00:05:42.265798 systemd[1]: cri-containerd-d50a610129f5fae761882e780786c9f1f353d9699ff8ac0307932ebc19b45e02.scope: Deactivated successfully. Nov 8 00:05:42.448814 containerd[2017]: time="2025-11-08T00:05:42.448700656Z" level=info msg="shim disconnected" id=d50a610129f5fae761882e780786c9f1f353d9699ff8ac0307932ebc19b45e02 namespace=k8s.io Nov 8 00:05:42.449407 containerd[2017]: time="2025-11-08T00:05:42.449348248Z" level=warning msg="cleaning up after shim disconnected" id=d50a610129f5fae761882e780786c9f1f353d9699ff8ac0307932ebc19b45e02 namespace=k8s.io Nov 8 00:05:42.449555 containerd[2017]: time="2025-11-08T00:05:42.449526988Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 8 00:05:42.860076 kubelet[3480]: E1108 00:05:42.859533 3480 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rkzrr" podUID="105dae3d-b44c-41c4-b31a-bd1432c68a75" Nov 8 00:05:43.087163 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d50a610129f5fae761882e780786c9f1f353d9699ff8ac0307932ebc19b45e02-rootfs.mount: Deactivated successfully. 
Nov 8 00:05:43.103057 containerd[2017]: time="2025-11-08T00:05:43.102633399Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Nov 8 00:05:44.859380 kubelet[3480]: E1108 00:05:44.858868 3480 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rkzrr" podUID="105dae3d-b44c-41c4-b31a-bd1432c68a75" Nov 8 00:05:46.860044 kubelet[3480]: E1108 00:05:46.859720 3480 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rkzrr" podUID="105dae3d-b44c-41c4-b31a-bd1432c68a75" Nov 8 00:05:47.082425 containerd[2017]: time="2025-11-08T00:05:47.082356727Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:05:47.084502 containerd[2017]: time="2025-11-08T00:05:47.084443275Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=65925816" Nov 8 00:05:47.087408 containerd[2017]: time="2025-11-08T00:05:47.087215719Z" level=info msg="ImageCreate event name:\"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:05:47.095076 containerd[2017]: time="2025-11-08T00:05:47.094200559Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:05:47.096889 containerd[2017]: time="2025-11-08T00:05:47.095731183Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"67295507\" in 3.993036224s" Nov 8 00:05:47.096889 containerd[2017]: time="2025-11-08T00:05:47.095791771Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\"" Nov 8 00:05:47.106039 containerd[2017]: time="2025-11-08T00:05:47.105963019Z" level=info msg="CreateContainer within sandbox \"373542ff35f5e66d794f4f78e3bf871106b2243a7a109255cd1033434c757114\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Nov 8 00:05:47.140655 containerd[2017]: time="2025-11-08T00:05:47.140359615Z" level=info msg="CreateContainer within sandbox \"373542ff35f5e66d794f4f78e3bf871106b2243a7a109255cd1033434c757114\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"e520afc6483603b17f053fa2252a4bd838dd7277e8d0fccf5a45d5f49935ec2f\"" Nov 8 00:05:47.143453 containerd[2017]: time="2025-11-08T00:05:47.143247271Z" level=info msg="StartContainer for \"e520afc6483603b17f053fa2252a4bd838dd7277e8d0fccf5a45d5f49935ec2f\"" Nov 8 00:05:47.208401 systemd[1]: Started cri-containerd-e520afc6483603b17f053fa2252a4bd838dd7277e8d0fccf5a45d5f49935ec2f.scope - libcontainer container e520afc6483603b17f053fa2252a4bd838dd7277e8d0fccf5a45d5f49935ec2f. 
Nov 8 00:05:47.264216 containerd[2017]: time="2025-11-08T00:05:47.264000091Z" level=info msg="StartContainer for \"e520afc6483603b17f053fa2252a4bd838dd7277e8d0fccf5a45d5f49935ec2f\" returns successfully" Nov 8 00:05:48.385644 containerd[2017]: time="2025-11-08T00:05:48.385534221Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 8 00:05:48.392827 systemd[1]: cri-containerd-e520afc6483603b17f053fa2252a4bd838dd7277e8d0fccf5a45d5f49935ec2f.scope: Deactivated successfully. Nov 8 00:05:48.438976 kubelet[3480]: I1108 00:05:48.438933 3480 kubelet_node_status.go:439] "Fast updating node status as it just became ready" Nov 8 00:05:48.460713 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e520afc6483603b17f053fa2252a4bd838dd7277e8d0fccf5a45d5f49935ec2f-rootfs.mount: Deactivated successfully. Nov 8 00:05:48.591933 systemd[1]: Created slice kubepods-besteffort-podea0c6231_9031_4a39_a396_da5bd45cdc1d.slice - libcontainer container kubepods-besteffort-podea0c6231_9031_4a39_a396_da5bd45cdc1d.slice. Nov 8 00:05:48.622151 systemd[1]: Created slice kubepods-besteffort-pod33ee737d_9bb0_44ae_abd4_ed2fcc115154.slice - libcontainer container kubepods-besteffort-pod33ee737d_9bb0_44ae_abd4_ed2fcc115154.slice. Nov 8 00:05:48.668274 kubelet[3480]: I1108 00:05:48.627010 3480 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/ea0c6231-9031-4a39-a396-da5bd45cdc1d-whisker-backend-key-pair\") pod \"whisker-58c68b6689-zrwvp\" (UID: \"ea0c6231-9031-4a39-a396-da5bd45cdc1d\") " pod="calico-system/whisker-58c68b6689-zrwvp" Nov 8 00:05:48.668274 kubelet[3480]: I1108 00:05:48.627110 3480 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ea0c6231-9031-4a39-a396-da5bd45cdc1d-whisker-ca-bundle\") pod \"whisker-58c68b6689-zrwvp\" (UID: \"ea0c6231-9031-4a39-a396-da5bd45cdc1d\") " pod="calico-system/whisker-58c68b6689-zrwvp" Nov 8 00:05:48.668274 kubelet[3480]: I1108 00:05:48.627187 3480 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/33ee737d-9bb0-44ae-abd4-ed2fcc115154-tigera-ca-bundle\") pod \"calico-kube-controllers-6cf595bbd4-gjrhj\" (UID: \"33ee737d-9bb0-44ae-abd4-ed2fcc115154\") " pod="calico-system/calico-kube-controllers-6cf595bbd4-gjrhj" Nov 8 00:05:48.668274 kubelet[3480]: I1108 00:05:48.627234 3480 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lp227\" (UniqueName: \"kubernetes.io/projected/ea0c6231-9031-4a39-a396-da5bd45cdc1d-kube-api-access-lp227\") pod \"whisker-58c68b6689-zrwvp\" (UID: \"ea0c6231-9031-4a39-a396-da5bd45cdc1d\") " pod="calico-system/whisker-58c68b6689-zrwvp" Nov 8 00:05:48.668274 kubelet[3480]: I1108 00:05:48.627272 3480 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9vlqf\" (UniqueName: \"kubernetes.io/projected/33ee737d-9bb0-44ae-abd4-ed2fcc115154-kube-api-access-9vlqf\") pod \"calico-kube-controllers-6cf595bbd4-gjrhj\" (UID: \"33ee737d-9bb0-44ae-abd4-ed2fcc115154\") " pod="calico-system/calico-kube-controllers-6cf595bbd4-gjrhj" 
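The reload failure above is containerd reacting to a file event in /etc/cni/net.d before any network configuration exists there: calico-kubeconfig is a credential file that install-cni writes, not a network config, so the CNI plugin stays uninitialized and the sandbox creations that follow fail. A stdlib-only sketch, under the assumption that scanning the directory for config files mirrors what the reload looks for:

```go
// Minimal sketch of the kind of check behind "no network config found in
// /etc/cni/net.d": the directory is scanned for network config files, and
// files such as calico-kubeconfig do not count as one.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	entries, err := os.ReadDir("/etc/cni/net.d")
	if err != nil {
		fmt.Println("cannot read CNI config dir:", err)
		return
	}
	found := false
	for _, e := range entries {
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			fmt.Println("network config:", e.Name())
			found = true
		}
	}
	if !found {
		fmt.Println("no network config found: cni plugin not initialized")
	}
}
```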
Nov 8 00:05:48.693490 systemd[1]: Created slice kubepods-burstable-pod049aea22_2859_4d2c_978e_0ff4ef7d540d.slice - libcontainer container kubepods-burstable-pod049aea22_2859_4d2c_978e_0ff4ef7d540d.slice. Nov 8 00:05:48.736856 kubelet[3480]: I1108 00:05:48.728270 3480 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fbk22\" (UniqueName: \"kubernetes.io/projected/049aea22-2859-4d2c-978e-0ff4ef7d540d-kube-api-access-fbk22\") pod \"coredns-66bc5c9577-b447x\" (UID: \"049aea22-2859-4d2c-978e-0ff4ef7d540d\") " pod="kube-system/coredns-66bc5c9577-b447x" Nov 8 00:05:48.736856 kubelet[3480]: I1108 00:05:48.728413 3480 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/049aea22-2859-4d2c-978e-0ff4ef7d540d-config-volume\") pod \"coredns-66bc5c9577-b447x\" (UID: \"049aea22-2859-4d2c-978e-0ff4ef7d540d\") " pod="kube-system/coredns-66bc5c9577-b447x" Nov 8 00:05:48.752770 systemd[1]: Created slice kubepods-besteffort-poda0f0a15a_d068_42d7_9057_db8aa3861ce8.slice - libcontainer container kubepods-besteffort-poda0f0a15a_d068_42d7_9057_db8aa3861ce8.slice. Nov 8 00:05:48.833068 kubelet[3480]: I1108 00:05:48.831147 3480 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/a0f0a15a-d068-42d7-9057-db8aa3861ce8-calico-apiserver-certs\") pod \"calico-apiserver-f558bfb5c-2hjdp\" (UID: \"a0f0a15a-d068-42d7-9057-db8aa3861ce8\") " pod="calico-apiserver/calico-apiserver-f558bfb5c-2hjdp" Nov 8 00:05:48.833068 kubelet[3480]: I1108 00:05:48.831255 3480 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fh9jk\" (UniqueName: \"kubernetes.io/projected/a0f0a15a-d068-42d7-9057-db8aa3861ce8-kube-api-access-fh9jk\") pod \"calico-apiserver-f558bfb5c-2hjdp\" (UID: \"a0f0a15a-d068-42d7-9057-db8aa3861ce8\") " pod="calico-apiserver/calico-apiserver-f558bfb5c-2hjdp" Nov 8 00:05:48.833068 kubelet[3480]: I1108 00:05:48.831295 3480 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/ec16be48-232b-457b-bf3a-4db776262475-calico-apiserver-certs\") pod \"calico-apiserver-f558bfb5c-cbxnw\" (UID: \"ec16be48-232b-457b-bf3a-4db776262475\") " pod="calico-apiserver/calico-apiserver-f558bfb5c-cbxnw" Nov 8 00:05:48.833068 kubelet[3480]: I1108 00:05:48.831361 3480 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kzd7l\" (UniqueName: \"kubernetes.io/projected/ec16be48-232b-457b-bf3a-4db776262475-kube-api-access-kzd7l\") pod \"calico-apiserver-f558bfb5c-cbxnw\" (UID: \"ec16be48-232b-457b-bf3a-4db776262475\") " pod="calico-apiserver/calico-apiserver-f558bfb5c-cbxnw" Nov 8 00:05:48.861434 systemd[1]: Created slice kubepods-besteffort-podec16be48_232b_457b_bf3a_4db776262475.slice - libcontainer container kubepods-besteffort-podec16be48_232b_457b_bf3a_4db776262475.slice. 
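A readable pattern in these slice records: each pod gets a systemd slice whose name embeds its QoS class and UID with dashes rewritten to underscores, so UID ea0c6231-9031-4a39-a396-da5bd45cdc1d becomes kubepods-besteffort-podea0c6231_9031_4a39_a396_da5bd45cdc1d.slice. A small sketch of the mapping (illustrative, not the kubelet's own code):

```go
// Illustrative reconstruction of the slice names visible in this log:
// kubepods-<qos>-pod<uid with dashes replaced by underscores>.slice.
package main

import (
	"fmt"
	"strings"
)

func sliceName(qos, uid string) string {
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qos, strings.ReplaceAll(uid, "-", "_"))
}

func main() {
	fmt.Println(sliceName("besteffort", "ea0c6231-9031-4a39-a396-da5bd45cdc1d"))
	fmt.Println(sliceName("burstable", "049aea22-2859-4d2c-978e-0ff4ef7d540d"))
}
```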
Nov 8 00:05:48.916554 containerd[2017]: time="2025-11-08T00:05:48.916279356Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-58c68b6689-zrwvp,Uid:ea0c6231-9031-4a39-a396-da5bd45cdc1d,Namespace:calico-system,Attempt:0,}" Nov 8 00:05:48.920626 containerd[2017]: time="2025-11-08T00:05:48.919197348Z" level=info msg="shim disconnected" id=e520afc6483603b17f053fa2252a4bd838dd7277e8d0fccf5a45d5f49935ec2f namespace=k8s.io Nov 8 00:05:48.920626 containerd[2017]: time="2025-11-08T00:05:48.920010252Z" level=warning msg="cleaning up after shim disconnected" id=e520afc6483603b17f053fa2252a4bd838dd7277e8d0fccf5a45d5f49935ec2f namespace=k8s.io Nov 8 00:05:48.920626 containerd[2017]: time="2025-11-08T00:05:48.920522400Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 8 00:05:48.936540 systemd[1]: Created slice kubepods-besteffort-pod105dae3d_b44c_41c4_b31a_bd1432c68a75.slice - libcontainer container kubepods-besteffort-pod105dae3d_b44c_41c4_b31a_bd1432c68a75.slice. Nov 8 00:05:48.990785 systemd[1]: Created slice kubepods-besteffort-pod415c772f_4a8a_4df0_8713_cab5820f0205.slice - libcontainer container kubepods-besteffort-pod415c772f_4a8a_4df0_8713_cab5820f0205.slice. Nov 8 00:05:49.003645 containerd[2017]: time="2025-11-08T00:05:49.003469964Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6cf595bbd4-gjrhj,Uid:33ee737d-9bb0-44ae-abd4-ed2fcc115154,Namespace:calico-system,Attempt:0,}" Nov 8 00:05:49.005173 containerd[2017]: time="2025-11-08T00:05:49.004841720Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rkzrr,Uid:105dae3d-b44c-41c4-b31a-bd1432c68a75,Namespace:calico-system,Attempt:0,}" Nov 8 00:05:49.033825 kubelet[3480]: I1108 00:05:49.033513 3480 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hwsv4\" (UniqueName: \"kubernetes.io/projected/415c772f-4a8a-4df0-8713-cab5820f0205-kube-api-access-hwsv4\") pod \"goldmane-7c778bb748-rdhgb\" (UID: \"415c772f-4a8a-4df0-8713-cab5820f0205\") " pod="calico-system/goldmane-7c778bb748-rdhgb" Nov 8 00:05:49.033825 kubelet[3480]: I1108 00:05:49.033613 3480 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/415c772f-4a8a-4df0-8713-cab5820f0205-goldmane-key-pair\") pod \"goldmane-7c778bb748-rdhgb\" (UID: \"415c772f-4a8a-4df0-8713-cab5820f0205\") " pod="calico-system/goldmane-7c778bb748-rdhgb" Nov 8 00:05:49.033825 kubelet[3480]: I1108 00:05:49.033664 3480 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/415c772f-4a8a-4df0-8713-cab5820f0205-config\") pod \"goldmane-7c778bb748-rdhgb\" (UID: \"415c772f-4a8a-4df0-8713-cab5820f0205\") " pod="calico-system/goldmane-7c778bb748-rdhgb" Nov 8 00:05:49.033825 kubelet[3480]: I1108 00:05:49.033706 3480 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dzbwt\" (UniqueName: \"kubernetes.io/projected/e6ec260d-5b9d-4d44-82ff-ca1893bb4d69-kube-api-access-dzbwt\") pod \"coredns-66bc5c9577-kvcmr\" (UID: \"e6ec260d-5b9d-4d44-82ff-ca1893bb4d69\") " pod="kube-system/coredns-66bc5c9577-kvcmr" Nov 8 00:05:49.033825 kubelet[3480]: I1108 00:05:49.033754 3480 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/e6ec260d-5b9d-4d44-82ff-ca1893bb4d69-config-volume\") pod \"coredns-66bc5c9577-kvcmr\" (UID: \"e6ec260d-5b9d-4d44-82ff-ca1893bb4d69\") " pod="kube-system/coredns-66bc5c9577-kvcmr" Nov 8 00:05:49.034594 kubelet[3480]: I1108 00:05:49.033796 3480 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/415c772f-4a8a-4df0-8713-cab5820f0205-goldmane-ca-bundle\") pod \"goldmane-7c778bb748-rdhgb\" (UID: \"415c772f-4a8a-4df0-8713-cab5820f0205\") " pod="calico-system/goldmane-7c778bb748-rdhgb" Nov 8 00:05:49.048002 containerd[2017]: time="2025-11-08T00:05:49.047694476Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-b447x,Uid:049aea22-2859-4d2c-978e-0ff4ef7d540d,Namespace:kube-system,Attempt:0,}" Nov 8 00:05:49.047825 systemd[1]: Created slice kubepods-burstable-pode6ec260d_5b9d_4d44_82ff_ca1893bb4d69.slice - libcontainer container kubepods-burstable-pode6ec260d_5b9d_4d44_82ff_ca1893bb4d69.slice. Nov 8 00:05:49.062221 containerd[2017]: time="2025-11-08T00:05:49.061719884Z" level=warning msg="cleanup warnings time=\"2025-11-08T00:05:49Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Nov 8 00:05:49.085927 containerd[2017]: time="2025-11-08T00:05:49.085869321Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f558bfb5c-2hjdp,Uid:a0f0a15a-d068-42d7-9057-db8aa3861ce8,Namespace:calico-apiserver,Attempt:0,}" Nov 8 00:05:49.191246 containerd[2017]: time="2025-11-08T00:05:49.190139829Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f558bfb5c-cbxnw,Uid:ec16be48-232b-457b-bf3a-4db776262475,Namespace:calico-apiserver,Attempt:0,}" Nov 8 00:05:49.214423 containerd[2017]: time="2025-11-08T00:05:49.213507153Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Nov 8 00:05:49.342350 containerd[2017]: time="2025-11-08T00:05:49.342286954Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-rdhgb,Uid:415c772f-4a8a-4df0-8713-cab5820f0205,Namespace:calico-system,Attempt:0,}" Nov 8 00:05:49.366788 containerd[2017]: time="2025-11-08T00:05:49.366716986Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-kvcmr,Uid:e6ec260d-5b9d-4d44-82ff-ca1893bb4d69,Namespace:kube-system,Attempt:0,}" Nov 8 00:05:49.583526 containerd[2017]: time="2025-11-08T00:05:49.583430903Z" level=error msg="Failed to destroy network for sandbox \"6b51f45c3d5947e8ed965d7cd08b73cc0b77114d16ecb53f5385da46557ec278\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:05:49.591795 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6b51f45c3d5947e8ed965d7cd08b73cc0b77114d16ecb53f5385da46557ec278-shm.mount: Deactivated successfully. 
Nov 8 00:05:49.593351 containerd[2017]: time="2025-11-08T00:05:49.591753995Z" level=error msg="encountered an error cleaning up failed sandbox \"6b51f45c3d5947e8ed965d7cd08b73cc0b77114d16ecb53f5385da46557ec278\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:05:49.593351 containerd[2017]: time="2025-11-08T00:05:49.591869819Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-58c68b6689-zrwvp,Uid:ea0c6231-9031-4a39-a396-da5bd45cdc1d,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6b51f45c3d5947e8ed965d7cd08b73cc0b77114d16ecb53f5385da46557ec278\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:05:49.594993 kubelet[3480]: E1108 00:05:49.593580 3480 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6b51f45c3d5947e8ed965d7cd08b73cc0b77114d16ecb53f5385da46557ec278\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:05:49.594993 kubelet[3480]: E1108 00:05:49.593682 3480 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6b51f45c3d5947e8ed965d7cd08b73cc0b77114d16ecb53f5385da46557ec278\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-58c68b6689-zrwvp" Nov 8 00:05:49.594993 kubelet[3480]: E1108 00:05:49.593718 3480 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6b51f45c3d5947e8ed965d7cd08b73cc0b77114d16ecb53f5385da46557ec278\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-58c68b6689-zrwvp" Nov 8 00:05:49.599307 kubelet[3480]: E1108 00:05:49.593799 3480 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-58c68b6689-zrwvp_calico-system(ea0c6231-9031-4a39-a396-da5bd45cdc1d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-58c68b6689-zrwvp_calico-system(ea0c6231-9031-4a39-a396-da5bd45cdc1d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6b51f45c3d5947e8ed965d7cd08b73cc0b77114d16ecb53f5385da46557ec278\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-58c68b6689-zrwvp" podUID="ea0c6231-9031-4a39-a396-da5bd45cdc1d" Nov 8 00:05:49.606044 containerd[2017]: time="2025-11-08T00:05:49.605818403Z" level=error msg="Failed to destroy network for sandbox \"2c959b2625c7e2b3e74f9b4c5d0d12093f47e38c4fb646a45a624ef34d92a3ac\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" Nov 8 00:05:49.613703 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2c959b2625c7e2b3e74f9b4c5d0d12093f47e38c4fb646a45a624ef34d92a3ac-shm.mount: Deactivated successfully. Nov 8 00:05:49.617701 containerd[2017]: time="2025-11-08T00:05:49.617624291Z" level=error msg="encountered an error cleaning up failed sandbox \"2c959b2625c7e2b3e74f9b4c5d0d12093f47e38c4fb646a45a624ef34d92a3ac\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:05:49.617946 containerd[2017]: time="2025-11-08T00:05:49.617731763Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rkzrr,Uid:105dae3d-b44c-41c4-b31a-bd1432c68a75,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"2c959b2625c7e2b3e74f9b4c5d0d12093f47e38c4fb646a45a624ef34d92a3ac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:05:49.619157 kubelet[3480]: E1108 00:05:49.618802 3480 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2c959b2625c7e2b3e74f9b4c5d0d12093f47e38c4fb646a45a624ef34d92a3ac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:05:49.619157 kubelet[3480]: E1108 00:05:49.618890 3480 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2c959b2625c7e2b3e74f9b4c5d0d12093f47e38c4fb646a45a624ef34d92a3ac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-rkzrr" Nov 8 00:05:49.619157 kubelet[3480]: E1108 00:05:49.618923 3480 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2c959b2625c7e2b3e74f9b4c5d0d12093f47e38c4fb646a45a624ef34d92a3ac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-rkzrr" Nov 8 00:05:49.619428 kubelet[3480]: E1108 00:05:49.619006 3480 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-rkzrr_calico-system(105dae3d-b44c-41c4-b31a-bd1432c68a75)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-rkzrr_calico-system(105dae3d-b44c-41c4-b31a-bd1432c68a75)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2c959b2625c7e2b3e74f9b4c5d0d12093f47e38c4fb646a45a624ef34d92a3ac\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-rkzrr" podUID="105dae3d-b44c-41c4-b31a-bd1432c68a75" Nov 8 00:05:49.650083 containerd[2017]: time="2025-11-08T00:05:49.649118867Z" level=error msg="Failed to destroy 
network for sandbox \"ef1548297cd838805af2b097be94072f66b814e1194840c9e942ff8ee0a31d36\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:05:49.651372 containerd[2017]: time="2025-11-08T00:05:49.651313427Z" level=error msg="Failed to destroy network for sandbox \"c1553fb9e7a9bb3ca20205fc07d4d23fc2577fa777f087c3b096e1b1ed425b92\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:05:49.654373 containerd[2017]: time="2025-11-08T00:05:49.654294395Z" level=error msg="encountered an error cleaning up failed sandbox \"c1553fb9e7a9bb3ca20205fc07d4d23fc2577fa777f087c3b096e1b1ed425b92\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:05:49.657648 containerd[2017]: time="2025-11-08T00:05:49.656006003Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6cf595bbd4-gjrhj,Uid:33ee737d-9bb0-44ae-abd4-ed2fcc115154,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c1553fb9e7a9bb3ca20205fc07d4d23fc2577fa777f087c3b096e1b1ed425b92\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:05:49.658732 kubelet[3480]: E1108 00:05:49.658597 3480 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c1553fb9e7a9bb3ca20205fc07d4d23fc2577fa777f087c3b096e1b1ed425b92\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:05:49.659215 kubelet[3480]: E1108 00:05:49.658699 3480 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c1553fb9e7a9bb3ca20205fc07d4d23fc2577fa777f087c3b096e1b1ed425b92\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6cf595bbd4-gjrhj" Nov 8 00:05:49.659215 kubelet[3480]: E1108 00:05:49.658981 3480 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c1553fb9e7a9bb3ca20205fc07d4d23fc2577fa777f087c3b096e1b1ed425b92\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6cf595bbd4-gjrhj" Nov 8 00:05:49.660422 containerd[2017]: time="2025-11-08T00:05:49.659513879Z" level=error msg="encountered an error cleaning up failed sandbox \"ef1548297cd838805af2b097be94072f66b814e1194840c9e942ff8ee0a31d36\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Nov 8 00:05:49.660422 containerd[2017]: time="2025-11-08T00:05:49.659608091Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f558bfb5c-2hjdp,Uid:a0f0a15a-d068-42d7-9057-db8aa3861ce8,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ef1548297cd838805af2b097be94072f66b814e1194840c9e942ff8ee0a31d36\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:05:49.660312 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ef1548297cd838805af2b097be94072f66b814e1194840c9e942ff8ee0a31d36-shm.mount: Deactivated successfully. Nov 8 00:05:49.661255 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c1553fb9e7a9bb3ca20205fc07d4d23fc2577fa777f087c3b096e1b1ed425b92-shm.mount: Deactivated successfully. Nov 8 00:05:49.663831 kubelet[3480]: E1108 00:05:49.661179 3480 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6cf595bbd4-gjrhj_calico-system(33ee737d-9bb0-44ae-abd4-ed2fcc115154)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6cf595bbd4-gjrhj_calico-system(33ee737d-9bb0-44ae-abd4-ed2fcc115154)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c1553fb9e7a9bb3ca20205fc07d4d23fc2577fa777f087c3b096e1b1ed425b92\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6cf595bbd4-gjrhj" podUID="33ee737d-9bb0-44ae-abd4-ed2fcc115154" Nov 8 00:05:49.663831 kubelet[3480]: E1108 00:05:49.663443 3480 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ef1548297cd838805af2b097be94072f66b814e1194840c9e942ff8ee0a31d36\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:05:49.663831 kubelet[3480]: E1108 00:05:49.663538 3480 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ef1548297cd838805af2b097be94072f66b814e1194840c9e942ff8ee0a31d36\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-f558bfb5c-2hjdp" Nov 8 00:05:49.665788 kubelet[3480]: E1108 00:05:49.664855 3480 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ef1548297cd838805af2b097be94072f66b814e1194840c9e942ff8ee0a31d36\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-f558bfb5c-2hjdp" Nov 8 00:05:49.665788 kubelet[3480]: E1108 00:05:49.665163 3480 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-f558bfb5c-2hjdp_calico-apiserver(a0f0a15a-d068-42d7-9057-db8aa3861ce8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"calico-apiserver-f558bfb5c-2hjdp_calico-apiserver(a0f0a15a-d068-42d7-9057-db8aa3861ce8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ef1548297cd838805af2b097be94072f66b814e1194840c9e942ff8ee0a31d36\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-f558bfb5c-2hjdp" podUID="a0f0a15a-d068-42d7-9057-db8aa3861ce8" Nov 8 00:05:49.710620 containerd[2017]: time="2025-11-08T00:05:49.710538828Z" level=error msg="Failed to destroy network for sandbox \"394ced78757edf825c2767f72549dace97716e64783ac8dafa52e2c9a89228f2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:05:49.712878 containerd[2017]: time="2025-11-08T00:05:49.712669584Z" level=error msg="encountered an error cleaning up failed sandbox \"394ced78757edf825c2767f72549dace97716e64783ac8dafa52e2c9a89228f2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:05:49.712878 containerd[2017]: time="2025-11-08T00:05:49.712770408Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-b447x,Uid:049aea22-2859-4d2c-978e-0ff4ef7d540d,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"394ced78757edf825c2767f72549dace97716e64783ac8dafa52e2c9a89228f2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:05:49.714433 kubelet[3480]: E1108 00:05:49.713190 3480 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"394ced78757edf825c2767f72549dace97716e64783ac8dafa52e2c9a89228f2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:05:49.714433 kubelet[3480]: E1108 00:05:49.713265 3480 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"394ced78757edf825c2767f72549dace97716e64783ac8dafa52e2c9a89228f2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-b447x" Nov 8 00:05:49.714433 kubelet[3480]: E1108 00:05:49.713316 3480 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"394ced78757edf825c2767f72549dace97716e64783ac8dafa52e2c9a89228f2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-b447x" Nov 8 00:05:49.714640 kubelet[3480]: E1108 00:05:49.713430 3480 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-b447x_kube-system(049aea22-2859-4d2c-978e-0ff4ef7d540d)\" 
with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-b447x_kube-system(049aea22-2859-4d2c-978e-0ff4ef7d540d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"394ced78757edf825c2767f72549dace97716e64783ac8dafa52e2c9a89228f2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-b447x" podUID="049aea22-2859-4d2c-978e-0ff4ef7d540d" Nov 8 00:05:49.731497 containerd[2017]: time="2025-11-08T00:05:49.731126376Z" level=error msg="Failed to destroy network for sandbox \"eabd5d8b0a6ca232a335a07bac0f257e613e80813b638f804cae162aec5b2b34\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:05:49.732571 containerd[2017]: time="2025-11-08T00:05:49.732409728Z" level=error msg="encountered an error cleaning up failed sandbox \"eabd5d8b0a6ca232a335a07bac0f257e613e80813b638f804cae162aec5b2b34\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:05:49.732896 containerd[2017]: time="2025-11-08T00:05:49.732634032Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f558bfb5c-cbxnw,Uid:ec16be48-232b-457b-bf3a-4db776262475,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"eabd5d8b0a6ca232a335a07bac0f257e613e80813b638f804cae162aec5b2b34\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:05:49.734218 kubelet[3480]: E1108 00:05:49.733410 3480 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eabd5d8b0a6ca232a335a07bac0f257e613e80813b638f804cae162aec5b2b34\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:05:49.734218 kubelet[3480]: E1108 00:05:49.733486 3480 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eabd5d8b0a6ca232a335a07bac0f257e613e80813b638f804cae162aec5b2b34\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-f558bfb5c-cbxnw" Nov 8 00:05:49.734218 kubelet[3480]: E1108 00:05:49.733537 3480 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eabd5d8b0a6ca232a335a07bac0f257e613e80813b638f804cae162aec5b2b34\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-f558bfb5c-cbxnw" Nov 8 00:05:49.734672 kubelet[3480]: E1108 00:05:49.733650 3480 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"calico-apiserver-f558bfb5c-cbxnw_calico-apiserver(ec16be48-232b-457b-bf3a-4db776262475)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-f558bfb5c-cbxnw_calico-apiserver(ec16be48-232b-457b-bf3a-4db776262475)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"eabd5d8b0a6ca232a335a07bac0f257e613e80813b638f804cae162aec5b2b34\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-f558bfb5c-cbxnw" podUID="ec16be48-232b-457b-bf3a-4db776262475" Nov 8 00:05:49.776556 containerd[2017]: time="2025-11-08T00:05:49.776270904Z" level=error msg="Failed to destroy network for sandbox \"9bc1c236f8c2bc8c8b3c4e2867c337be7bc1fc70d5008e874e29cf3fdc13424a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:05:49.777585 containerd[2017]: time="2025-11-08T00:05:49.777473184Z" level=error msg="encountered an error cleaning up failed sandbox \"9bc1c236f8c2bc8c8b3c4e2867c337be7bc1fc70d5008e874e29cf3fdc13424a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:05:49.777946 containerd[2017]: time="2025-11-08T00:05:49.777831048Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-rdhgb,Uid:415c772f-4a8a-4df0-8713-cab5820f0205,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"9bc1c236f8c2bc8c8b3c4e2867c337be7bc1fc70d5008e874e29cf3fdc13424a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:05:49.779162 kubelet[3480]: E1108 00:05:49.778426 3480 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9bc1c236f8c2bc8c8b3c4e2867c337be7bc1fc70d5008e874e29cf3fdc13424a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:05:49.779162 kubelet[3480]: E1108 00:05:49.778496 3480 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9bc1c236f8c2bc8c8b3c4e2867c337be7bc1fc70d5008e874e29cf3fdc13424a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-rdhgb" Nov 8 00:05:49.779162 kubelet[3480]: E1108 00:05:49.778536 3480 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9bc1c236f8c2bc8c8b3c4e2867c337be7bc1fc70d5008e874e29cf3fdc13424a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-rdhgb" Nov 8 00:05:49.779375 kubelet[3480]: E1108 00:05:49.778624 3480 
pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7c778bb748-rdhgb_calico-system(415c772f-4a8a-4df0-8713-cab5820f0205)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7c778bb748-rdhgb_calico-system(415c772f-4a8a-4df0-8713-cab5820f0205)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9bc1c236f8c2bc8c8b3c4e2867c337be7bc1fc70d5008e874e29cf3fdc13424a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7c778bb748-rdhgb" podUID="415c772f-4a8a-4df0-8713-cab5820f0205" Nov 8 00:05:49.802960 containerd[2017]: time="2025-11-08T00:05:49.801428124Z" level=error msg="Failed to destroy network for sandbox \"c8acedf9f6df83bbd56c422c7cd496bb571eb5541c78c33622286dd08271d304\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:05:49.802960 containerd[2017]: time="2025-11-08T00:05:49.801983796Z" level=error msg="encountered an error cleaning up failed sandbox \"c8acedf9f6df83bbd56c422c7cd496bb571eb5541c78c33622286dd08271d304\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:05:49.802960 containerd[2017]: time="2025-11-08T00:05:49.802110420Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-kvcmr,Uid:e6ec260d-5b9d-4d44-82ff-ca1893bb4d69,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c8acedf9f6df83bbd56c422c7cd496bb571eb5541c78c33622286dd08271d304\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:05:49.803340 kubelet[3480]: E1108 00:05:49.802408 3480 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c8acedf9f6df83bbd56c422c7cd496bb571eb5541c78c33622286dd08271d304\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:05:49.803340 kubelet[3480]: E1108 00:05:49.802478 3480 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c8acedf9f6df83bbd56c422c7cd496bb571eb5541c78c33622286dd08271d304\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-kvcmr" Nov 8 00:05:49.803340 kubelet[3480]: E1108 00:05:49.802518 3480 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c8acedf9f6df83bbd56c422c7cd496bb571eb5541c78c33622286dd08271d304\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-kvcmr" Nov 8 
00:05:49.803531 kubelet[3480]: E1108 00:05:49.802607 3480 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-kvcmr_kube-system(e6ec260d-5b9d-4d44-82ff-ca1893bb4d69)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-kvcmr_kube-system(e6ec260d-5b9d-4d44-82ff-ca1893bb4d69)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c8acedf9f6df83bbd56c422c7cd496bb571eb5541c78c33622286dd08271d304\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-kvcmr" podUID="e6ec260d-5b9d-4d44-82ff-ca1893bb4d69" Nov 8 00:05:50.198689 kubelet[3480]: I1108 00:05:50.197699 3480 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6b51f45c3d5947e8ed965d7cd08b73cc0b77114d16ecb53f5385da46557ec278" Nov 8 00:05:50.200206 containerd[2017]: time="2025-11-08T00:05:50.199477258Z" level=info msg="StopPodSandbox for \"6b51f45c3d5947e8ed965d7cd08b73cc0b77114d16ecb53f5385da46557ec278\"" Nov 8 00:05:50.200206 containerd[2017]: time="2025-11-08T00:05:50.199776214Z" level=info msg="Ensure that sandbox 6b51f45c3d5947e8ed965d7cd08b73cc0b77114d16ecb53f5385da46557ec278 in task-service has been cleanup successfully" Nov 8 00:05:50.205101 kubelet[3480]: I1108 00:05:50.203999 3480 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eabd5d8b0a6ca232a335a07bac0f257e613e80813b638f804cae162aec5b2b34" Nov 8 00:05:50.206428 containerd[2017]: time="2025-11-08T00:05:50.206380942Z" level=info msg="StopPodSandbox for \"eabd5d8b0a6ca232a335a07bac0f257e613e80813b638f804cae162aec5b2b34\"" Nov 8 00:05:50.207352 containerd[2017]: time="2025-11-08T00:05:50.207294766Z" level=info msg="Ensure that sandbox eabd5d8b0a6ca232a335a07bac0f257e613e80813b638f804cae162aec5b2b34 in task-service has been cleanup successfully" Nov 8 00:05:50.210684 kubelet[3480]: I1108 00:05:50.210643 3480 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c1553fb9e7a9bb3ca20205fc07d4d23fc2577fa777f087c3b096e1b1ed425b92" Nov 8 00:05:50.213952 containerd[2017]: time="2025-11-08T00:05:50.213579382Z" level=info msg="StopPodSandbox for \"c1553fb9e7a9bb3ca20205fc07d4d23fc2577fa777f087c3b096e1b1ed425b92\"" Nov 8 00:05:50.216602 containerd[2017]: time="2025-11-08T00:05:50.216519526Z" level=info msg="Ensure that sandbox c1553fb9e7a9bb3ca20205fc07d4d23fc2577fa777f087c3b096e1b1ed425b92 in task-service has been cleanup successfully" Nov 8 00:05:50.220673 kubelet[3480]: I1108 00:05:50.220634 3480 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2c959b2625c7e2b3e74f9b4c5d0d12093f47e38c4fb646a45a624ef34d92a3ac" Nov 8 00:05:50.224721 containerd[2017]: time="2025-11-08T00:05:50.223523362Z" level=info msg="StopPodSandbox for \"2c959b2625c7e2b3e74f9b4c5d0d12093f47e38c4fb646a45a624ef34d92a3ac\"" Nov 8 00:05:50.224721 containerd[2017]: time="2025-11-08T00:05:50.224225662Z" level=info msg="Ensure that sandbox 2c959b2625c7e2b3e74f9b4c5d0d12093f47e38c4fb646a45a624ef34d92a3ac in task-service has been cleanup successfully" Nov 8 00:05:50.234907 kubelet[3480]: I1108 00:05:50.234002 3480 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c8acedf9f6df83bbd56c422c7cd496bb571eb5541c78c33622286dd08271d304" Nov 8 00:05:50.238042 containerd[2017]: 
time="2025-11-08T00:05:50.237882610Z" level=info msg="StopPodSandbox for \"c8acedf9f6df83bbd56c422c7cd496bb571eb5541c78c33622286dd08271d304\"" Nov 8 00:05:50.239416 containerd[2017]: time="2025-11-08T00:05:50.239257210Z" level=info msg="Ensure that sandbox c8acedf9f6df83bbd56c422c7cd496bb571eb5541c78c33622286dd08271d304 in task-service has been cleanup successfully" Nov 8 00:05:50.249266 kubelet[3480]: I1108 00:05:50.248346 3480 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9bc1c236f8c2bc8c8b3c4e2867c337be7bc1fc70d5008e874e29cf3fdc13424a" Nov 8 00:05:50.252326 containerd[2017]: time="2025-11-08T00:05:50.252253174Z" level=info msg="StopPodSandbox for \"9bc1c236f8c2bc8c8b3c4e2867c337be7bc1fc70d5008e874e29cf3fdc13424a\"" Nov 8 00:05:50.255622 containerd[2017]: time="2025-11-08T00:05:50.255454258Z" level=info msg="Ensure that sandbox 9bc1c236f8c2bc8c8b3c4e2867c337be7bc1fc70d5008e874e29cf3fdc13424a in task-service has been cleanup successfully" Nov 8 00:05:50.269137 kubelet[3480]: I1108 00:05:50.269099 3480 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ef1548297cd838805af2b097be94072f66b814e1194840c9e942ff8ee0a31d36" Nov 8 00:05:50.270071 containerd[2017]: time="2025-11-08T00:05:50.269934202Z" level=info msg="StopPodSandbox for \"ef1548297cd838805af2b097be94072f66b814e1194840c9e942ff8ee0a31d36\"" Nov 8 00:05:50.272526 containerd[2017]: time="2025-11-08T00:05:50.272175622Z" level=info msg="Ensure that sandbox ef1548297cd838805af2b097be94072f66b814e1194840c9e942ff8ee0a31d36 in task-service has been cleanup successfully" Nov 8 00:05:50.290049 kubelet[3480]: I1108 00:05:50.288689 3480 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="394ced78757edf825c2767f72549dace97716e64783ac8dafa52e2c9a89228f2" Nov 8 00:05:50.290206 containerd[2017]: time="2025-11-08T00:05:50.289818598Z" level=info msg="StopPodSandbox for \"394ced78757edf825c2767f72549dace97716e64783ac8dafa52e2c9a89228f2\"" Nov 8 00:05:50.291159 containerd[2017]: time="2025-11-08T00:05:50.290550310Z" level=info msg="Ensure that sandbox 394ced78757edf825c2767f72549dace97716e64783ac8dafa52e2c9a89228f2 in task-service has been cleanup successfully" Nov 8 00:05:50.378334 containerd[2017]: time="2025-11-08T00:05:50.378253223Z" level=error msg="StopPodSandbox for \"6b51f45c3d5947e8ed965d7cd08b73cc0b77114d16ecb53f5385da46557ec278\" failed" error="failed to destroy network for sandbox \"6b51f45c3d5947e8ed965d7cd08b73cc0b77114d16ecb53f5385da46557ec278\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:05:50.379111 kubelet[3480]: E1108 00:05:50.379059 3480 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"6b51f45c3d5947e8ed965d7cd08b73cc0b77114d16ecb53f5385da46557ec278\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="6b51f45c3d5947e8ed965d7cd08b73cc0b77114d16ecb53f5385da46557ec278" Nov 8 00:05:50.379366 kubelet[3480]: E1108 00:05:50.379300 3480 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"6b51f45c3d5947e8ed965d7cd08b73cc0b77114d16ecb53f5385da46557ec278"} Nov 8 00:05:50.379520 kubelet[3480]: E1108 00:05:50.379491 3480 
kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ea0c6231-9031-4a39-a396-da5bd45cdc1d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6b51f45c3d5947e8ed965d7cd08b73cc0b77114d16ecb53f5385da46557ec278\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 8 00:05:50.379719 kubelet[3480]: E1108 00:05:50.379676 3480 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ea0c6231-9031-4a39-a396-da5bd45cdc1d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6b51f45c3d5947e8ed965d7cd08b73cc0b77114d16ecb53f5385da46557ec278\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-58c68b6689-zrwvp" podUID="ea0c6231-9031-4a39-a396-da5bd45cdc1d" Nov 8 00:05:50.463346 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c8acedf9f6df83bbd56c422c7cd496bb571eb5541c78c33622286dd08271d304-shm.mount: Deactivated successfully. Nov 8 00:05:50.463587 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9bc1c236f8c2bc8c8b3c4e2867c337be7bc1fc70d5008e874e29cf3fdc13424a-shm.mount: Deactivated successfully. Nov 8 00:05:50.463736 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-eabd5d8b0a6ca232a335a07bac0f257e613e80813b638f804cae162aec5b2b34-shm.mount: Deactivated successfully. Nov 8 00:05:50.463874 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-394ced78757edf825c2767f72549dace97716e64783ac8dafa52e2c9a89228f2-shm.mount: Deactivated successfully. 
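The "Container not found in pod's containers" / StopPodSandbox pairs above are kubelet garbage-collecting the half-created sandboxes it marked SANDBOX_UNKNOWN. Because the delete path invokes the same Calico CNI plugin as the add path, each cleanup retry fails with the identical nodename error, as the StopPodSandbox failures that follow show.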
Nov 8 00:05:50.489230 containerd[2017]: time="2025-11-08T00:05:50.488337611Z" level=error msg="StopPodSandbox for \"c1553fb9e7a9bb3ca20205fc07d4d23fc2577fa777f087c3b096e1b1ed425b92\" failed" error="failed to destroy network for sandbox \"c1553fb9e7a9bb3ca20205fc07d4d23fc2577fa777f087c3b096e1b1ed425b92\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:05:50.489420 kubelet[3480]: E1108 00:05:50.488878 3480 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c1553fb9e7a9bb3ca20205fc07d4d23fc2577fa777f087c3b096e1b1ed425b92\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c1553fb9e7a9bb3ca20205fc07d4d23fc2577fa777f087c3b096e1b1ed425b92" Nov 8 00:05:50.489420 kubelet[3480]: E1108 00:05:50.488947 3480 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"c1553fb9e7a9bb3ca20205fc07d4d23fc2577fa777f087c3b096e1b1ed425b92"} Nov 8 00:05:50.489420 kubelet[3480]: E1108 00:05:50.489000 3480 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"33ee737d-9bb0-44ae-abd4-ed2fcc115154\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c1553fb9e7a9bb3ca20205fc07d4d23fc2577fa777f087c3b096e1b1ed425b92\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 8 00:05:50.489420 kubelet[3480]: E1108 00:05:50.489114 3480 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"33ee737d-9bb0-44ae-abd4-ed2fcc115154\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c1553fb9e7a9bb3ca20205fc07d4d23fc2577fa777f087c3b096e1b1ed425b92\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6cf595bbd4-gjrhj" podUID="33ee737d-9bb0-44ae-abd4-ed2fcc115154" Nov 8 00:05:50.502074 containerd[2017]: time="2025-11-08T00:05:50.501949560Z" level=error msg="StopPodSandbox for \"ef1548297cd838805af2b097be94072f66b814e1194840c9e942ff8ee0a31d36\" failed" error="failed to destroy network for sandbox \"ef1548297cd838805af2b097be94072f66b814e1194840c9e942ff8ee0a31d36\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:05:50.502764 kubelet[3480]: E1108 00:05:50.502585 3480 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"ef1548297cd838805af2b097be94072f66b814e1194840c9e942ff8ee0a31d36\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="ef1548297cd838805af2b097be94072f66b814e1194840c9e942ff8ee0a31d36" Nov 8 00:05:50.502764 kubelet[3480]: E1108 00:05:50.502660 3480 
kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"ef1548297cd838805af2b097be94072f66b814e1194840c9e942ff8ee0a31d36"} Nov 8 00:05:50.502764 kubelet[3480]: E1108 00:05:50.502713 3480 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a0f0a15a-d068-42d7-9057-db8aa3861ce8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ef1548297cd838805af2b097be94072f66b814e1194840c9e942ff8ee0a31d36\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 8 00:05:50.503407 kubelet[3480]: E1108 00:05:50.502758 3480 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a0f0a15a-d068-42d7-9057-db8aa3861ce8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ef1548297cd838805af2b097be94072f66b814e1194840c9e942ff8ee0a31d36\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-f558bfb5c-2hjdp" podUID="a0f0a15a-d068-42d7-9057-db8aa3861ce8" Nov 8 00:05:50.505440 containerd[2017]: time="2025-11-08T00:05:50.505285092Z" level=error msg="StopPodSandbox for \"9bc1c236f8c2bc8c8b3c4e2867c337be7bc1fc70d5008e874e29cf3fdc13424a\" failed" error="failed to destroy network for sandbox \"9bc1c236f8c2bc8c8b3c4e2867c337be7bc1fc70d5008e874e29cf3fdc13424a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:05:50.506069 kubelet[3480]: E1108 00:05:50.505840 3480 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"9bc1c236f8c2bc8c8b3c4e2867c337be7bc1fc70d5008e874e29cf3fdc13424a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="9bc1c236f8c2bc8c8b3c4e2867c337be7bc1fc70d5008e874e29cf3fdc13424a" Nov 8 00:05:50.506069 kubelet[3480]: E1108 00:05:50.505913 3480 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"9bc1c236f8c2bc8c8b3c4e2867c337be7bc1fc70d5008e874e29cf3fdc13424a"} Nov 8 00:05:50.506069 kubelet[3480]: E1108 00:05:50.505979 3480 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"415c772f-4a8a-4df0-8713-cab5820f0205\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9bc1c236f8c2bc8c8b3c4e2867c337be7bc1fc70d5008e874e29cf3fdc13424a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 8 00:05:50.506069 kubelet[3480]: E1108 00:05:50.506058 3480 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"415c772f-4a8a-4df0-8713-cab5820f0205\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9bc1c236f8c2bc8c8b3c4e2867c337be7bc1fc70d5008e874e29cf3fdc13424a\\\": plugin 
type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7c778bb748-rdhgb" podUID="415c772f-4a8a-4df0-8713-cab5820f0205" Nov 8 00:05:50.507676 containerd[2017]: time="2025-11-08T00:05:50.507147984Z" level=error msg="StopPodSandbox for \"eabd5d8b0a6ca232a335a07bac0f257e613e80813b638f804cae162aec5b2b34\" failed" error="failed to destroy network for sandbox \"eabd5d8b0a6ca232a335a07bac0f257e613e80813b638f804cae162aec5b2b34\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:05:50.508233 kubelet[3480]: E1108 00:05:50.507927 3480 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"eabd5d8b0a6ca232a335a07bac0f257e613e80813b638f804cae162aec5b2b34\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="eabd5d8b0a6ca232a335a07bac0f257e613e80813b638f804cae162aec5b2b34" Nov 8 00:05:50.508233 kubelet[3480]: E1108 00:05:50.508002 3480 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"eabd5d8b0a6ca232a335a07bac0f257e613e80813b638f804cae162aec5b2b34"} Nov 8 00:05:50.508233 kubelet[3480]: E1108 00:05:50.508088 3480 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ec16be48-232b-457b-bf3a-4db776262475\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"eabd5d8b0a6ca232a335a07bac0f257e613e80813b638f804cae162aec5b2b34\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 8 00:05:50.508233 kubelet[3480]: E1108 00:05:50.508139 3480 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ec16be48-232b-457b-bf3a-4db776262475\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"eabd5d8b0a6ca232a335a07bac0f257e613e80813b638f804cae162aec5b2b34\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-f558bfb5c-cbxnw" podUID="ec16be48-232b-457b-bf3a-4db776262475" Nov 8 00:05:50.513490 containerd[2017]: time="2025-11-08T00:05:50.513074004Z" level=error msg="StopPodSandbox for \"2c959b2625c7e2b3e74f9b4c5d0d12093f47e38c4fb646a45a624ef34d92a3ac\" failed" error="failed to destroy network for sandbox \"2c959b2625c7e2b3e74f9b4c5d0d12093f47e38c4fb646a45a624ef34d92a3ac\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:05:50.514819 kubelet[3480]: E1108 00:05:50.514681 3480 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"2c959b2625c7e2b3e74f9b4c5d0d12093f47e38c4fb646a45a624ef34d92a3ac\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no 
such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="2c959b2625c7e2b3e74f9b4c5d0d12093f47e38c4fb646a45a624ef34d92a3ac" Nov 8 00:05:50.514819 kubelet[3480]: E1108 00:05:50.514765 3480 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"2c959b2625c7e2b3e74f9b4c5d0d12093f47e38c4fb646a45a624ef34d92a3ac"} Nov 8 00:05:50.514819 kubelet[3480]: E1108 00:05:50.514820 3480 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"105dae3d-b44c-41c4-b31a-bd1432c68a75\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2c959b2625c7e2b3e74f9b4c5d0d12093f47e38c4fb646a45a624ef34d92a3ac\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 8 00:05:50.515701 kubelet[3480]: E1108 00:05:50.514873 3480 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"105dae3d-b44c-41c4-b31a-bd1432c68a75\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2c959b2625c7e2b3e74f9b4c5d0d12093f47e38c4fb646a45a624ef34d92a3ac\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-rkzrr" podUID="105dae3d-b44c-41c4-b31a-bd1432c68a75" Nov 8 00:05:50.515701 kubelet[3480]: E1108 00:05:50.515496 3480 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c8acedf9f6df83bbd56c422c7cd496bb571eb5541c78c33622286dd08271d304\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c8acedf9f6df83bbd56c422c7cd496bb571eb5541c78c33622286dd08271d304" Nov 8 00:05:50.515701 kubelet[3480]: E1108 00:05:50.515554 3480 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"c8acedf9f6df83bbd56c422c7cd496bb571eb5541c78c33622286dd08271d304"} Nov 8 00:05:50.515701 kubelet[3480]: E1108 00:05:50.515601 3480 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e6ec260d-5b9d-4d44-82ff-ca1893bb4d69\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c8acedf9f6df83bbd56c422c7cd496bb571eb5541c78c33622286dd08271d304\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 8 00:05:50.516402 containerd[2017]: time="2025-11-08T00:05:50.515215248Z" level=error msg="StopPodSandbox for \"c8acedf9f6df83bbd56c422c7cd496bb571eb5541c78c33622286dd08271d304\" failed" error="failed to destroy network for sandbox \"c8acedf9f6df83bbd56c422c7cd496bb571eb5541c78c33622286dd08271d304\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:05:50.516489 kubelet[3480]: E1108 00:05:50.515643 3480 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to 
\"KillPodSandbox\" for \"e6ec260d-5b9d-4d44-82ff-ca1893bb4d69\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c8acedf9f6df83bbd56c422c7cd496bb571eb5541c78c33622286dd08271d304\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-kvcmr" podUID="e6ec260d-5b9d-4d44-82ff-ca1893bb4d69" Nov 8 00:05:50.526928 containerd[2017]: time="2025-11-08T00:05:50.526809024Z" level=error msg="StopPodSandbox for \"394ced78757edf825c2767f72549dace97716e64783ac8dafa52e2c9a89228f2\" failed" error="failed to destroy network for sandbox \"394ced78757edf825c2767f72549dace97716e64783ac8dafa52e2c9a89228f2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:05:50.527371 kubelet[3480]: E1108 00:05:50.527302 3480 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"394ced78757edf825c2767f72549dace97716e64783ac8dafa52e2c9a89228f2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="394ced78757edf825c2767f72549dace97716e64783ac8dafa52e2c9a89228f2" Nov 8 00:05:50.527371 kubelet[3480]: E1108 00:05:50.527376 3480 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"394ced78757edf825c2767f72549dace97716e64783ac8dafa52e2c9a89228f2"} Nov 8 00:05:50.527551 kubelet[3480]: E1108 00:05:50.527431 3480 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"049aea22-2859-4d2c-978e-0ff4ef7d540d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"394ced78757edf825c2767f72549dace97716e64783ac8dafa52e2c9a89228f2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 8 00:05:50.527551 kubelet[3480]: E1108 00:05:50.527474 3480 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"049aea22-2859-4d2c-978e-0ff4ef7d540d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"394ced78757edf825c2767f72549dace97716e64783ac8dafa52e2c9a89228f2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-b447x" podUID="049aea22-2859-4d2c-978e-0ff4ef7d540d" Nov 8 00:05:58.163741 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2317039312.mount: Deactivated successfully. 
Nov 8 00:05:58.452558 containerd[2017]: time="2025-11-08T00:05:58.452058151Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:05:58.456510 containerd[2017]: time="2025-11-08T00:05:58.456402007Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=150934562" Nov 8 00:05:58.459309 containerd[2017]: time="2025-11-08T00:05:58.458877079Z" level=info msg="ImageCreate event name:\"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:05:58.465904 containerd[2017]: time="2025-11-08T00:05:58.465843523Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:05:58.468065 containerd[2017]: time="2025-11-08T00:05:58.467138515Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"150934424\" in 9.253561414s" Nov 8 00:05:58.468065 containerd[2017]: time="2025-11-08T00:05:58.467199487Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\"" Nov 8 00:05:58.511836 containerd[2017]: time="2025-11-08T00:05:58.511776163Z" level=info msg="CreateContainer within sandbox \"373542ff35f5e66d794f4f78e3bf871106b2243a7a109255cd1033434c757114\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Nov 8 00:05:58.654383 containerd[2017]: time="2025-11-08T00:05:58.654283736Z" level=info msg="CreateContainer within sandbox \"373542ff35f5e66d794f4f78e3bf871106b2243a7a109255cd1033434c757114\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"21c9a23501ac171d6e4478aeb632701165c66097eac9c7859c3b0f9d4bc65c05\"" Nov 8 00:05:58.656637 containerd[2017]: time="2025-11-08T00:05:58.656328752Z" level=info msg="StartContainer for \"21c9a23501ac171d6e4478aeb632701165c66097eac9c7859c3b0f9d4bc65c05\"" Nov 8 00:05:58.738955 systemd[1]: Started cri-containerd-21c9a23501ac171d6e4478aeb632701165c66097eac9c7859c3b0f9d4bc65c05.scope - libcontainer container 21c9a23501ac171d6e4478aeb632701165c66097eac9c7859c3b0f9d4bc65c05. Nov 8 00:05:58.879252 containerd[2017]: time="2025-11-08T00:05:58.879176781Z" level=info msg="StartContainer for \"21c9a23501ac171d6e4478aeb632701165c66097eac9c7859c3b0f9d4bc65c05\" returns successfully" Nov 8 00:05:59.136141 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Nov 8 00:05:59.136303 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
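Two timing cross-checks on the entries around this point. The "in 9.253561414s" reported for the pull lines up with the PullImage request logged at 00:05:49.213 above. And in the kubelet startup-latency entry just below, podStartSLOduration excludes image-pull time: the end-to-end duration of 22.448564856s minus the pull window (firstStartedPulling 00:05:38.018 to lastFinishedPulling 00:05:58.469, about 20.451s) yields the reported ~1.998s.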
Nov 8 00:05:59.394207 containerd[2017]: time="2025-11-08T00:05:59.394004564Z" level=info msg="StopPodSandbox for \"6b51f45c3d5947e8ed965d7cd08b73cc0b77114d16ecb53f5385da46557ec278\"" Nov 8 00:05:59.451053 kubelet[3480]: I1108 00:05:59.448589 3480 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-7nrhr" podStartSLOduration=1.9978262469999999 podStartE2EDuration="22.448564856s" podCreationTimestamp="2025-11-08 00:05:37 +0000 UTC" firstStartedPulling="2025-11-08 00:05:38.018340114 +0000 UTC m=+42.442030124" lastFinishedPulling="2025-11-08 00:05:58.469078711 +0000 UTC m=+62.892768733" observedRunningTime="2025-11-08 00:05:59.44712932 +0000 UTC m=+63.870819414" watchObservedRunningTime="2025-11-08 00:05:59.448564856 +0000 UTC m=+63.872254866" Nov 8 00:06:01.814084 kernel: bpftool[4880]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Nov 8 00:06:01.862296 containerd[2017]: time="2025-11-08T00:06:01.862232604Z" level=info msg="StopPodSandbox for \"394ced78757edf825c2767f72549dace97716e64783ac8dafa52e2c9a89228f2\"" Nov 8 00:06:02.305869 (udev-worker)[4914]: Network interface NamePolicy= disabled on kernel command line. Nov 8 00:06:02.309702 systemd-networkd[1937]: vxlan.calico: Link UP Nov 8 00:06:02.309724 systemd-networkd[1937]: vxlan.calico: Gained carrier Nov 8 00:06:02.374881 (udev-worker)[4922]: Network interface NamePolicy= disabled on kernel command line. Nov 8 00:06:03.019690 containerd[2017]: 2025-11-08 00:06:02.710 [INFO][4890] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="394ced78757edf825c2767f72549dace97716e64783ac8dafa52e2c9a89228f2" Nov 8 00:06:03.019690 containerd[2017]: 2025-11-08 00:06:02.804 [INFO][4890] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="394ced78757edf825c2767f72549dace97716e64783ac8dafa52e2c9a89228f2" iface="eth0" netns="/var/run/netns/cni-01ec1b82-a78a-edbb-a042-39b1e2d15788" Nov 8 00:06:03.019690 containerd[2017]: 2025-11-08 00:06:02.805 [INFO][4890] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="394ced78757edf825c2767f72549dace97716e64783ac8dafa52e2c9a89228f2" iface="eth0" netns="/var/run/netns/cni-01ec1b82-a78a-edbb-a042-39b1e2d15788" Nov 8 00:06:03.019690 containerd[2017]: 2025-11-08 00:06:02.903 [INFO][4890] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="394ced78757edf825c2767f72549dace97716e64783ac8dafa52e2c9a89228f2" iface="eth0" netns="/var/run/netns/cni-01ec1b82-a78a-edbb-a042-39b1e2d15788" Nov 8 00:06:03.019690 containerd[2017]: 2025-11-08 00:06:02.903 [INFO][4890] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="394ced78757edf825c2767f72549dace97716e64783ac8dafa52e2c9a89228f2" Nov 8 00:06:03.019690 containerd[2017]: 2025-11-08 00:06:02.903 [INFO][4890] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="394ced78757edf825c2767f72549dace97716e64783ac8dafa52e2c9a89228f2" Nov 8 00:06:03.019690 containerd[2017]: 2025-11-08 00:06:02.987 [INFO][4972] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="394ced78757edf825c2767f72549dace97716e64783ac8dafa52e2c9a89228f2" HandleID="k8s-pod-network.394ced78757edf825c2767f72549dace97716e64783ac8dafa52e2c9a89228f2" Workload="ip--172--31--26--1-k8s-coredns--66bc5c9577--b447x-eth0" Nov 8 00:06:03.019690 containerd[2017]: 2025-11-08 00:06:02.988 [INFO][4972] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Nov 8 00:06:03.019690 containerd[2017]: 2025-11-08 00:06:02.988 [INFO][4972] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:06:03.019690 containerd[2017]: 2025-11-08 00:06:03.004 [WARNING][4972] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="394ced78757edf825c2767f72549dace97716e64783ac8dafa52e2c9a89228f2" HandleID="k8s-pod-network.394ced78757edf825c2767f72549dace97716e64783ac8dafa52e2c9a89228f2" Workload="ip--172--31--26--1-k8s-coredns--66bc5c9577--b447x-eth0" Nov 8 00:06:03.019690 containerd[2017]: 2025-11-08 00:06:03.004 [INFO][4972] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="394ced78757edf825c2767f72549dace97716e64783ac8dafa52e2c9a89228f2" HandleID="k8s-pod-network.394ced78757edf825c2767f72549dace97716e64783ac8dafa52e2c9a89228f2" Workload="ip--172--31--26--1-k8s-coredns--66bc5c9577--b447x-eth0" Nov 8 00:06:03.019690 containerd[2017]: 2025-11-08 00:06:03.007 [INFO][4972] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:06:03.019690 containerd[2017]: 2025-11-08 00:06:03.017 [INFO][4890] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="394ced78757edf825c2767f72549dace97716e64783ac8dafa52e2c9a89228f2" Nov 8 00:06:03.022430 containerd[2017]: time="2025-11-08T00:06:03.020533546Z" level=info msg="TearDown network for sandbox \"394ced78757edf825c2767f72549dace97716e64783ac8dafa52e2c9a89228f2\" successfully" Nov 8 00:06:03.022430 containerd[2017]: time="2025-11-08T00:06:03.020575162Z" level=info msg="StopPodSandbox for \"394ced78757edf825c2767f72549dace97716e64783ac8dafa52e2c9a89228f2\" returns successfully" Nov 8 00:06:03.026878 systemd[1]: run-netns-cni\x2d01ec1b82\x2da78a\x2dedbb\x2da042\x2d39b1e2d15788.mount: Deactivated successfully. Nov 8 00:06:03.032717 containerd[2017]: time="2025-11-08T00:06:03.032468950Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-b447x,Uid:049aea22-2859-4d2c-978e-0ff4ef7d540d,Namespace:kube-system,Attempt:1,}" Nov 8 00:06:03.043407 containerd[2017]: 2025-11-08 00:06:02.707 [INFO][4683] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6b51f45c3d5947e8ed965d7cd08b73cc0b77114d16ecb53f5385da46557ec278" Nov 8 00:06:03.043407 containerd[2017]: 2025-11-08 00:06:02.803 [INFO][4683] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="6b51f45c3d5947e8ed965d7cd08b73cc0b77114d16ecb53f5385da46557ec278" iface="eth0" netns="/var/run/netns/cni-74352917-d25c-20e4-c5c9-6ee5efa0f625" Nov 8 00:06:03.043407 containerd[2017]: 2025-11-08 00:06:02.804 [INFO][4683] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="6b51f45c3d5947e8ed965d7cd08b73cc0b77114d16ecb53f5385da46557ec278" iface="eth0" netns="/var/run/netns/cni-74352917-d25c-20e4-c5c9-6ee5efa0f625" Nov 8 00:06:03.043407 containerd[2017]: 2025-11-08 00:06:02.903 [INFO][4683] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="6b51f45c3d5947e8ed965d7cd08b73cc0b77114d16ecb53f5385da46557ec278" iface="eth0" netns="/var/run/netns/cni-74352917-d25c-20e4-c5c9-6ee5efa0f625" Nov 8 00:06:03.043407 containerd[2017]: 2025-11-08 00:06:02.903 [INFO][4683] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6b51f45c3d5947e8ed965d7cd08b73cc0b77114d16ecb53f5385da46557ec278" Nov 8 00:06:03.043407 containerd[2017]: 2025-11-08 00:06:02.904 [INFO][4683] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6b51f45c3d5947e8ed965d7cd08b73cc0b77114d16ecb53f5385da46557ec278" Nov 8 00:06:03.043407 containerd[2017]: 2025-11-08 00:06:02.988 [INFO][4973] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="6b51f45c3d5947e8ed965d7cd08b73cc0b77114d16ecb53f5385da46557ec278" HandleID="k8s-pod-network.6b51f45c3d5947e8ed965d7cd08b73cc0b77114d16ecb53f5385da46557ec278" Workload="ip--172--31--26--1-k8s-whisker--58c68b6689--zrwvp-eth0" Nov 8 00:06:03.043407 containerd[2017]: 2025-11-08 00:06:02.989 [INFO][4973] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:06:03.043407 containerd[2017]: 2025-11-08 00:06:03.007 [INFO][4973] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:06:03.043407 containerd[2017]: 2025-11-08 00:06:03.029 [WARNING][4973] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="6b51f45c3d5947e8ed965d7cd08b73cc0b77114d16ecb53f5385da46557ec278" HandleID="k8s-pod-network.6b51f45c3d5947e8ed965d7cd08b73cc0b77114d16ecb53f5385da46557ec278" Workload="ip--172--31--26--1-k8s-whisker--58c68b6689--zrwvp-eth0" Nov 8 00:06:03.043407 containerd[2017]: 2025-11-08 00:06:03.029 [INFO][4973] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="6b51f45c3d5947e8ed965d7cd08b73cc0b77114d16ecb53f5385da46557ec278" HandleID="k8s-pod-network.6b51f45c3d5947e8ed965d7cd08b73cc0b77114d16ecb53f5385da46557ec278" Workload="ip--172--31--26--1-k8s-whisker--58c68b6689--zrwvp-eth0" Nov 8 00:06:03.043407 containerd[2017]: 2025-11-08 00:06:03.035 [INFO][4973] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:06:03.043407 containerd[2017]: 2025-11-08 00:06:03.038 [INFO][4683] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6b51f45c3d5947e8ed965d7cd08b73cc0b77114d16ecb53f5385da46557ec278" Nov 8 00:06:03.046591 containerd[2017]: time="2025-11-08T00:06:03.044139814Z" level=info msg="TearDown network for sandbox \"6b51f45c3d5947e8ed965d7cd08b73cc0b77114d16ecb53f5385da46557ec278\" successfully" Nov 8 00:06:03.046591 containerd[2017]: time="2025-11-08T00:06:03.044183338Z" level=info msg="StopPodSandbox for \"6b51f45c3d5947e8ed965d7cd08b73cc0b77114d16ecb53f5385da46557ec278\" returns successfully" Nov 8 00:06:03.051149 systemd[1]: run-netns-cni\x2d74352917\x2dd25c\x2d20e4\x2dc5c9\x2d6ee5efa0f625.mount: Deactivated successfully. 
Nov 8 00:06:03.250226 kubelet[3480]: I1108 00:06:03.250141 3480 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/ea0c6231-9031-4a39-a396-da5bd45cdc1d-whisker-backend-key-pair\") pod \"ea0c6231-9031-4a39-a396-da5bd45cdc1d\" (UID: \"ea0c6231-9031-4a39-a396-da5bd45cdc1d\") " Nov 8 00:06:03.251705 kubelet[3480]: I1108 00:06:03.250902 3480 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lp227\" (UniqueName: \"kubernetes.io/projected/ea0c6231-9031-4a39-a396-da5bd45cdc1d-kube-api-access-lp227\") pod \"ea0c6231-9031-4a39-a396-da5bd45cdc1d\" (UID: \"ea0c6231-9031-4a39-a396-da5bd45cdc1d\") " Nov 8 00:06:03.251705 kubelet[3480]: I1108 00:06:03.251001 3480 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ea0c6231-9031-4a39-a396-da5bd45cdc1d-whisker-ca-bundle\") pod \"ea0c6231-9031-4a39-a396-da5bd45cdc1d\" (UID: \"ea0c6231-9031-4a39-a396-da5bd45cdc1d\") " Nov 8 00:06:03.252053 kubelet[3480]: I1108 00:06:03.251879 3480 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ea0c6231-9031-4a39-a396-da5bd45cdc1d-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "ea0c6231-9031-4a39-a396-da5bd45cdc1d" (UID: "ea0c6231-9031-4a39-a396-da5bd45cdc1d"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Nov 8 00:06:03.260560 kubelet[3480]: I1108 00:06:03.260487 3480 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ea0c6231-9031-4a39-a396-da5bd45cdc1d-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "ea0c6231-9031-4a39-a396-da5bd45cdc1d" (UID: "ea0c6231-9031-4a39-a396-da5bd45cdc1d"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Nov 8 00:06:03.262177 systemd[1]: var-lib-kubelet-pods-ea0c6231\x2d9031\x2d4a39\x2da396\x2dda5bd45cdc1d-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Nov 8 00:06:03.268654 kubelet[3480]: I1108 00:06:03.268412 3480 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ea0c6231-9031-4a39-a396-da5bd45cdc1d-kube-api-access-lp227" (OuterVolumeSpecName: "kube-api-access-lp227") pod "ea0c6231-9031-4a39-a396-da5bd45cdc1d" (UID: "ea0c6231-9031-4a39-a396-da5bd45cdc1d"). InnerVolumeSpecName "kube-api-access-lp227". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 8 00:06:03.352638 kubelet[3480]: I1108 00:06:03.352355 3480 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/ea0c6231-9031-4a39-a396-da5bd45cdc1d-whisker-backend-key-pair\") on node \"ip-172-31-26-1\" DevicePath \"\"" Nov 8 00:06:03.352638 kubelet[3480]: I1108 00:06:03.352593 3480 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-lp227\" (UniqueName: \"kubernetes.io/projected/ea0c6231-9031-4a39-a396-da5bd45cdc1d-kube-api-access-lp227\") on node \"ip-172-31-26-1\" DevicePath \"\"" Nov 8 00:06:03.354076 kubelet[3480]: I1108 00:06:03.353339 3480 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ea0c6231-9031-4a39-a396-da5bd45cdc1d-whisker-ca-bundle\") on node \"ip-172-31-26-1\" DevicePath \"\"" Nov 8 00:06:03.374715 systemd[1]: Removed slice kubepods-besteffort-podea0c6231_9031_4a39_a396_da5bd45cdc1d.slice - libcontainer container kubepods-besteffort-podea0c6231_9031_4a39_a396_da5bd45cdc1d.slice. Nov 8 00:06:03.416863 (udev-worker)[4937]: Network interface NamePolicy= disabled on kernel command line. Nov 8 00:06:03.423065 systemd-networkd[1937]: calia01eb682dd0: Link UP Nov 8 00:06:03.424332 systemd-networkd[1937]: calia01eb682dd0: Gained carrier Nov 8 00:06:03.532101 systemd[1]: Created slice kubepods-besteffort-pod964ee664_ff07_42fc_8d91_b078ca7f25c8.slice - libcontainer container kubepods-besteffort-pod964ee664_ff07_42fc_8d91_b078ca7f25c8.slice. Nov 8 00:06:03.557707 kubelet[3480]: I1108 00:06:03.557420 3480 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/964ee664-ff07-42fc-8d91-b078ca7f25c8-whisker-backend-key-pair\") pod \"whisker-7d67fb7489-w2h7r\" (UID: \"964ee664-ff07-42fc-8d91-b078ca7f25c8\") " pod="calico-system/whisker-7d67fb7489-w2h7r" Nov 8 00:06:03.557707 kubelet[3480]: I1108 00:06:03.557488 3480 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f7xnb\" (UniqueName: \"kubernetes.io/projected/964ee664-ff07-42fc-8d91-b078ca7f25c8-kube-api-access-f7xnb\") pod \"whisker-7d67fb7489-w2h7r\" (UID: \"964ee664-ff07-42fc-8d91-b078ca7f25c8\") " pod="calico-system/whisker-7d67fb7489-w2h7r" Nov 8 00:06:03.557707 kubelet[3480]: I1108 00:06:03.557531 3480 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/964ee664-ff07-42fc-8d91-b078ca7f25c8-whisker-ca-bundle\") pod \"whisker-7d67fb7489-w2h7r\" (UID: \"964ee664-ff07-42fc-8d91-b078ca7f25c8\") " pod="calico-system/whisker-7d67fb7489-w2h7r" Nov 8 00:06:03.817274 systemd-networkd[1937]: vxlan.calico: Gained IPv6LL Nov 8 00:06:03.851163 containerd[2017]: time="2025-11-08T00:06:03.849817370Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7d67fb7489-w2h7r,Uid:964ee664-ff07-42fc-8d91-b078ca7f25c8,Namespace:calico-system,Attempt:0,}" Nov 8 00:06:03.854904 containerd[2017]: 2025-11-08 00:06:03.157 [INFO][4987] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--26--1-k8s-coredns--66bc5c9577--b447x-eth0 coredns-66bc5c9577- kube-system 049aea22-2859-4d2c-978e-0ff4ef7d540d 972 0 2025-11-08 00:05:01 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 
projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-26-1 coredns-66bc5c9577-b447x eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calia01eb682dd0 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="955557cc8a2e1f3cf8594877c7459855375ee03c4dbb0386fc205464f4050277" Namespace="kube-system" Pod="coredns-66bc5c9577-b447x" WorkloadEndpoint="ip--172--31--26--1-k8s-coredns--66bc5c9577--b447x-" Nov 8 00:06:03.854904 containerd[2017]: 2025-11-08 00:06:03.157 [INFO][4987] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="955557cc8a2e1f3cf8594877c7459855375ee03c4dbb0386fc205464f4050277" Namespace="kube-system" Pod="coredns-66bc5c9577-b447x" WorkloadEndpoint="ip--172--31--26--1-k8s-coredns--66bc5c9577--b447x-eth0" Nov 8 00:06:03.854904 containerd[2017]: 2025-11-08 00:06:03.319 [INFO][5001] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="955557cc8a2e1f3cf8594877c7459855375ee03c4dbb0386fc205464f4050277" HandleID="k8s-pod-network.955557cc8a2e1f3cf8594877c7459855375ee03c4dbb0386fc205464f4050277" Workload="ip--172--31--26--1-k8s-coredns--66bc5c9577--b447x-eth0" Nov 8 00:06:03.854904 containerd[2017]: 2025-11-08 00:06:03.319 [INFO][5001] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="955557cc8a2e1f3cf8594877c7459855375ee03c4dbb0386fc205464f4050277" HandleID="k8s-pod-network.955557cc8a2e1f3cf8594877c7459855375ee03c4dbb0386fc205464f4050277" Workload="ip--172--31--26--1-k8s-coredns--66bc5c9577--b447x-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d2fe0), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-26-1", "pod":"coredns-66bc5c9577-b447x", "timestamp":"2025-11-08 00:06:03.319500863 +0000 UTC"}, Hostname:"ip-172-31-26-1", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 00:06:03.854904 containerd[2017]: 2025-11-08 00:06:03.319 [INFO][5001] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:06:03.854904 containerd[2017]: 2025-11-08 00:06:03.319 [INFO][5001] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 8 00:06:03.854904 containerd[2017]: 2025-11-08 00:06:03.320 [INFO][5001] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-26-1' Nov 8 00:06:03.854904 containerd[2017]: 2025-11-08 00:06:03.334 [INFO][5001] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.955557cc8a2e1f3cf8594877c7459855375ee03c4dbb0386fc205464f4050277" host="ip-172-31-26-1" Nov 8 00:06:03.854904 containerd[2017]: 2025-11-08 00:06:03.343 [INFO][5001] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-26-1" Nov 8 00:06:03.854904 containerd[2017]: 2025-11-08 00:06:03.351 [INFO][5001] ipam/ipam.go 511: Trying affinity for 192.168.104.192/26 host="ip-172-31-26-1" Nov 8 00:06:03.854904 containerd[2017]: 2025-11-08 00:06:03.358 [INFO][5001] ipam/ipam.go 158: Attempting to load block cidr=192.168.104.192/26 host="ip-172-31-26-1" Nov 8 00:06:03.854904 containerd[2017]: 2025-11-08 00:06:03.365 [INFO][5001] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.104.192/26 host="ip-172-31-26-1" Nov 8 00:06:03.854904 containerd[2017]: 2025-11-08 00:06:03.365 [INFO][5001] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.104.192/26 handle="k8s-pod-network.955557cc8a2e1f3cf8594877c7459855375ee03c4dbb0386fc205464f4050277" host="ip-172-31-26-1" Nov 8 00:06:03.854904 containerd[2017]: 2025-11-08 00:06:03.369 [INFO][5001] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.955557cc8a2e1f3cf8594877c7459855375ee03c4dbb0386fc205464f4050277 Nov 8 00:06:03.854904 containerd[2017]: 2025-11-08 00:06:03.380 [INFO][5001] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.104.192/26 handle="k8s-pod-network.955557cc8a2e1f3cf8594877c7459855375ee03c4dbb0386fc205464f4050277" host="ip-172-31-26-1" Nov 8 00:06:03.854904 containerd[2017]: 2025-11-08 00:06:03.406 [INFO][5001] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.104.193/26] block=192.168.104.192/26 handle="k8s-pod-network.955557cc8a2e1f3cf8594877c7459855375ee03c4dbb0386fc205464f4050277" host="ip-172-31-26-1" Nov 8 00:06:03.854904 containerd[2017]: 2025-11-08 00:06:03.406 [INFO][5001] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.104.193/26] handle="k8s-pod-network.955557cc8a2e1f3cf8594877c7459855375ee03c4dbb0386fc205464f4050277" host="ip-172-31-26-1" Nov 8 00:06:03.854904 containerd[2017]: 2025-11-08 00:06:03.406 [INFO][5001] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 8 00:06:03.854904 containerd[2017]: 2025-11-08 00:06:03.406 [INFO][5001] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.104.193/26] IPv6=[] ContainerID="955557cc8a2e1f3cf8594877c7459855375ee03c4dbb0386fc205464f4050277" HandleID="k8s-pod-network.955557cc8a2e1f3cf8594877c7459855375ee03c4dbb0386fc205464f4050277" Workload="ip--172--31--26--1-k8s-coredns--66bc5c9577--b447x-eth0" Nov 8 00:06:03.859162 containerd[2017]: 2025-11-08 00:06:03.410 [INFO][4987] cni-plugin/k8s.go 418: Populated endpoint ContainerID="955557cc8a2e1f3cf8594877c7459855375ee03c4dbb0386fc205464f4050277" Namespace="kube-system" Pod="coredns-66bc5c9577-b447x" WorkloadEndpoint="ip--172--31--26--1-k8s-coredns--66bc5c9577--b447x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--1-k8s-coredns--66bc5c9577--b447x-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"049aea22-2859-4d2c-978e-0ff4ef7d540d", ResourceVersion:"972", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 5, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-1", ContainerID:"", Pod:"coredns-66bc5c9577-b447x", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.104.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia01eb682dd0", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:06:03.859162 containerd[2017]: 2025-11-08 00:06:03.411 [INFO][4987] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.104.193/32] ContainerID="955557cc8a2e1f3cf8594877c7459855375ee03c4dbb0386fc205464f4050277" Namespace="kube-system" Pod="coredns-66bc5c9577-b447x" WorkloadEndpoint="ip--172--31--26--1-k8s-coredns--66bc5c9577--b447x-eth0" Nov 8 00:06:03.859162 containerd[2017]: 2025-11-08 00:06:03.411 [INFO][4987] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia01eb682dd0 ContainerID="955557cc8a2e1f3cf8594877c7459855375ee03c4dbb0386fc205464f4050277" Namespace="kube-system" Pod="coredns-66bc5c9577-b447x" WorkloadEndpoint="ip--172--31--26--1-k8s-coredns--66bc5c9577--b447x-eth0" Nov 8 
00:06:03.859162 containerd[2017]: 2025-11-08 00:06:03.526 [INFO][4987] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="955557cc8a2e1f3cf8594877c7459855375ee03c4dbb0386fc205464f4050277" Namespace="kube-system" Pod="coredns-66bc5c9577-b447x" WorkloadEndpoint="ip--172--31--26--1-k8s-coredns--66bc5c9577--b447x-eth0" Nov 8 00:06:03.859162 containerd[2017]: 2025-11-08 00:06:03.528 [INFO][4987] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="955557cc8a2e1f3cf8594877c7459855375ee03c4dbb0386fc205464f4050277" Namespace="kube-system" Pod="coredns-66bc5c9577-b447x" WorkloadEndpoint="ip--172--31--26--1-k8s-coredns--66bc5c9577--b447x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--1-k8s-coredns--66bc5c9577--b447x-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"049aea22-2859-4d2c-978e-0ff4ef7d540d", ResourceVersion:"972", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 5, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-1", ContainerID:"955557cc8a2e1f3cf8594877c7459855375ee03c4dbb0386fc205464f4050277", Pod:"coredns-66bc5c9577-b447x", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.104.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia01eb682dd0", MAC:"32:0c:2c:df:60:72", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:06:03.859162 containerd[2017]: 2025-11-08 00:06:03.845 [INFO][4987] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="955557cc8a2e1f3cf8594877c7459855375ee03c4dbb0386fc205464f4050277" Namespace="kube-system" Pod="coredns-66bc5c9577-b447x" WorkloadEndpoint="ip--172--31--26--1-k8s-coredns--66bc5c9577--b447x-eth0" Nov 8 00:06:03.868048 containerd[2017]: time="2025-11-08T00:06:03.864173402Z" level=info msg="StopPodSandbox for \"ef1548297cd838805af2b097be94072f66b814e1194840c9e942ff8ee0a31d36\"" Nov 8 00:06:03.870086 containerd[2017]: time="2025-11-08T00:06:03.868427174Z" level=info msg="StopPodSandbox for 
\"c1553fb9e7a9bb3ca20205fc07d4d23fc2577fa777f087c3b096e1b1ed425b92\"" Nov 8 00:06:03.871186 containerd[2017]: time="2025-11-08T00:06:03.871049450Z" level=info msg="StopPodSandbox for \"9bc1c236f8c2bc8c8b3c4e2867c337be7bc1fc70d5008e874e29cf3fdc13424a\"" Nov 8 00:06:03.873501 containerd[2017]: time="2025-11-08T00:06:03.873422822Z" level=info msg="StopPodSandbox for \"c8acedf9f6df83bbd56c422c7cd496bb571eb5541c78c33622286dd08271d304\"" Nov 8 00:06:03.885500 containerd[2017]: time="2025-11-08T00:06:03.885444878Z" level=info msg="StopPodSandbox for \"eabd5d8b0a6ca232a335a07bac0f257e613e80813b638f804cae162aec5b2b34\"" Nov 8 00:06:03.899469 kubelet[3480]: I1108 00:06:03.899182 3480 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ea0c6231-9031-4a39-a396-da5bd45cdc1d" path="/var/lib/kubelet/pods/ea0c6231-9031-4a39-a396-da5bd45cdc1d/volumes" Nov 8 00:06:04.046965 systemd[1]: var-lib-kubelet-pods-ea0c6231\x2d9031\x2d4a39\x2da396\x2dda5bd45cdc1d-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dlp227.mount: Deactivated successfully. Nov 8 00:06:04.100960 containerd[2017]: time="2025-11-08T00:06:04.098619683Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:06:04.100960 containerd[2017]: time="2025-11-08T00:06:04.098741447Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:06:04.100960 containerd[2017]: time="2025-11-08T00:06:04.098797031Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:06:04.117406 containerd[2017]: time="2025-11-08T00:06:04.117262199Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:06:04.234225 systemd[1]: Started cri-containerd-955557cc8a2e1f3cf8594877c7459855375ee03c4dbb0386fc205464f4050277.scope - libcontainer container 955557cc8a2e1f3cf8594877c7459855375ee03c4dbb0386fc205464f4050277. Nov 8 00:06:04.494714 containerd[2017]: time="2025-11-08T00:06:04.494353693Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-b447x,Uid:049aea22-2859-4d2c-978e-0ff4ef7d540d,Namespace:kube-system,Attempt:1,} returns sandbox id \"955557cc8a2e1f3cf8594877c7459855375ee03c4dbb0386fc205464f4050277\"" Nov 8 00:06:04.514069 containerd[2017]: time="2025-11-08T00:06:04.513689053Z" level=info msg="CreateContainer within sandbox \"955557cc8a2e1f3cf8594877c7459855375ee03c4dbb0386fc205464f4050277\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 8 00:06:04.672425 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3582316062.mount: Deactivated successfully. 
Nov 8 00:06:04.700413 containerd[2017]: time="2025-11-08T00:06:04.700335602Z" level=info msg="CreateContainer within sandbox \"955557cc8a2e1f3cf8594877c7459855375ee03c4dbb0386fc205464f4050277\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"2ef8da1e8dce69e2507c837eaeec8f0f8e29cf7da030cf53199b8276928ef28f\"" Nov 8 00:06:04.702827 containerd[2017]: time="2025-11-08T00:06:04.702749810Z" level=info msg="StartContainer for \"2ef8da1e8dce69e2507c837eaeec8f0f8e29cf7da030cf53199b8276928ef28f\"" Nov 8 00:06:04.731722 containerd[2017]: 2025-11-08 00:06:04.430 [INFO][5061] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c8acedf9f6df83bbd56c422c7cd496bb571eb5541c78c33622286dd08271d304" Nov 8 00:06:04.731722 containerd[2017]: 2025-11-08 00:06:04.432 [INFO][5061] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="c8acedf9f6df83bbd56c422c7cd496bb571eb5541c78c33622286dd08271d304" iface="eth0" netns="/var/run/netns/cni-012047f2-181b-74be-0906-7494479c7e3c" Nov 8 00:06:04.731722 containerd[2017]: 2025-11-08 00:06:04.434 [INFO][5061] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="c8acedf9f6df83bbd56c422c7cd496bb571eb5541c78c33622286dd08271d304" iface="eth0" netns="/var/run/netns/cni-012047f2-181b-74be-0906-7494479c7e3c" Nov 8 00:06:04.731722 containerd[2017]: 2025-11-08 00:06:04.437 [INFO][5061] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="c8acedf9f6df83bbd56c422c7cd496bb571eb5541c78c33622286dd08271d304" iface="eth0" netns="/var/run/netns/cni-012047f2-181b-74be-0906-7494479c7e3c" Nov 8 00:06:04.731722 containerd[2017]: 2025-11-08 00:06:04.437 [INFO][5061] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c8acedf9f6df83bbd56c422c7cd496bb571eb5541c78c33622286dd08271d304" Nov 8 00:06:04.731722 containerd[2017]: 2025-11-08 00:06:04.437 [INFO][5061] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c8acedf9f6df83bbd56c422c7cd496bb571eb5541c78c33622286dd08271d304" Nov 8 00:06:04.731722 containerd[2017]: 2025-11-08 00:06:04.637 [INFO][5152] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="c8acedf9f6df83bbd56c422c7cd496bb571eb5541c78c33622286dd08271d304" HandleID="k8s-pod-network.c8acedf9f6df83bbd56c422c7cd496bb571eb5541c78c33622286dd08271d304" Workload="ip--172--31--26--1-k8s-coredns--66bc5c9577--kvcmr-eth0" Nov 8 00:06:04.731722 containerd[2017]: 2025-11-08 00:06:04.639 [INFO][5152] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:06:04.731722 containerd[2017]: 2025-11-08 00:06:04.640 [INFO][5152] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:06:04.731722 containerd[2017]: 2025-11-08 00:06:04.683 [WARNING][5152] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c8acedf9f6df83bbd56c422c7cd496bb571eb5541c78c33622286dd08271d304" HandleID="k8s-pod-network.c8acedf9f6df83bbd56c422c7cd496bb571eb5541c78c33622286dd08271d304" Workload="ip--172--31--26--1-k8s-coredns--66bc5c9577--kvcmr-eth0" Nov 8 00:06:04.731722 containerd[2017]: 2025-11-08 00:06:04.683 [INFO][5152] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="c8acedf9f6df83bbd56c422c7cd496bb571eb5541c78c33622286dd08271d304" HandleID="k8s-pod-network.c8acedf9f6df83bbd56c422c7cd496bb571eb5541c78c33622286dd08271d304" Workload="ip--172--31--26--1-k8s-coredns--66bc5c9577--kvcmr-eth0" Nov 8 00:06:04.731722 containerd[2017]: 2025-11-08 00:06:04.706 [INFO][5152] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:06:04.731722 containerd[2017]: 2025-11-08 00:06:04.713 [INFO][5061] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c8acedf9f6df83bbd56c422c7cd496bb571eb5541c78c33622286dd08271d304" Nov 8 00:06:04.733262 containerd[2017]: time="2025-11-08T00:06:04.732316214Z" level=info msg="TearDown network for sandbox \"c8acedf9f6df83bbd56c422c7cd496bb571eb5541c78c33622286dd08271d304\" successfully" Nov 8 00:06:04.733262 containerd[2017]: time="2025-11-08T00:06:04.732389078Z" level=info msg="StopPodSandbox for \"c8acedf9f6df83bbd56c422c7cd496bb571eb5541c78c33622286dd08271d304\" returns successfully" Nov 8 00:06:04.738793 containerd[2017]: time="2025-11-08T00:06:04.738608654Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-kvcmr,Uid:e6ec260d-5b9d-4d44-82ff-ca1893bb4d69,Namespace:kube-system,Attempt:1,}" Nov 8 00:06:04.767573 containerd[2017]: 2025-11-08 00:06:04.530 [INFO][5058] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c1553fb9e7a9bb3ca20205fc07d4d23fc2577fa777f087c3b096e1b1ed425b92" Nov 8 00:06:04.767573 containerd[2017]: 2025-11-08 00:06:04.531 [INFO][5058] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="c1553fb9e7a9bb3ca20205fc07d4d23fc2577fa777f087c3b096e1b1ed425b92" iface="eth0" netns="/var/run/netns/cni-622afecf-f054-36ae-4bab-cd449b8a6a40" Nov 8 00:06:04.767573 containerd[2017]: 2025-11-08 00:06:04.532 [INFO][5058] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="c1553fb9e7a9bb3ca20205fc07d4d23fc2577fa777f087c3b096e1b1ed425b92" iface="eth0" netns="/var/run/netns/cni-622afecf-f054-36ae-4bab-cd449b8a6a40" Nov 8 00:06:04.767573 containerd[2017]: 2025-11-08 00:06:04.532 [INFO][5058] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="c1553fb9e7a9bb3ca20205fc07d4d23fc2577fa777f087c3b096e1b1ed425b92" iface="eth0" netns="/var/run/netns/cni-622afecf-f054-36ae-4bab-cd449b8a6a40" Nov 8 00:06:04.767573 containerd[2017]: 2025-11-08 00:06:04.533 [INFO][5058] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c1553fb9e7a9bb3ca20205fc07d4d23fc2577fa777f087c3b096e1b1ed425b92" Nov 8 00:06:04.767573 containerd[2017]: 2025-11-08 00:06:04.533 [INFO][5058] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c1553fb9e7a9bb3ca20205fc07d4d23fc2577fa777f087c3b096e1b1ed425b92" Nov 8 00:06:04.767573 containerd[2017]: 2025-11-08 00:06:04.679 [INFO][5165] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="c1553fb9e7a9bb3ca20205fc07d4d23fc2577fa777f087c3b096e1b1ed425b92" HandleID="k8s-pod-network.c1553fb9e7a9bb3ca20205fc07d4d23fc2577fa777f087c3b096e1b1ed425b92" Workload="ip--172--31--26--1-k8s-calico--kube--controllers--6cf595bbd4--gjrhj-eth0" Nov 8 00:06:04.767573 containerd[2017]: 2025-11-08 00:06:04.681 [INFO][5165] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:06:04.767573 containerd[2017]: 2025-11-08 00:06:04.708 [INFO][5165] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:06:04.767573 containerd[2017]: 2025-11-08 00:06:04.741 [WARNING][5165] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="c1553fb9e7a9bb3ca20205fc07d4d23fc2577fa777f087c3b096e1b1ed425b92" HandleID="k8s-pod-network.c1553fb9e7a9bb3ca20205fc07d4d23fc2577fa777f087c3b096e1b1ed425b92" Workload="ip--172--31--26--1-k8s-calico--kube--controllers--6cf595bbd4--gjrhj-eth0" Nov 8 00:06:04.767573 containerd[2017]: 2025-11-08 00:06:04.742 [INFO][5165] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="c1553fb9e7a9bb3ca20205fc07d4d23fc2577fa777f087c3b096e1b1ed425b92" HandleID="k8s-pod-network.c1553fb9e7a9bb3ca20205fc07d4d23fc2577fa777f087c3b096e1b1ed425b92" Workload="ip--172--31--26--1-k8s-calico--kube--controllers--6cf595bbd4--gjrhj-eth0" Nov 8 00:06:04.767573 containerd[2017]: 2025-11-08 00:06:04.746 [INFO][5165] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:06:04.767573 containerd[2017]: 2025-11-08 00:06:04.755 [INFO][5058] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c1553fb9e7a9bb3ca20205fc07d4d23fc2577fa777f087c3b096e1b1ed425b92" Nov 8 00:06:04.767573 containerd[2017]: time="2025-11-08T00:06:04.766937414Z" level=info msg="TearDown network for sandbox \"c1553fb9e7a9bb3ca20205fc07d4d23fc2577fa777f087c3b096e1b1ed425b92\" successfully" Nov 8 00:06:04.767573 containerd[2017]: time="2025-11-08T00:06:04.766976858Z" level=info msg="StopPodSandbox for \"c1553fb9e7a9bb3ca20205fc07d4d23fc2577fa777f087c3b096e1b1ed425b92\" returns successfully" Nov 8 00:06:04.773496 containerd[2017]: time="2025-11-08T00:06:04.772985786Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6cf595bbd4-gjrhj,Uid:33ee737d-9bb0-44ae-abd4-ed2fcc115154,Namespace:calico-system,Attempt:1,}" Nov 8 00:06:04.777492 systemd-networkd[1937]: calia01eb682dd0: Gained IPv6LL Nov 8 00:06:04.813730 containerd[2017]: 2025-11-08 00:06:04.426 [INFO][5063] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9bc1c236f8c2bc8c8b3c4e2867c337be7bc1fc70d5008e874e29cf3fdc13424a" Nov 8 00:06:04.813730 containerd[2017]: 2025-11-08 00:06:04.431 [INFO][5063] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="9bc1c236f8c2bc8c8b3c4e2867c337be7bc1fc70d5008e874e29cf3fdc13424a" iface="eth0" netns="/var/run/netns/cni-b8fbc8b4-4cbc-95fe-bbaa-070e400d32c0" Nov 8 00:06:04.813730 containerd[2017]: 2025-11-08 00:06:04.432 [INFO][5063] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="9bc1c236f8c2bc8c8b3c4e2867c337be7bc1fc70d5008e874e29cf3fdc13424a" iface="eth0" netns="/var/run/netns/cni-b8fbc8b4-4cbc-95fe-bbaa-070e400d32c0" Nov 8 00:06:04.813730 containerd[2017]: 2025-11-08 00:06:04.436 [INFO][5063] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="9bc1c236f8c2bc8c8b3c4e2867c337be7bc1fc70d5008e874e29cf3fdc13424a" iface="eth0" netns="/var/run/netns/cni-b8fbc8b4-4cbc-95fe-bbaa-070e400d32c0" Nov 8 00:06:04.813730 containerd[2017]: 2025-11-08 00:06:04.436 [INFO][5063] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9bc1c236f8c2bc8c8b3c4e2867c337be7bc1fc70d5008e874e29cf3fdc13424a" Nov 8 00:06:04.813730 containerd[2017]: 2025-11-08 00:06:04.436 [INFO][5063] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9bc1c236f8c2bc8c8b3c4e2867c337be7bc1fc70d5008e874e29cf3fdc13424a" Nov 8 00:06:04.813730 containerd[2017]: 2025-11-08 00:06:04.715 [INFO][5151] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="9bc1c236f8c2bc8c8b3c4e2867c337be7bc1fc70d5008e874e29cf3fdc13424a" HandleID="k8s-pod-network.9bc1c236f8c2bc8c8b3c4e2867c337be7bc1fc70d5008e874e29cf3fdc13424a" Workload="ip--172--31--26--1-k8s-goldmane--7c778bb748--rdhgb-eth0" Nov 8 00:06:04.813730 containerd[2017]: 2025-11-08 00:06:04.718 [INFO][5151] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:06:04.813730 containerd[2017]: 2025-11-08 00:06:04.747 [INFO][5151] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:06:04.813730 containerd[2017]: 2025-11-08 00:06:04.786 [WARNING][5151] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="9bc1c236f8c2bc8c8b3c4e2867c337be7bc1fc70d5008e874e29cf3fdc13424a" HandleID="k8s-pod-network.9bc1c236f8c2bc8c8b3c4e2867c337be7bc1fc70d5008e874e29cf3fdc13424a" Workload="ip--172--31--26--1-k8s-goldmane--7c778bb748--rdhgb-eth0" Nov 8 00:06:04.813730 containerd[2017]: 2025-11-08 00:06:04.786 [INFO][5151] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="9bc1c236f8c2bc8c8b3c4e2867c337be7bc1fc70d5008e874e29cf3fdc13424a" HandleID="k8s-pod-network.9bc1c236f8c2bc8c8b3c4e2867c337be7bc1fc70d5008e874e29cf3fdc13424a" Workload="ip--172--31--26--1-k8s-goldmane--7c778bb748--rdhgb-eth0" Nov 8 00:06:04.813730 containerd[2017]: 2025-11-08 00:06:04.792 [INFO][5151] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:06:04.813730 containerd[2017]: 2025-11-08 00:06:04.800 [INFO][5063] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="9bc1c236f8c2bc8c8b3c4e2867c337be7bc1fc70d5008e874e29cf3fdc13424a" Nov 8 00:06:04.814672 containerd[2017]: time="2025-11-08T00:06:04.813825615Z" level=info msg="TearDown network for sandbox \"9bc1c236f8c2bc8c8b3c4e2867c337be7bc1fc70d5008e874e29cf3fdc13424a\" successfully" Nov 8 00:06:04.814672 containerd[2017]: time="2025-11-08T00:06:04.813884895Z" level=info msg="StopPodSandbox for \"9bc1c236f8c2bc8c8b3c4e2867c337be7bc1fc70d5008e874e29cf3fdc13424a\" returns successfully" Nov 8 00:06:04.830947 containerd[2017]: time="2025-11-08T00:06:04.821308263Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-rdhgb,Uid:415c772f-4a8a-4df0-8713-cab5820f0205,Namespace:calico-system,Attempt:1,}" Nov 8 00:06:04.860508 containerd[2017]: time="2025-11-08T00:06:04.860439867Z" level=info msg="StopPodSandbox for \"2c959b2625c7e2b3e74f9b4c5d0d12093f47e38c4fb646a45a624ef34d92a3ac\"" Nov 8 00:06:04.891938 systemd[1]: Started cri-containerd-2ef8da1e8dce69e2507c837eaeec8f0f8e29cf7da030cf53199b8276928ef28f.scope - libcontainer container 2ef8da1e8dce69e2507c837eaeec8f0f8e29cf7da030cf53199b8276928ef28f. Nov 8 00:06:04.896163 containerd[2017]: 2025-11-08 00:06:04.485 [INFO][5065] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="eabd5d8b0a6ca232a335a07bac0f257e613e80813b638f804cae162aec5b2b34" Nov 8 00:06:04.896163 containerd[2017]: 2025-11-08 00:06:04.491 [INFO][5065] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="eabd5d8b0a6ca232a335a07bac0f257e613e80813b638f804cae162aec5b2b34" iface="eth0" netns="/var/run/netns/cni-981b6eb4-2153-b1a2-7be6-b77ab33886ac" Nov 8 00:06:04.896163 containerd[2017]: 2025-11-08 00:06:04.493 [INFO][5065] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="eabd5d8b0a6ca232a335a07bac0f257e613e80813b638f804cae162aec5b2b34" iface="eth0" netns="/var/run/netns/cni-981b6eb4-2153-b1a2-7be6-b77ab33886ac" Nov 8 00:06:04.896163 containerd[2017]: 2025-11-08 00:06:04.500 [INFO][5065] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="eabd5d8b0a6ca232a335a07bac0f257e613e80813b638f804cae162aec5b2b34" iface="eth0" netns="/var/run/netns/cni-981b6eb4-2153-b1a2-7be6-b77ab33886ac" Nov 8 00:06:04.896163 containerd[2017]: 2025-11-08 00:06:04.500 [INFO][5065] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="eabd5d8b0a6ca232a335a07bac0f257e613e80813b638f804cae162aec5b2b34" Nov 8 00:06:04.896163 containerd[2017]: 2025-11-08 00:06:04.500 [INFO][5065] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="eabd5d8b0a6ca232a335a07bac0f257e613e80813b638f804cae162aec5b2b34" Nov 8 00:06:04.896163 containerd[2017]: 2025-11-08 00:06:04.724 [INFO][5162] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="eabd5d8b0a6ca232a335a07bac0f257e613e80813b638f804cae162aec5b2b34" HandleID="k8s-pod-network.eabd5d8b0a6ca232a335a07bac0f257e613e80813b638f804cae162aec5b2b34" Workload="ip--172--31--26--1-k8s-calico--apiserver--f558bfb5c--cbxnw-eth0" Nov 8 00:06:04.896163 containerd[2017]: 2025-11-08 00:06:04.724 [INFO][5162] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:06:04.896163 containerd[2017]: 2025-11-08 00:06:04.792 [INFO][5162] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:06:04.896163 containerd[2017]: 2025-11-08 00:06:04.843 [WARNING][5162] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="eabd5d8b0a6ca232a335a07bac0f257e613e80813b638f804cae162aec5b2b34" HandleID="k8s-pod-network.eabd5d8b0a6ca232a335a07bac0f257e613e80813b638f804cae162aec5b2b34" Workload="ip--172--31--26--1-k8s-calico--apiserver--f558bfb5c--cbxnw-eth0" Nov 8 00:06:04.896163 containerd[2017]: 2025-11-08 00:06:04.844 [INFO][5162] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="eabd5d8b0a6ca232a335a07bac0f257e613e80813b638f804cae162aec5b2b34" HandleID="k8s-pod-network.eabd5d8b0a6ca232a335a07bac0f257e613e80813b638f804cae162aec5b2b34" Workload="ip--172--31--26--1-k8s-calico--apiserver--f558bfb5c--cbxnw-eth0" Nov 8 00:06:04.896163 containerd[2017]: 2025-11-08 00:06:04.850 [INFO][5162] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:06:04.896163 containerd[2017]: 2025-11-08 00:06:04.877 [INFO][5065] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="eabd5d8b0a6ca232a335a07bac0f257e613e80813b638f804cae162aec5b2b34" Nov 8 00:06:04.899801 containerd[2017]: time="2025-11-08T00:06:04.896943087Z" level=info msg="TearDown network for sandbox \"eabd5d8b0a6ca232a335a07bac0f257e613e80813b638f804cae162aec5b2b34\" successfully" Nov 8 00:06:04.899801 containerd[2017]: time="2025-11-08T00:06:04.897098211Z" level=info msg="StopPodSandbox for \"eabd5d8b0a6ca232a335a07bac0f257e613e80813b638f804cae162aec5b2b34\" returns successfully" Nov 8 00:06:04.910796 containerd[2017]: time="2025-11-08T00:06:04.910691835Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f558bfb5c-cbxnw,Uid:ec16be48-232b-457b-bf3a-4db776262475,Namespace:calico-apiserver,Attempt:1,}" Nov 8 00:06:05.050671 systemd[1]: run-netns-cni\x2d012047f2\x2d181b\x2d74be\x2d0906\x2d7494479c7e3c.mount: Deactivated successfully. Nov 8 00:06:05.050896 systemd[1]: run-netns-cni\x2db8fbc8b4\x2d4cbc\x2d95fe\x2dbbaa\x2d070e400d32c0.mount: Deactivated successfully. Nov 8 00:06:05.051069 systemd[1]: run-netns-cni\x2d981b6eb4\x2d2153\x2db1a2\x2d7be6\x2db77ab33886ac.mount: Deactivated successfully. Nov 8 00:06:05.051228 systemd[1]: run-netns-cni\x2d622afecf\x2df054\x2d36ae\x2d4bab\x2dcd449b8a6a40.mount: Deactivated successfully. Nov 8 00:06:05.120240 containerd[2017]: time="2025-11-08T00:06:05.119796396Z" level=info msg="StartContainer for \"2ef8da1e8dce69e2507c837eaeec8f0f8e29cf7da030cf53199b8276928ef28f\" returns successfully" Nov 8 00:06:05.133150 systemd-networkd[1937]: calic399dcf364c: Link UP Nov 8 00:06:05.140777 systemd-networkd[1937]: calic399dcf364c: Gained carrier Nov 8 00:06:05.177165 containerd[2017]: 2025-11-08 00:06:04.618 [INFO][5064] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ef1548297cd838805af2b097be94072f66b814e1194840c9e942ff8ee0a31d36" Nov 8 00:06:05.177165 containerd[2017]: 2025-11-08 00:06:04.621 [INFO][5064] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="ef1548297cd838805af2b097be94072f66b814e1194840c9e942ff8ee0a31d36" iface="eth0" netns="/var/run/netns/cni-0f87d255-3949-8c84-b10c-1847dbd27b4c" Nov 8 00:06:05.177165 containerd[2017]: 2025-11-08 00:06:04.623 [INFO][5064] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="ef1548297cd838805af2b097be94072f66b814e1194840c9e942ff8ee0a31d36" iface="eth0" netns="/var/run/netns/cni-0f87d255-3949-8c84-b10c-1847dbd27b4c" Nov 8 00:06:05.177165 containerd[2017]: 2025-11-08 00:06:04.626 [INFO][5064] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="ef1548297cd838805af2b097be94072f66b814e1194840c9e942ff8ee0a31d36" iface="eth0" netns="/var/run/netns/cni-0f87d255-3949-8c84-b10c-1847dbd27b4c" Nov 8 00:06:05.177165 containerd[2017]: 2025-11-08 00:06:04.626 [INFO][5064] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ef1548297cd838805af2b097be94072f66b814e1194840c9e942ff8ee0a31d36" Nov 8 00:06:05.177165 containerd[2017]: 2025-11-08 00:06:04.626 [INFO][5064] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ef1548297cd838805af2b097be94072f66b814e1194840c9e942ff8ee0a31d36" Nov 8 00:06:05.177165 containerd[2017]: 2025-11-08 00:06:04.848 [INFO][5181] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="ef1548297cd838805af2b097be94072f66b814e1194840c9e942ff8ee0a31d36" HandleID="k8s-pod-network.ef1548297cd838805af2b097be94072f66b814e1194840c9e942ff8ee0a31d36" Workload="ip--172--31--26--1-k8s-calico--apiserver--f558bfb5c--2hjdp-eth0" Nov 8 00:06:05.177165 containerd[2017]: 2025-11-08 00:06:04.851 [INFO][5181] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:06:05.177165 containerd[2017]: 2025-11-08 00:06:05.069 [INFO][5181] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:06:05.177165 containerd[2017]: 2025-11-08 00:06:05.118 [WARNING][5181] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="ef1548297cd838805af2b097be94072f66b814e1194840c9e942ff8ee0a31d36" HandleID="k8s-pod-network.ef1548297cd838805af2b097be94072f66b814e1194840c9e942ff8ee0a31d36" Workload="ip--172--31--26--1-k8s-calico--apiserver--f558bfb5c--2hjdp-eth0" Nov 8 00:06:05.177165 containerd[2017]: 2025-11-08 00:06:05.118 [INFO][5181] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="ef1548297cd838805af2b097be94072f66b814e1194840c9e942ff8ee0a31d36" HandleID="k8s-pod-network.ef1548297cd838805af2b097be94072f66b814e1194840c9e942ff8ee0a31d36" Workload="ip--172--31--26--1-k8s-calico--apiserver--f558bfb5c--2hjdp-eth0" Nov 8 00:06:05.177165 containerd[2017]: 2025-11-08 00:06:05.140 [INFO][5181] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:06:05.177165 containerd[2017]: 2025-11-08 00:06:05.164 [INFO][5064] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ef1548297cd838805af2b097be94072f66b814e1194840c9e942ff8ee0a31d36" Nov 8 00:06:05.180782 containerd[2017]: time="2025-11-08T00:06:05.179506044Z" level=info msg="TearDown network for sandbox \"ef1548297cd838805af2b097be94072f66b814e1194840c9e942ff8ee0a31d36\" successfully" Nov 8 00:06:05.180782 containerd[2017]: time="2025-11-08T00:06:05.179851956Z" level=info msg="StopPodSandbox for \"ef1548297cd838805af2b097be94072f66b814e1194840c9e942ff8ee0a31d36\" returns successfully" Nov 8 00:06:05.186962 systemd[1]: run-netns-cni\x2d0f87d255\x2d3949\x2d8c84\x2db10c\x2d1847dbd27b4c.mount: Deactivated successfully. 
Nov 8 00:06:05.194271 containerd[2017]: time="2025-11-08T00:06:05.194159545Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f558bfb5c-2hjdp,Uid:a0f0a15a-d068-42d7-9057-db8aa3861ce8,Namespace:calico-apiserver,Attempt:1,}" Nov 8 00:06:05.237546 containerd[2017]: 2025-11-08 00:06:04.514 [INFO][5076] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--26--1-k8s-whisker--7d67fb7489--w2h7r-eth0 whisker-7d67fb7489- calico-system 964ee664-ff07-42fc-8d91-b078ca7f25c8 989 0 2025-11-08 00:06:03 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:7d67fb7489 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ip-172-31-26-1 whisker-7d67fb7489-w2h7r eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calic399dcf364c [] [] }} ContainerID="cea665f77b7a39a8ca7c70bbaaa998eeab15a597619112460c656eb234f40655" Namespace="calico-system" Pod="whisker-7d67fb7489-w2h7r" WorkloadEndpoint="ip--172--31--26--1-k8s-whisker--7d67fb7489--w2h7r-" Nov 8 00:06:05.237546 containerd[2017]: 2025-11-08 00:06:04.520 [INFO][5076] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="cea665f77b7a39a8ca7c70bbaaa998eeab15a597619112460c656eb234f40655" Namespace="calico-system" Pod="whisker-7d67fb7489-w2h7r" WorkloadEndpoint="ip--172--31--26--1-k8s-whisker--7d67fb7489--w2h7r-eth0" Nov 8 00:06:05.237546 containerd[2017]: 2025-11-08 00:06:04.776 [INFO][5174] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="cea665f77b7a39a8ca7c70bbaaa998eeab15a597619112460c656eb234f40655" HandleID="k8s-pod-network.cea665f77b7a39a8ca7c70bbaaa998eeab15a597619112460c656eb234f40655" Workload="ip--172--31--26--1-k8s-whisker--7d67fb7489--w2h7r-eth0" Nov 8 00:06:05.237546 containerd[2017]: 2025-11-08 00:06:04.779 [INFO][5174] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="cea665f77b7a39a8ca7c70bbaaa998eeab15a597619112460c656eb234f40655" HandleID="k8s-pod-network.cea665f77b7a39a8ca7c70bbaaa998eeab15a597619112460c656eb234f40655" Workload="ip--172--31--26--1-k8s-whisker--7d67fb7489--w2h7r-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40001201c0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-26-1", "pod":"whisker-7d67fb7489-w2h7r", "timestamp":"2025-11-08 00:06:04.77626673 +0000 UTC"}, Hostname:"ip-172-31-26-1", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 00:06:05.237546 containerd[2017]: 2025-11-08 00:06:04.779 [INFO][5174] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:06:05.237546 containerd[2017]: 2025-11-08 00:06:04.851 [INFO][5174] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 8 00:06:05.237546 containerd[2017]: 2025-11-08 00:06:04.852 [INFO][5174] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-26-1' Nov 8 00:06:05.237546 containerd[2017]: 2025-11-08 00:06:04.919 [INFO][5174] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.cea665f77b7a39a8ca7c70bbaaa998eeab15a597619112460c656eb234f40655" host="ip-172-31-26-1" Nov 8 00:06:05.237546 containerd[2017]: 2025-11-08 00:06:04.943 [INFO][5174] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-26-1" Nov 8 00:06:05.237546 containerd[2017]: 2025-11-08 00:06:04.964 [INFO][5174] ipam/ipam.go 511: Trying affinity for 192.168.104.192/26 host="ip-172-31-26-1" Nov 8 00:06:05.237546 containerd[2017]: 2025-11-08 00:06:04.971 [INFO][5174] ipam/ipam.go 158: Attempting to load block cidr=192.168.104.192/26 host="ip-172-31-26-1" Nov 8 00:06:05.237546 containerd[2017]: 2025-11-08 00:06:04.982 [INFO][5174] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.104.192/26 host="ip-172-31-26-1" Nov 8 00:06:05.237546 containerd[2017]: 2025-11-08 00:06:04.983 [INFO][5174] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.104.192/26 handle="k8s-pod-network.cea665f77b7a39a8ca7c70bbaaa998eeab15a597619112460c656eb234f40655" host="ip-172-31-26-1" Nov 8 00:06:05.237546 containerd[2017]: 2025-11-08 00:06:04.988 [INFO][5174] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.cea665f77b7a39a8ca7c70bbaaa998eeab15a597619112460c656eb234f40655 Nov 8 00:06:05.237546 containerd[2017]: 2025-11-08 00:06:05.013 [INFO][5174] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.104.192/26 handle="k8s-pod-network.cea665f77b7a39a8ca7c70bbaaa998eeab15a597619112460c656eb234f40655" host="ip-172-31-26-1" Nov 8 00:06:05.237546 containerd[2017]: 2025-11-08 00:06:05.065 [INFO][5174] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.104.194/26] block=192.168.104.192/26 handle="k8s-pod-network.cea665f77b7a39a8ca7c70bbaaa998eeab15a597619112460c656eb234f40655" host="ip-172-31-26-1" Nov 8 00:06:05.237546 containerd[2017]: 2025-11-08 00:06:05.068 [INFO][5174] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.104.194/26] handle="k8s-pod-network.cea665f77b7a39a8ca7c70bbaaa998eeab15a597619112460c656eb234f40655" host="ip-172-31-26-1" Nov 8 00:06:05.237546 containerd[2017]: 2025-11-08 00:06:05.071 [INFO][5174] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 8 00:06:05.237546 containerd[2017]: 2025-11-08 00:06:05.073 [INFO][5174] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.104.194/26] IPv6=[] ContainerID="cea665f77b7a39a8ca7c70bbaaa998eeab15a597619112460c656eb234f40655" HandleID="k8s-pod-network.cea665f77b7a39a8ca7c70bbaaa998eeab15a597619112460c656eb234f40655" Workload="ip--172--31--26--1-k8s-whisker--7d67fb7489--w2h7r-eth0" Nov 8 00:06:05.238808 containerd[2017]: 2025-11-08 00:06:05.097 [INFO][5076] cni-plugin/k8s.go 418: Populated endpoint ContainerID="cea665f77b7a39a8ca7c70bbaaa998eeab15a597619112460c656eb234f40655" Namespace="calico-system" Pod="whisker-7d67fb7489-w2h7r" WorkloadEndpoint="ip--172--31--26--1-k8s-whisker--7d67fb7489--w2h7r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--1-k8s-whisker--7d67fb7489--w2h7r-eth0", GenerateName:"whisker-7d67fb7489-", Namespace:"calico-system", SelfLink:"", UID:"964ee664-ff07-42fc-8d91-b078ca7f25c8", ResourceVersion:"989", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 6, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7d67fb7489", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-1", ContainerID:"", Pod:"whisker-7d67fb7489-w2h7r", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.104.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calic399dcf364c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:06:05.238808 containerd[2017]: 2025-11-08 00:06:05.097 [INFO][5076] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.104.194/32] ContainerID="cea665f77b7a39a8ca7c70bbaaa998eeab15a597619112460c656eb234f40655" Namespace="calico-system" Pod="whisker-7d67fb7489-w2h7r" WorkloadEndpoint="ip--172--31--26--1-k8s-whisker--7d67fb7489--w2h7r-eth0" Nov 8 00:06:05.238808 containerd[2017]: 2025-11-08 00:06:05.097 [INFO][5076] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic399dcf364c ContainerID="cea665f77b7a39a8ca7c70bbaaa998eeab15a597619112460c656eb234f40655" Namespace="calico-system" Pod="whisker-7d67fb7489-w2h7r" WorkloadEndpoint="ip--172--31--26--1-k8s-whisker--7d67fb7489--w2h7r-eth0" Nov 8 00:06:05.238808 containerd[2017]: 2025-11-08 00:06:05.150 [INFO][5076] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="cea665f77b7a39a8ca7c70bbaaa998eeab15a597619112460c656eb234f40655" Namespace="calico-system" Pod="whisker-7d67fb7489-w2h7r" WorkloadEndpoint="ip--172--31--26--1-k8s-whisker--7d67fb7489--w2h7r-eth0" Nov 8 00:06:05.238808 containerd[2017]: 2025-11-08 00:06:05.168 [INFO][5076] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="cea665f77b7a39a8ca7c70bbaaa998eeab15a597619112460c656eb234f40655" Namespace="calico-system" Pod="whisker-7d67fb7489-w2h7r" 
WorkloadEndpoint="ip--172--31--26--1-k8s-whisker--7d67fb7489--w2h7r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--1-k8s-whisker--7d67fb7489--w2h7r-eth0", GenerateName:"whisker-7d67fb7489-", Namespace:"calico-system", SelfLink:"", UID:"964ee664-ff07-42fc-8d91-b078ca7f25c8", ResourceVersion:"989", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 6, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7d67fb7489", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-1", ContainerID:"cea665f77b7a39a8ca7c70bbaaa998eeab15a597619112460c656eb234f40655", Pod:"whisker-7d67fb7489-w2h7r", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.104.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calic399dcf364c", MAC:"46:b4:0e:8f:6b:7a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:06:05.238808 containerd[2017]: 2025-11-08 00:06:05.221 [INFO][5076] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="cea665f77b7a39a8ca7c70bbaaa998eeab15a597619112460c656eb234f40655" Namespace="calico-system" Pod="whisker-7d67fb7489-w2h7r" WorkloadEndpoint="ip--172--31--26--1-k8s-whisker--7d67fb7489--w2h7r-eth0" Nov 8 00:06:05.515627 kubelet[3480]: I1108 00:06:05.513430 3480 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-b447x" podStartSLOduration=64.513402266 podStartE2EDuration="1m4.513402266s" podCreationTimestamp="2025-11-08 00:05:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 00:06:05.508500698 +0000 UTC m=+69.932190720" watchObservedRunningTime="2025-11-08 00:06:05.513402266 +0000 UTC m=+69.937092264" Nov 8 00:06:05.571128 containerd[2017]: time="2025-11-08T00:06:05.569530418Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:06:05.571128 containerd[2017]: time="2025-11-08T00:06:05.569931758Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:06:05.571974 containerd[2017]: time="2025-11-08T00:06:05.569986214Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:06:05.571974 containerd[2017]: time="2025-11-08T00:06:05.571652450Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:06:05.695412 systemd[1]: Started cri-containerd-cea665f77b7a39a8ca7c70bbaaa998eeab15a597619112460c656eb234f40655.scope - libcontainer container cea665f77b7a39a8ca7c70bbaaa998eeab15a597619112460c656eb234f40655. 
Nov 8 00:06:05.917959 systemd-networkd[1937]: cali1cb2565c823: Link UP Nov 8 00:06:05.928325 systemd-networkd[1937]: cali1cb2565c823: Gained carrier Nov 8 00:06:06.005882 containerd[2017]: 2025-11-08 00:06:05.294 [INFO][5246] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2c959b2625c7e2b3e74f9b4c5d0d12093f47e38c4fb646a45a624ef34d92a3ac" Nov 8 00:06:06.005882 containerd[2017]: 2025-11-08 00:06:05.302 [INFO][5246] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="2c959b2625c7e2b3e74f9b4c5d0d12093f47e38c4fb646a45a624ef34d92a3ac" iface="eth0" netns="/var/run/netns/cni-5a21caac-45d7-fa24-420c-be6431d0664d" Nov 8 00:06:06.005882 containerd[2017]: 2025-11-08 00:06:05.309 [INFO][5246] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="2c959b2625c7e2b3e74f9b4c5d0d12093f47e38c4fb646a45a624ef34d92a3ac" iface="eth0" netns="/var/run/netns/cni-5a21caac-45d7-fa24-420c-be6431d0664d" Nov 8 00:06:06.005882 containerd[2017]: 2025-11-08 00:06:05.310 [INFO][5246] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="2c959b2625c7e2b3e74f9b4c5d0d12093f47e38c4fb646a45a624ef34d92a3ac" iface="eth0" netns="/var/run/netns/cni-5a21caac-45d7-fa24-420c-be6431d0664d" Nov 8 00:06:06.005882 containerd[2017]: 2025-11-08 00:06:05.310 [INFO][5246] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2c959b2625c7e2b3e74f9b4c5d0d12093f47e38c4fb646a45a624ef34d92a3ac" Nov 8 00:06:06.005882 containerd[2017]: 2025-11-08 00:06:05.310 [INFO][5246] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2c959b2625c7e2b3e74f9b4c5d0d12093f47e38c4fb646a45a624ef34d92a3ac" Nov 8 00:06:06.005882 containerd[2017]: 2025-11-08 00:06:05.706 [INFO][5299] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="2c959b2625c7e2b3e74f9b4c5d0d12093f47e38c4fb646a45a624ef34d92a3ac" HandleID="k8s-pod-network.2c959b2625c7e2b3e74f9b4c5d0d12093f47e38c4fb646a45a624ef34d92a3ac" Workload="ip--172--31--26--1-k8s-csi--node--driver--rkzrr-eth0" Nov 8 00:06:06.005882 containerd[2017]: 2025-11-08 00:06:05.707 [INFO][5299] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:06:06.005882 containerd[2017]: 2025-11-08 00:06:05.897 [INFO][5299] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:06:06.005882 containerd[2017]: 2025-11-08 00:06:05.943 [WARNING][5299] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="2c959b2625c7e2b3e74f9b4c5d0d12093f47e38c4fb646a45a624ef34d92a3ac" HandleID="k8s-pod-network.2c959b2625c7e2b3e74f9b4c5d0d12093f47e38c4fb646a45a624ef34d92a3ac" Workload="ip--172--31--26--1-k8s-csi--node--driver--rkzrr-eth0" Nov 8 00:06:06.005882 containerd[2017]: 2025-11-08 00:06:05.943 [INFO][5299] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="2c959b2625c7e2b3e74f9b4c5d0d12093f47e38c4fb646a45a624ef34d92a3ac" HandleID="k8s-pod-network.2c959b2625c7e2b3e74f9b4c5d0d12093f47e38c4fb646a45a624ef34d92a3ac" Workload="ip--172--31--26--1-k8s-csi--node--driver--rkzrr-eth0" Nov 8 00:06:06.005882 containerd[2017]: 2025-11-08 00:06:05.972 [INFO][5299] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:06:06.005882 containerd[2017]: 2025-11-08 00:06:05.999 [INFO][5246] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="2c959b2625c7e2b3e74f9b4c5d0d12093f47e38c4fb646a45a624ef34d92a3ac" Nov 8 00:06:06.009058 containerd[2017]: time="2025-11-08T00:06:06.008053753Z" level=info msg="TearDown network for sandbox \"2c959b2625c7e2b3e74f9b4c5d0d12093f47e38c4fb646a45a624ef34d92a3ac\" successfully" Nov 8 00:06:06.009058 containerd[2017]: time="2025-11-08T00:06:06.008143849Z" level=info msg="StopPodSandbox for \"2c959b2625c7e2b3e74f9b4c5d0d12093f47e38c4fb646a45a624ef34d92a3ac\" returns successfully" Nov 8 00:06:06.009058 containerd[2017]: 2025-11-08 00:06:05.233 [INFO][5204] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--26--1-k8s-coredns--66bc5c9577--kvcmr-eth0 coredns-66bc5c9577- kube-system e6ec260d-5b9d-4d44-82ff-ca1893bb4d69 998 0 2025-11-08 00:05:01 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-26-1 coredns-66bc5c9577-kvcmr eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali1cb2565c823 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="50449aadf6a3a544169f0c4151945040ee68c0ffc96d8651155559102111f052" Namespace="kube-system" Pod="coredns-66bc5c9577-kvcmr" WorkloadEndpoint="ip--172--31--26--1-k8s-coredns--66bc5c9577--kvcmr-" Nov 8 00:06:06.009058 containerd[2017]: 2025-11-08 00:06:05.236 [INFO][5204] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="50449aadf6a3a544169f0c4151945040ee68c0ffc96d8651155559102111f052" Namespace="kube-system" Pod="coredns-66bc5c9577-kvcmr" WorkloadEndpoint="ip--172--31--26--1-k8s-coredns--66bc5c9577--kvcmr-eth0" Nov 8 00:06:06.009058 containerd[2017]: 2025-11-08 00:06:05.702 [INFO][5298] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="50449aadf6a3a544169f0c4151945040ee68c0ffc96d8651155559102111f052" HandleID="k8s-pod-network.50449aadf6a3a544169f0c4151945040ee68c0ffc96d8651155559102111f052" Workload="ip--172--31--26--1-k8s-coredns--66bc5c9577--kvcmr-eth0" Nov 8 00:06:06.009058 containerd[2017]: 2025-11-08 00:06:05.707 [INFO][5298] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="50449aadf6a3a544169f0c4151945040ee68c0ffc96d8651155559102111f052" HandleID="k8s-pod-network.50449aadf6a3a544169f0c4151945040ee68c0ffc96d8651155559102111f052" Workload="ip--172--31--26--1-k8s-coredns--66bc5c9577--kvcmr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40005f4d60), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-26-1", "pod":"coredns-66bc5c9577-kvcmr", "timestamp":"2025-11-08 00:06:05.702927015 +0000 UTC"}, Hostname:"ip-172-31-26-1", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 00:06:06.009058 containerd[2017]: 2025-11-08 00:06:05.707 [INFO][5298] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:06:06.009058 containerd[2017]: 2025-11-08 00:06:05.707 [INFO][5298] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 8 00:06:06.009058 containerd[2017]: 2025-11-08 00:06:05.707 [INFO][5298] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-26-1' Nov 8 00:06:06.009058 containerd[2017]: 2025-11-08 00:06:05.749 [INFO][5298] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.50449aadf6a3a544169f0c4151945040ee68c0ffc96d8651155559102111f052" host="ip-172-31-26-1" Nov 8 00:06:06.009058 containerd[2017]: 2025-11-08 00:06:05.775 [INFO][5298] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-26-1" Nov 8 00:06:06.009058 containerd[2017]: 2025-11-08 00:06:05.811 [INFO][5298] ipam/ipam.go 511: Trying affinity for 192.168.104.192/26 host="ip-172-31-26-1" Nov 8 00:06:06.009058 containerd[2017]: 2025-11-08 00:06:05.833 [INFO][5298] ipam/ipam.go 158: Attempting to load block cidr=192.168.104.192/26 host="ip-172-31-26-1" Nov 8 00:06:06.009058 containerd[2017]: 2025-11-08 00:06:05.842 [INFO][5298] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.104.192/26 host="ip-172-31-26-1" Nov 8 00:06:06.009058 containerd[2017]: 2025-11-08 00:06:05.843 [INFO][5298] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.104.192/26 handle="k8s-pod-network.50449aadf6a3a544169f0c4151945040ee68c0ffc96d8651155559102111f052" host="ip-172-31-26-1" Nov 8 00:06:06.009058 containerd[2017]: 2025-11-08 00:06:05.849 [INFO][5298] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.50449aadf6a3a544169f0c4151945040ee68c0ffc96d8651155559102111f052 Nov 8 00:06:06.009058 containerd[2017]: 2025-11-08 00:06:05.866 [INFO][5298] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.104.192/26 handle="k8s-pod-network.50449aadf6a3a544169f0c4151945040ee68c0ffc96d8651155559102111f052" host="ip-172-31-26-1" Nov 8 00:06:06.009058 containerd[2017]: 2025-11-08 00:06:05.896 [INFO][5298] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.104.195/26] block=192.168.104.192/26 handle="k8s-pod-network.50449aadf6a3a544169f0c4151945040ee68c0ffc96d8651155559102111f052" host="ip-172-31-26-1" Nov 8 00:06:06.009058 containerd[2017]: 2025-11-08 00:06:05.896 [INFO][5298] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.104.195/26] handle="k8s-pod-network.50449aadf6a3a544169f0c4151945040ee68c0ffc96d8651155559102111f052" host="ip-172-31-26-1" Nov 8 00:06:06.009058 containerd[2017]: 2025-11-08 00:06:05.897 [INFO][5298] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 8 00:06:06.009058 containerd[2017]: 2025-11-08 00:06:05.898 [INFO][5298] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.104.195/26] IPv6=[] ContainerID="50449aadf6a3a544169f0c4151945040ee68c0ffc96d8651155559102111f052" HandleID="k8s-pod-network.50449aadf6a3a544169f0c4151945040ee68c0ffc96d8651155559102111f052" Workload="ip--172--31--26--1-k8s-coredns--66bc5c9577--kvcmr-eth0" Nov 8 00:06:06.013812 containerd[2017]: 2025-11-08 00:06:05.910 [INFO][5204] cni-plugin/k8s.go 418: Populated endpoint ContainerID="50449aadf6a3a544169f0c4151945040ee68c0ffc96d8651155559102111f052" Namespace="kube-system" Pod="coredns-66bc5c9577-kvcmr" WorkloadEndpoint="ip--172--31--26--1-k8s-coredns--66bc5c9577--kvcmr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--1-k8s-coredns--66bc5c9577--kvcmr-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"e6ec260d-5b9d-4d44-82ff-ca1893bb4d69", ResourceVersion:"998", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 5, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-1", ContainerID:"", Pod:"coredns-66bc5c9577-kvcmr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.104.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1cb2565c823", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:06:06.013812 containerd[2017]: 2025-11-08 00:06:05.910 [INFO][5204] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.104.195/32] ContainerID="50449aadf6a3a544169f0c4151945040ee68c0ffc96d8651155559102111f052" Namespace="kube-system" Pod="coredns-66bc5c9577-kvcmr" WorkloadEndpoint="ip--172--31--26--1-k8s-coredns--66bc5c9577--kvcmr-eth0" Nov 8 00:06:06.013812 containerd[2017]: 2025-11-08 00:06:05.912 [INFO][5204] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1cb2565c823 ContainerID="50449aadf6a3a544169f0c4151945040ee68c0ffc96d8651155559102111f052" Namespace="kube-system" Pod="coredns-66bc5c9577-kvcmr" WorkloadEndpoint="ip--172--31--26--1-k8s-coredns--66bc5c9577--kvcmr-eth0" Nov 8 
00:06:06.013812 containerd[2017]: 2025-11-08 00:06:05.932 [INFO][5204] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="50449aadf6a3a544169f0c4151945040ee68c0ffc96d8651155559102111f052" Namespace="kube-system" Pod="coredns-66bc5c9577-kvcmr" WorkloadEndpoint="ip--172--31--26--1-k8s-coredns--66bc5c9577--kvcmr-eth0" Nov 8 00:06:06.013812 containerd[2017]: 2025-11-08 00:06:05.941 [INFO][5204] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="50449aadf6a3a544169f0c4151945040ee68c0ffc96d8651155559102111f052" Namespace="kube-system" Pod="coredns-66bc5c9577-kvcmr" WorkloadEndpoint="ip--172--31--26--1-k8s-coredns--66bc5c9577--kvcmr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--1-k8s-coredns--66bc5c9577--kvcmr-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"e6ec260d-5b9d-4d44-82ff-ca1893bb4d69", ResourceVersion:"998", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 5, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-1", ContainerID:"50449aadf6a3a544169f0c4151945040ee68c0ffc96d8651155559102111f052", Pod:"coredns-66bc5c9577-kvcmr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.104.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1cb2565c823", MAC:"76:d8:2d:86:9c:cd", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:06:06.013812 containerd[2017]: 2025-11-08 00:06:05.989 [INFO][5204] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="50449aadf6a3a544169f0c4151945040ee68c0ffc96d8651155559102111f052" Namespace="kube-system" Pod="coredns-66bc5c9577-kvcmr" WorkloadEndpoint="ip--172--31--26--1-k8s-coredns--66bc5c9577--kvcmr-eth0" Nov 8 00:06:06.015543 containerd[2017]: time="2025-11-08T00:06:06.014877589Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rkzrr,Uid:105dae3d-b44c-41c4-b31a-bd1432c68a75,Namespace:calico-system,Attempt:1,}" Nov 8 00:06:06.030424 systemd[1]: 
run-netns-cni\x2d5a21caac\x2d45d7\x2dfa24\x2d420c\x2dbe6431d0664d.mount: Deactivated successfully. Nov 8 00:06:06.039454 containerd[2017]: time="2025-11-08T00:06:06.038678077Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7d67fb7489-w2h7r,Uid:964ee664-ff07-42fc-8d91-b078ca7f25c8,Namespace:calico-system,Attempt:0,} returns sandbox id \"cea665f77b7a39a8ca7c70bbaaa998eeab15a597619112460c656eb234f40655\"" Nov 8 00:06:06.049293 containerd[2017]: time="2025-11-08T00:06:06.049240429Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 8 00:06:06.208365 systemd-networkd[1937]: cali9f7c1863073: Link UP Nov 8 00:06:06.217402 systemd-networkd[1937]: cali9f7c1863073: Gained carrier Nov 8 00:06:06.233635 containerd[2017]: time="2025-11-08T00:06:06.232952630Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:06:06.240501 containerd[2017]: time="2025-11-08T00:06:06.235706102Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:06:06.240501 containerd[2017]: time="2025-11-08T00:06:06.238630862Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:06:06.240501 containerd[2017]: time="2025-11-08T00:06:06.239324870Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:06:06.293597 containerd[2017]: 2025-11-08 00:06:05.504 [INFO][5222] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--26--1-k8s-calico--kube--controllers--6cf595bbd4--gjrhj-eth0 calico-kube-controllers-6cf595bbd4- calico-system 33ee737d-9bb0-44ae-abd4-ed2fcc115154 1004 0 2025-11-08 00:05:37 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6cf595bbd4 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ip-172-31-26-1 calico-kube-controllers-6cf595bbd4-gjrhj eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali9f7c1863073 [] [] }} ContainerID="c1a96b4f71d2a1c5d9508f63163c675c4de3a0401997baec0c2fa634cea6c94c" Namespace="calico-system" Pod="calico-kube-controllers-6cf595bbd4-gjrhj" WorkloadEndpoint="ip--172--31--26--1-k8s-calico--kube--controllers--6cf595bbd4--gjrhj-" Nov 8 00:06:06.293597 containerd[2017]: 2025-11-08 00:06:05.504 [INFO][5222] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c1a96b4f71d2a1c5d9508f63163c675c4de3a0401997baec0c2fa634cea6c94c" Namespace="calico-system" Pod="calico-kube-controllers-6cf595bbd4-gjrhj" WorkloadEndpoint="ip--172--31--26--1-k8s-calico--kube--controllers--6cf595bbd4--gjrhj-eth0" Nov 8 00:06:06.293597 containerd[2017]: 2025-11-08 00:06:05.836 [INFO][5348] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c1a96b4f71d2a1c5d9508f63163c675c4de3a0401997baec0c2fa634cea6c94c" HandleID="k8s-pod-network.c1a96b4f71d2a1c5d9508f63163c675c4de3a0401997baec0c2fa634cea6c94c" Workload="ip--172--31--26--1-k8s-calico--kube--controllers--6cf595bbd4--gjrhj-eth0" Nov 8 00:06:06.293597 containerd[2017]: 2025-11-08 00:06:05.838 [INFO][5348] ipam/ipam_plugin.go 275: Auto assigning IP 
ContainerID="c1a96b4f71d2a1c5d9508f63163c675c4de3a0401997baec0c2fa634cea6c94c" HandleID="k8s-pod-network.c1a96b4f71d2a1c5d9508f63163c675c4de3a0401997baec0c2fa634cea6c94c" Workload="ip--172--31--26--1-k8s-calico--kube--controllers--6cf595bbd4--gjrhj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400030a2f0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-26-1", "pod":"calico-kube-controllers-6cf595bbd4-gjrhj", "timestamp":"2025-11-08 00:06:05.836711656 +0000 UTC"}, Hostname:"ip-172-31-26-1", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 00:06:06.293597 containerd[2017]: 2025-11-08 00:06:05.838 [INFO][5348] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:06:06.293597 containerd[2017]: 2025-11-08 00:06:05.973 [INFO][5348] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:06:06.293597 containerd[2017]: 2025-11-08 00:06:05.973 [INFO][5348] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-26-1' Nov 8 00:06:06.293597 containerd[2017]: 2025-11-08 00:06:06.036 [INFO][5348] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c1a96b4f71d2a1c5d9508f63163c675c4de3a0401997baec0c2fa634cea6c94c" host="ip-172-31-26-1" Nov 8 00:06:06.293597 containerd[2017]: 2025-11-08 00:06:06.053 [INFO][5348] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-26-1" Nov 8 00:06:06.293597 containerd[2017]: 2025-11-08 00:06:06.071 [INFO][5348] ipam/ipam.go 511: Trying affinity for 192.168.104.192/26 host="ip-172-31-26-1" Nov 8 00:06:06.293597 containerd[2017]: 2025-11-08 00:06:06.078 [INFO][5348] ipam/ipam.go 158: Attempting to load block cidr=192.168.104.192/26 host="ip-172-31-26-1" Nov 8 00:06:06.293597 containerd[2017]: 2025-11-08 00:06:06.107 [INFO][5348] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.104.192/26 host="ip-172-31-26-1" Nov 8 00:06:06.293597 containerd[2017]: 2025-11-08 00:06:06.107 [INFO][5348] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.104.192/26 handle="k8s-pod-network.c1a96b4f71d2a1c5d9508f63163c675c4de3a0401997baec0c2fa634cea6c94c" host="ip-172-31-26-1" Nov 8 00:06:06.293597 containerd[2017]: 2025-11-08 00:06:06.126 [INFO][5348] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.c1a96b4f71d2a1c5d9508f63163c675c4de3a0401997baec0c2fa634cea6c94c Nov 8 00:06:06.293597 containerd[2017]: 2025-11-08 00:06:06.145 [INFO][5348] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.104.192/26 handle="k8s-pod-network.c1a96b4f71d2a1c5d9508f63163c675c4de3a0401997baec0c2fa634cea6c94c" host="ip-172-31-26-1" Nov 8 00:06:06.293597 containerd[2017]: 2025-11-08 00:06:06.164 [INFO][5348] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.104.196/26] block=192.168.104.192/26 handle="k8s-pod-network.c1a96b4f71d2a1c5d9508f63163c675c4de3a0401997baec0c2fa634cea6c94c" host="ip-172-31-26-1" Nov 8 00:06:06.293597 containerd[2017]: 2025-11-08 00:06:06.164 [INFO][5348] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.104.196/26] handle="k8s-pod-network.c1a96b4f71d2a1c5d9508f63163c675c4de3a0401997baec0c2fa634cea6c94c" host="ip-172-31-26-1" Nov 8 00:06:06.293597 containerd[2017]: 2025-11-08 00:06:06.166 [INFO][5348] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 8 00:06:06.293597 containerd[2017]: 2025-11-08 00:06:06.168 [INFO][5348] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.104.196/26] IPv6=[] ContainerID="c1a96b4f71d2a1c5d9508f63163c675c4de3a0401997baec0c2fa634cea6c94c" HandleID="k8s-pod-network.c1a96b4f71d2a1c5d9508f63163c675c4de3a0401997baec0c2fa634cea6c94c" Workload="ip--172--31--26--1-k8s-calico--kube--controllers--6cf595bbd4--gjrhj-eth0" Nov 8 00:06:06.294766 containerd[2017]: 2025-11-08 00:06:06.184 [INFO][5222] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c1a96b4f71d2a1c5d9508f63163c675c4de3a0401997baec0c2fa634cea6c94c" Namespace="calico-system" Pod="calico-kube-controllers-6cf595bbd4-gjrhj" WorkloadEndpoint="ip--172--31--26--1-k8s-calico--kube--controllers--6cf595bbd4--gjrhj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--1-k8s-calico--kube--controllers--6cf595bbd4--gjrhj-eth0", GenerateName:"calico-kube-controllers-6cf595bbd4-", Namespace:"calico-system", SelfLink:"", UID:"33ee737d-9bb0-44ae-abd4-ed2fcc115154", ResourceVersion:"1004", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 5, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6cf595bbd4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-1", ContainerID:"", Pod:"calico-kube-controllers-6cf595bbd4-gjrhj", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.104.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali9f7c1863073", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:06:06.294766 containerd[2017]: 2025-11-08 00:06:06.184 [INFO][5222] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.104.196/32] ContainerID="c1a96b4f71d2a1c5d9508f63163c675c4de3a0401997baec0c2fa634cea6c94c" Namespace="calico-system" Pod="calico-kube-controllers-6cf595bbd4-gjrhj" WorkloadEndpoint="ip--172--31--26--1-k8s-calico--kube--controllers--6cf595bbd4--gjrhj-eth0" Nov 8 00:06:06.294766 containerd[2017]: 2025-11-08 00:06:06.184 [INFO][5222] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9f7c1863073 ContainerID="c1a96b4f71d2a1c5d9508f63163c675c4de3a0401997baec0c2fa634cea6c94c" Namespace="calico-system" Pod="calico-kube-controllers-6cf595bbd4-gjrhj" WorkloadEndpoint="ip--172--31--26--1-k8s-calico--kube--controllers--6cf595bbd4--gjrhj-eth0" Nov 8 00:06:06.294766 containerd[2017]: 2025-11-08 00:06:06.223 [INFO][5222] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c1a96b4f71d2a1c5d9508f63163c675c4de3a0401997baec0c2fa634cea6c94c" Namespace="calico-system" Pod="calico-kube-controllers-6cf595bbd4-gjrhj" WorkloadEndpoint="ip--172--31--26--1-k8s-calico--kube--controllers--6cf595bbd4--gjrhj-eth0" Nov 8 00:06:06.294766 containerd[2017]: 2025-11-08 
00:06:06.231 [INFO][5222] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c1a96b4f71d2a1c5d9508f63163c675c4de3a0401997baec0c2fa634cea6c94c" Namespace="calico-system" Pod="calico-kube-controllers-6cf595bbd4-gjrhj" WorkloadEndpoint="ip--172--31--26--1-k8s-calico--kube--controllers--6cf595bbd4--gjrhj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--1-k8s-calico--kube--controllers--6cf595bbd4--gjrhj-eth0", GenerateName:"calico-kube-controllers-6cf595bbd4-", Namespace:"calico-system", SelfLink:"", UID:"33ee737d-9bb0-44ae-abd4-ed2fcc115154", ResourceVersion:"1004", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 5, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6cf595bbd4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-1", ContainerID:"c1a96b4f71d2a1c5d9508f63163c675c4de3a0401997baec0c2fa634cea6c94c", Pod:"calico-kube-controllers-6cf595bbd4-gjrhj", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.104.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali9f7c1863073", MAC:"6a:57:62:0e:6e:56", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:06:06.294766 containerd[2017]: 2025-11-08 00:06:06.279 [INFO][5222] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c1a96b4f71d2a1c5d9508f63163c675c4de3a0401997baec0c2fa634cea6c94c" Namespace="calico-system" Pod="calico-kube-controllers-6cf595bbd4-gjrhj" WorkloadEndpoint="ip--172--31--26--1-k8s-calico--kube--controllers--6cf595bbd4--gjrhj-eth0" Nov 8 00:06:06.322331 systemd[1]: Started cri-containerd-50449aadf6a3a544169f0c4151945040ee68c0ffc96d8651155559102111f052.scope - libcontainer container 50449aadf6a3a544169f0c4151945040ee68c0ffc96d8651155559102111f052. Nov 8 00:06:06.457573 containerd[2017]: time="2025-11-08T00:06:06.456431139Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:06:06.457573 containerd[2017]: time="2025-11-08T00:06:06.456549519Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:06:06.457573 containerd[2017]: time="2025-11-08T00:06:06.456575847Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:06:06.457573 containerd[2017]: time="2025-11-08T00:06:06.456747939Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:06:06.465745 containerd[2017]: time="2025-11-08T00:06:06.459322443Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:06:06.470610 systemd-networkd[1937]: cali1ec342cd486: Link UP Nov 8 00:06:06.477669 containerd[2017]: time="2025-11-08T00:06:06.477600891Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 8 00:06:06.478399 containerd[2017]: time="2025-11-08T00:06:06.478076079Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 8 00:06:06.480234 kubelet[3480]: E1108 00:06:06.479125 3480 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 8 00:06:06.480234 kubelet[3480]: E1108 00:06:06.479204 3480 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 8 00:06:06.480234 kubelet[3480]: E1108 00:06:06.479316 3480 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-7d67fb7489-w2h7r_calico-system(964ee664-ff07-42fc-8d91-b078ca7f25c8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 8 00:06:06.479928 systemd-networkd[1937]: cali1ec342cd486: Gained carrier Nov 8 00:06:06.491048 containerd[2017]: time="2025-11-08T00:06:06.488216463Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 8 00:06:06.580894 containerd[2017]: 2025-11-08 00:06:05.484 [INFO][5268] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--26--1-k8s-calico--apiserver--f558bfb5c--cbxnw-eth0 calico-apiserver-f558bfb5c- calico-apiserver ec16be48-232b-457b-bf3a-4db776262475 1001 0 2025-11-08 00:05:21 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:f558bfb5c projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-26-1 calico-apiserver-f558bfb5c-cbxnw eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali1ec342cd486 [] [] }} ContainerID="b4c309ba7cfcb20dd8f17d9ca0ff2c84cb2ffc03aaeb171fa46e130c664000e3" Namespace="calico-apiserver" Pod="calico-apiserver-f558bfb5c-cbxnw" WorkloadEndpoint="ip--172--31--26--1-k8s-calico--apiserver--f558bfb5c--cbxnw-" Nov 8 00:06:06.580894 containerd[2017]: 2025-11-08 00:06:05.490 [INFO][5268] cni-plugin/k8s.go 74: Extracted identifiers for 
CmdAddK8s ContainerID="b4c309ba7cfcb20dd8f17d9ca0ff2c84cb2ffc03aaeb171fa46e130c664000e3" Namespace="calico-apiserver" Pod="calico-apiserver-f558bfb5c-cbxnw" WorkloadEndpoint="ip--172--31--26--1-k8s-calico--apiserver--f558bfb5c--cbxnw-eth0" Nov 8 00:06:06.580894 containerd[2017]: 2025-11-08 00:06:05.834 [INFO][5341] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b4c309ba7cfcb20dd8f17d9ca0ff2c84cb2ffc03aaeb171fa46e130c664000e3" HandleID="k8s-pod-network.b4c309ba7cfcb20dd8f17d9ca0ff2c84cb2ffc03aaeb171fa46e130c664000e3" Workload="ip--172--31--26--1-k8s-calico--apiserver--f558bfb5c--cbxnw-eth0" Nov 8 00:06:06.580894 containerd[2017]: 2025-11-08 00:06:05.837 [INFO][5341] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="b4c309ba7cfcb20dd8f17d9ca0ff2c84cb2ffc03aaeb171fa46e130c664000e3" HandleID="k8s-pod-network.b4c309ba7cfcb20dd8f17d9ca0ff2c84cb2ffc03aaeb171fa46e130c664000e3" Workload="ip--172--31--26--1-k8s-calico--apiserver--f558bfb5c--cbxnw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d2860), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-26-1", "pod":"calico-apiserver-f558bfb5c-cbxnw", "timestamp":"2025-11-08 00:06:05.834682864 +0000 UTC"}, Hostname:"ip-172-31-26-1", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 00:06:06.580894 containerd[2017]: 2025-11-08 00:06:05.837 [INFO][5341] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:06:06.580894 containerd[2017]: 2025-11-08 00:06:06.165 [INFO][5341] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:06:06.580894 containerd[2017]: 2025-11-08 00:06:06.167 [INFO][5341] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-26-1' Nov 8 00:06:06.580894 containerd[2017]: 2025-11-08 00:06:06.203 [INFO][5341] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b4c309ba7cfcb20dd8f17d9ca0ff2c84cb2ffc03aaeb171fa46e130c664000e3" host="ip-172-31-26-1" Nov 8 00:06:06.580894 containerd[2017]: 2025-11-08 00:06:06.231 [INFO][5341] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-26-1" Nov 8 00:06:06.580894 containerd[2017]: 2025-11-08 00:06:06.257 [INFO][5341] ipam/ipam.go 511: Trying affinity for 192.168.104.192/26 host="ip-172-31-26-1" Nov 8 00:06:06.580894 containerd[2017]: 2025-11-08 00:06:06.278 [INFO][5341] ipam/ipam.go 158: Attempting to load block cidr=192.168.104.192/26 host="ip-172-31-26-1" Nov 8 00:06:06.580894 containerd[2017]: 2025-11-08 00:06:06.292 [INFO][5341] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.104.192/26 host="ip-172-31-26-1" Nov 8 00:06:06.580894 containerd[2017]: 2025-11-08 00:06:06.299 [INFO][5341] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.104.192/26 handle="k8s-pod-network.b4c309ba7cfcb20dd8f17d9ca0ff2c84cb2ffc03aaeb171fa46e130c664000e3" host="ip-172-31-26-1" Nov 8 00:06:06.580894 containerd[2017]: 2025-11-08 00:06:06.338 [INFO][5341] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.b4c309ba7cfcb20dd8f17d9ca0ff2c84cb2ffc03aaeb171fa46e130c664000e3 Nov 8 00:06:06.580894 containerd[2017]: 2025-11-08 00:06:06.366 [INFO][5341] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.104.192/26 handle="k8s-pod-network.b4c309ba7cfcb20dd8f17d9ca0ff2c84cb2ffc03aaeb171fa46e130c664000e3" 
host="ip-172-31-26-1" Nov 8 00:06:06.580894 containerd[2017]: 2025-11-08 00:06:06.391 [INFO][5341] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.104.197/26] block=192.168.104.192/26 handle="k8s-pod-network.b4c309ba7cfcb20dd8f17d9ca0ff2c84cb2ffc03aaeb171fa46e130c664000e3" host="ip-172-31-26-1" Nov 8 00:06:06.580894 containerd[2017]: 2025-11-08 00:06:06.394 [INFO][5341] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.104.197/26] handle="k8s-pod-network.b4c309ba7cfcb20dd8f17d9ca0ff2c84cb2ffc03aaeb171fa46e130c664000e3" host="ip-172-31-26-1" Nov 8 00:06:06.580894 containerd[2017]: 2025-11-08 00:06:06.396 [INFO][5341] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:06:06.580894 containerd[2017]: 2025-11-08 00:06:06.397 [INFO][5341] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.104.197/26] IPv6=[] ContainerID="b4c309ba7cfcb20dd8f17d9ca0ff2c84cb2ffc03aaeb171fa46e130c664000e3" HandleID="k8s-pod-network.b4c309ba7cfcb20dd8f17d9ca0ff2c84cb2ffc03aaeb171fa46e130c664000e3" Workload="ip--172--31--26--1-k8s-calico--apiserver--f558bfb5c--cbxnw-eth0" Nov 8 00:06:06.583745 containerd[2017]: 2025-11-08 00:06:06.416 [INFO][5268] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b4c309ba7cfcb20dd8f17d9ca0ff2c84cb2ffc03aaeb171fa46e130c664000e3" Namespace="calico-apiserver" Pod="calico-apiserver-f558bfb5c-cbxnw" WorkloadEndpoint="ip--172--31--26--1-k8s-calico--apiserver--f558bfb5c--cbxnw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--1-k8s-calico--apiserver--f558bfb5c--cbxnw-eth0", GenerateName:"calico-apiserver-f558bfb5c-", Namespace:"calico-apiserver", SelfLink:"", UID:"ec16be48-232b-457b-bf3a-4db776262475", ResourceVersion:"1001", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 5, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"f558bfb5c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-1", ContainerID:"", Pod:"calico-apiserver-f558bfb5c-cbxnw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.104.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1ec342cd486", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:06:06.583745 containerd[2017]: 2025-11-08 00:06:06.416 [INFO][5268] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.104.197/32] ContainerID="b4c309ba7cfcb20dd8f17d9ca0ff2c84cb2ffc03aaeb171fa46e130c664000e3" Namespace="calico-apiserver" Pod="calico-apiserver-f558bfb5c-cbxnw" WorkloadEndpoint="ip--172--31--26--1-k8s-calico--apiserver--f558bfb5c--cbxnw-eth0" Nov 8 00:06:06.583745 containerd[2017]: 2025-11-08 00:06:06.416 [INFO][5268] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1ec342cd486 
ContainerID="b4c309ba7cfcb20dd8f17d9ca0ff2c84cb2ffc03aaeb171fa46e130c664000e3" Namespace="calico-apiserver" Pod="calico-apiserver-f558bfb5c-cbxnw" WorkloadEndpoint="ip--172--31--26--1-k8s-calico--apiserver--f558bfb5c--cbxnw-eth0" Nov 8 00:06:06.583745 containerd[2017]: 2025-11-08 00:06:06.491 [INFO][5268] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b4c309ba7cfcb20dd8f17d9ca0ff2c84cb2ffc03aaeb171fa46e130c664000e3" Namespace="calico-apiserver" Pod="calico-apiserver-f558bfb5c-cbxnw" WorkloadEndpoint="ip--172--31--26--1-k8s-calico--apiserver--f558bfb5c--cbxnw-eth0" Nov 8 00:06:06.583745 containerd[2017]: 2025-11-08 00:06:06.506 [INFO][5268] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b4c309ba7cfcb20dd8f17d9ca0ff2c84cb2ffc03aaeb171fa46e130c664000e3" Namespace="calico-apiserver" Pod="calico-apiserver-f558bfb5c-cbxnw" WorkloadEndpoint="ip--172--31--26--1-k8s-calico--apiserver--f558bfb5c--cbxnw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--1-k8s-calico--apiserver--f558bfb5c--cbxnw-eth0", GenerateName:"calico-apiserver-f558bfb5c-", Namespace:"calico-apiserver", SelfLink:"", UID:"ec16be48-232b-457b-bf3a-4db776262475", ResourceVersion:"1001", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 5, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"f558bfb5c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-1", ContainerID:"b4c309ba7cfcb20dd8f17d9ca0ff2c84cb2ffc03aaeb171fa46e130c664000e3", Pod:"calico-apiserver-f558bfb5c-cbxnw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.104.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1ec342cd486", MAC:"fe:2d:9c:c1:c3:8c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:06:06.583745 containerd[2017]: 2025-11-08 00:06:06.571 [INFO][5268] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b4c309ba7cfcb20dd8f17d9ca0ff2c84cb2ffc03aaeb171fa46e130c664000e3" Namespace="calico-apiserver" Pod="calico-apiserver-f558bfb5c-cbxnw" WorkloadEndpoint="ip--172--31--26--1-k8s-calico--apiserver--f558bfb5c--cbxnw-eth0" Nov 8 00:06:06.617390 systemd[1]: Started cri-containerd-c1a96b4f71d2a1c5d9508f63163c675c4de3a0401997baec0c2fa634cea6c94c.scope - libcontainer container c1a96b4f71d2a1c5d9508f63163c675c4de3a0401997baec0c2fa634cea6c94c. 
Nov 8 00:06:06.654554 containerd[2017]: time="2025-11-08T00:06:06.654104836Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-kvcmr,Uid:e6ec260d-5b9d-4d44-82ff-ca1893bb4d69,Namespace:kube-system,Attempt:1,} returns sandbox id \"50449aadf6a3a544169f0c4151945040ee68c0ffc96d8651155559102111f052\"" Nov 8 00:06:06.671858 containerd[2017]: time="2025-11-08T00:06:06.671689504Z" level=info msg="CreateContainer within sandbox \"50449aadf6a3a544169f0c4151945040ee68c0ffc96d8651155559102111f052\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 8 00:06:06.735162 systemd-networkd[1937]: cali3106f1938e2: Link UP Nov 8 00:06:06.749579 systemd-networkd[1937]: cali3106f1938e2: Gained carrier Nov 8 00:06:06.789510 containerd[2017]: time="2025-11-08T00:06:06.789435448Z" level=info msg="CreateContainer within sandbox \"50449aadf6a3a544169f0c4151945040ee68c0ffc96d8651155559102111f052\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"86120a2b0df83c50138cd29773737d6fbccd1c585f2c0e8df7af8c3e28c2a729\"" Nov 8 00:06:06.794781 containerd[2017]: time="2025-11-08T00:06:06.793525960Z" level=info msg="StartContainer for \"86120a2b0df83c50138cd29773737d6fbccd1c585f2c0e8df7af8c3e28c2a729\"" Nov 8 00:06:06.830945 containerd[2017]: 2025-11-08 00:06:05.345 [INFO][5248] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--26--1-k8s-goldmane--7c778bb748--rdhgb-eth0 goldmane-7c778bb748- calico-system 415c772f-4a8a-4df0-8713-cab5820f0205 999 0 2025-11-08 00:05:32 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:7c778bb748 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ip-172-31-26-1 goldmane-7c778bb748-rdhgb eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali3106f1938e2 [] [] }} ContainerID="a3e7609ec9fac37cc1b1f2025881ad3718dfb9c46d80df98ff58fca6f854dcb4" Namespace="calico-system" Pod="goldmane-7c778bb748-rdhgb" WorkloadEndpoint="ip--172--31--26--1-k8s-goldmane--7c778bb748--rdhgb-" Nov 8 00:06:06.830945 containerd[2017]: 2025-11-08 00:06:05.346 [INFO][5248] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a3e7609ec9fac37cc1b1f2025881ad3718dfb9c46d80df98ff58fca6f854dcb4" Namespace="calico-system" Pod="goldmane-7c778bb748-rdhgb" WorkloadEndpoint="ip--172--31--26--1-k8s-goldmane--7c778bb748--rdhgb-eth0" Nov 8 00:06:06.830945 containerd[2017]: 2025-11-08 00:06:05.840 [INFO][5317] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a3e7609ec9fac37cc1b1f2025881ad3718dfb9c46d80df98ff58fca6f854dcb4" HandleID="k8s-pod-network.a3e7609ec9fac37cc1b1f2025881ad3718dfb9c46d80df98ff58fca6f854dcb4" Workload="ip--172--31--26--1-k8s-goldmane--7c778bb748--rdhgb-eth0" Nov 8 00:06:06.830945 containerd[2017]: 2025-11-08 00:06:05.842 [INFO][5317] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="a3e7609ec9fac37cc1b1f2025881ad3718dfb9c46d80df98ff58fca6f854dcb4" HandleID="k8s-pod-network.a3e7609ec9fac37cc1b1f2025881ad3718dfb9c46d80df98ff58fca6f854dcb4" Workload="ip--172--31--26--1-k8s-goldmane--7c778bb748--rdhgb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400037faa0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-26-1", "pod":"goldmane-7c778bb748-rdhgb", "timestamp":"2025-11-08 00:06:05.840249304 +0000 UTC"}, Hostname:"ip-172-31-26-1", IPv4Pools:[]net.IPNet{}, 
IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 00:06:06.830945 containerd[2017]: 2025-11-08 00:06:05.843 [INFO][5317] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:06:06.830945 containerd[2017]: 2025-11-08 00:06:06.397 [INFO][5317] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:06:06.830945 containerd[2017]: 2025-11-08 00:06:06.398 [INFO][5317] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-26-1' Nov 8 00:06:06.830945 containerd[2017]: 2025-11-08 00:06:06.472 [INFO][5317] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a3e7609ec9fac37cc1b1f2025881ad3718dfb9c46d80df98ff58fca6f854dcb4" host="ip-172-31-26-1" Nov 8 00:06:06.830945 containerd[2017]: 2025-11-08 00:06:06.515 [INFO][5317] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-26-1" Nov 8 00:06:06.830945 containerd[2017]: 2025-11-08 00:06:06.541 [INFO][5317] ipam/ipam.go 511: Trying affinity for 192.168.104.192/26 host="ip-172-31-26-1" Nov 8 00:06:06.830945 containerd[2017]: 2025-11-08 00:06:06.556 [INFO][5317] ipam/ipam.go 158: Attempting to load block cidr=192.168.104.192/26 host="ip-172-31-26-1" Nov 8 00:06:06.830945 containerd[2017]: 2025-11-08 00:06:06.579 [INFO][5317] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.104.192/26 host="ip-172-31-26-1" Nov 8 00:06:06.830945 containerd[2017]: 2025-11-08 00:06:06.586 [INFO][5317] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.104.192/26 handle="k8s-pod-network.a3e7609ec9fac37cc1b1f2025881ad3718dfb9c46d80df98ff58fca6f854dcb4" host="ip-172-31-26-1" Nov 8 00:06:06.830945 containerd[2017]: 2025-11-08 00:06:06.600 [INFO][5317] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.a3e7609ec9fac37cc1b1f2025881ad3718dfb9c46d80df98ff58fca6f854dcb4 Nov 8 00:06:06.830945 containerd[2017]: 2025-11-08 00:06:06.634 [INFO][5317] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.104.192/26 handle="k8s-pod-network.a3e7609ec9fac37cc1b1f2025881ad3718dfb9c46d80df98ff58fca6f854dcb4" host="ip-172-31-26-1" Nov 8 00:06:06.830945 containerd[2017]: 2025-11-08 00:06:06.678 [INFO][5317] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.104.198/26] block=192.168.104.192/26 handle="k8s-pod-network.a3e7609ec9fac37cc1b1f2025881ad3718dfb9c46d80df98ff58fca6f854dcb4" host="ip-172-31-26-1" Nov 8 00:06:06.830945 containerd[2017]: 2025-11-08 00:06:06.679 [INFO][5317] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.104.198/26] handle="k8s-pod-network.a3e7609ec9fac37cc1b1f2025881ad3718dfb9c46d80df98ff58fca6f854dcb4" host="ip-172-31-26-1" Nov 8 00:06:06.830945 containerd[2017]: 2025-11-08 00:06:06.680 [INFO][5317] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 8 00:06:06.830945 containerd[2017]: 2025-11-08 00:06:06.681 [INFO][5317] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.104.198/26] IPv6=[] ContainerID="a3e7609ec9fac37cc1b1f2025881ad3718dfb9c46d80df98ff58fca6f854dcb4" HandleID="k8s-pod-network.a3e7609ec9fac37cc1b1f2025881ad3718dfb9c46d80df98ff58fca6f854dcb4" Workload="ip--172--31--26--1-k8s-goldmane--7c778bb748--rdhgb-eth0" Nov 8 00:06:06.833607 containerd[2017]: 2025-11-08 00:06:06.715 [INFO][5248] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a3e7609ec9fac37cc1b1f2025881ad3718dfb9c46d80df98ff58fca6f854dcb4" Namespace="calico-system" Pod="goldmane-7c778bb748-rdhgb" WorkloadEndpoint="ip--172--31--26--1-k8s-goldmane--7c778bb748--rdhgb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--1-k8s-goldmane--7c778bb748--rdhgb-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"415c772f-4a8a-4df0-8713-cab5820f0205", ResourceVersion:"999", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 5, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-1", ContainerID:"", Pod:"goldmane-7c778bb748-rdhgb", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.104.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali3106f1938e2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:06:06.833607 containerd[2017]: 2025-11-08 00:06:06.719 [INFO][5248] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.104.198/32] ContainerID="a3e7609ec9fac37cc1b1f2025881ad3718dfb9c46d80df98ff58fca6f854dcb4" Namespace="calico-system" Pod="goldmane-7c778bb748-rdhgb" WorkloadEndpoint="ip--172--31--26--1-k8s-goldmane--7c778bb748--rdhgb-eth0" Nov 8 00:06:06.833607 containerd[2017]: 2025-11-08 00:06:06.719 [INFO][5248] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3106f1938e2 ContainerID="a3e7609ec9fac37cc1b1f2025881ad3718dfb9c46d80df98ff58fca6f854dcb4" Namespace="calico-system" Pod="goldmane-7c778bb748-rdhgb" WorkloadEndpoint="ip--172--31--26--1-k8s-goldmane--7c778bb748--rdhgb-eth0" Nov 8 00:06:06.833607 containerd[2017]: 2025-11-08 00:06:06.761 [INFO][5248] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a3e7609ec9fac37cc1b1f2025881ad3718dfb9c46d80df98ff58fca6f854dcb4" Namespace="calico-system" Pod="goldmane-7c778bb748-rdhgb" WorkloadEndpoint="ip--172--31--26--1-k8s-goldmane--7c778bb748--rdhgb-eth0" Nov 8 00:06:06.833607 containerd[2017]: 2025-11-08 00:06:06.766 [INFO][5248] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a3e7609ec9fac37cc1b1f2025881ad3718dfb9c46d80df98ff58fca6f854dcb4" Namespace="calico-system" Pod="goldmane-7c778bb748-rdhgb" 
WorkloadEndpoint="ip--172--31--26--1-k8s-goldmane--7c778bb748--rdhgb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--1-k8s-goldmane--7c778bb748--rdhgb-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"415c772f-4a8a-4df0-8713-cab5820f0205", ResourceVersion:"999", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 5, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-1", ContainerID:"a3e7609ec9fac37cc1b1f2025881ad3718dfb9c46d80df98ff58fca6f854dcb4", Pod:"goldmane-7c778bb748-rdhgb", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.104.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali3106f1938e2", MAC:"72:e2:ef:ea:38:52", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:06:06.833607 containerd[2017]: 2025-11-08 00:06:06.812 [INFO][5248] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a3e7609ec9fac37cc1b1f2025881ad3718dfb9c46d80df98ff58fca6f854dcb4" Namespace="calico-system" Pod="goldmane-7c778bb748-rdhgb" WorkloadEndpoint="ip--172--31--26--1-k8s-goldmane--7c778bb748--rdhgb-eth0" Nov 8 00:06:06.863224 containerd[2017]: time="2025-11-08T00:06:06.862313549Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:06:06.863521 containerd[2017]: time="2025-11-08T00:06:06.863261117Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:06:06.864109 containerd[2017]: time="2025-11-08T00:06:06.863804177Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:06:06.864478 containerd[2017]: time="2025-11-08T00:06:06.864402797Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:06:06.865516 containerd[2017]: time="2025-11-08T00:06:06.865391477Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:06:06.873217 containerd[2017]: time="2025-11-08T00:06:06.872983193Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 8 00:06:06.874051 kubelet[3480]: E1108 00:06:06.873928 3480 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 8 00:06:06.874051 kubelet[3480]: E1108 00:06:06.874000 3480 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 8 00:06:06.874669 kubelet[3480]: E1108 00:06:06.874134 3480 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-7d67fb7489-w2h7r_calico-system(964ee664-ff07-42fc-8d91-b078ca7f25c8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 8 00:06:06.874669 kubelet[3480]: E1108 00:06:06.874200 3480 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7d67fb7489-w2h7r" podUID="964ee664-ff07-42fc-8d91-b078ca7f25c8" Nov 8 00:06:06.878102 containerd[2017]: time="2025-11-08T00:06:06.877063793Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 8 00:06:06.941638 systemd-networkd[1937]: cali65b4dc9727c: Link UP Nov 8 00:06:06.946152 systemd-networkd[1937]: cali65b4dc9727c: Gained carrier Nov 8 00:06:07.010311 systemd[1]: Started cri-containerd-86120a2b0df83c50138cd29773737d6fbccd1c585f2c0e8df7af8c3e28c2a729.scope - libcontainer container 86120a2b0df83c50138cd29773737d6fbccd1c585f2c0e8df7af8c3e28c2a729. 
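Every pull of the ghcr.io/flatcar/calico/*:v3.30.4 tags in this section fails the same way: the reference cannot be resolved, so containerd reports NotFound before fetching any layers (the "bytes read=85" lines are just the registry's 404 body), and kubelet surfaces that as ErrImagePull. The failure can be reproduced against containerd directly; a sketch below, assuming the default socket path and the k8s.io namespace that kubelet uses.

```go
// Sketch: reproduce the failing pull against containerd directly.
// Socket path and namespace are the usual defaults, assumed here.
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/errdefs"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	c, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer c.Close()

	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
	_, err = c.Pull(ctx, "ghcr.io/flatcar/calico/whisker-backend:v3.30.4")
	if errdefs.IsNotFound(err) {
		fmt.Println("tag missing upstream: resolution fails, matching the log")
	} else if err != nil {
		log.Fatal(err)
	}
}
```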
Nov 8 00:06:07.017283 systemd-networkd[1937]: calic399dcf364c: Gained IPv6LL Nov 8 00:06:07.040625 containerd[2017]: 2025-11-08 00:06:05.848 [INFO][5311] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--26--1-k8s-calico--apiserver--f558bfb5c--2hjdp-eth0 calico-apiserver-f558bfb5c- calico-apiserver a0f0a15a-d068-42d7-9057-db8aa3861ce8 1005 0 2025-11-08 00:05:21 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:f558bfb5c projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-26-1 calico-apiserver-f558bfb5c-2hjdp eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali65b4dc9727c [] [] }} ContainerID="818444c910fca1a5dfd510a6d9a593969757728c4f6057cf01c42c11445835c7" Namespace="calico-apiserver" Pod="calico-apiserver-f558bfb5c-2hjdp" WorkloadEndpoint="ip--172--31--26--1-k8s-calico--apiserver--f558bfb5c--2hjdp-" Nov 8 00:06:07.040625 containerd[2017]: 2025-11-08 00:06:05.851 [INFO][5311] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="818444c910fca1a5dfd510a6d9a593969757728c4f6057cf01c42c11445835c7" Namespace="calico-apiserver" Pod="calico-apiserver-f558bfb5c-2hjdp" WorkloadEndpoint="ip--172--31--26--1-k8s-calico--apiserver--f558bfb5c--2hjdp-eth0" Nov 8 00:06:07.040625 containerd[2017]: 2025-11-08 00:06:06.142 [INFO][5387] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="818444c910fca1a5dfd510a6d9a593969757728c4f6057cf01c42c11445835c7" HandleID="k8s-pod-network.818444c910fca1a5dfd510a6d9a593969757728c4f6057cf01c42c11445835c7" Workload="ip--172--31--26--1-k8s-calico--apiserver--f558bfb5c--2hjdp-eth0" Nov 8 00:06:07.040625 containerd[2017]: 2025-11-08 00:06:06.143 [INFO][5387] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="818444c910fca1a5dfd510a6d9a593969757728c4f6057cf01c42c11445835c7" HandleID="k8s-pod-network.818444c910fca1a5dfd510a6d9a593969757728c4f6057cf01c42c11445835c7" Workload="ip--172--31--26--1-k8s-calico--apiserver--f558bfb5c--2hjdp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000121c60), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-26-1", "pod":"calico-apiserver-f558bfb5c-2hjdp", "timestamp":"2025-11-08 00:06:06.142191685 +0000 UTC"}, Hostname:"ip-172-31-26-1", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 00:06:07.040625 containerd[2017]: 2025-11-08 00:06:06.145 [INFO][5387] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:06:07.040625 containerd[2017]: 2025-11-08 00:06:06.682 [INFO][5387] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 8 00:06:07.040625 containerd[2017]: 2025-11-08 00:06:06.682 [INFO][5387] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-26-1' Nov 8 00:06:07.040625 containerd[2017]: 2025-11-08 00:06:06.734 [INFO][5387] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.818444c910fca1a5dfd510a6d9a593969757728c4f6057cf01c42c11445835c7" host="ip-172-31-26-1" Nov 8 00:06:07.040625 containerd[2017]: 2025-11-08 00:06:06.753 [INFO][5387] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-26-1" Nov 8 00:06:07.040625 containerd[2017]: 2025-11-08 00:06:06.788 [INFO][5387] ipam/ipam.go 511: Trying affinity for 192.168.104.192/26 host="ip-172-31-26-1" Nov 8 00:06:07.040625 containerd[2017]: 2025-11-08 00:06:06.798 [INFO][5387] ipam/ipam.go 158: Attempting to load block cidr=192.168.104.192/26 host="ip-172-31-26-1" Nov 8 00:06:07.040625 containerd[2017]: 2025-11-08 00:06:06.833 [INFO][5387] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.104.192/26 host="ip-172-31-26-1" Nov 8 00:06:07.040625 containerd[2017]: 2025-11-08 00:06:06.833 [INFO][5387] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.104.192/26 handle="k8s-pod-network.818444c910fca1a5dfd510a6d9a593969757728c4f6057cf01c42c11445835c7" host="ip-172-31-26-1" Nov 8 00:06:07.040625 containerd[2017]: 2025-11-08 00:06:06.841 [INFO][5387] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.818444c910fca1a5dfd510a6d9a593969757728c4f6057cf01c42c11445835c7 Nov 8 00:06:07.040625 containerd[2017]: 2025-11-08 00:06:06.868 [INFO][5387] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.104.192/26 handle="k8s-pod-network.818444c910fca1a5dfd510a6d9a593969757728c4f6057cf01c42c11445835c7" host="ip-172-31-26-1" Nov 8 00:06:07.040625 containerd[2017]: 2025-11-08 00:06:06.900 [INFO][5387] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.104.199/26] block=192.168.104.192/26 handle="k8s-pod-network.818444c910fca1a5dfd510a6d9a593969757728c4f6057cf01c42c11445835c7" host="ip-172-31-26-1" Nov 8 00:06:07.040625 containerd[2017]: 2025-11-08 00:06:06.900 [INFO][5387] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.104.199/26] handle="k8s-pod-network.818444c910fca1a5dfd510a6d9a593969757728c4f6057cf01c42c11445835c7" host="ip-172-31-26-1" Nov 8 00:06:07.040625 containerd[2017]: 2025-11-08 00:06:06.900 [INFO][5387] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 8 00:06:07.040625 containerd[2017]: 2025-11-08 00:06:06.900 [INFO][5387] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.104.199/26] IPv6=[] ContainerID="818444c910fca1a5dfd510a6d9a593969757728c4f6057cf01c42c11445835c7" HandleID="k8s-pod-network.818444c910fca1a5dfd510a6d9a593969757728c4f6057cf01c42c11445835c7" Workload="ip--172--31--26--1-k8s-calico--apiserver--f558bfb5c--2hjdp-eth0" Nov 8 00:06:07.049953 containerd[2017]: 2025-11-08 00:06:06.914 [INFO][5311] cni-plugin/k8s.go 418: Populated endpoint ContainerID="818444c910fca1a5dfd510a6d9a593969757728c4f6057cf01c42c11445835c7" Namespace="calico-apiserver" Pod="calico-apiserver-f558bfb5c-2hjdp" WorkloadEndpoint="ip--172--31--26--1-k8s-calico--apiserver--f558bfb5c--2hjdp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--1-k8s-calico--apiserver--f558bfb5c--2hjdp-eth0", GenerateName:"calico-apiserver-f558bfb5c-", Namespace:"calico-apiserver", SelfLink:"", UID:"a0f0a15a-d068-42d7-9057-db8aa3861ce8", ResourceVersion:"1005", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 5, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"f558bfb5c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-1", ContainerID:"", Pod:"calico-apiserver-f558bfb5c-2hjdp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.104.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali65b4dc9727c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:06:07.049953 containerd[2017]: 2025-11-08 00:06:06.914 [INFO][5311] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.104.199/32] ContainerID="818444c910fca1a5dfd510a6d9a593969757728c4f6057cf01c42c11445835c7" Namespace="calico-apiserver" Pod="calico-apiserver-f558bfb5c-2hjdp" WorkloadEndpoint="ip--172--31--26--1-k8s-calico--apiserver--f558bfb5c--2hjdp-eth0" Nov 8 00:06:07.049953 containerd[2017]: 2025-11-08 00:06:06.915 [INFO][5311] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali65b4dc9727c ContainerID="818444c910fca1a5dfd510a6d9a593969757728c4f6057cf01c42c11445835c7" Namespace="calico-apiserver" Pod="calico-apiserver-f558bfb5c-2hjdp" WorkloadEndpoint="ip--172--31--26--1-k8s-calico--apiserver--f558bfb5c--2hjdp-eth0" Nov 8 00:06:07.049953 containerd[2017]: 2025-11-08 00:06:06.955 [INFO][5311] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="818444c910fca1a5dfd510a6d9a593969757728c4f6057cf01c42c11445835c7" Namespace="calico-apiserver" Pod="calico-apiserver-f558bfb5c-2hjdp" WorkloadEndpoint="ip--172--31--26--1-k8s-calico--apiserver--f558bfb5c--2hjdp-eth0" Nov 8 00:06:07.049953 containerd[2017]: 2025-11-08 00:06:06.963 [INFO][5311] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID 
to endpoint ContainerID="818444c910fca1a5dfd510a6d9a593969757728c4f6057cf01c42c11445835c7" Namespace="calico-apiserver" Pod="calico-apiserver-f558bfb5c-2hjdp" WorkloadEndpoint="ip--172--31--26--1-k8s-calico--apiserver--f558bfb5c--2hjdp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--1-k8s-calico--apiserver--f558bfb5c--2hjdp-eth0", GenerateName:"calico-apiserver-f558bfb5c-", Namespace:"calico-apiserver", SelfLink:"", UID:"a0f0a15a-d068-42d7-9057-db8aa3861ce8", ResourceVersion:"1005", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 5, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"f558bfb5c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-1", ContainerID:"818444c910fca1a5dfd510a6d9a593969757728c4f6057cf01c42c11445835c7", Pod:"calico-apiserver-f558bfb5c-2hjdp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.104.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali65b4dc9727c", MAC:"ba:3c:a9:dc:23:c1", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:06:07.049953 containerd[2017]: 2025-11-08 00:06:07.007 [INFO][5311] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="818444c910fca1a5dfd510a6d9a593969757728c4f6057cf01c42c11445835c7" Namespace="calico-apiserver" Pod="calico-apiserver-f558bfb5c-2hjdp" WorkloadEndpoint="ip--172--31--26--1-k8s-calico--apiserver--f558bfb5c--2hjdp-eth0" Nov 8 00:06:07.104966 containerd[2017]: time="2025-11-08T00:06:07.102692786Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:06:07.104966 containerd[2017]: time="2025-11-08T00:06:07.102794606Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:06:07.104966 containerd[2017]: time="2025-11-08T00:06:07.102824018Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:06:07.104966 containerd[2017]: time="2025-11-08T00:06:07.104187134Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:06:07.116228 systemd[1]: Started cri-containerd-b4c309ba7cfcb20dd8f17d9ca0ff2c84cb2ffc03aaeb171fa46e130c664000e3.scope - libcontainer container b4c309ba7cfcb20dd8f17d9ca0ff2c84cb2ffc03aaeb171fa46e130c664000e3. Nov 8 00:06:07.248704 systemd[1]: Started cri-containerd-a3e7609ec9fac37cc1b1f2025881ad3718dfb9c46d80df98ff58fca6f854dcb4.scope - libcontainer container a3e7609ec9fac37cc1b1f2025881ad3718dfb9c46d80df98ff58fca6f854dcb4. 
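The endpoint write sequence for calico-apiserver-f558bfb5c-2hjdp mirrors the goldmane one above: populate the WorkloadEndpoint, pick host-side veth cali65b4dc9727c, disable IPv4 forwarding on it, then persist the MAC ba:3c:a9:dc:23:c1 and the container ID to the datastore. The stored object can be read back through the same v3 client; a sketch under the assumption that WorkloadEndpoints().Get takes a namespace plus the encoded resource name seen in the log's WorkloadEndpoint= field.

```go
// Sketch: read back the WorkloadEndpoint written to the datastore above.
// The call shape and resource-name encoding are assumptions based on the log.
package main

import (
	"context"
	"fmt"
	"log"

	client "github.com/projectcalico/calico/libcalico-go/lib/clientv3"
	"github.com/projectcalico/calico/libcalico-go/lib/options"
)

func main() {
	c, err := client.NewFromEnv()
	if err != nil {
		log.Fatal(err)
	}
	wep, err := c.WorkloadEndpoints().Get(context.Background(),
		"calico-apiserver",
		"ip--172--31--26--1-k8s-calico--apiserver--f558bfb5c--2hjdp-eth0",
		options.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	// Expect cali65b4dc9727c, ba:3c:a9:dc:23:c1, [192.168.104.199/32].
	fmt.Println(wep.Spec.InterfaceName, wep.Spec.MAC, wep.Spec.IPNetworks)
}
```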
Nov 8 00:06:07.279125 containerd[2017]: time="2025-11-08T00:06:07.278871675Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6cf595bbd4-gjrhj,Uid:33ee737d-9bb0-44ae-abd4-ed2fcc115154,Namespace:calico-system,Attempt:1,} returns sandbox id \"c1a96b4f71d2a1c5d9508f63163c675c4de3a0401997baec0c2fa634cea6c94c\"" Nov 8 00:06:07.294478 containerd[2017]: time="2025-11-08T00:06:07.294375723Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 8 00:06:07.297170 containerd[2017]: time="2025-11-08T00:06:07.297102411Z" level=info msg="StartContainer for \"86120a2b0df83c50138cd29773737d6fbccd1c585f2c0e8df7af8c3e28c2a729\" returns successfully" Nov 8 00:06:07.299588 systemd-networkd[1937]: calib49ca544f36: Link UP Nov 8 00:06:07.312546 systemd-networkd[1937]: calib49ca544f36: Gained carrier Nov 8 00:06:07.369253 containerd[2017]: time="2025-11-08T00:06:07.369062403Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:06:07.369891 containerd[2017]: time="2025-11-08T00:06:07.369398283Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:06:07.369891 containerd[2017]: time="2025-11-08T00:06:07.369504027Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:06:07.371627 containerd[2017]: time="2025-11-08T00:06:07.370847943Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:06:07.389925 containerd[2017]: 2025-11-08 00:06:06.369 [INFO][5414] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--26--1-k8s-csi--node--driver--rkzrr-eth0 csi-node-driver- calico-system 105dae3d-b44c-41c4-b31a-bd1432c68a75 1012 0 2025-11-08 00:05:37 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:9d99788f7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ip-172-31-26-1 csi-node-driver-rkzrr eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calib49ca544f36 [] [] }} ContainerID="aa45038fcc6865d09c836eff90a8340cde00f91d33bed90248a9d482df6399e6" Namespace="calico-system" Pod="csi-node-driver-rkzrr" WorkloadEndpoint="ip--172--31--26--1-k8s-csi--node--driver--rkzrr-" Nov 8 00:06:07.389925 containerd[2017]: 2025-11-08 00:06:06.370 [INFO][5414] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="aa45038fcc6865d09c836eff90a8340cde00f91d33bed90248a9d482df6399e6" Namespace="calico-system" Pod="csi-node-driver-rkzrr" WorkloadEndpoint="ip--172--31--26--1-k8s-csi--node--driver--rkzrr-eth0" Nov 8 00:06:07.389925 containerd[2017]: 2025-11-08 00:06:06.798 [INFO][5486] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="aa45038fcc6865d09c836eff90a8340cde00f91d33bed90248a9d482df6399e6" HandleID="k8s-pod-network.aa45038fcc6865d09c836eff90a8340cde00f91d33bed90248a9d482df6399e6" Workload="ip--172--31--26--1-k8s-csi--node--driver--rkzrr-eth0" Nov 8 00:06:07.389925 containerd[2017]: 2025-11-08 00:06:06.800 [INFO][5486] ipam/ipam_plugin.go 275: Auto assigning IP 
ContainerID="aa45038fcc6865d09c836eff90a8340cde00f91d33bed90248a9d482df6399e6" HandleID="k8s-pod-network.aa45038fcc6865d09c836eff90a8340cde00f91d33bed90248a9d482df6399e6" Workload="ip--172--31--26--1-k8s-csi--node--driver--rkzrr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000352850), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-26-1", "pod":"csi-node-driver-rkzrr", "timestamp":"2025-11-08 00:06:06.798637792 +0000 UTC"}, Hostname:"ip-172-31-26-1", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 00:06:07.389925 containerd[2017]: 2025-11-08 00:06:06.803 [INFO][5486] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:06:07.389925 containerd[2017]: 2025-11-08 00:06:06.902 [INFO][5486] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:06:07.389925 containerd[2017]: 2025-11-08 00:06:06.903 [INFO][5486] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-26-1' Nov 8 00:06:07.389925 containerd[2017]: 2025-11-08 00:06:06.968 [INFO][5486] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.aa45038fcc6865d09c836eff90a8340cde00f91d33bed90248a9d482df6399e6" host="ip-172-31-26-1" Nov 8 00:06:07.389925 containerd[2017]: 2025-11-08 00:06:07.023 [INFO][5486] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-26-1" Nov 8 00:06:07.389925 containerd[2017]: 2025-11-08 00:06:07.062 [INFO][5486] ipam/ipam.go 511: Trying affinity for 192.168.104.192/26 host="ip-172-31-26-1" Nov 8 00:06:07.389925 containerd[2017]: 2025-11-08 00:06:07.079 [INFO][5486] ipam/ipam.go 158: Attempting to load block cidr=192.168.104.192/26 host="ip-172-31-26-1" Nov 8 00:06:07.389925 containerd[2017]: 2025-11-08 00:06:07.126 [INFO][5486] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.104.192/26 host="ip-172-31-26-1" Nov 8 00:06:07.389925 containerd[2017]: 2025-11-08 00:06:07.126 [INFO][5486] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.104.192/26 handle="k8s-pod-network.aa45038fcc6865d09c836eff90a8340cde00f91d33bed90248a9d482df6399e6" host="ip-172-31-26-1" Nov 8 00:06:07.389925 containerd[2017]: 2025-11-08 00:06:07.165 [INFO][5486] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.aa45038fcc6865d09c836eff90a8340cde00f91d33bed90248a9d482df6399e6 Nov 8 00:06:07.389925 containerd[2017]: 2025-11-08 00:06:07.208 [INFO][5486] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.104.192/26 handle="k8s-pod-network.aa45038fcc6865d09c836eff90a8340cde00f91d33bed90248a9d482df6399e6" host="ip-172-31-26-1" Nov 8 00:06:07.389925 containerd[2017]: 2025-11-08 00:06:07.238 [INFO][5486] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.104.200/26] block=192.168.104.192/26 handle="k8s-pod-network.aa45038fcc6865d09c836eff90a8340cde00f91d33bed90248a9d482df6399e6" host="ip-172-31-26-1" Nov 8 00:06:07.389925 containerd[2017]: 2025-11-08 00:06:07.238 [INFO][5486] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.104.200/26] handle="k8s-pod-network.aa45038fcc6865d09c836eff90a8340cde00f91d33bed90248a9d482df6399e6" host="ip-172-31-26-1" Nov 8 00:06:07.389925 containerd[2017]: 2025-11-08 00:06:07.238 [INFO][5486] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 8 00:06:07.389925 containerd[2017]: 2025-11-08 00:06:07.238 [INFO][5486] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.104.200/26] IPv6=[] ContainerID="aa45038fcc6865d09c836eff90a8340cde00f91d33bed90248a9d482df6399e6" HandleID="k8s-pod-network.aa45038fcc6865d09c836eff90a8340cde00f91d33bed90248a9d482df6399e6" Workload="ip--172--31--26--1-k8s-csi--node--driver--rkzrr-eth0" Nov 8 00:06:07.393239 containerd[2017]: 2025-11-08 00:06:07.265 [INFO][5414] cni-plugin/k8s.go 418: Populated endpoint ContainerID="aa45038fcc6865d09c836eff90a8340cde00f91d33bed90248a9d482df6399e6" Namespace="calico-system" Pod="csi-node-driver-rkzrr" WorkloadEndpoint="ip--172--31--26--1-k8s-csi--node--driver--rkzrr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--1-k8s-csi--node--driver--rkzrr-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"105dae3d-b44c-41c4-b31a-bd1432c68a75", ResourceVersion:"1012", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 5, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-1", ContainerID:"", Pod:"csi-node-driver-rkzrr", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.104.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calib49ca544f36", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:06:07.393239 containerd[2017]: 2025-11-08 00:06:07.265 [INFO][5414] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.104.200/32] ContainerID="aa45038fcc6865d09c836eff90a8340cde00f91d33bed90248a9d482df6399e6" Namespace="calico-system" Pod="csi-node-driver-rkzrr" WorkloadEndpoint="ip--172--31--26--1-k8s-csi--node--driver--rkzrr-eth0" Nov 8 00:06:07.393239 containerd[2017]: 2025-11-08 00:06:07.265 [INFO][5414] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib49ca544f36 ContainerID="aa45038fcc6865d09c836eff90a8340cde00f91d33bed90248a9d482df6399e6" Namespace="calico-system" Pod="csi-node-driver-rkzrr" WorkloadEndpoint="ip--172--31--26--1-k8s-csi--node--driver--rkzrr-eth0" Nov 8 00:06:07.393239 containerd[2017]: 2025-11-08 00:06:07.326 [INFO][5414] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="aa45038fcc6865d09c836eff90a8340cde00f91d33bed90248a9d482df6399e6" Namespace="calico-system" Pod="csi-node-driver-rkzrr" WorkloadEndpoint="ip--172--31--26--1-k8s-csi--node--driver--rkzrr-eth0" Nov 8 00:06:07.393239 containerd[2017]: 2025-11-08 00:06:07.328 [INFO][5414] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="aa45038fcc6865d09c836eff90a8340cde00f91d33bed90248a9d482df6399e6" Namespace="calico-system" 
Pod="csi-node-driver-rkzrr" WorkloadEndpoint="ip--172--31--26--1-k8s-csi--node--driver--rkzrr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--1-k8s-csi--node--driver--rkzrr-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"105dae3d-b44c-41c4-b31a-bd1432c68a75", ResourceVersion:"1012", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 5, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-1", ContainerID:"aa45038fcc6865d09c836eff90a8340cde00f91d33bed90248a9d482df6399e6", Pod:"csi-node-driver-rkzrr", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.104.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calib49ca544f36", MAC:"de:f4:26:4b:d5:bc", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:06:07.393239 containerd[2017]: 2025-11-08 00:06:07.372 [INFO][5414] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="aa45038fcc6865d09c836eff90a8340cde00f91d33bed90248a9d482df6399e6" Namespace="calico-system" Pod="csi-node-driver-rkzrr" WorkloadEndpoint="ip--172--31--26--1-k8s-csi--node--driver--rkzrr-eth0" Nov 8 00:06:07.424040 systemd[1]: Started cri-containerd-818444c910fca1a5dfd510a6d9a593969757728c4f6057cf01c42c11445835c7.scope - libcontainer container 818444c910fca1a5dfd510a6d9a593969757728c4f6057cf01c42c11445835c7. Nov 8 00:06:07.491367 kubelet[3480]: E1108 00:06:07.489999 3480 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7d67fb7489-w2h7r" podUID="964ee664-ff07-42fc-8d91-b078ca7f25c8" Nov 8 00:06:07.500880 containerd[2017]: time="2025-11-08T00:06:07.500344360Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:06:07.500880 containerd[2017]: time="2025-11-08T00:06:07.500479252Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:06:07.501827 containerd[2017]: time="2025-11-08T00:06:07.500542192Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:06:07.501827 containerd[2017]: time="2025-11-08T00:06:07.500795704Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:06:07.529328 systemd-networkd[1937]: cali9f7c1863073: Gained IPv6LL Nov 8 00:06:07.566872 systemd[1]: Started cri-containerd-aa45038fcc6865d09c836eff90a8340cde00f91d33bed90248a9d482df6399e6.scope - libcontainer container aa45038fcc6865d09c836eff90a8340cde00f91d33bed90248a9d482df6399e6. Nov 8 00:06:07.571735 kubelet[3480]: I1108 00:06:07.571427 3480 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-kvcmr" podStartSLOduration=66.571405384 podStartE2EDuration="1m6.571405384s" podCreationTimestamp="2025-11-08 00:05:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 00:06:07.51586306 +0000 UTC m=+71.939553070" watchObservedRunningTime="2025-11-08 00:06:07.571405384 +0000 UTC m=+71.995095394" Nov 8 00:06:07.612432 containerd[2017]: time="2025-11-08T00:06:07.612176645Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:06:07.615231 containerd[2017]: time="2025-11-08T00:06:07.614914277Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 8 00:06:07.615231 containerd[2017]: time="2025-11-08T00:06:07.614931461Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 8 00:06:07.616134 kubelet[3480]: E1108 00:06:07.615736 3480 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 8 00:06:07.616134 kubelet[3480]: E1108 00:06:07.615822 3480 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 8 00:06:07.616280 kubelet[3480]: E1108 00:06:07.616134 3480 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-6cf595bbd4-gjrhj_calico-system(33ee737d-9bb0-44ae-abd4-ed2fcc115154): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 8 00:06:07.616280 kubelet[3480]: E1108 00:06:07.616195 3480 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6cf595bbd4-gjrhj" podUID="33ee737d-9bb0-44ae-abd4-ed2fcc115154" Nov 8 00:06:07.753348 containerd[2017]: time="2025-11-08T00:06:07.753275837Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f558bfb5c-cbxnw,Uid:ec16be48-232b-457b-bf3a-4db776262475,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"b4c309ba7cfcb20dd8f17d9ca0ff2c84cb2ffc03aaeb171fa46e130c664000e3\"" Nov 8 00:06:07.756283 containerd[2017]: time="2025-11-08T00:06:07.755497745Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-rdhgb,Uid:415c772f-4a8a-4df0-8713-cab5820f0205,Namespace:calico-system,Attempt:1,} returns sandbox id \"a3e7609ec9fac37cc1b1f2025881ad3718dfb9c46d80df98ff58fca6f854dcb4\"" Nov 8 00:06:07.764271 containerd[2017]: time="2025-11-08T00:06:07.763740617Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 8 00:06:07.783862 containerd[2017]: time="2025-11-08T00:06:07.783704417Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rkzrr,Uid:105dae3d-b44c-41c4-b31a-bd1432c68a75,Namespace:calico-system,Attempt:1,} returns sandbox id \"aa45038fcc6865d09c836eff90a8340cde00f91d33bed90248a9d482df6399e6\"" Nov 8 00:06:07.878041 containerd[2017]: time="2025-11-08T00:06:07.877813182Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f558bfb5c-2hjdp,Uid:a0f0a15a-d068-42d7-9057-db8aa3861ce8,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"818444c910fca1a5dfd510a6d9a593969757728c4f6057cf01c42c11445835c7\"" Nov 8 00:06:07.977420 systemd-networkd[1937]: cali1ec342cd486: Gained IPv6LL Nov 8 00:06:07.978008 systemd-networkd[1937]: cali1cb2565c823: Gained IPv6LL Nov 8 00:06:08.079186 containerd[2017]: time="2025-11-08T00:06:08.079095543Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:06:08.081532 containerd[2017]: time="2025-11-08T00:06:08.081460299Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 8 00:06:08.081676 containerd[2017]: time="2025-11-08T00:06:08.081611883Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 8 00:06:08.082324 kubelet[3480]: E1108 00:06:08.082062 3480 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:06:08.082324 kubelet[3480]: E1108 00:06:08.082126 3480 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:06:08.083873 kubelet[3480]: E1108 00:06:08.082356 3480 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-f558bfb5c-cbxnw_calico-apiserver(ec16be48-232b-457b-bf3a-4db776262475): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 8 00:06:08.083873 kubelet[3480]: E1108 00:06:08.082416 3480 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-f558bfb5c-cbxnw" podUID="ec16be48-232b-457b-bf3a-4db776262475" Nov 8 00:06:08.085812 containerd[2017]: time="2025-11-08T00:06:08.082700955Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 8 00:06:08.105397 systemd-networkd[1937]: cali65b4dc9727c: Gained IPv6LL Nov 8 00:06:08.389656 containerd[2017]: time="2025-11-08T00:06:08.389320228Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:06:08.392762 containerd[2017]: time="2025-11-08T00:06:08.392549248Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 8 00:06:08.392762 containerd[2017]: time="2025-11-08T00:06:08.392705548Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 8 00:06:08.393146 kubelet[3480]: E1108 00:06:08.393066 3480 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 8 00:06:08.393282 kubelet[3480]: E1108 00:06:08.393171 3480 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 8 00:06:08.394073 kubelet[3480]: E1108 00:06:08.393763 3480 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-rdhgb_calico-system(415c772f-4a8a-4df0-8713-cab5820f0205): ErrImagePull: 
rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 8 00:06:08.394073 kubelet[3480]: E1108 00:06:08.393953 3480 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-rdhgb" podUID="415c772f-4a8a-4df0-8713-cab5820f0205" Nov 8 00:06:08.394281 containerd[2017]: time="2025-11-08T00:06:08.393829264Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 8 00:06:08.492538 kubelet[3480]: E1108 00:06:08.491426 3480 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-rdhgb" podUID="415c772f-4a8a-4df0-8713-cab5820f0205" Nov 8 00:06:08.507124 kubelet[3480]: E1108 00:06:08.507072 3480 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-f558bfb5c-cbxnw" podUID="ec16be48-232b-457b-bf3a-4db776262475" Nov 8 00:06:08.507654 kubelet[3480]: E1108 00:06:08.507591 3480 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6cf595bbd4-gjrhj" podUID="33ee737d-9bb0-44ae-abd4-ed2fcc115154" Nov 8 00:06:08.617288 systemd-networkd[1937]: cali3106f1938e2: Gained IPv6LL Nov 8 00:06:08.707444 containerd[2017]: time="2025-11-08T00:06:08.706676190Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:06:08.711224 containerd[2017]: time="2025-11-08T00:06:08.709949994Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 8 00:06:08.711224 containerd[2017]: time="2025-11-08T00:06:08.710121666Z" 
level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 8 00:06:08.711224 containerd[2017]: time="2025-11-08T00:06:08.710882490Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 8 00:06:08.711756 kubelet[3480]: E1108 00:06:08.710367 3480 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 8 00:06:08.711756 kubelet[3480]: E1108 00:06:08.710433 3480 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 8 00:06:08.711756 kubelet[3480]: E1108 00:06:08.710714 3480 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-rkzrr_calico-system(105dae3d-b44c-41c4-b31a-bd1432c68a75): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 8 00:06:09.000154 containerd[2017]: time="2025-11-08T00:06:08.999652591Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:06:09.002326 containerd[2017]: time="2025-11-08T00:06:09.002149683Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 8 00:06:09.002665 containerd[2017]: time="2025-11-08T00:06:09.002279895Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 8 00:06:09.002861 kubelet[3480]: E1108 00:06:09.002811 3480 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:06:09.002969 kubelet[3480]: E1108 00:06:09.002870 3480 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:06:09.004081 containerd[2017]: time="2025-11-08T00:06:09.003441375Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 8 00:06:09.004243 kubelet[3480]: E1108 00:06:09.003844 3480 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-f558bfb5c-2hjdp_calico-apiserver(a0f0a15a-d068-42d7-9057-db8aa3861ce8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and 
unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 8 00:06:09.004243 kubelet[3480]: E1108 00:06:09.003992 3480 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-f558bfb5c-2hjdp" podUID="a0f0a15a-d068-42d7-9057-db8aa3861ce8" Nov 8 00:06:09.321645 systemd-networkd[1937]: calib49ca544f36: Gained IPv6LL Nov 8 00:06:09.421299 containerd[2017]: time="2025-11-08T00:06:09.421072314Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:06:09.425064 containerd[2017]: time="2025-11-08T00:06:09.423576126Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 8 00:06:09.425064 containerd[2017]: time="2025-11-08T00:06:09.423692742Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 8 00:06:09.425280 kubelet[3480]: E1108 00:06:09.423918 3480 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 8 00:06:09.425280 kubelet[3480]: E1108 00:06:09.423976 3480 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 8 00:06:09.425280 kubelet[3480]: E1108 00:06:09.424393 3480 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-rkzrr_calico-system(105dae3d-b44c-41c4-b31a-bd1432c68a75): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 8 00:06:09.425898 kubelet[3480]: E1108 00:06:09.424585 3480 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to 
\"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-rkzrr" podUID="105dae3d-b44c-41c4-b31a-bd1432c68a75" Nov 8 00:06:09.511684 kubelet[3480]: E1108 00:06:09.511524 3480 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-f558bfb5c-cbxnw" podUID="ec16be48-232b-457b-bf3a-4db776262475" Nov 8 00:06:09.515567 kubelet[3480]: E1108 00:06:09.513093 3480 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-f558bfb5c-2hjdp" podUID="a0f0a15a-d068-42d7-9057-db8aa3861ce8" Nov 8 00:06:09.515567 kubelet[3480]: E1108 00:06:09.515184 3480 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-rdhgb" podUID="415c772f-4a8a-4df0-8713-cab5820f0205" Nov 8 00:06:09.515839 kubelet[3480]: E1108 00:06:09.515500 3480 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-rkzrr" podUID="105dae3d-b44c-41c4-b31a-bd1432c68a75" Nov 8 00:06:10.402977 systemd[1]: Started sshd@7-172.31.26.1:22-139.178.89.65:48712.service - OpenSSH per-connection server daemon 
(139.178.89.65:48712). Nov 8 00:06:10.607068 sshd[5788]: Accepted publickey for core from 139.178.89.65 port 48712 ssh2: RSA SHA256:tnEXpnDY8gLTej7GJ+T99WI4otIwvlI9IcMNDF42aqw Nov 8 00:06:10.610430 sshd[5788]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:06:10.620215 systemd-logind[1992]: New session 8 of user core. Nov 8 00:06:10.625287 systemd[1]: Started session-8.scope - Session 8 of User core. Nov 8 00:06:10.970708 sshd[5788]: pam_unix(sshd:session): session closed for user core Nov 8 00:06:10.978398 systemd[1]: sshd@7-172.31.26.1:22-139.178.89.65:48712.service: Deactivated successfully. Nov 8 00:06:10.982831 systemd[1]: session-8.scope: Deactivated successfully. Nov 8 00:06:10.986135 systemd-logind[1992]: Session 8 logged out. Waiting for processes to exit. Nov 8 00:06:10.987896 systemd-logind[1992]: Removed session 8. Nov 8 00:06:11.663087 ntpd[1987]: Listen normally on 7 vxlan.calico 192.168.104.192:123 Nov 8 00:06:11.663267 ntpd[1987]: Listen normally on 8 vxlan.calico [fe80::64e2:84ff:fe4e:94bf%4]:123 Nov 8 00:06:11.663349 ntpd[1987]: Listen normally on 9 calia01eb682dd0 [fe80::ecee:eeff:feee:eeee%7]:123 Nov 8 00:06:11.663418 ntpd[1987]: Listen normally on 10 calic399dcf364c [fe80::ecee:eeff:feee:eeee%8]:123 Nov 8 00:06:11.663487 ntpd[1987]: Listen normally on 11 cali1cb2565c823 [fe80::ecee:eeff:feee:eeee%9]:123 Nov 8 00:06:11.663554 ntpd[1987]: Listen normally on 12 cali9f7c1863073 [fe80::ecee:eeff:feee:eeee%10]:123 Nov 8 00:06:11.663620 ntpd[1987]: Listen normally on 13 cali1ec342cd486 [fe80::ecee:eeff:feee:eeee%11]:123 Nov 8 00:06:11.663689 ntpd[1987]: Listen normally on 14 cali3106f1938e2 [fe80::ecee:eeff:feee:eeee%12]:123 Nov 8 00:06:11.663754 ntpd[1987]: Listen normally on 15 cali65b4dc9727c [fe80::ecee:eeff:feee:eeee%13]:123 Nov 8 00:06:11.663824 ntpd[1987]: Listen normally on 16 calib49ca544f36 [fe80::ecee:eeff:feee:eeee%14]:123 Nov 8 00:06:16.014574 systemd[1]: Started sshd@8-172.31.26.1:22-139.178.89.65:48728.service - OpenSSH per-connection server daemon (139.178.89.65:48728).
Nov 8 00:06:16.197416 sshd[5812]: Accepted publickey for core from 139.178.89.65 port 48728 ssh2: RSA SHA256:tnEXpnDY8gLTej7GJ+T99WI4otIwvlI9IcMNDF42aqw Nov 8 00:06:16.200287 sshd[5812]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:06:16.209141 systemd-logind[1992]: New session 9 of user core. Nov 8 00:06:16.216322 systemd[1]: Started session-9.scope - Session 9 of User core. Nov 8 00:06:16.479784 sshd[5812]: pam_unix(sshd:session): session closed for user core Nov 8 00:06:16.487166 systemd-logind[1992]: Session 9 logged out. Waiting for processes to exit. Nov 8 00:06:16.488610 systemd[1]: sshd@8-172.31.26.1:22-139.178.89.65:48728.service: Deactivated successfully. Nov 8 00:06:16.492931 systemd[1]: session-9.scope: Deactivated successfully. Nov 8 00:06:16.496442 systemd-logind[1992]: Removed session 9. Nov 8 00:06:19.863968 containerd[2017]: time="2025-11-08T00:06:19.863903693Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 8 00:06:20.147797 containerd[2017]: time="2025-11-08T00:06:20.147306855Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:06:20.149707 containerd[2017]: time="2025-11-08T00:06:20.149624619Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 8 00:06:20.150118 containerd[2017]: time="2025-11-08T00:06:20.149655867Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 8 00:06:20.150250 kubelet[3480]: E1108 00:06:20.150166 3480 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 8 00:06:20.150786 kubelet[3480]: E1108 00:06:20.150263 3480 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 8 00:06:20.152053 kubelet[3480]: E1108 00:06:20.151937 3480 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-7d67fb7489-w2h7r_calico-system(964ee664-ff07-42fc-8d91-b078ca7f25c8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 8 00:06:20.154270 containerd[2017]: time="2025-11-08T00:06:20.154217991Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 8 00:06:20.452362 containerd[2017]: time="2025-11-08T00:06:20.451697236Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:06:20.454457 containerd[2017]: time="2025-11-08T00:06:20.454298428Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = 
NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 8 00:06:20.454457 containerd[2017]: time="2025-11-08T00:06:20.454416712Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 8 00:06:20.455535 kubelet[3480]: E1108 00:06:20.455211 3480 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 8 00:06:20.455959 kubelet[3480]: E1108 00:06:20.455430 3480 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 8 00:06:20.455959 kubelet[3480]: E1108 00:06:20.455878 3480 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-7d67fb7489-w2h7r_calico-system(964ee664-ff07-42fc-8d91-b078ca7f25c8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 8 00:06:20.457476 kubelet[3480]: E1108 00:06:20.456084 3480 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7d67fb7489-w2h7r" podUID="964ee664-ff07-42fc-8d91-b078ca7f25c8" Nov 8 00:06:21.522584 systemd[1]: Started sshd@9-172.31.26.1:22-139.178.89.65:39392.service - OpenSSH per-connection server daemon (139.178.89.65:39392). Nov 8 00:06:21.704510 sshd[5834]: Accepted publickey for core from 139.178.89.65 port 39392 ssh2: RSA SHA256:tnEXpnDY8gLTej7GJ+T99WI4otIwvlI9IcMNDF42aqw Nov 8 00:06:21.707166 sshd[5834]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:06:21.716117 systemd-logind[1992]: New session 10 of user core. Nov 8 00:06:21.722284 systemd[1]: Started session-10.scope - Session 10 of User core. Nov 8 00:06:21.978042 sshd[5834]: pam_unix(sshd:session): session closed for user core Nov 8 00:06:21.984752 systemd[1]: sshd@9-172.31.26.1:22-139.178.89.65:39392.service: Deactivated successfully. Nov 8 00:06:21.988181 systemd[1]: session-10.scope: Deactivated successfully. 
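The repeated "trying next host - response was http.StatusNotFound" entries above are containerd walking its registry host list and receiving an HTTP 404 for the tag's manifest: the v3.30.4 tag does not exist under ghcr.io/flatcar/calico. A minimal Python sketch of the same lookup against the OCI distribution API, assuming ghcr.io issues anonymous bearer tokens for pull scope the way it does for public repositories (the repository and tag below are taken from the log; everything else is illustrative):

import json
import urllib.error
import urllib.request

REPO = "flatcar/calico/apiserver"  # repository named in the log entries above
TAG = "v3.30.4"

# Step 1: fetch an anonymous bearer token for pull access (assumed to be
# issued as ghcr.io does for public repositories).
with urllib.request.urlopen(f"https://ghcr.io/token?scope=repository:{REPO}:pull") as resp:
    token = json.load(resp)["token"]

# Step 2: request the tag's manifest, as containerd does when resolving the
# reference. A 404 here is what surfaces in the log as
# 'failed to resolve reference ...: not found'.
req = urllib.request.Request(
    f"https://ghcr.io/v2/{REPO}/manifests/{TAG}",
    headers={
        "Authorization": f"Bearer {token}",
        "Accept": "application/vnd.oci.image.index.v1+json",
    },
)
try:
    with urllib.request.urlopen(req) as resp:
        print("manifest found:", resp.status)
except urllib.error.HTTPError as err:
    print("registry answered:", err.code)  # expect 404 for this tag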
Nov 8 00:06:21.990743 systemd-logind[1992]: Session 10 logged out. Waiting for processes to exit. Nov 8 00:06:21.992952 systemd-logind[1992]: Removed session 10. Nov 8 00:06:22.016575 systemd[1]: Started sshd@10-172.31.26.1:22-139.178.89.65:39398.service - OpenSSH per-connection server daemon (139.178.89.65:39398). Nov 8 00:06:22.204653 sshd[5848]: Accepted publickey for core from 139.178.89.65 port 39398 ssh2: RSA SHA256:tnEXpnDY8gLTej7GJ+T99WI4otIwvlI9IcMNDF42aqw Nov 8 00:06:22.207455 sshd[5848]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:06:22.216780 systemd-logind[1992]: New session 11 of user core. Nov 8 00:06:22.223290 systemd[1]: Started session-11.scope - Session 11 of User core. Nov 8 00:06:22.587624 sshd[5848]: pam_unix(sshd:session): session closed for user core Nov 8 00:06:22.599771 systemd[1]: sshd@10-172.31.26.1:22-139.178.89.65:39398.service: Deactivated successfully. Nov 8 00:06:22.608365 systemd[1]: session-11.scope: Deactivated successfully. Nov 8 00:06:22.618340 systemd-logind[1992]: Session 11 logged out. Waiting for processes to exit. Nov 8 00:06:22.641725 systemd[1]: Started sshd@11-172.31.26.1:22-139.178.89.65:39414.service - OpenSSH per-connection server daemon (139.178.89.65:39414). Nov 8 00:06:22.644404 systemd-logind[1992]: Removed session 11. Nov 8 00:06:22.828923 sshd[5859]: Accepted publickey for core from 139.178.89.65 port 39414 ssh2: RSA SHA256:tnEXpnDY8gLTej7GJ+T99WI4otIwvlI9IcMNDF42aqw Nov 8 00:06:22.831707 sshd[5859]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:06:22.840693 systemd-logind[1992]: New session 12 of user core. Nov 8 00:06:22.849333 systemd[1]: Started session-12.scope - Session 12 of User core. Nov 8 00:06:23.095349 sshd[5859]: pam_unix(sshd:session): session closed for user core Nov 8 00:06:23.109174 systemd[1]: sshd@11-172.31.26.1:22-139.178.89.65:39414.service: Deactivated successfully. Nov 8 00:06:23.115714 systemd[1]: session-12.scope: Deactivated successfully. Nov 8 00:06:23.117692 systemd-logind[1992]: Session 12 logged out. Waiting for processes to exit. Nov 8 00:06:23.119543 systemd-logind[1992]: Removed session 12. 
Nov 8 00:06:23.865446 containerd[2017]: time="2025-11-08T00:06:23.865254321Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 8 00:06:24.171647 containerd[2017]: time="2025-11-08T00:06:24.171504499Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:06:24.174498 containerd[2017]: time="2025-11-08T00:06:24.174331507Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 8 00:06:24.174498 containerd[2017]: time="2025-11-08T00:06:24.174440467Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 8 00:06:24.175130 kubelet[3480]: E1108 00:06:24.174666 3480 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:06:24.175130 kubelet[3480]: E1108 00:06:24.174721 3480 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:06:24.175130 kubelet[3480]: E1108 00:06:24.174912 3480 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-f558bfb5c-2hjdp_calico-apiserver(a0f0a15a-d068-42d7-9057-db8aa3861ce8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 8 00:06:24.175816 containerd[2017]: time="2025-11-08T00:06:24.175196155Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 8 00:06:24.177181 kubelet[3480]: E1108 00:06:24.174988 3480 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-f558bfb5c-2hjdp" podUID="a0f0a15a-d068-42d7-9057-db8aa3861ce8" Nov 8 00:06:24.491648 containerd[2017]: time="2025-11-08T00:06:24.491352116Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:06:24.493674 containerd[2017]: time="2025-11-08T00:06:24.493528820Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 8 00:06:24.493674 containerd[2017]: 
time="2025-11-08T00:06:24.493611380Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 8 00:06:24.493990 kubelet[3480]: E1108 00:06:24.493853 3480 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 8 00:06:24.493990 kubelet[3480]: E1108 00:06:24.493916 3480 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 8 00:06:24.495171 kubelet[3480]: E1108 00:06:24.494431 3480 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-rdhgb_calico-system(415c772f-4a8a-4df0-8713-cab5820f0205): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 8 00:06:24.495171 kubelet[3480]: E1108 00:06:24.494490 3480 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-rdhgb" podUID="415c772f-4a8a-4df0-8713-cab5820f0205" Nov 8 00:06:24.495354 containerd[2017]: time="2025-11-08T00:06:24.494438732Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 8 00:06:24.802794 containerd[2017]: time="2025-11-08T00:06:24.802713202Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:06:24.804933 containerd[2017]: time="2025-11-08T00:06:24.804857446Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 8 00:06:24.805090 containerd[2017]: time="2025-11-08T00:06:24.804998422Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 8 00:06:24.806099 kubelet[3480]: E1108 00:06:24.805330 3480 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 8 00:06:24.806099 kubelet[3480]: E1108 00:06:24.805391 3480 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": 
ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 8 00:06:24.806099 kubelet[3480]: E1108 00:06:24.805596 3480 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-rkzrr_calico-system(105dae3d-b44c-41c4-b31a-bd1432c68a75): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 8 00:06:24.806837 containerd[2017]: time="2025-11-08T00:06:24.806714782Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 8 00:06:25.143393 containerd[2017]: time="2025-11-08T00:06:25.143048084Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:06:25.146579 containerd[2017]: time="2025-11-08T00:06:25.146292536Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 8 00:06:25.146579 containerd[2017]: time="2025-11-08T00:06:25.146363216Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 8 00:06:25.146984 kubelet[3480]: E1108 00:06:25.146729 3480 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 8 00:06:25.146984 kubelet[3480]: E1108 00:06:25.146788 3480 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 8 00:06:25.147248 kubelet[3480]: E1108 00:06:25.147103 3480 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-6cf595bbd4-gjrhj_calico-system(33ee737d-9bb0-44ae-abd4-ed2fcc115154): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 8 00:06:25.147248 kubelet[3480]: E1108 00:06:25.147180 3480 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6cf595bbd4-gjrhj" podUID="33ee737d-9bb0-44ae-abd4-ed2fcc115154" Nov 8 
00:06:25.148378 containerd[2017]: time="2025-11-08T00:06:25.148313048Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 8 00:06:25.450809 containerd[2017]: time="2025-11-08T00:06:25.450411717Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:06:25.452876 containerd[2017]: time="2025-11-08T00:06:25.452685753Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 8 00:06:25.452876 containerd[2017]: time="2025-11-08T00:06:25.452828733Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 8 00:06:25.454146 kubelet[3480]: E1108 00:06:25.453321 3480 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:06:25.454146 kubelet[3480]: E1108 00:06:25.453387 3480 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:06:25.454146 kubelet[3480]: E1108 00:06:25.453630 3480 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-f558bfb5c-cbxnw_calico-apiserver(ec16be48-232b-457b-bf3a-4db776262475): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 8 00:06:25.454146 kubelet[3480]: E1108 00:06:25.453701 3480 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-f558bfb5c-cbxnw" podUID="ec16be48-232b-457b-bf3a-4db776262475" Nov 8 00:06:25.455820 containerd[2017]: time="2025-11-08T00:06:25.454691421Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 8 00:06:25.765218 containerd[2017]: time="2025-11-08T00:06:25.764996555Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:06:25.767371 containerd[2017]: time="2025-11-08T00:06:25.767299463Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not 
found" Nov 8 00:06:25.767504 containerd[2017]: time="2025-11-08T00:06:25.767448575Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 8 00:06:25.767723 kubelet[3480]: E1108 00:06:25.767669 3480 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 8 00:06:25.767824 kubelet[3480]: E1108 00:06:25.767737 3480 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 8 00:06:25.767883 kubelet[3480]: E1108 00:06:25.767845 3480 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-rkzrr_calico-system(105dae3d-b44c-41c4-b31a-bd1432c68a75): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 8 00:06:25.767994 kubelet[3480]: E1108 00:06:25.767908 3480 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-rkzrr" podUID="105dae3d-b44c-41c4-b31a-bd1432c68a75" Nov 8 00:06:28.138683 systemd[1]: Started sshd@12-172.31.26.1:22-139.178.89.65:45304.service - OpenSSH per-connection server daemon (139.178.89.65:45304). Nov 8 00:06:28.327668 sshd[5880]: Accepted publickey for core from 139.178.89.65 port 45304 ssh2: RSA SHA256:tnEXpnDY8gLTej7GJ+T99WI4otIwvlI9IcMNDF42aqw Nov 8 00:06:28.330363 sshd[5880]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:06:28.339123 systemd-logind[1992]: New session 13 of user core. Nov 8 00:06:28.347260 systemd[1]: Started session-13.scope - Session 13 of User core. Nov 8 00:06:28.622059 sshd[5880]: pam_unix(sshd:session): session closed for user core Nov 8 00:06:28.632643 systemd[1]: sshd@12-172.31.26.1:22-139.178.89.65:45304.service: Deactivated successfully. Nov 8 00:06:28.641839 systemd[1]: session-13.scope: Deactivated successfully. Nov 8 00:06:28.644683 systemd-logind[1992]: Session 13 logged out. Waiting for processes to exit. 
Nov 8 00:06:28.648412 systemd-logind[1992]: Removed session 13. Nov 8 00:06:33.662591 systemd[1]: Started sshd@13-172.31.26.1:22-139.178.89.65:45314.service - OpenSSH per-connection server daemon (139.178.89.65:45314). Nov 8 00:06:33.847589 sshd[5920]: Accepted publickey for core from 139.178.89.65 port 45314 ssh2: RSA SHA256:tnEXpnDY8gLTej7GJ+T99WI4otIwvlI9IcMNDF42aqw Nov 8 00:06:33.850350 sshd[5920]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:06:33.858512 systemd-logind[1992]: New session 14 of user core. Nov 8 00:06:33.872292 systemd[1]: Started session-14.scope - Session 14 of User core. Nov 8 00:06:34.119253 sshd[5920]: pam_unix(sshd:session): session closed for user core Nov 8 00:06:34.126167 systemd-logind[1992]: Session 14 logged out. Waiting for processes to exit. Nov 8 00:06:34.127852 systemd[1]: sshd@13-172.31.26.1:22-139.178.89.65:45314.service: Deactivated successfully. Nov 8 00:06:34.131416 systemd[1]: session-14.scope: Deactivated successfully. Nov 8 00:06:34.134601 systemd-logind[1992]: Removed session 14. Nov 8 00:06:34.863043 kubelet[3480]: E1108 00:06:34.862639 3480 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7d67fb7489-w2h7r" podUID="964ee664-ff07-42fc-8d91-b078ca7f25c8" Nov 8 00:06:35.861786 kubelet[3480]: E1108 00:06:35.860926 3480 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6cf595bbd4-gjrhj" podUID="33ee737d-9bb0-44ae-abd4-ed2fcc115154" Nov 8 00:06:36.863180 kubelet[3480]: E1108 00:06:36.861288 3480 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-rdhgb" podUID="415c772f-4a8a-4df0-8713-cab5820f0205" Nov 8 00:06:36.863180 kubelet[3480]: E1108 00:06:36.861499 3480 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-f558bfb5c-2hjdp" podUID="a0f0a15a-d068-42d7-9057-db8aa3861ce8" Nov 8 00:06:38.861843 kubelet[3480]: E1108 00:06:38.861737 3480 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-f558bfb5c-cbxnw" podUID="ec16be48-232b-457b-bf3a-4db776262475" Nov 8 00:06:39.162537 systemd[1]: Started sshd@14-172.31.26.1:22-139.178.89.65:43298.service - OpenSSH per-connection server daemon (139.178.89.65:43298). Nov 8 00:06:39.360279 sshd[5933]: Accepted publickey for core from 139.178.89.65 port 43298 ssh2: RSA SHA256:tnEXpnDY8gLTej7GJ+T99WI4otIwvlI9IcMNDF42aqw Nov 8 00:06:39.463106 sshd[5933]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:06:39.482858 systemd-logind[1992]: New session 15 of user core. Nov 8 00:06:39.490366 systemd[1]: Started session-15.scope - Session 15 of User core. Nov 8 00:06:39.791680 sshd[5933]: pam_unix(sshd:session): session closed for user core Nov 8 00:06:39.803424 systemd[1]: sshd@14-172.31.26.1:22-139.178.89.65:43298.service: Deactivated successfully. Nov 8 00:06:39.810764 systemd[1]: session-15.scope: Deactivated successfully. Nov 8 00:06:39.817545 systemd-logind[1992]: Session 15 logged out. Waiting for processes to exit. Nov 8 00:06:39.820068 systemd-logind[1992]: Removed session 15. Nov 8 00:06:39.865988 kubelet[3480]: E1108 00:06:39.865773 3480 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-rkzrr" podUID="105dae3d-b44c-41c4-b31a-bd1432c68a75" Nov 8 00:06:44.833566 systemd[1]: Started sshd@15-172.31.26.1:22-139.178.89.65:43306.service - OpenSSH per-connection server daemon (139.178.89.65:43306). 
Nov 8 00:06:45.008892 sshd[5952]: Accepted publickey for core from 139.178.89.65 port 43306 ssh2: RSA SHA256:tnEXpnDY8gLTej7GJ+T99WI4otIwvlI9IcMNDF42aqw Nov 8 00:06:45.011554 sshd[5952]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:06:45.019256 systemd-logind[1992]: New session 16 of user core. Nov 8 00:06:45.029355 systemd[1]: Started session-16.scope - Session 16 of User core. Nov 8 00:06:45.278257 sshd[5952]: pam_unix(sshd:session): session closed for user core Nov 8 00:06:45.285280 systemd[1]: sshd@15-172.31.26.1:22-139.178.89.65:43306.service: Deactivated successfully. Nov 8 00:06:45.289594 systemd[1]: session-16.scope: Deactivated successfully. Nov 8 00:06:45.291498 systemd-logind[1992]: Session 16 logged out. Waiting for processes to exit. Nov 8 00:06:45.294975 systemd-logind[1992]: Removed session 16. Nov 8 00:06:45.318543 systemd[1]: Started sshd@16-172.31.26.1:22-139.178.89.65:43318.service - OpenSSH per-connection server daemon (139.178.89.65:43318). Nov 8 00:06:45.505847 sshd[5964]: Accepted publickey for core from 139.178.89.65 port 43318 ssh2: RSA SHA256:tnEXpnDY8gLTej7GJ+T99WI4otIwvlI9IcMNDF42aqw Nov 8 00:06:45.509461 sshd[5964]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:06:45.517300 systemd-logind[1992]: New session 17 of user core. Nov 8 00:06:45.533308 systemd[1]: Started session-17.scope - Session 17 of User core. Nov 8 00:06:49.463729 sshd[5964]: pam_unix(sshd:session): session closed for user core Nov 8 00:06:49.470544 systemd[1]: sshd@16-172.31.26.1:22-139.178.89.65:43318.service: Deactivated successfully. Nov 8 00:06:49.476245 systemd[1]: session-17.scope: Deactivated successfully. Nov 8 00:06:49.478200 systemd-logind[1992]: Session 17 logged out. Waiting for processes to exit. Nov 8 00:06:49.481009 systemd-logind[1992]: Removed session 17. Nov 8 00:06:49.506546 systemd[1]: Started sshd@17-172.31.26.1:22-139.178.89.65:47074.service - OpenSSH per-connection server daemon (139.178.89.65:47074). Nov 8 00:06:49.680728 sshd[5977]: Accepted publickey for core from 139.178.89.65 port 47074 ssh2: RSA SHA256:tnEXpnDY8gLTej7GJ+T99WI4otIwvlI9IcMNDF42aqw Nov 8 00:06:49.683463 sshd[5977]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:06:49.692178 systemd-logind[1992]: New session 18 of user core. Nov 8 00:06:49.704319 systemd[1]: Started session-18.scope - Session 18 of User core. 
Nov 8 00:06:49.866457 containerd[2017]: time="2025-11-08T00:06:49.866293834Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 8 00:06:50.175436 containerd[2017]: time="2025-11-08T00:06:50.175188236Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:06:50.177563 containerd[2017]: time="2025-11-08T00:06:50.177480812Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 8 00:06:50.177688 containerd[2017]: time="2025-11-08T00:06:50.177624164Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 8 00:06:50.177891 kubelet[3480]: E1108 00:06:50.177825 3480 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 8 00:06:50.180187 kubelet[3480]: E1108 00:06:50.177884 3480 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 8 00:06:50.180187 kubelet[3480]: E1108 00:06:50.178207 3480 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-7d67fb7489-w2h7r_calico-system(964ee664-ff07-42fc-8d91-b078ca7f25c8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 8 00:06:50.180332 containerd[2017]: time="2025-11-08T00:06:50.178383116Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 8 00:06:50.490996 containerd[2017]: time="2025-11-08T00:06:50.490815610Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:06:50.494186 containerd[2017]: time="2025-11-08T00:06:50.494003038Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 8 00:06:50.494186 containerd[2017]: time="2025-11-08T00:06:50.494121334Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 8 00:06:50.494472 kubelet[3480]: E1108 00:06:50.494422 3480 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:06:50.494542 kubelet[3480]: E1108 
00:06:50.494480 3480 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:06:50.494769 kubelet[3480]: E1108 00:06:50.494706 3480 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-f558bfb5c-2hjdp_calico-apiserver(a0f0a15a-d068-42d7-9057-db8aa3861ce8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 8 00:06:50.494853 kubelet[3480]: E1108 00:06:50.494777 3480 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-f558bfb5c-2hjdp" podUID="a0f0a15a-d068-42d7-9057-db8aa3861ce8" Nov 8 00:06:50.495971 containerd[2017]: time="2025-11-08T00:06:50.495606262Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 8 00:06:50.803411 containerd[2017]: time="2025-11-08T00:06:50.803338859Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:06:50.805804 containerd[2017]: time="2025-11-08T00:06:50.805658771Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 8 00:06:50.805804 containerd[2017]: time="2025-11-08T00:06:50.805758659Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 8 00:06:50.807313 kubelet[3480]: E1108 00:06:50.806095 3480 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 8 00:06:50.807313 kubelet[3480]: E1108 00:06:50.807335 3480 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 8 00:06:50.807562 kubelet[3480]: E1108 00:06:50.807470 3480 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-7d67fb7489-w2h7r_calico-system(964ee664-ff07-42fc-8d91-b078ca7f25c8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 8 00:06:50.807658 kubelet[3480]: E1108 00:06:50.807538 3480 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7d67fb7489-w2h7r" podUID="964ee664-ff07-42fc-8d91-b078ca7f25c8" Nov 8 00:06:50.867487 containerd[2017]: time="2025-11-08T00:06:50.867415019Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 8 00:06:51.175500 containerd[2017]: time="2025-11-08T00:06:51.175202625Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:06:51.177635 containerd[2017]: time="2025-11-08T00:06:51.177552837Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 8 00:06:51.177970 containerd[2017]: time="2025-11-08T00:06:51.177618861Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 8 00:06:51.178200 kubelet[3480]: E1108 00:06:51.178126 3480 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 8 00:06:51.178775 kubelet[3480]: E1108 00:06:51.178196 3480 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 8 00:06:51.178775 kubelet[3480]: E1108 00:06:51.178434 3480 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-6cf595bbd4-gjrhj_calico-system(33ee737d-9bb0-44ae-abd4-ed2fcc115154): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 8 00:06:51.178775 kubelet[3480]: E1108 00:06:51.178488 3480 pod_workers.go:1324] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6cf595bbd4-gjrhj" podUID="33ee737d-9bb0-44ae-abd4-ed2fcc115154" Nov 8 00:06:51.180498 containerd[2017]: time="2025-11-08T00:06:51.180193953Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 8 00:06:51.501088 containerd[2017]: time="2025-11-08T00:06:51.500874179Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:06:51.503315 containerd[2017]: time="2025-11-08T00:06:51.503197223Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 8 00:06:51.503671 containerd[2017]: time="2025-11-08T00:06:51.503256815Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 8 00:06:51.504536 kubelet[3480]: E1108 00:06:51.504161 3480 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 8 00:06:51.504536 kubelet[3480]: E1108 00:06:51.504254 3480 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 8 00:06:51.504959 kubelet[3480]: E1108 00:06:51.504785 3480 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-rdhgb_calico-system(415c772f-4a8a-4df0-8713-cab5820f0205): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 8 00:06:51.504959 kubelet[3480]: E1108 00:06:51.504876 3480 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-rdhgb" podUID="415c772f-4a8a-4df0-8713-cab5820f0205" Nov 8 00:06:51.536450 sshd[5977]: pam_unix(sshd:session): session closed for user core Nov 8 00:06:51.547184 systemd[1]: sshd@17-172.31.26.1:22-139.178.89.65:47074.service: Deactivated successfully. Nov 8 00:06:51.556246 systemd[1]: session-18.scope: Deactivated successfully. 
Nov 8 00:06:51.564396 systemd-logind[1992]: Session 18 logged out. Waiting for processes to exit. Nov 8 00:06:51.588560 systemd-logind[1992]: Removed session 18. Nov 8 00:06:51.600531 systemd[1]: Started sshd@18-172.31.26.1:22-139.178.89.65:47086.service - OpenSSH per-connection server daemon (139.178.89.65:47086). Nov 8 00:06:51.781074 sshd[5994]: Accepted publickey for core from 139.178.89.65 port 47086 ssh2: RSA SHA256:tnEXpnDY8gLTej7GJ+T99WI4otIwvlI9IcMNDF42aqw Nov 8 00:06:51.783475 sshd[5994]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:06:51.791317 systemd-logind[1992]: New session 19 of user core. Nov 8 00:06:51.802282 systemd[1]: Started session-19.scope - Session 19 of User core. Nov 8 00:06:51.870273 containerd[2017]: time="2025-11-08T00:06:51.870162048Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 8 00:06:52.238516 containerd[2017]: time="2025-11-08T00:06:52.238305718Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:06:52.240708 containerd[2017]: time="2025-11-08T00:06:52.240446086Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 8 00:06:52.240708 containerd[2017]: time="2025-11-08T00:06:52.240657382Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 8 00:06:52.242044 kubelet[3480]: E1108 00:06:52.241081 3480 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:06:52.242044 kubelet[3480]: E1108 00:06:52.241141 3480 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:06:52.243570 kubelet[3480]: E1108 00:06:52.242682 3480 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-f558bfb5c-cbxnw_calico-apiserver(ec16be48-232b-457b-bf3a-4db776262475): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 8 00:06:52.243570 kubelet[3480]: E1108 00:06:52.242761 3480 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-f558bfb5c-cbxnw" podUID="ec16be48-232b-457b-bf3a-4db776262475" Nov 8 00:06:52.243762 
containerd[2017]: time="2025-11-08T00:06:52.242500606Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 8 00:06:52.344889 sshd[5994]: pam_unix(sshd:session): session closed for user core Nov 8 00:06:52.360126 systemd-logind[1992]: Session 19 logged out. Waiting for processes to exit. Nov 8 00:06:52.361674 systemd[1]: sshd@18-172.31.26.1:22-139.178.89.65:47086.service: Deactivated successfully. Nov 8 00:06:52.368702 systemd[1]: session-19.scope: Deactivated successfully. Nov 8 00:06:52.387479 systemd-logind[1992]: Removed session 19. Nov 8 00:06:52.392530 systemd[1]: Started sshd@19-172.31.26.1:22-139.178.89.65:47094.service - OpenSSH per-connection server daemon (139.178.89.65:47094). Nov 8 00:06:52.544179 containerd[2017]: time="2025-11-08T00:06:52.543979776Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:06:52.547092 containerd[2017]: time="2025-11-08T00:06:52.546864492Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 8 00:06:52.547092 containerd[2017]: time="2025-11-08T00:06:52.547003416Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 8 00:06:52.547508 kubelet[3480]: E1108 00:06:52.547260 3480 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 8 00:06:52.547508 kubelet[3480]: E1108 00:06:52.547318 3480 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 8 00:06:52.547508 kubelet[3480]: E1108 00:06:52.547452 3480 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-rkzrr_calico-system(105dae3d-b44c-41c4-b31a-bd1432c68a75): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 8 00:06:52.550646 containerd[2017]: time="2025-11-08T00:06:52.550282404Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 8 00:06:52.588987 sshd[6005]: Accepted publickey for core from 139.178.89.65 port 47094 ssh2: RSA SHA256:tnEXpnDY8gLTej7GJ+T99WI4otIwvlI9IcMNDF42aqw Nov 8 00:06:52.590929 sshd[6005]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:06:52.601197 systemd-logind[1992]: New session 20 of user core. Nov 8 00:06:52.606300 systemd[1]: Started session-20.scope - Session 20 of User core. 
Nov 8 00:06:52.834305 containerd[2017]: time="2025-11-08T00:06:52.833980777Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:06:52.836792 containerd[2017]: time="2025-11-08T00:06:52.836573149Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 8 00:06:52.836792 containerd[2017]: time="2025-11-08T00:06:52.836718313Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 8 00:06:52.837069 kubelet[3480]: E1108 00:06:52.836940 3480 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 8 00:06:52.837069 kubelet[3480]: E1108 00:06:52.837003 3480 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 8 00:06:52.837224 kubelet[3480]: E1108 00:06:52.837187 3480 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-rkzrr_calico-system(105dae3d-b44c-41c4-b31a-bd1432c68a75): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 8 00:06:52.837348 kubelet[3480]: E1108 00:06:52.837285 3480 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-rkzrr" podUID="105dae3d-b44c-41c4-b31a-bd1432c68a75" Nov 8 00:06:52.859272 sshd[6005]: pam_unix(sshd:session): session closed for user core Nov 8 00:06:52.867476 systemd[1]: sshd@19-172.31.26.1:22-139.178.89.65:47094.service: Deactivated successfully. Nov 8 00:06:52.873209 systemd[1]: session-20.scope: Deactivated successfully. Nov 8 00:06:52.874801 systemd-logind[1992]: Session 20 logged out. Waiting for processes to exit. 
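[Editor's note] From here on the log switches to a RemovePodSandbox garbage-collection pass: for each stale sandbox, containerd flushes a batch of Calico CNI entries (StopPodSandbox, a WorkloadEndpoint struct dump, netns cleanup, IPAM release, TearDown, RemovePodSandbox). The struct dumps are verbose Go literals; the sketch below pulls out just the identifying fields. It assumes only the field names and ordering (Pod, then IPNetworks, then InterfaceName) visible in the dumps that follow, and the regex is tuned to dumps of exactly that shape.

```python
#!/usr/bin/env python3
"""Extract (pod, CIDR, interface) from the Calico WorkloadEndpoint struct
dumps logged during the teardown pass below. Sketch: relies on the field
order seen in these dumps; other struct fields are ignored."""
import re
import sys

FIELDS = re.compile(
    r'Pod:"(?P<pod>[^"]+)".*?'
    r'IPNetworks:\[\]string\{"(?P<cidr>[^"]+)"\}.*?'
    r'InterfaceName:"(?P<iface>[^"]+)"',
    re.S,
)


def endpoints(text: str):
    """Yield one (pod, cidr, iface) tuple per WorkloadEndpoint dump."""
    for m in FIELDS.finditer(text):
        yield m.group("pod"), m.group("cidr"), m.group("iface")


if __name__ == "__main__":
    for pod, cidr, iface in endpoints(sys.stdin.read()):
        print(f"{pod:40s} {cidr:20s} {iface}")
```

On the entries below this recovers, for example, calico-apiserver-f558bfb5c-cbxnw at 192.168.104.197/32 on cali1ec342cd486, which is enough to correlate each teardown with a pod without reading the full dump.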
Nov 8 00:06:52.878536 systemd-logind[1992]: Removed session 20. Nov 8 00:06:55.835393 containerd[2017]: time="2025-11-08T00:06:55.835313800Z" level=info msg="StopPodSandbox for \"eabd5d8b0a6ca232a335a07bac0f257e613e80813b638f804cae162aec5b2b34\"" Nov 8 00:06:55.973961 containerd[2017]: 2025-11-08 00:06:55.906 [WARNING][6026] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="eabd5d8b0a6ca232a335a07bac0f257e613e80813b638f804cae162aec5b2b34" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--1-k8s-calico--apiserver--f558bfb5c--cbxnw-eth0", GenerateName:"calico-apiserver-f558bfb5c-", Namespace:"calico-apiserver", SelfLink:"", UID:"ec16be48-232b-457b-bf3a-4db776262475", ResourceVersion:"1431", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 5, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"f558bfb5c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-1", ContainerID:"b4c309ba7cfcb20dd8f17d9ca0ff2c84cb2ffc03aaeb171fa46e130c664000e3", Pod:"calico-apiserver-f558bfb5c-cbxnw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.104.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1ec342cd486", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:06:55.973961 containerd[2017]: 2025-11-08 00:06:55.906 [INFO][6026] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="eabd5d8b0a6ca232a335a07bac0f257e613e80813b638f804cae162aec5b2b34" Nov 8 00:06:55.973961 containerd[2017]: 2025-11-08 00:06:55.906 [INFO][6026] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="eabd5d8b0a6ca232a335a07bac0f257e613e80813b638f804cae162aec5b2b34" iface="eth0" netns="" Nov 8 00:06:55.973961 containerd[2017]: 2025-11-08 00:06:55.906 [INFO][6026] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="eabd5d8b0a6ca232a335a07bac0f257e613e80813b638f804cae162aec5b2b34" Nov 8 00:06:55.973961 containerd[2017]: 2025-11-08 00:06:55.906 [INFO][6026] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="eabd5d8b0a6ca232a335a07bac0f257e613e80813b638f804cae162aec5b2b34" Nov 8 00:06:55.973961 containerd[2017]: 2025-11-08 00:06:55.942 [INFO][6036] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="eabd5d8b0a6ca232a335a07bac0f257e613e80813b638f804cae162aec5b2b34" HandleID="k8s-pod-network.eabd5d8b0a6ca232a335a07bac0f257e613e80813b638f804cae162aec5b2b34" Workload="ip--172--31--26--1-k8s-calico--apiserver--f558bfb5c--cbxnw-eth0" Nov 8 00:06:55.973961 containerd[2017]: 2025-11-08 00:06:55.943 [INFO][6036] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Nov 8 00:06:55.973961 containerd[2017]: 2025-11-08 00:06:55.943 [INFO][6036] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:06:55.973961 containerd[2017]: 2025-11-08 00:06:55.962 [WARNING][6036] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="eabd5d8b0a6ca232a335a07bac0f257e613e80813b638f804cae162aec5b2b34" HandleID="k8s-pod-network.eabd5d8b0a6ca232a335a07bac0f257e613e80813b638f804cae162aec5b2b34" Workload="ip--172--31--26--1-k8s-calico--apiserver--f558bfb5c--cbxnw-eth0" Nov 8 00:06:55.973961 containerd[2017]: 2025-11-08 00:06:55.962 [INFO][6036] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="eabd5d8b0a6ca232a335a07bac0f257e613e80813b638f804cae162aec5b2b34" HandleID="k8s-pod-network.eabd5d8b0a6ca232a335a07bac0f257e613e80813b638f804cae162aec5b2b34" Workload="ip--172--31--26--1-k8s-calico--apiserver--f558bfb5c--cbxnw-eth0" Nov 8 00:06:55.973961 containerd[2017]: 2025-11-08 00:06:55.965 [INFO][6036] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:06:55.973961 containerd[2017]: 2025-11-08 00:06:55.968 [INFO][6026] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="eabd5d8b0a6ca232a335a07bac0f257e613e80813b638f804cae162aec5b2b34" Nov 8 00:06:55.973961 containerd[2017]: time="2025-11-08T00:06:55.973724729Z" level=info msg="TearDown network for sandbox \"eabd5d8b0a6ca232a335a07bac0f257e613e80813b638f804cae162aec5b2b34\" successfully" Nov 8 00:06:55.973961 containerd[2017]: time="2025-11-08T00:06:55.973761281Z" level=info msg="StopPodSandbox for \"eabd5d8b0a6ca232a335a07bac0f257e613e80813b638f804cae162aec5b2b34\" returns successfully" Nov 8 00:06:55.974871 containerd[2017]: time="2025-11-08T00:06:55.974764481Z" level=info msg="RemovePodSandbox for \"eabd5d8b0a6ca232a335a07bac0f257e613e80813b638f804cae162aec5b2b34\"" Nov 8 00:06:55.974871 containerd[2017]: time="2025-11-08T00:06:55.974818469Z" level=info msg="Forcibly stopping sandbox \"eabd5d8b0a6ca232a335a07bac0f257e613e80813b638f804cae162aec5b2b34\"" Nov 8 00:06:56.114914 containerd[2017]: 2025-11-08 00:06:56.054 [WARNING][6050] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="eabd5d8b0a6ca232a335a07bac0f257e613e80813b638f804cae162aec5b2b34" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--1-k8s-calico--apiserver--f558bfb5c--cbxnw-eth0", GenerateName:"calico-apiserver-f558bfb5c-", Namespace:"calico-apiserver", SelfLink:"", UID:"ec16be48-232b-457b-bf3a-4db776262475", ResourceVersion:"1431", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 5, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"f558bfb5c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-1", ContainerID:"b4c309ba7cfcb20dd8f17d9ca0ff2c84cb2ffc03aaeb171fa46e130c664000e3", Pod:"calico-apiserver-f558bfb5c-cbxnw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.104.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1ec342cd486", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:06:56.114914 containerd[2017]: 2025-11-08 00:06:56.055 [INFO][6050] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="eabd5d8b0a6ca232a335a07bac0f257e613e80813b638f804cae162aec5b2b34" Nov 8 00:06:56.114914 containerd[2017]: 2025-11-08 00:06:56.055 [INFO][6050] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="eabd5d8b0a6ca232a335a07bac0f257e613e80813b638f804cae162aec5b2b34" iface="eth0" netns="" Nov 8 00:06:56.114914 containerd[2017]: 2025-11-08 00:06:56.055 [INFO][6050] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="eabd5d8b0a6ca232a335a07bac0f257e613e80813b638f804cae162aec5b2b34" Nov 8 00:06:56.114914 containerd[2017]: 2025-11-08 00:06:56.055 [INFO][6050] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="eabd5d8b0a6ca232a335a07bac0f257e613e80813b638f804cae162aec5b2b34" Nov 8 00:06:56.114914 containerd[2017]: 2025-11-08 00:06:56.091 [INFO][6057] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="eabd5d8b0a6ca232a335a07bac0f257e613e80813b638f804cae162aec5b2b34" HandleID="k8s-pod-network.eabd5d8b0a6ca232a335a07bac0f257e613e80813b638f804cae162aec5b2b34" Workload="ip--172--31--26--1-k8s-calico--apiserver--f558bfb5c--cbxnw-eth0" Nov 8 00:06:56.114914 containerd[2017]: 2025-11-08 00:06:56.091 [INFO][6057] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:06:56.114914 containerd[2017]: 2025-11-08 00:06:56.091 [INFO][6057] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:06:56.114914 containerd[2017]: 2025-11-08 00:06:56.105 [WARNING][6057] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="eabd5d8b0a6ca232a335a07bac0f257e613e80813b638f804cae162aec5b2b34" HandleID="k8s-pod-network.eabd5d8b0a6ca232a335a07bac0f257e613e80813b638f804cae162aec5b2b34" Workload="ip--172--31--26--1-k8s-calico--apiserver--f558bfb5c--cbxnw-eth0" Nov 8 00:06:56.114914 containerd[2017]: 2025-11-08 00:06:56.105 [INFO][6057] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="eabd5d8b0a6ca232a335a07bac0f257e613e80813b638f804cae162aec5b2b34" HandleID="k8s-pod-network.eabd5d8b0a6ca232a335a07bac0f257e613e80813b638f804cae162aec5b2b34" Workload="ip--172--31--26--1-k8s-calico--apiserver--f558bfb5c--cbxnw-eth0" Nov 8 00:06:56.114914 containerd[2017]: 2025-11-08 00:06:56.107 [INFO][6057] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:06:56.114914 containerd[2017]: 2025-11-08 00:06:56.110 [INFO][6050] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="eabd5d8b0a6ca232a335a07bac0f257e613e80813b638f804cae162aec5b2b34" Nov 8 00:06:56.114914 containerd[2017]: time="2025-11-08T00:06:56.113574409Z" level=info msg="TearDown network for sandbox \"eabd5d8b0a6ca232a335a07bac0f257e613e80813b638f804cae162aec5b2b34\" successfully" Nov 8 00:06:56.120780 containerd[2017]: time="2025-11-08T00:06:56.120702409Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"eabd5d8b0a6ca232a335a07bac0f257e613e80813b638f804cae162aec5b2b34\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 8 00:06:56.120942 containerd[2017]: time="2025-11-08T00:06:56.120811645Z" level=info msg="RemovePodSandbox \"eabd5d8b0a6ca232a335a07bac0f257e613e80813b638f804cae162aec5b2b34\" returns successfully" Nov 8 00:06:56.122446 containerd[2017]: time="2025-11-08T00:06:56.122183785Z" level=info msg="StopPodSandbox for \"6b51f45c3d5947e8ed965d7cd08b73cc0b77114d16ecb53f5385da46557ec278\"" Nov 8 00:06:56.243738 containerd[2017]: 2025-11-08 00:06:56.182 [WARNING][6071] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="6b51f45c3d5947e8ed965d7cd08b73cc0b77114d16ecb53f5385da46557ec278" WorkloadEndpoint="ip--172--31--26--1-k8s-whisker--58c68b6689--zrwvp-eth0" Nov 8 00:06:56.243738 containerd[2017]: 2025-11-08 00:06:56.183 [INFO][6071] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6b51f45c3d5947e8ed965d7cd08b73cc0b77114d16ecb53f5385da46557ec278" Nov 8 00:06:56.243738 containerd[2017]: 2025-11-08 00:06:56.183 [INFO][6071] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="6b51f45c3d5947e8ed965d7cd08b73cc0b77114d16ecb53f5385da46557ec278" iface="eth0" netns="" Nov 8 00:06:56.243738 containerd[2017]: 2025-11-08 00:06:56.183 [INFO][6071] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6b51f45c3d5947e8ed965d7cd08b73cc0b77114d16ecb53f5385da46557ec278" Nov 8 00:06:56.243738 containerd[2017]: 2025-11-08 00:06:56.183 [INFO][6071] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6b51f45c3d5947e8ed965d7cd08b73cc0b77114d16ecb53f5385da46557ec278" Nov 8 00:06:56.243738 containerd[2017]: 2025-11-08 00:06:56.221 [INFO][6078] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="6b51f45c3d5947e8ed965d7cd08b73cc0b77114d16ecb53f5385da46557ec278" HandleID="k8s-pod-network.6b51f45c3d5947e8ed965d7cd08b73cc0b77114d16ecb53f5385da46557ec278" Workload="ip--172--31--26--1-k8s-whisker--58c68b6689--zrwvp-eth0" Nov 8 00:06:56.243738 containerd[2017]: 2025-11-08 00:06:56.221 [INFO][6078] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:06:56.243738 containerd[2017]: 2025-11-08 00:06:56.222 [INFO][6078] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:06:56.243738 containerd[2017]: 2025-11-08 00:06:56.235 [WARNING][6078] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="6b51f45c3d5947e8ed965d7cd08b73cc0b77114d16ecb53f5385da46557ec278" HandleID="k8s-pod-network.6b51f45c3d5947e8ed965d7cd08b73cc0b77114d16ecb53f5385da46557ec278" Workload="ip--172--31--26--1-k8s-whisker--58c68b6689--zrwvp-eth0" Nov 8 00:06:56.243738 containerd[2017]: 2025-11-08 00:06:56.235 [INFO][6078] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="6b51f45c3d5947e8ed965d7cd08b73cc0b77114d16ecb53f5385da46557ec278" HandleID="k8s-pod-network.6b51f45c3d5947e8ed965d7cd08b73cc0b77114d16ecb53f5385da46557ec278" Workload="ip--172--31--26--1-k8s-whisker--58c68b6689--zrwvp-eth0" Nov 8 00:06:56.243738 containerd[2017]: 2025-11-08 00:06:56.238 [INFO][6078] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:06:56.243738 containerd[2017]: 2025-11-08 00:06:56.240 [INFO][6071] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="6b51f45c3d5947e8ed965d7cd08b73cc0b77114d16ecb53f5385da46557ec278" Nov 8 00:06:56.243738 containerd[2017]: time="2025-11-08T00:06:56.243531950Z" level=info msg="TearDown network for sandbox \"6b51f45c3d5947e8ed965d7cd08b73cc0b77114d16ecb53f5385da46557ec278\" successfully" Nov 8 00:06:56.243738 containerd[2017]: time="2025-11-08T00:06:56.243600182Z" level=info msg="StopPodSandbox for \"6b51f45c3d5947e8ed965d7cd08b73cc0b77114d16ecb53f5385da46557ec278\" returns successfully" Nov 8 00:06:56.245342 containerd[2017]: time="2025-11-08T00:06:56.244550990Z" level=info msg="RemovePodSandbox for \"6b51f45c3d5947e8ed965d7cd08b73cc0b77114d16ecb53f5385da46557ec278\"" Nov 8 00:06:56.245342 containerd[2017]: time="2025-11-08T00:06:56.244603538Z" level=info msg="Forcibly stopping sandbox \"6b51f45c3d5947e8ed965d7cd08b73cc0b77114d16ecb53f5385da46557ec278\"" Nov 8 00:06:56.370388 containerd[2017]: 2025-11-08 00:06:56.305 [WARNING][6092] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="6b51f45c3d5947e8ed965d7cd08b73cc0b77114d16ecb53f5385da46557ec278" WorkloadEndpoint="ip--172--31--26--1-k8s-whisker--58c68b6689--zrwvp-eth0" Nov 8 00:06:56.370388 containerd[2017]: 2025-11-08 00:06:56.306 [INFO][6092] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6b51f45c3d5947e8ed965d7cd08b73cc0b77114d16ecb53f5385da46557ec278" Nov 8 00:06:56.370388 containerd[2017]: 2025-11-08 00:06:56.306 [INFO][6092] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6b51f45c3d5947e8ed965d7cd08b73cc0b77114d16ecb53f5385da46557ec278" iface="eth0" netns="" Nov 8 00:06:56.370388 containerd[2017]: 2025-11-08 00:06:56.306 [INFO][6092] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6b51f45c3d5947e8ed965d7cd08b73cc0b77114d16ecb53f5385da46557ec278" Nov 8 00:06:56.370388 containerd[2017]: 2025-11-08 00:06:56.306 [INFO][6092] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6b51f45c3d5947e8ed965d7cd08b73cc0b77114d16ecb53f5385da46557ec278" Nov 8 00:06:56.370388 containerd[2017]: 2025-11-08 00:06:56.343 [INFO][6099] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="6b51f45c3d5947e8ed965d7cd08b73cc0b77114d16ecb53f5385da46557ec278" HandleID="k8s-pod-network.6b51f45c3d5947e8ed965d7cd08b73cc0b77114d16ecb53f5385da46557ec278" Workload="ip--172--31--26--1-k8s-whisker--58c68b6689--zrwvp-eth0" Nov 8 00:06:56.370388 containerd[2017]: 2025-11-08 00:06:56.343 [INFO][6099] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:06:56.370388 containerd[2017]: 2025-11-08 00:06:56.343 [INFO][6099] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:06:56.370388 containerd[2017]: 2025-11-08 00:06:56.361 [WARNING][6099] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6b51f45c3d5947e8ed965d7cd08b73cc0b77114d16ecb53f5385da46557ec278" HandleID="k8s-pod-network.6b51f45c3d5947e8ed965d7cd08b73cc0b77114d16ecb53f5385da46557ec278" Workload="ip--172--31--26--1-k8s-whisker--58c68b6689--zrwvp-eth0" Nov 8 00:06:56.370388 containerd[2017]: 2025-11-08 00:06:56.361 [INFO][6099] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="6b51f45c3d5947e8ed965d7cd08b73cc0b77114d16ecb53f5385da46557ec278" HandleID="k8s-pod-network.6b51f45c3d5947e8ed965d7cd08b73cc0b77114d16ecb53f5385da46557ec278" Workload="ip--172--31--26--1-k8s-whisker--58c68b6689--zrwvp-eth0" Nov 8 00:06:56.370388 containerd[2017]: 2025-11-08 00:06:56.364 [INFO][6099] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:06:56.370388 containerd[2017]: 2025-11-08 00:06:56.366 [INFO][6092] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6b51f45c3d5947e8ed965d7cd08b73cc0b77114d16ecb53f5385da46557ec278" Nov 8 00:06:56.370388 containerd[2017]: time="2025-11-08T00:06:56.370192767Z" level=info msg="TearDown network for sandbox \"6b51f45c3d5947e8ed965d7cd08b73cc0b77114d16ecb53f5385da46557ec278\" successfully" Nov 8 00:06:56.377502 containerd[2017]: time="2025-11-08T00:06:56.377426475Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6b51f45c3d5947e8ed965d7cd08b73cc0b77114d16ecb53f5385da46557ec278\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 8 00:06:56.377920 containerd[2017]: time="2025-11-08T00:06:56.377520843Z" level=info msg="RemovePodSandbox \"6b51f45c3d5947e8ed965d7cd08b73cc0b77114d16ecb53f5385da46557ec278\" returns successfully" Nov 8 00:06:56.378452 containerd[2017]: time="2025-11-08T00:06:56.378414423Z" level=info msg="StopPodSandbox for \"9bc1c236f8c2bc8c8b3c4e2867c337be7bc1fc70d5008e874e29cf3fdc13424a\"" Nov 8 00:06:56.522556 containerd[2017]: 2025-11-08 00:06:56.462 [WARNING][6113] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9bc1c236f8c2bc8c8b3c4e2867c337be7bc1fc70d5008e874e29cf3fdc13424a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--1-k8s-goldmane--7c778bb748--rdhgb-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"415c772f-4a8a-4df0-8713-cab5820f0205", ResourceVersion:"1403", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 5, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-1", ContainerID:"a3e7609ec9fac37cc1b1f2025881ad3718dfb9c46d80df98ff58fca6f854dcb4", Pod:"goldmane-7c778bb748-rdhgb", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.104.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali3106f1938e2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:06:56.522556 containerd[2017]: 2025-11-08 00:06:56.463 [INFO][6113] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9bc1c236f8c2bc8c8b3c4e2867c337be7bc1fc70d5008e874e29cf3fdc13424a" Nov 8 00:06:56.522556 containerd[2017]: 2025-11-08 00:06:56.463 [INFO][6113] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9bc1c236f8c2bc8c8b3c4e2867c337be7bc1fc70d5008e874e29cf3fdc13424a" iface="eth0" netns="" Nov 8 00:06:56.522556 containerd[2017]: 2025-11-08 00:06:56.463 [INFO][6113] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9bc1c236f8c2bc8c8b3c4e2867c337be7bc1fc70d5008e874e29cf3fdc13424a" Nov 8 00:06:56.522556 containerd[2017]: 2025-11-08 00:06:56.463 [INFO][6113] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9bc1c236f8c2bc8c8b3c4e2867c337be7bc1fc70d5008e874e29cf3fdc13424a" Nov 8 00:06:56.522556 containerd[2017]: 2025-11-08 00:06:56.498 [INFO][6121] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="9bc1c236f8c2bc8c8b3c4e2867c337be7bc1fc70d5008e874e29cf3fdc13424a" HandleID="k8s-pod-network.9bc1c236f8c2bc8c8b3c4e2867c337be7bc1fc70d5008e874e29cf3fdc13424a" Workload="ip--172--31--26--1-k8s-goldmane--7c778bb748--rdhgb-eth0" Nov 8 00:06:56.522556 containerd[2017]: 2025-11-08 00:06:56.498 [INFO][6121] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:06:56.522556 containerd[2017]: 2025-11-08 00:06:56.498 [INFO][6121] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:06:56.522556 containerd[2017]: 2025-11-08 00:06:56.513 [WARNING][6121] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9bc1c236f8c2bc8c8b3c4e2867c337be7bc1fc70d5008e874e29cf3fdc13424a" HandleID="k8s-pod-network.9bc1c236f8c2bc8c8b3c4e2867c337be7bc1fc70d5008e874e29cf3fdc13424a" Workload="ip--172--31--26--1-k8s-goldmane--7c778bb748--rdhgb-eth0" Nov 8 00:06:56.522556 containerd[2017]: 2025-11-08 00:06:56.513 [INFO][6121] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="9bc1c236f8c2bc8c8b3c4e2867c337be7bc1fc70d5008e874e29cf3fdc13424a" HandleID="k8s-pod-network.9bc1c236f8c2bc8c8b3c4e2867c337be7bc1fc70d5008e874e29cf3fdc13424a" Workload="ip--172--31--26--1-k8s-goldmane--7c778bb748--rdhgb-eth0" Nov 8 00:06:56.522556 containerd[2017]: 2025-11-08 00:06:56.516 [INFO][6121] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:06:56.522556 containerd[2017]: 2025-11-08 00:06:56.519 [INFO][6113] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9bc1c236f8c2bc8c8b3c4e2867c337be7bc1fc70d5008e874e29cf3fdc13424a" Nov 8 00:06:56.523786 containerd[2017]: time="2025-11-08T00:06:56.523448559Z" level=info msg="TearDown network for sandbox \"9bc1c236f8c2bc8c8b3c4e2867c337be7bc1fc70d5008e874e29cf3fdc13424a\" successfully" Nov 8 00:06:56.523786 containerd[2017]: time="2025-11-08T00:06:56.523492635Z" level=info msg="StopPodSandbox for \"9bc1c236f8c2bc8c8b3c4e2867c337be7bc1fc70d5008e874e29cf3fdc13424a\" returns successfully" Nov 8 00:06:56.524535 containerd[2017]: time="2025-11-08T00:06:56.524490675Z" level=info msg="RemovePodSandbox for \"9bc1c236f8c2bc8c8b3c4e2867c337be7bc1fc70d5008e874e29cf3fdc13424a\"" Nov 8 00:06:56.524621 containerd[2017]: time="2025-11-08T00:06:56.524568339Z" level=info msg="Forcibly stopping sandbox \"9bc1c236f8c2bc8c8b3c4e2867c337be7bc1fc70d5008e874e29cf3fdc13424a\"" Nov 8 00:06:56.645379 containerd[2017]: 2025-11-08 00:06:56.585 [WARNING][6135] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9bc1c236f8c2bc8c8b3c4e2867c337be7bc1fc70d5008e874e29cf3fdc13424a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--1-k8s-goldmane--7c778bb748--rdhgb-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"415c772f-4a8a-4df0-8713-cab5820f0205", ResourceVersion:"1403", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 5, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-1", ContainerID:"a3e7609ec9fac37cc1b1f2025881ad3718dfb9c46d80df98ff58fca6f854dcb4", Pod:"goldmane-7c778bb748-rdhgb", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.104.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali3106f1938e2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:06:56.645379 containerd[2017]: 2025-11-08 00:06:56.586 [INFO][6135] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9bc1c236f8c2bc8c8b3c4e2867c337be7bc1fc70d5008e874e29cf3fdc13424a" Nov 8 00:06:56.645379 containerd[2017]: 2025-11-08 00:06:56.586 [INFO][6135] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9bc1c236f8c2bc8c8b3c4e2867c337be7bc1fc70d5008e874e29cf3fdc13424a" iface="eth0" netns="" Nov 8 00:06:56.645379 containerd[2017]: 2025-11-08 00:06:56.586 [INFO][6135] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9bc1c236f8c2bc8c8b3c4e2867c337be7bc1fc70d5008e874e29cf3fdc13424a" Nov 8 00:06:56.645379 containerd[2017]: 2025-11-08 00:06:56.586 [INFO][6135] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9bc1c236f8c2bc8c8b3c4e2867c337be7bc1fc70d5008e874e29cf3fdc13424a" Nov 8 00:06:56.645379 containerd[2017]: 2025-11-08 00:06:56.622 [INFO][6142] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="9bc1c236f8c2bc8c8b3c4e2867c337be7bc1fc70d5008e874e29cf3fdc13424a" HandleID="k8s-pod-network.9bc1c236f8c2bc8c8b3c4e2867c337be7bc1fc70d5008e874e29cf3fdc13424a" Workload="ip--172--31--26--1-k8s-goldmane--7c778bb748--rdhgb-eth0" Nov 8 00:06:56.645379 containerd[2017]: 2025-11-08 00:06:56.623 [INFO][6142] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:06:56.645379 containerd[2017]: 2025-11-08 00:06:56.623 [INFO][6142] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:06:56.645379 containerd[2017]: 2025-11-08 00:06:56.636 [WARNING][6142] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9bc1c236f8c2bc8c8b3c4e2867c337be7bc1fc70d5008e874e29cf3fdc13424a" HandleID="k8s-pod-network.9bc1c236f8c2bc8c8b3c4e2867c337be7bc1fc70d5008e874e29cf3fdc13424a" Workload="ip--172--31--26--1-k8s-goldmane--7c778bb748--rdhgb-eth0" Nov 8 00:06:56.645379 containerd[2017]: 2025-11-08 00:06:56.636 [INFO][6142] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="9bc1c236f8c2bc8c8b3c4e2867c337be7bc1fc70d5008e874e29cf3fdc13424a" HandleID="k8s-pod-network.9bc1c236f8c2bc8c8b3c4e2867c337be7bc1fc70d5008e874e29cf3fdc13424a" Workload="ip--172--31--26--1-k8s-goldmane--7c778bb748--rdhgb-eth0" Nov 8 00:06:56.645379 containerd[2017]: 2025-11-08 00:06:56.639 [INFO][6142] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:06:56.645379 containerd[2017]: 2025-11-08 00:06:56.642 [INFO][6135] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9bc1c236f8c2bc8c8b3c4e2867c337be7bc1fc70d5008e874e29cf3fdc13424a" Nov 8 00:06:56.645379 containerd[2017]: time="2025-11-08T00:06:56.645070492Z" level=info msg="TearDown network for sandbox \"9bc1c236f8c2bc8c8b3c4e2867c337be7bc1fc70d5008e874e29cf3fdc13424a\" successfully" Nov 8 00:06:56.664606 containerd[2017]: time="2025-11-08T00:06:56.664487620Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9bc1c236f8c2bc8c8b3c4e2867c337be7bc1fc70d5008e874e29cf3fdc13424a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 8 00:06:56.664606 containerd[2017]: time="2025-11-08T00:06:56.664586908Z" level=info msg="RemovePodSandbox \"9bc1c236f8c2bc8c8b3c4e2867c337be7bc1fc70d5008e874e29cf3fdc13424a\" returns successfully" Nov 8 00:06:56.665430 containerd[2017]: time="2025-11-08T00:06:56.665135116Z" level=info msg="StopPodSandbox for \"2c959b2625c7e2b3e74f9b4c5d0d12093f47e38c4fb646a45a624ef34d92a3ac\"" Nov 8 00:06:56.837614 containerd[2017]: 2025-11-08 00:06:56.774 [WARNING][6156] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2c959b2625c7e2b3e74f9b4c5d0d12093f47e38c4fb646a45a624ef34d92a3ac" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--1-k8s-csi--node--driver--rkzrr-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"105dae3d-b44c-41c4-b31a-bd1432c68a75", ResourceVersion:"1430", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 5, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-1", ContainerID:"aa45038fcc6865d09c836eff90a8340cde00f91d33bed90248a9d482df6399e6", Pod:"csi-node-driver-rkzrr", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.104.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calib49ca544f36", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:06:56.837614 containerd[2017]: 2025-11-08 00:06:56.775 [INFO][6156] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2c959b2625c7e2b3e74f9b4c5d0d12093f47e38c4fb646a45a624ef34d92a3ac" Nov 8 00:06:56.837614 containerd[2017]: 2025-11-08 00:06:56.775 [INFO][6156] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2c959b2625c7e2b3e74f9b4c5d0d12093f47e38c4fb646a45a624ef34d92a3ac" iface="eth0" netns="" Nov 8 00:06:56.837614 containerd[2017]: 2025-11-08 00:06:56.775 [INFO][6156] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2c959b2625c7e2b3e74f9b4c5d0d12093f47e38c4fb646a45a624ef34d92a3ac" Nov 8 00:06:56.837614 containerd[2017]: 2025-11-08 00:06:56.775 [INFO][6156] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2c959b2625c7e2b3e74f9b4c5d0d12093f47e38c4fb646a45a624ef34d92a3ac" Nov 8 00:06:56.837614 containerd[2017]: 2025-11-08 00:06:56.816 [INFO][6163] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="2c959b2625c7e2b3e74f9b4c5d0d12093f47e38c4fb646a45a624ef34d92a3ac" HandleID="k8s-pod-network.2c959b2625c7e2b3e74f9b4c5d0d12093f47e38c4fb646a45a624ef34d92a3ac" Workload="ip--172--31--26--1-k8s-csi--node--driver--rkzrr-eth0" Nov 8 00:06:56.837614 containerd[2017]: 2025-11-08 00:06:56.816 [INFO][6163] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:06:56.837614 containerd[2017]: 2025-11-08 00:06:56.816 [INFO][6163] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:06:56.837614 containerd[2017]: 2025-11-08 00:06:56.829 [WARNING][6163] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2c959b2625c7e2b3e74f9b4c5d0d12093f47e38c4fb646a45a624ef34d92a3ac" HandleID="k8s-pod-network.2c959b2625c7e2b3e74f9b4c5d0d12093f47e38c4fb646a45a624ef34d92a3ac" Workload="ip--172--31--26--1-k8s-csi--node--driver--rkzrr-eth0" Nov 8 00:06:56.837614 containerd[2017]: 2025-11-08 00:06:56.829 [INFO][6163] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="2c959b2625c7e2b3e74f9b4c5d0d12093f47e38c4fb646a45a624ef34d92a3ac" HandleID="k8s-pod-network.2c959b2625c7e2b3e74f9b4c5d0d12093f47e38c4fb646a45a624ef34d92a3ac" Workload="ip--172--31--26--1-k8s-csi--node--driver--rkzrr-eth0" Nov 8 00:06:56.837614 containerd[2017]: 2025-11-08 00:06:56.832 [INFO][6163] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:06:56.837614 containerd[2017]: 2025-11-08 00:06:56.834 [INFO][6156] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2c959b2625c7e2b3e74f9b4c5d0d12093f47e38c4fb646a45a624ef34d92a3ac" Nov 8 00:06:56.837614 containerd[2017]: time="2025-11-08T00:06:56.837374201Z" level=info msg="TearDown network for sandbox \"2c959b2625c7e2b3e74f9b4c5d0d12093f47e38c4fb646a45a624ef34d92a3ac\" successfully" Nov 8 00:06:56.837614 containerd[2017]: time="2025-11-08T00:06:56.837430805Z" level=info msg="StopPodSandbox for \"2c959b2625c7e2b3e74f9b4c5d0d12093f47e38c4fb646a45a624ef34d92a3ac\" returns successfully" Nov 8 00:06:56.839967 containerd[2017]: time="2025-11-08T00:06:56.839462969Z" level=info msg="RemovePodSandbox for \"2c959b2625c7e2b3e74f9b4c5d0d12093f47e38c4fb646a45a624ef34d92a3ac\"" Nov 8 00:06:56.839967 containerd[2017]: time="2025-11-08T00:06:56.839516645Z" level=info msg="Forcibly stopping sandbox \"2c959b2625c7e2b3e74f9b4c5d0d12093f47e38c4fb646a45a624ef34d92a3ac\"" Nov 8 00:06:56.967782 containerd[2017]: 2025-11-08 00:06:56.905 [WARNING][6177] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2c959b2625c7e2b3e74f9b4c5d0d12093f47e38c4fb646a45a624ef34d92a3ac" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--1-k8s-csi--node--driver--rkzrr-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"105dae3d-b44c-41c4-b31a-bd1432c68a75", ResourceVersion:"1430", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 5, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-1", ContainerID:"aa45038fcc6865d09c836eff90a8340cde00f91d33bed90248a9d482df6399e6", Pod:"csi-node-driver-rkzrr", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.104.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calib49ca544f36", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:06:56.967782 containerd[2017]: 2025-11-08 00:06:56.906 [INFO][6177] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2c959b2625c7e2b3e74f9b4c5d0d12093f47e38c4fb646a45a624ef34d92a3ac" Nov 8 00:06:56.967782 containerd[2017]: 2025-11-08 00:06:56.906 [INFO][6177] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2c959b2625c7e2b3e74f9b4c5d0d12093f47e38c4fb646a45a624ef34d92a3ac" iface="eth0" netns="" Nov 8 00:06:56.967782 containerd[2017]: 2025-11-08 00:06:56.906 [INFO][6177] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2c959b2625c7e2b3e74f9b4c5d0d12093f47e38c4fb646a45a624ef34d92a3ac" Nov 8 00:06:56.967782 containerd[2017]: 2025-11-08 00:06:56.906 [INFO][6177] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2c959b2625c7e2b3e74f9b4c5d0d12093f47e38c4fb646a45a624ef34d92a3ac" Nov 8 00:06:56.967782 containerd[2017]: 2025-11-08 00:06:56.945 [INFO][6185] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="2c959b2625c7e2b3e74f9b4c5d0d12093f47e38c4fb646a45a624ef34d92a3ac" HandleID="k8s-pod-network.2c959b2625c7e2b3e74f9b4c5d0d12093f47e38c4fb646a45a624ef34d92a3ac" Workload="ip--172--31--26--1-k8s-csi--node--driver--rkzrr-eth0" Nov 8 00:06:56.967782 containerd[2017]: 2025-11-08 00:06:56.946 [INFO][6185] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:06:56.967782 containerd[2017]: 2025-11-08 00:06:56.946 [INFO][6185] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:06:56.967782 containerd[2017]: 2025-11-08 00:06:56.959 [WARNING][6185] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2c959b2625c7e2b3e74f9b4c5d0d12093f47e38c4fb646a45a624ef34d92a3ac" HandleID="k8s-pod-network.2c959b2625c7e2b3e74f9b4c5d0d12093f47e38c4fb646a45a624ef34d92a3ac" Workload="ip--172--31--26--1-k8s-csi--node--driver--rkzrr-eth0" Nov 8 00:06:56.967782 containerd[2017]: 2025-11-08 00:06:56.959 [INFO][6185] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="2c959b2625c7e2b3e74f9b4c5d0d12093f47e38c4fb646a45a624ef34d92a3ac" HandleID="k8s-pod-network.2c959b2625c7e2b3e74f9b4c5d0d12093f47e38c4fb646a45a624ef34d92a3ac" Workload="ip--172--31--26--1-k8s-csi--node--driver--rkzrr-eth0" Nov 8 00:06:56.967782 containerd[2017]: 2025-11-08 00:06:56.962 [INFO][6185] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:06:56.967782 containerd[2017]: 2025-11-08 00:06:56.965 [INFO][6177] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2c959b2625c7e2b3e74f9b4c5d0d12093f47e38c4fb646a45a624ef34d92a3ac" Nov 8 00:06:56.967782 containerd[2017]: time="2025-11-08T00:06:56.967639446Z" level=info msg="TearDown network for sandbox \"2c959b2625c7e2b3e74f9b4c5d0d12093f47e38c4fb646a45a624ef34d92a3ac\" successfully" Nov 8 00:06:56.974583 containerd[2017]: time="2025-11-08T00:06:56.974478474Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2c959b2625c7e2b3e74f9b4c5d0d12093f47e38c4fb646a45a624ef34d92a3ac\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 8 00:06:56.974583 containerd[2017]: time="2025-11-08T00:06:56.974575614Z" level=info msg="RemovePodSandbox \"2c959b2625c7e2b3e74f9b4c5d0d12093f47e38c4fb646a45a624ef34d92a3ac\" returns successfully" Nov 8 00:06:56.975804 containerd[2017]: time="2025-11-08T00:06:56.975382302Z" level=info msg="StopPodSandbox for \"ef1548297cd838805af2b097be94072f66b814e1194840c9e942ff8ee0a31d36\"" Nov 8 00:06:57.104683 containerd[2017]: 2025-11-08 00:06:57.041 [WARNING][6199] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ef1548297cd838805af2b097be94072f66b814e1194840c9e942ff8ee0a31d36" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--1-k8s-calico--apiserver--f558bfb5c--2hjdp-eth0", GenerateName:"calico-apiserver-f558bfb5c-", Namespace:"calico-apiserver", SelfLink:"", UID:"a0f0a15a-d068-42d7-9057-db8aa3861ce8", ResourceVersion:"1387", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 5, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"f558bfb5c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-1", ContainerID:"818444c910fca1a5dfd510a6d9a593969757728c4f6057cf01c42c11445835c7", Pod:"calico-apiserver-f558bfb5c-2hjdp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.104.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali65b4dc9727c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:06:57.104683 containerd[2017]: 2025-11-08 00:06:57.042 [INFO][6199] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ef1548297cd838805af2b097be94072f66b814e1194840c9e942ff8ee0a31d36" Nov 8 00:06:57.104683 containerd[2017]: 2025-11-08 00:06:57.042 [INFO][6199] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ef1548297cd838805af2b097be94072f66b814e1194840c9e942ff8ee0a31d36" iface="eth0" netns="" Nov 8 00:06:57.104683 containerd[2017]: 2025-11-08 00:06:57.042 [INFO][6199] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ef1548297cd838805af2b097be94072f66b814e1194840c9e942ff8ee0a31d36" Nov 8 00:06:57.104683 containerd[2017]: 2025-11-08 00:06:57.042 [INFO][6199] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ef1548297cd838805af2b097be94072f66b814e1194840c9e942ff8ee0a31d36" Nov 8 00:06:57.104683 containerd[2017]: 2025-11-08 00:06:57.082 [INFO][6207] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="ef1548297cd838805af2b097be94072f66b814e1194840c9e942ff8ee0a31d36" HandleID="k8s-pod-network.ef1548297cd838805af2b097be94072f66b814e1194840c9e942ff8ee0a31d36" Workload="ip--172--31--26--1-k8s-calico--apiserver--f558bfb5c--2hjdp-eth0" Nov 8 00:06:57.104683 containerd[2017]: 2025-11-08 00:06:57.082 [INFO][6207] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:06:57.104683 containerd[2017]: 2025-11-08 00:06:57.082 [INFO][6207] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:06:57.104683 containerd[2017]: 2025-11-08 00:06:57.096 [WARNING][6207] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ef1548297cd838805af2b097be94072f66b814e1194840c9e942ff8ee0a31d36" HandleID="k8s-pod-network.ef1548297cd838805af2b097be94072f66b814e1194840c9e942ff8ee0a31d36" Workload="ip--172--31--26--1-k8s-calico--apiserver--f558bfb5c--2hjdp-eth0" Nov 8 00:06:57.104683 containerd[2017]: 2025-11-08 00:06:57.096 [INFO][6207] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="ef1548297cd838805af2b097be94072f66b814e1194840c9e942ff8ee0a31d36" HandleID="k8s-pod-network.ef1548297cd838805af2b097be94072f66b814e1194840c9e942ff8ee0a31d36" Workload="ip--172--31--26--1-k8s-calico--apiserver--f558bfb5c--2hjdp-eth0" Nov 8 00:06:57.104683 containerd[2017]: 2025-11-08 00:06:57.099 [INFO][6207] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:06:57.104683 containerd[2017]: 2025-11-08 00:06:57.101 [INFO][6199] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ef1548297cd838805af2b097be94072f66b814e1194840c9e942ff8ee0a31d36" Nov 8 00:06:57.106681 containerd[2017]: time="2025-11-08T00:06:57.104744366Z" level=info msg="TearDown network for sandbox \"ef1548297cd838805af2b097be94072f66b814e1194840c9e942ff8ee0a31d36\" successfully" Nov 8 00:06:57.106681 containerd[2017]: time="2025-11-08T00:06:57.104781902Z" level=info msg="StopPodSandbox for \"ef1548297cd838805af2b097be94072f66b814e1194840c9e942ff8ee0a31d36\" returns successfully" Nov 8 00:06:57.106681 containerd[2017]: time="2025-11-08T00:06:57.105765014Z" level=info msg="RemovePodSandbox for \"ef1548297cd838805af2b097be94072f66b814e1194840c9e942ff8ee0a31d36\"" Nov 8 00:06:57.106681 containerd[2017]: time="2025-11-08T00:06:57.105816110Z" level=info msg="Forcibly stopping sandbox \"ef1548297cd838805af2b097be94072f66b814e1194840c9e942ff8ee0a31d36\"" Nov 8 00:06:57.230972 containerd[2017]: 2025-11-08 00:06:57.169 [WARNING][6221] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ef1548297cd838805af2b097be94072f66b814e1194840c9e942ff8ee0a31d36" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--1-k8s-calico--apiserver--f558bfb5c--2hjdp-eth0", GenerateName:"calico-apiserver-f558bfb5c-", Namespace:"calico-apiserver", SelfLink:"", UID:"a0f0a15a-d068-42d7-9057-db8aa3861ce8", ResourceVersion:"1387", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 5, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"f558bfb5c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-1", ContainerID:"818444c910fca1a5dfd510a6d9a593969757728c4f6057cf01c42c11445835c7", Pod:"calico-apiserver-f558bfb5c-2hjdp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.104.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali65b4dc9727c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:06:57.230972 containerd[2017]: 2025-11-08 00:06:57.169 [INFO][6221] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ef1548297cd838805af2b097be94072f66b814e1194840c9e942ff8ee0a31d36" Nov 8 00:06:57.230972 containerd[2017]: 2025-11-08 00:06:57.169 [INFO][6221] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ef1548297cd838805af2b097be94072f66b814e1194840c9e942ff8ee0a31d36" iface="eth0" netns="" Nov 8 00:06:57.230972 containerd[2017]: 2025-11-08 00:06:57.169 [INFO][6221] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ef1548297cd838805af2b097be94072f66b814e1194840c9e942ff8ee0a31d36" Nov 8 00:06:57.230972 containerd[2017]: 2025-11-08 00:06:57.169 [INFO][6221] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ef1548297cd838805af2b097be94072f66b814e1194840c9e942ff8ee0a31d36" Nov 8 00:06:57.230972 containerd[2017]: 2025-11-08 00:06:57.208 [INFO][6228] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="ef1548297cd838805af2b097be94072f66b814e1194840c9e942ff8ee0a31d36" HandleID="k8s-pod-network.ef1548297cd838805af2b097be94072f66b814e1194840c9e942ff8ee0a31d36" Workload="ip--172--31--26--1-k8s-calico--apiserver--f558bfb5c--2hjdp-eth0" Nov 8 00:06:57.230972 containerd[2017]: 2025-11-08 00:06:57.208 [INFO][6228] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:06:57.230972 containerd[2017]: 2025-11-08 00:06:57.208 [INFO][6228] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:06:57.230972 containerd[2017]: 2025-11-08 00:06:57.223 [WARNING][6228] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ef1548297cd838805af2b097be94072f66b814e1194840c9e942ff8ee0a31d36" HandleID="k8s-pod-network.ef1548297cd838805af2b097be94072f66b814e1194840c9e942ff8ee0a31d36" Workload="ip--172--31--26--1-k8s-calico--apiserver--f558bfb5c--2hjdp-eth0" Nov 8 00:06:57.230972 containerd[2017]: 2025-11-08 00:06:57.223 [INFO][6228] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="ef1548297cd838805af2b097be94072f66b814e1194840c9e942ff8ee0a31d36" HandleID="k8s-pod-network.ef1548297cd838805af2b097be94072f66b814e1194840c9e942ff8ee0a31d36" Workload="ip--172--31--26--1-k8s-calico--apiserver--f558bfb5c--2hjdp-eth0" Nov 8 00:06:57.230972 containerd[2017]: 2025-11-08 00:06:57.225 [INFO][6228] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:06:57.230972 containerd[2017]: 2025-11-08 00:06:57.228 [INFO][6221] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ef1548297cd838805af2b097be94072f66b814e1194840c9e942ff8ee0a31d36" Nov 8 00:06:57.231825 containerd[2017]: time="2025-11-08T00:06:57.231206811Z" level=info msg="TearDown network for sandbox \"ef1548297cd838805af2b097be94072f66b814e1194840c9e942ff8ee0a31d36\" successfully" Nov 8 00:06:57.240302 containerd[2017]: time="2025-11-08T00:06:57.240206535Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ef1548297cd838805af2b097be94072f66b814e1194840c9e942ff8ee0a31d36\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 8 00:06:57.240557 containerd[2017]: time="2025-11-08T00:06:57.240303423Z" level=info msg="RemovePodSandbox \"ef1548297cd838805af2b097be94072f66b814e1194840c9e942ff8ee0a31d36\" returns successfully" Nov 8 00:06:57.241515 containerd[2017]: time="2025-11-08T00:06:57.241160403Z" level=info msg="StopPodSandbox for \"c8acedf9f6df83bbd56c422c7cd496bb571eb5541c78c33622286dd08271d304\"" Nov 8 00:06:57.377138 containerd[2017]: 2025-11-08 00:06:57.304 [WARNING][6242] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c8acedf9f6df83bbd56c422c7cd496bb571eb5541c78c33622286dd08271d304" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--1-k8s-coredns--66bc5c9577--kvcmr-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"e6ec260d-5b9d-4d44-82ff-ca1893bb4d69", ResourceVersion:"1090", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 5, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-1", ContainerID:"50449aadf6a3a544169f0c4151945040ee68c0ffc96d8651155559102111f052", Pod:"coredns-66bc5c9577-kvcmr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.104.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1cb2565c823", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:06:57.377138 containerd[2017]: 2025-11-08 00:06:57.305 [INFO][6242] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c8acedf9f6df83bbd56c422c7cd496bb571eb5541c78c33622286dd08271d304" Nov 8 00:06:57.377138 containerd[2017]: 2025-11-08 00:06:57.305 [INFO][6242] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="c8acedf9f6df83bbd56c422c7cd496bb571eb5541c78c33622286dd08271d304" iface="eth0" netns="" Nov 8 00:06:57.377138 containerd[2017]: 2025-11-08 00:06:57.305 [INFO][6242] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c8acedf9f6df83bbd56c422c7cd496bb571eb5541c78c33622286dd08271d304" Nov 8 00:06:57.377138 containerd[2017]: 2025-11-08 00:06:57.305 [INFO][6242] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c8acedf9f6df83bbd56c422c7cd496bb571eb5541c78c33622286dd08271d304" Nov 8 00:06:57.377138 containerd[2017]: 2025-11-08 00:06:57.348 [INFO][6249] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="c8acedf9f6df83bbd56c422c7cd496bb571eb5541c78c33622286dd08271d304" HandleID="k8s-pod-network.c8acedf9f6df83bbd56c422c7cd496bb571eb5541c78c33622286dd08271d304" Workload="ip--172--31--26--1-k8s-coredns--66bc5c9577--kvcmr-eth0" Nov 8 00:06:57.377138 containerd[2017]: 2025-11-08 00:06:57.348 [INFO][6249] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:06:57.377138 containerd[2017]: 2025-11-08 00:06:57.348 [INFO][6249] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:06:57.377138 containerd[2017]: 2025-11-08 00:06:57.362 [WARNING][6249] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="c8acedf9f6df83bbd56c422c7cd496bb571eb5541c78c33622286dd08271d304" HandleID="k8s-pod-network.c8acedf9f6df83bbd56c422c7cd496bb571eb5541c78c33622286dd08271d304" Workload="ip--172--31--26--1-k8s-coredns--66bc5c9577--kvcmr-eth0" Nov 8 00:06:57.377138 containerd[2017]: 2025-11-08 00:06:57.362 [INFO][6249] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="c8acedf9f6df83bbd56c422c7cd496bb571eb5541c78c33622286dd08271d304" HandleID="k8s-pod-network.c8acedf9f6df83bbd56c422c7cd496bb571eb5541c78c33622286dd08271d304" Workload="ip--172--31--26--1-k8s-coredns--66bc5c9577--kvcmr-eth0" Nov 8 00:06:57.377138 containerd[2017]: 2025-11-08 00:06:57.369 [INFO][6249] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:06:57.377138 containerd[2017]: 2025-11-08 00:06:57.372 [INFO][6242] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c8acedf9f6df83bbd56c422c7cd496bb571eb5541c78c33622286dd08271d304" Nov 8 00:06:57.378251 containerd[2017]: time="2025-11-08T00:06:57.378174400Z" level=info msg="TearDown network for sandbox \"c8acedf9f6df83bbd56c422c7cd496bb571eb5541c78c33622286dd08271d304\" successfully" Nov 8 00:06:57.378251 containerd[2017]: time="2025-11-08T00:06:57.378229204Z" level=info msg="StopPodSandbox for \"c8acedf9f6df83bbd56c422c7cd496bb571eb5541c78c33622286dd08271d304\" returns successfully" Nov 8 00:06:57.380063 containerd[2017]: time="2025-11-08T00:06:57.379153648Z" level=info msg="RemovePodSandbox for \"c8acedf9f6df83bbd56c422c7cd496bb571eb5541c78c33622286dd08271d304\"" Nov 8 00:06:57.380063 containerd[2017]: time="2025-11-08T00:06:57.379202644Z" level=info msg="Forcibly stopping sandbox \"c8acedf9f6df83bbd56c422c7cd496bb571eb5541c78c33622286dd08271d304\"" Nov 8 00:06:57.582880 containerd[2017]: 2025-11-08 00:06:57.482 [WARNING][6263] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c8acedf9f6df83bbd56c422c7cd496bb571eb5541c78c33622286dd08271d304" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--1-k8s-coredns--66bc5c9577--kvcmr-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"e6ec260d-5b9d-4d44-82ff-ca1893bb4d69", ResourceVersion:"1090", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 5, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-1", ContainerID:"50449aadf6a3a544169f0c4151945040ee68c0ffc96d8651155559102111f052", Pod:"coredns-66bc5c9577-kvcmr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.104.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1cb2565c823", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:06:57.582880 containerd[2017]: 2025-11-08 00:06:57.483 [INFO][6263] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c8acedf9f6df83bbd56c422c7cd496bb571eb5541c78c33622286dd08271d304" Nov 8 00:06:57.582880 containerd[2017]: 2025-11-08 00:06:57.483 [INFO][6263] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="c8acedf9f6df83bbd56c422c7cd496bb571eb5541c78c33622286dd08271d304" iface="eth0" netns="" Nov 8 00:06:57.582880 containerd[2017]: 2025-11-08 00:06:57.483 [INFO][6263] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c8acedf9f6df83bbd56c422c7cd496bb571eb5541c78c33622286dd08271d304" Nov 8 00:06:57.582880 containerd[2017]: 2025-11-08 00:06:57.483 [INFO][6263] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c8acedf9f6df83bbd56c422c7cd496bb571eb5541c78c33622286dd08271d304" Nov 8 00:06:57.582880 containerd[2017]: 2025-11-08 00:06:57.550 [INFO][6270] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="c8acedf9f6df83bbd56c422c7cd496bb571eb5541c78c33622286dd08271d304" HandleID="k8s-pod-network.c8acedf9f6df83bbd56c422c7cd496bb571eb5541c78c33622286dd08271d304" Workload="ip--172--31--26--1-k8s-coredns--66bc5c9577--kvcmr-eth0" Nov 8 00:06:57.582880 containerd[2017]: 2025-11-08 00:06:57.550 [INFO][6270] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:06:57.582880 containerd[2017]: 2025-11-08 00:06:57.550 [INFO][6270] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:06:57.582880 containerd[2017]: 2025-11-08 00:06:57.572 [WARNING][6270] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="c8acedf9f6df83bbd56c422c7cd496bb571eb5541c78c33622286dd08271d304" HandleID="k8s-pod-network.c8acedf9f6df83bbd56c422c7cd496bb571eb5541c78c33622286dd08271d304" Workload="ip--172--31--26--1-k8s-coredns--66bc5c9577--kvcmr-eth0" Nov 8 00:06:57.582880 containerd[2017]: 2025-11-08 00:06:57.572 [INFO][6270] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="c8acedf9f6df83bbd56c422c7cd496bb571eb5541c78c33622286dd08271d304" HandleID="k8s-pod-network.c8acedf9f6df83bbd56c422c7cd496bb571eb5541c78c33622286dd08271d304" Workload="ip--172--31--26--1-k8s-coredns--66bc5c9577--kvcmr-eth0" Nov 8 00:06:57.582880 containerd[2017]: 2025-11-08 00:06:57.575 [INFO][6270] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:06:57.582880 containerd[2017]: 2025-11-08 00:06:57.578 [INFO][6263] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c8acedf9f6df83bbd56c422c7cd496bb571eb5541c78c33622286dd08271d304" Nov 8 00:06:57.584964 containerd[2017]: time="2025-11-08T00:06:57.583094501Z" level=info msg="TearDown network for sandbox \"c8acedf9f6df83bbd56c422c7cd496bb571eb5541c78c33622286dd08271d304\" successfully" Nov 8 00:06:57.594153 containerd[2017]: time="2025-11-08T00:06:57.593258141Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c8acedf9f6df83bbd56c422c7cd496bb571eb5541c78c33622286dd08271d304\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 8 00:06:57.594153 containerd[2017]: time="2025-11-08T00:06:57.593350385Z" level=info msg="RemovePodSandbox \"c8acedf9f6df83bbd56c422c7cd496bb571eb5541c78c33622286dd08271d304\" returns successfully" Nov 8 00:06:57.595514 containerd[2017]: time="2025-11-08T00:06:57.594740873Z" level=info msg="StopPodSandbox for \"394ced78757edf825c2767f72549dace97716e64783ac8dafa52e2c9a89228f2\"" Nov 8 00:06:57.745172 containerd[2017]: 2025-11-08 00:06:57.673 [WARNING][6285] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="394ced78757edf825c2767f72549dace97716e64783ac8dafa52e2c9a89228f2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--1-k8s-coredns--66bc5c9577--b447x-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"049aea22-2859-4d2c-978e-0ff4ef7d540d", ResourceVersion:"1192", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 5, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-1", ContainerID:"955557cc8a2e1f3cf8594877c7459855375ee03c4dbb0386fc205464f4050277", Pod:"coredns-66bc5c9577-b447x", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.104.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia01eb682dd0", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:06:57.745172 containerd[2017]: 2025-11-08 00:06:57.675 [INFO][6285] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="394ced78757edf825c2767f72549dace97716e64783ac8dafa52e2c9a89228f2" Nov 8 00:06:57.745172 containerd[2017]: 2025-11-08 00:06:57.677 [INFO][6285] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="394ced78757edf825c2767f72549dace97716e64783ac8dafa52e2c9a89228f2" iface="eth0" netns="" Nov 8 00:06:57.745172 containerd[2017]: 2025-11-08 00:06:57.678 [INFO][6285] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="394ced78757edf825c2767f72549dace97716e64783ac8dafa52e2c9a89228f2" Nov 8 00:06:57.745172 containerd[2017]: 2025-11-08 00:06:57.678 [INFO][6285] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="394ced78757edf825c2767f72549dace97716e64783ac8dafa52e2c9a89228f2" Nov 8 00:06:57.745172 containerd[2017]: 2025-11-08 00:06:57.719 [INFO][6292] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="394ced78757edf825c2767f72549dace97716e64783ac8dafa52e2c9a89228f2" HandleID="k8s-pod-network.394ced78757edf825c2767f72549dace97716e64783ac8dafa52e2c9a89228f2" Workload="ip--172--31--26--1-k8s-coredns--66bc5c9577--b447x-eth0" Nov 8 00:06:57.745172 containerd[2017]: 2025-11-08 00:06:57.720 [INFO][6292] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:06:57.745172 containerd[2017]: 2025-11-08 00:06:57.720 [INFO][6292] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:06:57.745172 containerd[2017]: 2025-11-08 00:06:57.734 [WARNING][6292] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="394ced78757edf825c2767f72549dace97716e64783ac8dafa52e2c9a89228f2" HandleID="k8s-pod-network.394ced78757edf825c2767f72549dace97716e64783ac8dafa52e2c9a89228f2" Workload="ip--172--31--26--1-k8s-coredns--66bc5c9577--b447x-eth0" Nov 8 00:06:57.745172 containerd[2017]: 2025-11-08 00:06:57.734 [INFO][6292] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="394ced78757edf825c2767f72549dace97716e64783ac8dafa52e2c9a89228f2" HandleID="k8s-pod-network.394ced78757edf825c2767f72549dace97716e64783ac8dafa52e2c9a89228f2" Workload="ip--172--31--26--1-k8s-coredns--66bc5c9577--b447x-eth0" Nov 8 00:06:57.745172 containerd[2017]: 2025-11-08 00:06:57.739 [INFO][6292] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:06:57.745172 containerd[2017]: 2025-11-08 00:06:57.742 [INFO][6285] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="394ced78757edf825c2767f72549dace97716e64783ac8dafa52e2c9a89228f2" Nov 8 00:06:57.746613 containerd[2017]: time="2025-11-08T00:06:57.746116722Z" level=info msg="TearDown network for sandbox \"394ced78757edf825c2767f72549dace97716e64783ac8dafa52e2c9a89228f2\" successfully" Nov 8 00:06:57.746613 containerd[2017]: time="2025-11-08T00:06:57.746196138Z" level=info msg="StopPodSandbox for \"394ced78757edf825c2767f72549dace97716e64783ac8dafa52e2c9a89228f2\" returns successfully" Nov 8 00:06:57.747538 containerd[2017]: time="2025-11-08T00:06:57.747281634Z" level=info msg="RemovePodSandbox for \"394ced78757edf825c2767f72549dace97716e64783ac8dafa52e2c9a89228f2\"" Nov 8 00:06:57.747538 containerd[2017]: time="2025-11-08T00:06:57.747460362Z" level=info msg="Forcibly stopping sandbox \"394ced78757edf825c2767f72549dace97716e64783ac8dafa52e2c9a89228f2\"" Nov 8 00:06:57.902213 systemd[1]: Started sshd@20-172.31.26.1:22-139.178.89.65:35788.service - OpenSSH per-connection server daemon (139.178.89.65:35788). Nov 8 00:06:57.923055 containerd[2017]: 2025-11-08 00:06:57.813 [WARNING][6307] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="394ced78757edf825c2767f72549dace97716e64783ac8dafa52e2c9a89228f2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--1-k8s-coredns--66bc5c9577--b447x-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"049aea22-2859-4d2c-978e-0ff4ef7d540d", ResourceVersion:"1192", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 5, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-1", ContainerID:"955557cc8a2e1f3cf8594877c7459855375ee03c4dbb0386fc205464f4050277", Pod:"coredns-66bc5c9577-b447x", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.104.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia01eb682dd0", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:06:57.923055 containerd[2017]: 2025-11-08 00:06:57.813 [INFO][6307] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="394ced78757edf825c2767f72549dace97716e64783ac8dafa52e2c9a89228f2" Nov 8 00:06:57.923055 containerd[2017]: 2025-11-08 00:06:57.813 [INFO][6307] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="394ced78757edf825c2767f72549dace97716e64783ac8dafa52e2c9a89228f2" iface="eth0" netns="" Nov 8 00:06:57.923055 containerd[2017]: 2025-11-08 00:06:57.813 [INFO][6307] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="394ced78757edf825c2767f72549dace97716e64783ac8dafa52e2c9a89228f2" Nov 8 00:06:57.923055 containerd[2017]: 2025-11-08 00:06:57.814 [INFO][6307] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="394ced78757edf825c2767f72549dace97716e64783ac8dafa52e2c9a89228f2" Nov 8 00:06:57.923055 containerd[2017]: 2025-11-08 00:06:57.859 [INFO][6314] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="394ced78757edf825c2767f72549dace97716e64783ac8dafa52e2c9a89228f2" HandleID="k8s-pod-network.394ced78757edf825c2767f72549dace97716e64783ac8dafa52e2c9a89228f2" Workload="ip--172--31--26--1-k8s-coredns--66bc5c9577--b447x-eth0" Nov 8 00:06:57.923055 containerd[2017]: 2025-11-08 00:06:57.860 [INFO][6314] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:06:57.923055 containerd[2017]: 2025-11-08 00:06:57.860 [INFO][6314] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:06:57.923055 containerd[2017]: 2025-11-08 00:06:57.892 [WARNING][6314] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="394ced78757edf825c2767f72549dace97716e64783ac8dafa52e2c9a89228f2" HandleID="k8s-pod-network.394ced78757edf825c2767f72549dace97716e64783ac8dafa52e2c9a89228f2" Workload="ip--172--31--26--1-k8s-coredns--66bc5c9577--b447x-eth0" Nov 8 00:06:57.923055 containerd[2017]: 2025-11-08 00:06:57.892 [INFO][6314] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="394ced78757edf825c2767f72549dace97716e64783ac8dafa52e2c9a89228f2" HandleID="k8s-pod-network.394ced78757edf825c2767f72549dace97716e64783ac8dafa52e2c9a89228f2" Workload="ip--172--31--26--1-k8s-coredns--66bc5c9577--b447x-eth0" Nov 8 00:06:57.923055 containerd[2017]: 2025-11-08 00:06:57.907 [INFO][6314] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:06:57.923055 containerd[2017]: 2025-11-08 00:06:57.914 [INFO][6307] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="394ced78757edf825c2767f72549dace97716e64783ac8dafa52e2c9a89228f2" Nov 8 00:06:57.923055 containerd[2017]: time="2025-11-08T00:06:57.921206550Z" level=info msg="TearDown network for sandbox \"394ced78757edf825c2767f72549dace97716e64783ac8dafa52e2c9a89228f2\" successfully" Nov 8 00:06:57.931092 containerd[2017]: time="2025-11-08T00:06:57.930837150Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"394ced78757edf825c2767f72549dace97716e64783ac8dafa52e2c9a89228f2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 8 00:06:57.931092 containerd[2017]: time="2025-11-08T00:06:57.930932562Z" level=info msg="RemovePodSandbox \"394ced78757edf825c2767f72549dace97716e64783ac8dafa52e2c9a89228f2\" returns successfully" Nov 8 00:06:57.933754 containerd[2017]: time="2025-11-08T00:06:57.933360930Z" level=info msg="StopPodSandbox for \"c1553fb9e7a9bb3ca20205fc07d4d23fc2577fa777f087c3b096e1b1ed425b92\"" Nov 8 00:06:58.070552 containerd[2017]: 2025-11-08 00:06:58.005 [WARNING][6331] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c1553fb9e7a9bb3ca20205fc07d4d23fc2577fa777f087c3b096e1b1ed425b92" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--1-k8s-calico--kube--controllers--6cf595bbd4--gjrhj-eth0", GenerateName:"calico-kube-controllers-6cf595bbd4-", Namespace:"calico-system", SelfLink:"", UID:"33ee737d-9bb0-44ae-abd4-ed2fcc115154", ResourceVersion:"1404", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 5, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6cf595bbd4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-1", ContainerID:"c1a96b4f71d2a1c5d9508f63163c675c4de3a0401997baec0c2fa634cea6c94c", Pod:"calico-kube-controllers-6cf595bbd4-gjrhj", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.104.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali9f7c1863073", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:06:58.070552 containerd[2017]: 2025-11-08 00:06:58.006 [INFO][6331] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c1553fb9e7a9bb3ca20205fc07d4d23fc2577fa777f087c3b096e1b1ed425b92" Nov 8 00:06:58.070552 containerd[2017]: 2025-11-08 00:06:58.006 [INFO][6331] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c1553fb9e7a9bb3ca20205fc07d4d23fc2577fa777f087c3b096e1b1ed425b92" iface="eth0" netns="" Nov 8 00:06:58.070552 containerd[2017]: 2025-11-08 00:06:58.006 [INFO][6331] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c1553fb9e7a9bb3ca20205fc07d4d23fc2577fa777f087c3b096e1b1ed425b92" Nov 8 00:06:58.070552 containerd[2017]: 2025-11-08 00:06:58.006 [INFO][6331] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c1553fb9e7a9bb3ca20205fc07d4d23fc2577fa777f087c3b096e1b1ed425b92" Nov 8 00:06:58.070552 containerd[2017]: 2025-11-08 00:06:58.042 [INFO][6339] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="c1553fb9e7a9bb3ca20205fc07d4d23fc2577fa777f087c3b096e1b1ed425b92" HandleID="k8s-pod-network.c1553fb9e7a9bb3ca20205fc07d4d23fc2577fa777f087c3b096e1b1ed425b92" Workload="ip--172--31--26--1-k8s-calico--kube--controllers--6cf595bbd4--gjrhj-eth0" Nov 8 00:06:58.070552 containerd[2017]: 2025-11-08 00:06:58.043 [INFO][6339] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:06:58.070552 containerd[2017]: 2025-11-08 00:06:58.043 [INFO][6339] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:06:58.070552 containerd[2017]: 2025-11-08 00:06:58.061 [WARNING][6339] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c1553fb9e7a9bb3ca20205fc07d4d23fc2577fa777f087c3b096e1b1ed425b92" HandleID="k8s-pod-network.c1553fb9e7a9bb3ca20205fc07d4d23fc2577fa777f087c3b096e1b1ed425b92" Workload="ip--172--31--26--1-k8s-calico--kube--controllers--6cf595bbd4--gjrhj-eth0" Nov 8 00:06:58.070552 containerd[2017]: 2025-11-08 00:06:58.061 [INFO][6339] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="c1553fb9e7a9bb3ca20205fc07d4d23fc2577fa777f087c3b096e1b1ed425b92" HandleID="k8s-pod-network.c1553fb9e7a9bb3ca20205fc07d4d23fc2577fa777f087c3b096e1b1ed425b92" Workload="ip--172--31--26--1-k8s-calico--kube--controllers--6cf595bbd4--gjrhj-eth0" Nov 8 00:06:58.070552 containerd[2017]: 2025-11-08 00:06:58.064 [INFO][6339] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:06:58.070552 containerd[2017]: 2025-11-08 00:06:58.067 [INFO][6331] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c1553fb9e7a9bb3ca20205fc07d4d23fc2577fa777f087c3b096e1b1ed425b92" Nov 8 00:06:58.070552 containerd[2017]: time="2025-11-08T00:06:58.070502079Z" level=info msg="TearDown network for sandbox \"c1553fb9e7a9bb3ca20205fc07d4d23fc2577fa777f087c3b096e1b1ed425b92\" successfully" Nov 8 00:06:58.070552 containerd[2017]: time="2025-11-08T00:06:58.070539147Z" level=info msg="StopPodSandbox for \"c1553fb9e7a9bb3ca20205fc07d4d23fc2577fa777f087c3b096e1b1ed425b92\" returns successfully" Nov 8 00:06:58.071560 containerd[2017]: time="2025-11-08T00:06:58.071379387Z" level=info msg="RemovePodSandbox for \"c1553fb9e7a9bb3ca20205fc07d4d23fc2577fa777f087c3b096e1b1ed425b92\"" Nov 8 00:06:58.071560 containerd[2017]: time="2025-11-08T00:06:58.071427279Z" level=info msg="Forcibly stopping sandbox \"c1553fb9e7a9bb3ca20205fc07d4d23fc2577fa777f087c3b096e1b1ed425b92\"" Nov 8 00:06:58.114033 sshd[6322]: Accepted publickey for core from 139.178.89.65 port 35788 ssh2: RSA SHA256:tnEXpnDY8gLTej7GJ+T99WI4otIwvlI9IcMNDF42aqw Nov 8 00:06:58.116204 sshd[6322]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:06:58.128772 systemd-logind[1992]: New session 21 of user core. Nov 8 00:06:58.136378 systemd[1]: Started session-21.scope - Session 21 of User core. Nov 8 00:06:58.213068 containerd[2017]: 2025-11-08 00:06:58.146 [WARNING][6353] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c1553fb9e7a9bb3ca20205fc07d4d23fc2577fa777f087c3b096e1b1ed425b92" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--1-k8s-calico--kube--controllers--6cf595bbd4--gjrhj-eth0", GenerateName:"calico-kube-controllers-6cf595bbd4-", Namespace:"calico-system", SelfLink:"", UID:"33ee737d-9bb0-44ae-abd4-ed2fcc115154", ResourceVersion:"1404", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 5, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6cf595bbd4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-1", ContainerID:"c1a96b4f71d2a1c5d9508f63163c675c4de3a0401997baec0c2fa634cea6c94c", Pod:"calico-kube-controllers-6cf595bbd4-gjrhj", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.104.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali9f7c1863073", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:06:58.213068 containerd[2017]: 2025-11-08 00:06:58.148 [INFO][6353] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c1553fb9e7a9bb3ca20205fc07d4d23fc2577fa777f087c3b096e1b1ed425b92" Nov 8 00:06:58.213068 containerd[2017]: 2025-11-08 00:06:58.148 [INFO][6353] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c1553fb9e7a9bb3ca20205fc07d4d23fc2577fa777f087c3b096e1b1ed425b92" iface="eth0" netns="" Nov 8 00:06:58.213068 containerd[2017]: 2025-11-08 00:06:58.148 [INFO][6353] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c1553fb9e7a9bb3ca20205fc07d4d23fc2577fa777f087c3b096e1b1ed425b92" Nov 8 00:06:58.213068 containerd[2017]: 2025-11-08 00:06:58.148 [INFO][6353] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c1553fb9e7a9bb3ca20205fc07d4d23fc2577fa777f087c3b096e1b1ed425b92" Nov 8 00:06:58.213068 containerd[2017]: 2025-11-08 00:06:58.187 [INFO][6361] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="c1553fb9e7a9bb3ca20205fc07d4d23fc2577fa777f087c3b096e1b1ed425b92" HandleID="k8s-pod-network.c1553fb9e7a9bb3ca20205fc07d4d23fc2577fa777f087c3b096e1b1ed425b92" Workload="ip--172--31--26--1-k8s-calico--kube--controllers--6cf595bbd4--gjrhj-eth0" Nov 8 00:06:58.213068 containerd[2017]: 2025-11-08 00:06:58.187 [INFO][6361] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:06:58.213068 containerd[2017]: 2025-11-08 00:06:58.187 [INFO][6361] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:06:58.213068 containerd[2017]: 2025-11-08 00:06:58.202 [WARNING][6361] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c1553fb9e7a9bb3ca20205fc07d4d23fc2577fa777f087c3b096e1b1ed425b92" HandleID="k8s-pod-network.c1553fb9e7a9bb3ca20205fc07d4d23fc2577fa777f087c3b096e1b1ed425b92" Workload="ip--172--31--26--1-k8s-calico--kube--controllers--6cf595bbd4--gjrhj-eth0" Nov 8 00:06:58.213068 containerd[2017]: 2025-11-08 00:06:58.202 [INFO][6361] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="c1553fb9e7a9bb3ca20205fc07d4d23fc2577fa777f087c3b096e1b1ed425b92" HandleID="k8s-pod-network.c1553fb9e7a9bb3ca20205fc07d4d23fc2577fa777f087c3b096e1b1ed425b92" Workload="ip--172--31--26--1-k8s-calico--kube--controllers--6cf595bbd4--gjrhj-eth0" Nov 8 00:06:58.213068 containerd[2017]: 2025-11-08 00:06:58.205 [INFO][6361] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:06:58.213068 containerd[2017]: 2025-11-08 00:06:58.209 [INFO][6353] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c1553fb9e7a9bb3ca20205fc07d4d23fc2577fa777f087c3b096e1b1ed425b92" Nov 8 00:06:58.213068 containerd[2017]: time="2025-11-08T00:06:58.212980912Z" level=info msg="TearDown network for sandbox \"c1553fb9e7a9bb3ca20205fc07d4d23fc2577fa777f087c3b096e1b1ed425b92\" successfully" Nov 8 00:06:58.220679 containerd[2017]: time="2025-11-08T00:06:58.220615276Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c1553fb9e7a9bb3ca20205fc07d4d23fc2577fa777f087c3b096e1b1ed425b92\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 8 00:06:58.220804 containerd[2017]: time="2025-11-08T00:06:58.220743172Z" level=info msg="RemovePodSandbox \"c1553fb9e7a9bb3ca20205fc07d4d23fc2577fa777f087c3b096e1b1ed425b92\" returns successfully" Nov 8 00:06:58.397908 sshd[6322]: pam_unix(sshd:session): session closed for user core Nov 8 00:06:58.405997 systemd[1]: sshd@20-172.31.26.1:22-139.178.89.65:35788.service: Deactivated successfully. Nov 8 00:06:58.412536 systemd[1]: session-21.scope: Deactivated successfully. Nov 8 00:06:58.417008 systemd-logind[1992]: Session 21 logged out. Waiting for processes to exit. Nov 8 00:06:58.419596 systemd-logind[1992]: Removed session 21. Nov 8 00:07:01.387091 systemd[1]: run-containerd-runc-k8s.io-21c9a23501ac171d6e4478aeb632701165c66097eac9c7859c3b0f9d4bc65c05-runc.MHL7mi.mount: Deactivated successfully. Nov 8 00:07:01.696089 update_engine[1993]: I20251108 00:07:01.694495 1993 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Nov 8 00:07:01.696089 update_engine[1993]: I20251108 00:07:01.694572 1993 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Nov 8 00:07:01.696089 update_engine[1993]: I20251108 00:07:01.695076 1993 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Nov 8 00:07:01.696781 update_engine[1993]: I20251108 00:07:01.696198 1993 omaha_request_params.cc:62] Current group set to lts Nov 8 00:07:01.696836 locksmithd[2036]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Nov 8 00:07:01.697221 update_engine[1993]: I20251108 00:07:01.696874 1993 update_attempter.cc:499] Already updated boot flags. Skipping. Nov 8 00:07:01.697221 update_engine[1993]: I20251108 00:07:01.696909 1993 update_attempter.cc:643] Scheduling an action processor start. 
Nov 8 00:07:01.697221 update_engine[1993]: I20251108 00:07:01.696945 1993 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Nov 8 00:07:01.697221 update_engine[1993]: I20251108 00:07:01.697052 1993 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Nov 8 00:07:01.697221 update_engine[1993]: I20251108 00:07:01.697166 1993 omaha_request_action.cc:271] Posting an Omaha request to disabled Nov 8 00:07:01.697221 update_engine[1993]: I20251108 00:07:01.697185 1993 omaha_request_action.cc:272] Request: Nov 8 00:07:01.697221 update_engine[1993]: Nov 8 00:07:01.697221 update_engine[1993]: Nov 8 00:07:01.697221 update_engine[1993]: Nov 8 00:07:01.697221 update_engine[1993]: Nov 8 00:07:01.697221 update_engine[1993]: Nov 8 00:07:01.697221 update_engine[1993]: Nov 8 00:07:01.697221 update_engine[1993]: Nov 8 00:07:01.697221 update_engine[1993]: Nov 8 00:07:01.697221 update_engine[1993]: I20251108 00:07:01.697202 1993 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Nov 8 00:07:01.702927 update_engine[1993]: I20251108 00:07:01.702621 1993 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Nov 8 00:07:01.703367 update_engine[1993]: I20251108 00:07:01.703292 1993 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Nov 8 00:07:01.727429 update_engine[1993]: E20251108 00:07:01.727350 1993 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Nov 8 00:07:01.727572 update_engine[1993]: I20251108 00:07:01.727495 1993 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Nov 8 00:07:01.863397 kubelet[3480]: E1108 00:07:01.861912 3480 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6cf595bbd4-gjrhj" podUID="33ee737d-9bb0-44ae-abd4-ed2fcc115154" Nov 8 00:07:01.866997 kubelet[3480]: E1108 00:07:01.866792 3480 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7d67fb7489-w2h7r" podUID="964ee664-ff07-42fc-8d91-b078ca7f25c8" Nov 8 00:07:02.861889 kubelet[3480]: E1108 00:07:02.860727 3480 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-f558bfb5c-2hjdp" podUID="a0f0a15a-d068-42d7-9057-db8aa3861ce8" Nov 8 00:07:02.864410 kubelet[3480]: E1108 00:07:02.864332 3480 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-rdhgb" podUID="415c772f-4a8a-4df0-8713-cab5820f0205" Nov 8 00:07:03.441831 systemd[1]: Started sshd@21-172.31.26.1:22-139.178.89.65:35802.service - OpenSSH per-connection server daemon (139.178.89.65:35802). Nov 8 00:07:03.626060 sshd[6402]: Accepted publickey for core from 139.178.89.65 port 35802 ssh2: RSA SHA256:tnEXpnDY8gLTej7GJ+T99WI4otIwvlI9IcMNDF42aqw Nov 8 00:07:03.628351 sshd[6402]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:07:03.637315 systemd-logind[1992]: New session 22 of user core. Nov 8 00:07:03.642280 systemd[1]: Started session-22.scope - Session 22 of User core. Nov 8 00:07:03.862383 kubelet[3480]: E1108 00:07:03.862290 3480 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-f558bfb5c-cbxnw" podUID="ec16be48-232b-457b-bf3a-4db776262475" Nov 8 00:07:03.882975 sshd[6402]: pam_unix(sshd:session): session closed for user core Nov 8 00:07:03.894482 systemd[1]: sshd@21-172.31.26.1:22-139.178.89.65:35802.service: Deactivated successfully. Nov 8 00:07:03.898934 systemd[1]: session-22.scope: Deactivated successfully. Nov 8 00:07:03.901331 systemd-logind[1992]: Session 22 logged out. Waiting for processes to exit. Nov 8 00:07:03.904304 systemd-logind[1992]: Removed session 22. 
Nov 8 00:07:05.863477 kubelet[3480]: E1108 00:07:05.862874 3480 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-rkzrr" podUID="105dae3d-b44c-41c4-b31a-bd1432c68a75" Nov 8 00:07:08.935637 systemd[1]: Started sshd@22-172.31.26.1:22-139.178.89.65:50346.service - OpenSSH per-connection server daemon (139.178.89.65:50346). Nov 8 00:07:09.126767 sshd[6418]: Accepted publickey for core from 139.178.89.65 port 50346 ssh2: RSA SHA256:tnEXpnDY8gLTej7GJ+T99WI4otIwvlI9IcMNDF42aqw Nov 8 00:07:09.130525 sshd[6418]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:07:09.146800 systemd-logind[1992]: New session 23 of user core. Nov 8 00:07:09.153375 systemd[1]: Started session-23.scope - Session 23 of User core. Nov 8 00:07:09.443733 sshd[6418]: pam_unix(sshd:session): session closed for user core Nov 8 00:07:09.455168 systemd-logind[1992]: Session 23 logged out. Waiting for processes to exit. Nov 8 00:07:09.455759 systemd[1]: sshd@22-172.31.26.1:22-139.178.89.65:50346.service: Deactivated successfully. Nov 8 00:07:09.463265 systemd[1]: session-23.scope: Deactivated successfully. Nov 8 00:07:09.465660 systemd-logind[1992]: Removed session 23. Nov 8 00:07:11.692178 update_engine[1993]: I20251108 00:07:11.692065 1993 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Nov 8 00:07:11.692809 update_engine[1993]: I20251108 00:07:11.692414 1993 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Nov 8 00:07:11.692809 update_engine[1993]: I20251108 00:07:11.692730 1993 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Nov 8 00:07:11.695239 update_engine[1993]: E20251108 00:07:11.695133 1993 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Nov 8 00:07:11.695521 update_engine[1993]: I20251108 00:07:11.695253 1993 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Nov 8 00:07:13.860553 kubelet[3480]: E1108 00:07:13.860430 3480 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-f558bfb5c-2hjdp" podUID="a0f0a15a-d068-42d7-9057-db8aa3861ce8" Nov 8 00:07:14.484603 systemd[1]: Started sshd@23-172.31.26.1:22-139.178.89.65:50350.service - OpenSSH per-connection server daemon (139.178.89.65:50350). Nov 8 00:07:14.680102 sshd[6432]: Accepted publickey for core from 139.178.89.65 port 50350 ssh2: RSA SHA256:tnEXpnDY8gLTej7GJ+T99WI4otIwvlI9IcMNDF42aqw Nov 8 00:07:14.682333 sshd[6432]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:07:14.693005 systemd-logind[1992]: New session 24 of user core. Nov 8 00:07:14.701535 systemd[1]: Started session-24.scope - Session 24 of User core. Nov 8 00:07:14.864637 kubelet[3480]: E1108 00:07:14.864540 3480 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6cf595bbd4-gjrhj" podUID="33ee737d-9bb0-44ae-abd4-ed2fcc115154" Nov 8 00:07:14.989425 sshd[6432]: pam_unix(sshd:session): session closed for user core Nov 8 00:07:14.997851 systemd[1]: sshd@23-172.31.26.1:22-139.178.89.65:50350.service: Deactivated successfully. Nov 8 00:07:15.006426 systemd[1]: session-24.scope: Deactivated successfully. Nov 8 00:07:15.009792 systemd-logind[1992]: Session 24 logged out. Waiting for processes to exit. Nov 8 00:07:15.012562 systemd-logind[1992]: Removed session 24. 
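The update_engine noise is self-inflicted and benign: on Flatcar, setting SERVER=disabled in /etc/flatcar/update.conf points the Omaha client at the literal hostname "disabled", so every transfer dies in DNS ("Could not resolve host: disabled") and the fetcher walks its retry ladder — retry 1, 2, 3 at roughly ten-second spacing here — before declaring the transfer failed. A compact sketch of that bounded retry loop (the retry count and delay are read off the timestamps above, not update_engine's internal constants):

```go
package main

import (
	"fmt"
	"net/http"
	"time"
)

// fetchWithRetries mimics the bounded retry ladder seen in
// libcurl_http_fetcher: attempt a transfer, log "No HTTP response, retry N"
// on failure, and give up after maxRetries attempts.
func fetchWithRetries(url string, maxRetries int, delay time.Duration) error {
	var lastErr error
	for attempt := 1; attempt <= maxRetries; attempt++ {
		fmt.Println("Starting/Resuming transfer")
		resp, err := http.Get(url)
		if err == nil {
			resp.Body.Close()
			return nil
		}
		lastErr = err
		fmt.Printf("No HTTP response, retry %d\n", attempt)
		time.Sleep(delay)
	}
	return fmt.Errorf("transfer resulted in an error: %w", lastErr)
}

func main() {
	// "disabled" never resolves, so this always exhausts its retries.
	if err := fetchWithRetries("https://disabled/v1/update/", 3, 10*time.Second); err != nil {
		fmt.Println(err)
	}
}
```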
Nov 8 00:07:15.865150 kubelet[3480]: E1108 00:07:15.864967 3480 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7d67fb7489-w2h7r" podUID="964ee664-ff07-42fc-8d91-b078ca7f25c8" Nov 8 00:07:17.861985 kubelet[3480]: E1108 00:07:17.861806 3480 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-rdhgb" podUID="415c772f-4a8a-4df0-8713-cab5820f0205" Nov 8 00:07:18.862295 kubelet[3480]: E1108 00:07:18.862210 3480 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-rkzrr" podUID="105dae3d-b44c-41c4-b31a-bd1432c68a75" Nov 8 00:07:18.862961 kubelet[3480]: E1108 00:07:18.862454 3480 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-f558bfb5c-cbxnw" podUID="ec16be48-232b-457b-bf3a-4db776262475" Nov 8 00:07:20.032216 systemd[1]: Started sshd@24-172.31.26.1:22-139.178.89.65:42838.service - OpenSSH per-connection server daemon (139.178.89.65:42838). 
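The same pull errors resurface every few minutes rather than continuously because kubelet puts each failing image on a doubling backoff — 10s base and a 5-minute ceiling by default — which is exactly the "Back-off pulling image" wording in the messages above. A sketch of that schedule (defaults only; kubelet's per-image bookkeeping is omitted):

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Assumed defaults: 10s initial backoff doubling to a 5-minute ceiling.
	backoff, ceiling := 10*time.Second, 5*time.Minute
	for attempt := 1; attempt <= 8; attempt++ {
		fmt.Printf("pull attempt %d failed; ImagePullBackOff for %v\n", attempt, backoff)
		backoff *= 2
		if backoff > ceiling {
			backoff = ceiling
		}
	}
}
```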
Nov 8 00:07:20.223048 sshd[6447]: Accepted publickey for core from 139.178.89.65 port 42838 ssh2: RSA SHA256:tnEXpnDY8gLTej7GJ+T99WI4otIwvlI9IcMNDF42aqw Nov 8 00:07:20.224802 sshd[6447]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:07:20.238543 systemd-logind[1992]: New session 25 of user core. Nov 8 00:07:20.245357 systemd[1]: Started session-25.scope - Session 25 of User core. Nov 8 00:07:20.542369 sshd[6447]: pam_unix(sshd:session): session closed for user core Nov 8 00:07:20.553417 systemd-logind[1992]: Session 25 logged out. Waiting for processes to exit. Nov 8 00:07:20.554459 systemd[1]: sshd@24-172.31.26.1:22-139.178.89.65:42838.service: Deactivated successfully. Nov 8 00:07:20.563738 systemd[1]: session-25.scope: Deactivated successfully. Nov 8 00:07:20.575388 systemd-logind[1992]: Removed session 25. Nov 8 00:07:21.692329 update_engine[1993]: I20251108 00:07:21.691565 1993 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Nov 8 00:07:21.692329 update_engine[1993]: I20251108 00:07:21.691917 1993 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Nov 8 00:07:21.692329 update_engine[1993]: I20251108 00:07:21.692256 1993 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Nov 8 00:07:21.693526 update_engine[1993]: E20251108 00:07:21.693473 1993 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Nov 8 00:07:21.693753 update_engine[1993]: I20251108 00:07:21.693702 1993 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Nov 8 00:07:25.596225 systemd[1]: Started sshd@25-172.31.26.1:22-139.178.89.65:42852.service - OpenSSH per-connection server daemon (139.178.89.65:42852). Nov 8 00:07:25.804076 sshd[6466]: Accepted publickey for core from 139.178.89.65 port 42852 ssh2: RSA SHA256:tnEXpnDY8gLTej7GJ+T99WI4otIwvlI9IcMNDF42aqw Nov 8 00:07:25.809582 sshd[6466]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:07:25.823304 systemd-logind[1992]: New session 26 of user core. Nov 8 00:07:25.828312 systemd[1]: Started session-26.scope - Session 26 of User core. Nov 8 00:07:26.149043 sshd[6466]: pam_unix(sshd:session): session closed for user core Nov 8 00:07:26.159880 systemd-logind[1992]: Session 26 logged out. Waiting for processes to exit. Nov 8 00:07:26.160893 systemd[1]: sshd@25-172.31.26.1:22-139.178.89.65:42852.service: Deactivated successfully. Nov 8 00:07:26.167953 systemd[1]: session-26.scope: Deactivated successfully. Nov 8 00:07:26.170721 systemd-logind[1992]: Removed session 26. 
Nov 8 00:07:27.871634 kubelet[3480]: E1108 00:07:27.871537 3480 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-f558bfb5c-2hjdp" podUID="a0f0a15a-d068-42d7-9057-db8aa3861ce8" Nov 8 00:07:27.878615 kubelet[3480]: E1108 00:07:27.874998 3480 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7d67fb7489-w2h7r" podUID="964ee664-ff07-42fc-8d91-b078ca7f25c8" Nov 8 00:07:28.861670 kubelet[3480]: E1108 00:07:28.861474 3480 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6cf595bbd4-gjrhj" podUID="33ee737d-9bb0-44ae-abd4-ed2fcc115154" Nov 8 00:07:29.860801 kubelet[3480]: E1108 00:07:29.860734 3480 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-rdhgb" podUID="415c772f-4a8a-4df0-8713-cab5820f0205" Nov 8 00:07:31.693076 update_engine[1993]: I20251108 00:07:31.692703 1993 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Nov 8 00:07:31.693696 update_engine[1993]: I20251108 00:07:31.693142 1993 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Nov 8 00:07:31.693696 update_engine[1993]: I20251108 00:07:31.693444 1993 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Nov 8 00:07:31.694163 update_engine[1993]: E20251108 00:07:31.694103 1993 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Nov 8 00:07:31.694269 update_engine[1993]: I20251108 00:07:31.694193 1993 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Nov 8 00:07:31.694269 update_engine[1993]: I20251108 00:07:31.694216 1993 omaha_request_action.cc:617] Omaha request response:
Nov 8 00:07:31.694383 update_engine[1993]: E20251108 00:07:31.694329 1993 omaha_request_action.cc:636] Omaha request network transfer failed.
Nov 8 00:07:31.694383 update_engine[1993]: I20251108 00:07:31.694366 1993 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing.
Nov 8 00:07:31.694485 update_engine[1993]: I20251108 00:07:31.694384 1993 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Nov 8 00:07:31.694485 update_engine[1993]: I20251108 00:07:31.694401 1993 update_attempter.cc:306] Processing Done.
Nov 8 00:07:31.694485 update_engine[1993]: E20251108 00:07:31.694427 1993 update_attempter.cc:619] Update failed.
Nov 8 00:07:31.694485 update_engine[1993]: I20251108 00:07:31.694443 1993 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse
Nov 8 00:07:31.694485 update_engine[1993]: I20251108 00:07:31.694458 1993 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse)
Nov 8 00:07:31.694485 update_engine[1993]: I20251108 00:07:31.694475 1993 payload_state.cc:103] Ignoring failures until we get a valid Omaha response.
Nov 8 00:07:31.694776 update_engine[1993]: I20251108 00:07:31.694586 1993 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Nov 8 00:07:31.694776 update_engine[1993]: I20251108 00:07:31.694624 1993 omaha_request_action.cc:271] Posting an Omaha request to disabled
Nov 8 00:07:31.694776 update_engine[1993]: I20251108 00:07:31.694643 1993 omaha_request_action.cc:272] Request:
Nov 8 00:07:31.694776 update_engine[1993]:
Nov 8 00:07:31.694776 update_engine[1993]:
Nov 8 00:07:31.694776 update_engine[1993]:
Nov 8 00:07:31.694776 update_engine[1993]:
Nov 8 00:07:31.694776 update_engine[1993]:
Nov 8 00:07:31.694776 update_engine[1993]:
Nov 8 00:07:31.694776 update_engine[1993]: I20251108 00:07:31.694659 1993 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Nov 8 00:07:31.695381 update_engine[1993]: I20251108 00:07:31.694911 1993 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Nov 8 00:07:31.695381 update_engine[1993]: I20251108 00:07:31.695206 1993 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Nov 8 00:07:31.695832 locksmithd[2036]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0
Nov 8 00:07:31.696379 update_engine[1993]: E20251108 00:07:31.695904 1993 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Nov 8 00:07:31.696379 update_engine[1993]: I20251108 00:07:31.695983 1993 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Nov 8 00:07:31.696379 update_engine[1993]: I20251108 00:07:31.696002 1993 omaha_request_action.cc:617] Omaha request response:
Nov 8 00:07:31.696379 update_engine[1993]: I20251108 00:07:31.696059 1993 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Nov 8 00:07:31.696379 update_engine[1993]: I20251108 00:07:31.696079 1993 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Nov 8 00:07:31.696379 update_engine[1993]: I20251108 00:07:31.696094 1993 update_attempter.cc:306] Processing Done.
Nov 8 00:07:31.696379 update_engine[1993]: I20251108 00:07:31.696110 1993 update_attempter.cc:310] Error event sent.
Nov 8 00:07:31.696379 update_engine[1993]: I20251108 00:07:31.696132 1993 update_check_scheduler.cc:74] Next update check in 41m38s
Nov 8 00:07:31.697140 locksmithd[2036]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0
Nov 8 00:07:32.861929 containerd[2017]: time="2025-11-08T00:07:32.861615868Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\""
Nov 8 00:07:33.186356 containerd[2017]: time="2025-11-08T00:07:33.186084866Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Nov 8 00:07:33.188830 containerd[2017]: time="2025-11-08T00:07:33.188690366Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found"
Nov 8 00:07:33.188830 containerd[2017]: time="2025-11-08T00:07:33.188766230Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69"
Nov 8 00:07:33.189142 kubelet[3480]: E1108 00:07:33.188960 3480 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Nov 8 00:07:33.189142 kubelet[3480]: E1108 00:07:33.189040 3480 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Nov 8 00:07:33.190254 kubelet[3480]: E1108 00:07:33.189297 3480 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-rkzrr_calico-system(105dae3d-b44c-41c4-b31a-bd1432c68a75): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError"
Nov 8 00:07:33.190354 containerd[2017]: time="2025-11-08T00:07:33.189779054Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Nov 8 00:07:33.467688 containerd[2017]: time="2025-11-08T00:07:33.467148399Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Nov 8 00:07:33.469502 containerd[2017]: time="2025-11-08T00:07:33.469362159Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Nov 8 00:07:33.469502 containerd[2017]: time="2025-11-08T00:07:33.469409607Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77"
Nov 8 00:07:33.469727 kubelet[3480]: E1108 00:07:33.469658 3480 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Nov 8 00:07:33.469727 kubelet[3480]: E1108 00:07:33.469714 3480 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Nov 8 00:07:33.470383 kubelet[3480]: E1108 00:07:33.469973 3480 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-f558bfb5c-cbxnw_calico-apiserver(ec16be48-232b-457b-bf3a-4db776262475): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Nov 8 00:07:33.470383 kubelet[3480]: E1108 00:07:33.470083 3480 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-f558bfb5c-cbxnw" podUID="ec16be48-232b-457b-bf3a-4db776262475"
Nov 8 00:07:33.470620 containerd[2017]: time="2025-11-08T00:07:33.470173899Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\""
Nov 8 00:07:33.732411 containerd[2017]: time="2025-11-08T00:07:33.732238576Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Nov 8 00:07:33.734607 containerd[2017]: time="2025-11-08T00:07:33.734461072Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found"
Nov 8 00:07:33.734607 containerd[2017]: time="2025-11-08T00:07:33.734498332Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93"
Nov 8 00:07:33.735328 kubelet[3480]: E1108 00:07:33.734978 3480 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Nov 8 00:07:33.735328 kubelet[3480]: E1108 00:07:33.735066 3480 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Nov 8 00:07:33.735328 kubelet[3480]: E1108 00:07:33.735196 3480 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-rkzrr_calico-system(105dae3d-b44c-41c4-b31a-bd1432c68a75): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError"
Nov 8 00:07:33.735582 kubelet[3480]: E1108 00:07:33.735270 3480 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-rkzrr" podUID="105dae3d-b44c-41c4-b31a-bd1432c68a75"
Nov 8 00:07:39.694124 kubelet[3480]: E1108 00:07:39.693794 3480 controller.go:195] "Failed to update lease" err="Put \"https://172.31.26.1:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-26-1?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Nov 8 00:07:39.863683 containerd[2017]: time="2025-11-08T00:07:39.861933407Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Nov 8 00:07:40.142162 containerd[2017]: time="2025-11-08T00:07:40.142096820Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Nov 8 00:07:40.144522 containerd[2017]: time="2025-11-08T00:07:40.144453488Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Nov 8 00:07:40.144720 containerd[2017]: time="2025-11-08T00:07:40.144595400Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77"
Nov 8 00:07:40.146513 kubelet[3480]: E1108 00:07:40.146210 3480 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Nov 8 00:07:40.146513 kubelet[3480]: E1108 00:07:40.146282 3480 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Nov 8 00:07:40.146513 kubelet[3480]: E1108 00:07:40.146397 3480 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-f558bfb5c-2hjdp_calico-apiserver(a0f0a15a-d068-42d7-9057-db8aa3861ce8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Nov 8 00:07:40.146513 kubelet[3480]: E1108 00:07:40.146451 3480 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-f558bfb5c-2hjdp" podUID="a0f0a15a-d068-42d7-9057-db8aa3861ce8"
Nov 8 00:07:40.150831 systemd[1]: cri-containerd-58dce9ffbf10092e92ff7ae8a36d9c5335b067f0d45afbf38e57500972679f13.scope: Deactivated successfully.
Nov 8 00:07:40.151350 systemd[1]: cri-containerd-58dce9ffbf10092e92ff7ae8a36d9c5335b067f0d45afbf38e57500972679f13.scope: Consumed 6.752s CPU time, 17.9M memory peak, 0B memory swap peak.
Nov 8 00:07:40.194753 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-58dce9ffbf10092e92ff7ae8a36d9c5335b067f0d45afbf38e57500972679f13-rootfs.mount: Deactivated successfully.
Nov 8 00:07:40.213756 containerd[2017]: time="2025-11-08T00:07:40.213335240Z" level=info msg="shim disconnected" id=58dce9ffbf10092e92ff7ae8a36d9c5335b067f0d45afbf38e57500972679f13 namespace=k8s.io
Nov 8 00:07:40.213756 containerd[2017]: time="2025-11-08T00:07:40.213550748Z" level=warning msg="cleaning up after shim disconnected" id=58dce9ffbf10092e92ff7ae8a36d9c5335b067f0d45afbf38e57500972679f13 namespace=k8s.io
Nov 8 00:07:40.213756 containerd[2017]: time="2025-11-08T00:07:40.213596048Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Nov 8 00:07:40.849890 kubelet[3480]: I1108 00:07:40.849732 3480 scope.go:117] "RemoveContainer" containerID="58dce9ffbf10092e92ff7ae8a36d9c5335b067f0d45afbf38e57500972679f13"
Nov 8 00:07:40.856353 containerd[2017]: time="2025-11-08T00:07:40.856295568Z" level=info msg="CreateContainer within sandbox \"23a686dd900185ad8eedb09ebbed88856a5eeb52752fa0976393417798161645\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Nov 8 00:07:40.884690 containerd[2017]: time="2025-11-08T00:07:40.884625300Z" level=info msg="CreateContainer within sandbox \"23a686dd900185ad8eedb09ebbed88856a5eeb52752fa0976393417798161645\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"297eb80789e355bbb5f76c4151937a05e6703a9b3fad7d055835b5099b7556d7\""
Nov 8 00:07:40.885470 containerd[2017]: time="2025-11-08T00:07:40.885420768Z" level=info msg="StartContainer for \"297eb80789e355bbb5f76c4151937a05e6703a9b3fad7d055835b5099b7556d7\""
Nov 8 00:07:40.951413 systemd[1]: Started cri-containerd-297eb80789e355bbb5f76c4151937a05e6703a9b3fad7d055835b5099b7556d7.scope - libcontainer container 297eb80789e355bbb5f76c4151937a05e6703a9b3fad7d055835b5099b7556d7.
Nov 8 00:07:41.021469 containerd[2017]: time="2025-11-08T00:07:41.021405824Z" level=info msg="StartContainer for \"297eb80789e355bbb5f76c4151937a05e6703a9b3fad7d055835b5099b7556d7\" returns successfully"
Nov 8 00:07:41.057749 systemd[1]: cri-containerd-63a425e28bdc5f50ff3304238244bbdd478f8c342c6ae3ac8858ed4f1d40a2c3.scope: Deactivated successfully.
Nov 8 00:07:41.060352 systemd[1]: cri-containerd-63a425e28bdc5f50ff3304238244bbdd478f8c342c6ae3ac8858ed4f1d40a2c3.scope: Consumed 29.933s CPU time.
Nov 8 00:07:41.130126 containerd[2017]: time="2025-11-08T00:07:41.129370317Z" level=info msg="shim disconnected" id=63a425e28bdc5f50ff3304238244bbdd478f8c342c6ae3ac8858ed4f1d40a2c3 namespace=k8s.io
Nov 8 00:07:41.130758 containerd[2017]: time="2025-11-08T00:07:41.130425105Z" level=warning msg="cleaning up after shim disconnected" id=63a425e28bdc5f50ff3304238244bbdd478f8c342c6ae3ac8858ed4f1d40a2c3 namespace=k8s.io
Nov 8 00:07:41.130758 containerd[2017]: time="2025-11-08T00:07:41.130514169Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Nov 8 00:07:41.193562 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-63a425e28bdc5f50ff3304238244bbdd478f8c342c6ae3ac8858ed4f1d40a2c3-rootfs.mount: Deactivated successfully.
Nov 8 00:07:41.855706 kubelet[3480]: I1108 00:07:41.855655 3480 scope.go:117] "RemoveContainer" containerID="63a425e28bdc5f50ff3304238244bbdd478f8c342c6ae3ac8858ed4f1d40a2c3"
Nov 8 00:07:41.860414 containerd[2017]: time="2025-11-08T00:07:41.859957549Z" level=info msg="CreateContainer within sandbox \"c3c2d44d17786017114d15be7cd9fb702be9f5dbb660bf8527539333bc83ffa1\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}"
Nov 8 00:07:41.894035 containerd[2017]: time="2025-11-08T00:07:41.893962609Z" level=info msg="CreateContainer within sandbox \"c3c2d44d17786017114d15be7cd9fb702be9f5dbb660bf8527539333bc83ffa1\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"16f11e1ef0f494b51fd6959960c474ebd36ba25fe8340508a726690886166afd\""
Nov 8 00:07:41.896036 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2401438.mount: Deactivated successfully.
Nov 8 00:07:41.898868 containerd[2017]: time="2025-11-08T00:07:41.896307241Z" level=info msg="StartContainer for \"16f11e1ef0f494b51fd6959960c474ebd36ba25fe8340508a726690886166afd\""
Nov 8 00:07:41.959104 systemd[1]: Started cri-containerd-16f11e1ef0f494b51fd6959960c474ebd36ba25fe8340508a726690886166afd.scope - libcontainer container 16f11e1ef0f494b51fd6959960c474ebd36ba25fe8340508a726690886166afd.
Nov 8 00:07:42.038540 containerd[2017]: time="2025-11-08T00:07:42.038487502Z" level=info msg="StartContainer for \"16f11e1ef0f494b51fd6959960c474ebd36ba25fe8340508a726690886166afd\" returns successfully"
Nov 8 00:07:42.861779 containerd[2017]: time="2025-11-08T00:07:42.861378362Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\""
Nov 8 00:07:43.152968 containerd[2017]: time="2025-11-08T00:07:43.152775803Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Nov 8 00:07:43.157105 containerd[2017]: time="2025-11-08T00:07:43.156194495Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found"
Nov 8 00:07:43.157105 containerd[2017]: time="2025-11-08T00:07:43.156269855Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85"
Nov 8 00:07:43.157354 kubelet[3480]: E1108 00:07:43.156647 3480 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Nov 8 00:07:43.157354 kubelet[3480]: E1108 00:07:43.156718 3480 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Nov 8 00:07:43.157354 kubelet[3480]: E1108 00:07:43.156953 3480 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-6cf595bbd4-gjrhj_calico-system(33ee737d-9bb0-44ae-abd4-ed2fcc115154): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError"
Nov 8 00:07:43.157354 kubelet[3480]: E1108 00:07:43.157004 3480 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6cf595bbd4-gjrhj" podUID="33ee737d-9bb0-44ae-abd4-ed2fcc115154"
Nov 8 00:07:43.160035 containerd[2017]: time="2025-11-08T00:07:43.159598367Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\""
Nov 8 00:07:43.454519 containerd[2017]: time="2025-11-08T00:07:43.453716953Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Nov 8 00:07:43.456088 containerd[2017]: time="2025-11-08T00:07:43.455897665Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found"
Nov 8 00:07:43.457218 containerd[2017]: time="2025-11-08T00:07:43.456050857Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73"
Nov 8 00:07:43.457864 kubelet[3480]: E1108 00:07:43.457510 3480 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4"
Nov 8 00:07:43.457864 kubelet[3480]: E1108 00:07:43.457571 3480 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4"
Nov 8 00:07:43.457864 kubelet[3480]: E1108 00:07:43.457670 3480 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-7d67fb7489-w2h7r_calico-system(964ee664-ff07-42fc-8d91-b078ca7f25c8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError"
Nov 8 00:07:43.459959 containerd[2017]: time="2025-11-08T00:07:43.459652729Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\""
Nov 8 00:07:43.737210 containerd[2017]: time="2025-11-08T00:07:43.736864982Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Nov 8 00:07:43.740060 containerd[2017]: time="2025-11-08T00:07:43.739779338Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found"
Nov 8 00:07:43.740060 containerd[2017]: time="2025-11-08T00:07:43.739935482Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85"
Nov 8 00:07:43.740804 kubelet[3480]: E1108 00:07:43.740471 3480 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4"
Nov 8 00:07:43.740804 kubelet[3480]: E1108 00:07:43.740537 3480 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4"
Nov 8 00:07:43.740804 kubelet[3480]: E1108 00:07:43.740638 3480 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-7d67fb7489-w2h7r_calico-system(964ee664-ff07-42fc-8d91-b078ca7f25c8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError"
Nov 8 00:07:43.741010 kubelet[3480]: E1108 00:07:43.740702 3480 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7d67fb7489-w2h7r" podUID="964ee664-ff07-42fc-8d91-b078ca7f25c8"
Nov 8 00:07:44.861531 containerd[2017]: time="2025-11-08T00:07:44.861404284Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\""
Nov 8 00:07:44.863369 kubelet[3480]: E1108 00:07:44.862581 3480 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-f558bfb5c-cbxnw" podUID="ec16be48-232b-457b-bf3a-4db776262475"
Nov 8 00:07:45.163346 containerd[2017]: time="2025-11-08T00:07:45.162515749Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Nov 8 00:07:45.164874 containerd[2017]: time="2025-11-08T00:07:45.164769133Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found"
Nov 8 00:07:45.165212 containerd[2017]: time="2025-11-08T00:07:45.165050257Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77"
Nov 8 00:07:45.165815 kubelet[3480]: E1108 00:07:45.165549 3480 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Nov 8 00:07:45.165815 kubelet[3480]: E1108 00:07:45.165609 3480 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Nov 8 00:07:45.165815 kubelet[3480]: E1108 00:07:45.165708 3480 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-rdhgb_calico-system(415c772f-4a8a-4df0-8713-cab5820f0205): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError"
Nov 8 00:07:45.165815 kubelet[3480]: E1108 00:07:45.165755 3480 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-rdhgb" podUID="415c772f-4a8a-4df0-8713-cab5820f0205"
Nov 8 00:07:45.854936 systemd[1]: cri-containerd-96fb357ff6aada544696a79c0ddd477247eeb085ab5aed71508ab4503b2d12ac.scope: Deactivated successfully.
Nov 8 00:07:45.855915 systemd[1]: cri-containerd-96fb357ff6aada544696a79c0ddd477247eeb085ab5aed71508ab4503b2d12ac.scope: Consumed 6.148s CPU time, 16.1M memory peak, 0B memory swap peak.
Nov 8 00:07:45.933893 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-96fb357ff6aada544696a79c0ddd477247eeb085ab5aed71508ab4503b2d12ac-rootfs.mount: Deactivated successfully.
Nov 8 00:07:45.950405 containerd[2017]: time="2025-11-08T00:07:45.950253461Z" level=info msg="shim disconnected" id=96fb357ff6aada544696a79c0ddd477247eeb085ab5aed71508ab4503b2d12ac namespace=k8s.io
Nov 8 00:07:45.950405 containerd[2017]: time="2025-11-08T00:07:45.950347577Z" level=warning msg="cleaning up after shim disconnected" id=96fb357ff6aada544696a79c0ddd477247eeb085ab5aed71508ab4503b2d12ac namespace=k8s.io
Nov 8 00:07:45.951515 containerd[2017]: time="2025-11-08T00:07:45.950368841Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Nov 8 00:07:46.894743 kubelet[3480]: I1108 00:07:46.894671 3480 scope.go:117] "RemoveContainer" containerID="96fb357ff6aada544696a79c0ddd477247eeb085ab5aed71508ab4503b2d12ac"
Nov 8 00:07:46.900597 containerd[2017]: time="2025-11-08T00:07:46.899279862Z" level=info msg="CreateContainer within sandbox \"9d6e383b0f47bf72abc7c1d175b043077b52552c1f6f161ff250a5eb925207c8\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Nov 8 00:07:46.929051 containerd[2017]: time="2025-11-08T00:07:46.927754374Z" level=info msg="CreateContainer within sandbox \"9d6e383b0f47bf72abc7c1d175b043077b52552c1f6f161ff250a5eb925207c8\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"43b2ae131abcfb7f4dc2f3c5755affcc185e13f8391226e4a23f448be37ca713\""
Nov 8 00:07:46.930066 containerd[2017]: time="2025-11-08T00:07:46.929950566Z" level=info msg="StartContainer for \"43b2ae131abcfb7f4dc2f3c5755affcc185e13f8391226e4a23f448be37ca713\""
Nov 8 00:07:46.991357 systemd[1]: Started cri-containerd-43b2ae131abcfb7f4dc2f3c5755affcc185e13f8391226e4a23f448be37ca713.scope - libcontainer container 43b2ae131abcfb7f4dc2f3c5755affcc185e13f8391226e4a23f448be37ca713.
Nov 8 00:07:47.065847 containerd[2017]: time="2025-11-08T00:07:47.065651487Z" level=info msg="StartContainer for \"43b2ae131abcfb7f4dc2f3c5755affcc185e13f8391226e4a23f448be37ca713\" returns successfully"
Nov 8 00:07:48.862434 kubelet[3480]: E1108 00:07:48.862357 3480 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-rkzrr" podUID="105dae3d-b44c-41c4-b31a-bd1432c68a75"
Nov 8 00:07:49.695687 kubelet[3480]: E1108 00:07:49.695155 3480 controller.go:195] "Failed to update lease" err="Put \"https://172.31.26.1:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-26-1?timeout=10s\": context deadline exceeded"