Nov 4 04:19:28.093982 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083]
Nov 4 04:19:28.094025 kernel: Linux version 6.12.54-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT Tue Nov 4 03:00:17 -00 2025
Nov 4 04:19:28.094049 kernel: KASLR disabled due to lack of seed
Nov 4 04:19:28.094066 kernel: efi: EFI v2.7 by EDK II
Nov 4 04:19:28.094081 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7a731a98 MEMRESERVE=0x78557598
Nov 4 04:19:28.094097 kernel: secureboot: Secure boot disabled
Nov 4 04:19:28.094114 kernel: ACPI: Early table checksum verification disabled
Nov 4 04:19:28.094129 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON)
Nov 4 04:19:28.094145 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013)
Nov 4 04:19:28.094164 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001)
Nov 4 04:19:28.094180 kernel: ACPI: DSDT 0x0000000078640000 00159D (v02 AMAZON AMZNDSDT 00000001 INTL 20160527)
Nov 4 04:19:28.094195 kernel: ACPI: FACS 0x0000000078630000 000040
Nov 4 04:19:28.094211 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Nov 4 04:19:28.094227 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001)
Nov 4 04:19:28.094249 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001)
Nov 4 04:19:28.094265 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001)
Nov 4 04:19:28.094282 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Nov 4 04:19:28.094298 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001)
Nov 4 04:19:28.094315 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001)
Nov 4 04:19:28.094331 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200
Nov 4 04:19:28.094348 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200')
Nov 4 04:19:28.094365 kernel: printk: legacy bootconsole [uart0] enabled
Nov 4 04:19:28.094381 kernel: ACPI: Use ACPI SPCR as default console: No
Nov 4 04:19:28.094398 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff]
Nov 4 04:19:28.094448 kernel: NODE_DATA(0) allocated [mem 0x4b584da00-0x4b5854fff]
Nov 4 04:19:28.094466 kernel: Zone ranges:
Nov 4 04:19:28.094483 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Nov 4 04:19:28.094500 kernel: DMA32 empty
Nov 4 04:19:28.094517 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff]
Nov 4 04:19:28.094534 kernel: Device empty
Nov 4 04:19:28.094550 kernel: Movable zone start for each node
Nov 4 04:19:28.094566 kernel: Early memory node ranges
Nov 4 04:19:28.094583 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff]
Nov 4 04:19:28.094600 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff]
Nov 4 04:19:28.094616 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff]
Nov 4 04:19:28.094632 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff]
Nov 4 04:19:28.094652 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff]
Nov 4 04:19:28.094668 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff]
Nov 4 04:19:28.094684 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff]
Nov 4 04:19:28.094701 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff]
Nov 4 04:19:28.094724 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff]
Nov 4 04:19:28.094745 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges
Nov 4 04:19:28.094763 kernel: cma: Reserved 16 MiB at 0x000000007f000000 on node -1
Nov 4 04:19:28.094780 kernel: psci: probing for conduit method from ACPI.
Nov 4 04:19:28.094797 kernel: psci: PSCIv1.0 detected in firmware.
Nov 4 04:19:28.094815 kernel: psci: Using standard PSCI v0.2 function IDs
Nov 4 04:19:28.094832 kernel: psci: Trusted OS migration not required
Nov 4 04:19:28.094849 kernel: psci: SMC Calling Convention v1.1
Nov 4 04:19:28.094867 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000001)
Nov 4 04:19:28.094884 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168
Nov 4 04:19:28.094905 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096
Nov 4 04:19:28.094923 kernel: pcpu-alloc: [0] 0 [0] 1
Nov 4 04:19:28.094940 kernel: Detected PIPT I-cache on CPU0
Nov 4 04:19:28.094957 kernel: CPU features: detected: GIC system register CPU interface
Nov 4 04:19:28.094974 kernel: CPU features: detected: Spectre-v2
Nov 4 04:19:28.094992 kernel: CPU features: detected: Spectre-v3a
Nov 4 04:19:28.095009 kernel: CPU features: detected: Spectre-BHB
Nov 4 04:19:28.095026 kernel: CPU features: detected: ARM erratum 1742098
Nov 4 04:19:28.095044 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923
Nov 4 04:19:28.095062 kernel: alternatives: applying boot alternatives
Nov 4 04:19:28.095081 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=184500f7d2eb6ab997bc068a700bccfd199e25e814087e9e73479b28edc9aa9c
Nov 4 04:19:28.095103 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Nov 4 04:19:28.095121 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Nov 4 04:19:28.095138 kernel: Fallback order for Node 0: 0
Nov 4 04:19:28.095156 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1007616
Nov 4 04:19:28.095173 kernel: Policy zone: Normal
Nov 4 04:19:28.095190 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Nov 4 04:19:28.095208 kernel: software IO TLB: area num 2.
Nov 4 04:19:28.095226 kernel: software IO TLB: mapped [mem 0x000000006fa00000-0x0000000073a00000] (64MB)
Nov 4 04:19:28.095243 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Nov 4 04:19:28.095260 kernel: rcu: Preemptible hierarchical RCU implementation.
Nov 4 04:19:28.095282 kernel: rcu: RCU event tracing is enabled.
Nov 4 04:19:28.095300 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Nov 4 04:19:28.095318 kernel: Trampoline variant of Tasks RCU enabled.
Nov 4 04:19:28.095336 kernel: Tracing variant of Tasks RCU enabled.
Nov 4 04:19:28.095353 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Nov 4 04:19:28.095371 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Nov 4 04:19:28.095389 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Nov 4 04:19:28.097847 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Nov 4 04:19:28.097871 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Nov 4 04:19:28.097889 kernel: GICv3: 96 SPIs implemented
Nov 4 04:19:28.097907 kernel: GICv3: 0 Extended SPIs implemented
Nov 4 04:19:28.097932 kernel: Root IRQ handler: gic_handle_irq
Nov 4 04:19:28.097949 kernel: GICv3: GICv3 features: 16 PPIs
Nov 4 04:19:28.097967 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0
Nov 4 04:19:28.097984 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000
Nov 4 04:19:28.098001 kernel: ITS [mem 0x10080000-0x1009ffff]
Nov 4 04:19:28.098018 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000f0000 (indirect, esz 8, psz 64K, shr 1)
Nov 4 04:19:28.098037 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @400100000 (flat, esz 8, psz 64K, shr 1)
Nov 4 04:19:28.098054 kernel: GICv3: using LPI property table @0x0000000400110000
Nov 4 04:19:28.098071 kernel: ITS: Using hypervisor restricted LPI range [128]
Nov 4 04:19:28.098089 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000400120000
Nov 4 04:19:28.098106 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Nov 4 04:19:28.098127 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt).
Nov 4 04:19:28.098145 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns
Nov 4 04:19:28.098162 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns
Nov 4 04:19:28.098180 kernel: Console: colour dummy device 80x25
Nov 4 04:19:28.098199 kernel: printk: legacy console [tty1] enabled
Nov 4 04:19:28.098217 kernel: ACPI: Core revision 20240827
Nov 4 04:19:28.098235 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333)
Nov 4 04:19:28.098253 kernel: pid_max: default: 32768 minimum: 301
Nov 4 04:19:28.098275 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Nov 4 04:19:28.098294 kernel: landlock: Up and running.
Nov 4 04:19:28.098311 kernel: SELinux: Initializing.
Nov 4 04:19:28.098330 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Nov 4 04:19:28.098348 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Nov 4 04:19:28.098365 kernel: rcu: Hierarchical SRCU implementation.
Nov 4 04:19:28.098384 kernel: rcu: Max phase no-delay instances is 400.
Nov 4 04:19:28.099489 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Nov 4 04:19:28.099537 kernel: Remapping and enabling EFI services.
Nov 4 04:19:28.099555 kernel: smp: Bringing up secondary CPUs ...
Nov 4 04:19:28.099573 kernel: Detected PIPT I-cache on CPU1
Nov 4 04:19:28.099592 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000
Nov 4 04:19:28.099610 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000400130000
Nov 4 04:19:28.099629 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083]
Nov 4 04:19:28.099647 kernel: smp: Brought up 1 node, 2 CPUs
Nov 4 04:19:28.099669 kernel: SMP: Total of 2 processors activated.
Nov 4 04:19:28.099687 kernel: CPU: All CPU(s) started at EL1
Nov 4 04:19:28.099715 kernel: CPU features: detected: 32-bit EL0 Support
Nov 4 04:19:28.099738 kernel: CPU features: detected: 32-bit EL1 Support
Nov 4 04:19:28.099757 kernel: CPU features: detected: CRC32 instructions
Nov 4 04:19:28.099775 kernel: alternatives: applying system-wide alternatives
Nov 4 04:19:28.099795 kernel: Memory: 3823660K/4030464K available (11136K kernel code, 2456K rwdata, 9084K rodata, 12288K init, 1038K bss, 185460K reserved, 16384K cma-reserved)
Nov 4 04:19:28.099814 kernel: devtmpfs: initialized
Nov 4 04:19:28.099837 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Nov 4 04:19:28.099856 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Nov 4 04:19:28.099894 kernel: 23712 pages in range for non-PLT usage
Nov 4 04:19:28.099917 kernel: 515232 pages in range for PLT usage
Nov 4 04:19:28.099936 kernel: pinctrl core: initialized pinctrl subsystem
Nov 4 04:19:28.099960 kernel: SMBIOS 3.0.0 present.
Nov 4 04:19:28.099979 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018
Nov 4 04:19:28.099997 kernel: DMI: Memory slots populated: 0/0
Nov 4 04:19:28.100016 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Nov 4 04:19:28.100035 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Nov 4 04:19:28.100054 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Nov 4 04:19:28.100073 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Nov 4 04:19:28.100095 kernel: audit: initializing netlink subsys (disabled)
Nov 4 04:19:28.100115 kernel: audit: type=2000 audit(0.225:1): state=initialized audit_enabled=0 res=1
Nov 4 04:19:28.100133 kernel: thermal_sys: Registered thermal governor 'step_wise'
Nov 4 04:19:28.100152 kernel: cpuidle: using governor menu
Nov 4 04:19:28.100171 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Nov 4 04:19:28.100189 kernel: ASID allocator initialised with 65536 entries
Nov 4 04:19:28.100208 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Nov 4 04:19:28.100231 kernel: Serial: AMBA PL011 UART driver
Nov 4 04:19:28.100250 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Nov 4 04:19:28.100269 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Nov 4 04:19:28.100287 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Nov 4 04:19:28.100306 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Nov 4 04:19:28.100325 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Nov 4 04:19:28.100344 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Nov 4 04:19:28.100367 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Nov 4 04:19:28.100386 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Nov 4 04:19:28.101451 kernel: ACPI: Added _OSI(Module Device)
Nov 4 04:19:28.101486 kernel: ACPI: Added _OSI(Processor Device)
Nov 4 04:19:28.101505 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Nov 4 04:19:28.101524 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Nov 4 04:19:28.101543 kernel: ACPI: Interpreter enabled
Nov 4 04:19:28.101569 kernel: ACPI: Using GIC for interrupt routing
Nov 4 04:19:28.101588 kernel: ACPI: MCFG table detected, 1 entries
Nov 4 04:19:28.101606 kernel: ACPI: CPU0 has been hot-added
Nov 4 04:19:28.101625 kernel: ACPI: CPU1 has been hot-added
Nov 4 04:19:28.101644 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-0f])
Nov 4 04:19:28.102001 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Nov 4 04:19:28.102253 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Nov 4 04:19:28.104536 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Nov 4 04:19:28.104828 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x20ffffff] reserved by PNP0C02:00
Nov 4 04:19:28.105080 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x20ffffff] for [bus 00-0f]
Nov 4 04:19:28.105106 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window]
Nov 4 04:19:28.105126 kernel: acpiphp: Slot [1] registered
Nov 4 04:19:28.105145 kernel: acpiphp: Slot [2] registered
Nov 4 04:19:28.105172 kernel: acpiphp: Slot [3] registered
Nov 4 04:19:28.105192 kernel: acpiphp: Slot [4] registered
Nov 4 04:19:28.105210 kernel: acpiphp: Slot [5] registered
Nov 4 04:19:28.105229 kernel: acpiphp: Slot [6] registered
Nov 4 04:19:28.105248 kernel: acpiphp: Slot [7] registered
Nov 4 04:19:28.105267 kernel: acpiphp: Slot [8] registered
Nov 4 04:19:28.105285 kernel: acpiphp: Slot [9] registered
Nov 4 04:19:28.105304 kernel: acpiphp: Slot [10] registered
Nov 4 04:19:28.105327 kernel: acpiphp: Slot [11] registered
Nov 4 04:19:28.105345 kernel: acpiphp: Slot [12] registered
Nov 4 04:19:28.105364 kernel: acpiphp: Slot [13] registered
Nov 4 04:19:28.105382 kernel: acpiphp: Slot [14] registered
Nov 4 04:19:28.106463 kernel: acpiphp: Slot [15] registered
Nov 4 04:19:28.106503 kernel: acpiphp: Slot [16] registered
Nov 4 04:19:28.106523 kernel: acpiphp: Slot [17] registered
Nov 4 04:19:28.106549 kernel: acpiphp: Slot [18] registered
Nov 4 04:19:28.106569 kernel: acpiphp: Slot [19] registered
Nov 4 04:19:28.106587 kernel: acpiphp: Slot [20] registered
Nov 4 04:19:28.106606 kernel: acpiphp: Slot [21] registered
Nov 4 04:19:28.106625 kernel: acpiphp: Slot [22] registered
Nov 4 04:19:28.106643 kernel: acpiphp: Slot [23] registered
Nov 4 04:19:28.106662 kernel: acpiphp: Slot [24] registered
Nov 4 04:19:28.106684 kernel: acpiphp: Slot [25] registered
Nov 4 04:19:28.106703 kernel: acpiphp: Slot [26] registered
Nov 4 04:19:28.106721 kernel: acpiphp: Slot [27] registered
Nov 4 04:19:28.106739 kernel: acpiphp: Slot [28] registered
Nov 4 04:19:28.106758 kernel: acpiphp: Slot [29] registered
Nov 4 04:19:28.106777 kernel: acpiphp: Slot [30] registered
Nov 4 04:19:28.106796 kernel: acpiphp: Slot [31] registered
Nov 4 04:19:28.106814 kernel: PCI host bridge to bus 0000:00
Nov 4 04:19:28.107117 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window]
Nov 4 04:19:28.107343 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Nov 4 04:19:28.107607 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window]
Nov 4 04:19:28.107837 kernel: pci_bus 0000:00: root bus resource [bus 00-0f]
Nov 4 04:19:28.108158 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000 conventional PCI endpoint
Nov 4 04:19:28.110700 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003 conventional PCI endpoint
Nov 4 04:19:28.111001 kernel: pci 0000:00:01.0: BAR 0 [mem 0x80118000-0x80118fff]
Nov 4 04:19:28.111287 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 PCIe Root Complex Integrated Endpoint
Nov 4 04:19:28.114283 kernel: pci 0000:00:04.0: BAR 0 [mem 0x80114000-0x80117fff]
Nov 4 04:19:28.114654 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold
Nov 4 04:19:28.114942 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 PCIe Root Complex Integrated Endpoint
Nov 4 04:19:28.115209 kernel: pci 0000:00:05.0: BAR 0 [mem 0x80110000-0x80113fff]
Nov 4 04:19:28.115559 kernel: pci 0000:00:05.0: BAR 2 [mem 0x80000000-0x800fffff pref]
Nov 4 04:19:28.115825 kernel: pci 0000:00:05.0: BAR 4 [mem 0x80100000-0x8010ffff]
Nov 4 04:19:28.116105 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold
Nov 4 04:19:28.116356 kernel: pci 0000:00:05.0: BAR 2 [mem 0x80000000-0x800fffff pref]: assigned
Nov 4 04:19:28.117069 kernel: pci 0000:00:05.0: BAR 4 [mem 0x80100000-0x8010ffff]: assigned
Nov 4 04:19:28.117349 kernel: pci 0000:00:04.0: BAR 0 [mem 0x80110000-0x80113fff]: assigned
Nov 4 04:19:28.117630 kernel: pci 0000:00:05.0: BAR 0 [mem 0x80114000-0x80117fff]: assigned
Nov 4 04:19:28.117924 kernel: pci 0000:00:01.0: BAR 0 [mem 0x80118000-0x80118fff]: assigned
Nov 4 04:19:28.118185 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window]
Nov 4 04:19:28.118429 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Nov 4 04:19:28.118693 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window]
Nov 4 04:19:28.118720 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Nov 4 04:19:28.118740 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Nov 4 04:19:28.118759 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Nov 4 04:19:28.118779 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Nov 4 04:19:28.118797 kernel: iommu: Default domain type: Translated
Nov 4 04:19:28.118816 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Nov 4 04:19:28.118841 kernel: efivars: Registered efivars operations
Nov 4 04:19:28.118859 kernel: vgaarb: loaded
Nov 4 04:19:28.118902 kernel: clocksource: Switched to clocksource arch_sys_counter
Nov 4 04:19:28.118924 kernel: VFS: Disk quotas dquot_6.6.0
Nov 4 04:19:28.118943 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Nov 4 04:19:28.118962 kernel: pnp: PnP ACPI init
Nov 4 04:19:28.120302 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved
Nov 4 04:19:28.120352 kernel: pnp: PnP ACPI: found 1 devices
Nov 4 04:19:28.120372 kernel: NET: Registered PF_INET protocol family
Nov 4 04:19:28.120391 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Nov 4 04:19:28.120431 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Nov 4 04:19:28.120483 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Nov 4 04:19:28.120503 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Nov 4 04:19:28.120522 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Nov 4 04:19:28.120547 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Nov 4 04:19:28.120566 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Nov 4 04:19:28.120585 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Nov 4 04:19:28.120603 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Nov 4 04:19:28.120622 kernel: PCI: CLS 0 bytes, default 64
Nov 4 04:19:28.120641 kernel: kvm [1]: HYP mode not available
Nov 4 04:19:28.120659 kernel: Initialise system trusted keyrings
Nov 4 04:19:28.120682 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Nov 4 04:19:28.120701 kernel: Key type asymmetric registered
Nov 4 04:19:28.120720 kernel: Asymmetric key parser 'x509' registered
Nov 4 04:19:28.120738 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Nov 4 04:19:28.120757 kernel: io scheduler mq-deadline registered
Nov 4 04:19:28.120776 kernel: io scheduler kyber registered
Nov 4 04:19:28.120795 kernel: io scheduler bfq registered
Nov 4 04:19:28.121066 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered
Nov 4 04:19:28.121093 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Nov 4 04:19:28.121113 kernel: ACPI: button: Power Button [PWRB]
Nov 4 04:19:28.121132 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1
Nov 4 04:19:28.121151 kernel: ACPI: button: Sleep Button [SLPB]
Nov 4 04:19:28.121169 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Nov 4 04:19:28.121194 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Nov 4 04:19:28.122502 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012)
Nov 4 04:19:28.122533 kernel: printk: legacy console [ttyS0] disabled
Nov 4 04:19:28.122555 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A
Nov 4 04:19:28.122575 kernel: printk: legacy console [ttyS0] enabled
Nov 4 04:19:28.122595 kernel: printk: legacy bootconsole [uart0] disabled
Nov 4 04:19:28.122614 kernel: thunder_xcv, ver 1.0
Nov 4 04:19:28.122639 kernel: thunder_bgx, ver 1.0
Nov 4 04:19:28.122658 kernel: nicpf, ver 1.0
Nov 4 04:19:28.122677 kernel: nicvf, ver 1.0
Nov 4 04:19:28.122958 kernel: rtc-efi rtc-efi.0: registered as rtc0
Nov 4 04:19:28.123199 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-11-04T04:19:24 UTC (1762229964)
Nov 4 04:19:28.123226 kernel: hid: raw HID events driver (C) Jiri Kosina
Nov 4 04:19:28.123246 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 (0,80000003) counters available
Nov 4 04:19:28.123269 kernel: watchdog: NMI not fully supported
Nov 4 04:19:28.123289 kernel: NET: Registered PF_INET6 protocol family
Nov 4 04:19:28.123308 kernel: watchdog: Hard watchdog permanently disabled
Nov 4 04:19:28.123326 kernel: Segment Routing with IPv6
Nov 4 04:19:28.123346 kernel: In-situ OAM (IOAM) with IPv6
Nov 4 04:19:28.123364 kernel: NET: Registered PF_PACKET protocol family
Nov 4 04:19:28.123383 kernel: Key type dns_resolver registered
Nov 4 04:19:28.123434 kernel: registered taskstats version 1
Nov 4 04:19:28.123456 kernel: Loading compiled-in X.509 certificates
Nov 4 04:19:28.123475 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.54-flatcar: 7aed512aae99c1c97b09aaf8f37cb7318f15f6e6'
Nov 4 04:19:28.123494 kernel: Demotion targets for Node 0: null
Nov 4 04:19:28.123513 kernel: Key type .fscrypt registered
Nov 4 04:19:28.123531 kernel: Key type fscrypt-provisioning registered
Nov 4 04:19:28.123550 kernel: ima: No TPM chip found, activating TPM-bypass!
Nov 4 04:19:28.123574 kernel: ima: Allocated hash algorithm: sha1
Nov 4 04:19:28.123593 kernel: ima: No architecture policies found
Nov 4 04:19:28.123612 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Nov 4 04:19:28.123630 kernel: clk: Disabling unused clocks
Nov 4 04:19:28.123649 kernel: PM: genpd: Disabling unused power domains
Nov 4 04:19:28.123668 kernel: Freeing unused kernel memory: 12288K
Nov 4 04:19:28.123687 kernel: Run /init as init process
Nov 4 04:19:28.123709 kernel: with arguments:
Nov 4 04:19:28.123728 kernel: /init
Nov 4 04:19:28.123746 kernel: with environment:
Nov 4 04:19:28.123764 kernel: HOME=/
Nov 4 04:19:28.123783 kernel: TERM=linux
Nov 4 04:19:28.123803 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35
Nov 4 04:19:28.127218 kernel: nvme nvme0: pci function 0000:00:04.0
Nov 4 04:19:28.127479 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Nov 4 04:19:28.127509 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Nov 4 04:19:28.127529 kernel: GPT:25804799 != 33554431
Nov 4 04:19:28.127548 kernel: GPT:Alternate GPT header not at the end of the disk.
Nov 4 04:19:28.127566 kernel: GPT:25804799 != 33554431
Nov 4 04:19:28.127585 kernel: GPT: Use GNU Parted to correct GPT errors.
Nov 4 04:19:28.127603 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Nov 4 04:19:28.127628 kernel: SCSI subsystem initialized
Nov 4 04:19:28.127647 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Nov 4 04:19:28.127666 kernel: device-mapper: uevent: version 1.0.3
Nov 4 04:19:28.127685 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Nov 4 04:19:28.127705 kernel: device-mapper: verity: sha256 using shash "sha256-ce"
Nov 4 04:19:28.127724 kernel: raid6: neonx8 gen() 6587 MB/s
Nov 4 04:19:28.127743 kernel: raid6: neonx4 gen() 6546 MB/s
Nov 4 04:19:28.127766 kernel: raid6: neonx2 gen() 5453 MB/s
Nov 4 04:19:28.127785 kernel: raid6: neonx1 gen() 3934 MB/s
Nov 4 04:19:28.127825 kernel: raid6: int64x8 gen() 3628 MB/s
Nov 4 04:19:28.127848 kernel: raid6: int64x4 gen() 3676 MB/s
Nov 4 04:19:28.127868 kernel: raid6: int64x2 gen() 3569 MB/s
Nov 4 04:19:28.127909 kernel: raid6: int64x1 gen() 2718 MB/s
Nov 4 04:19:28.127929 kernel: raid6: using algorithm neonx8 gen() 6587 MB/s
Nov 4 04:19:28.127954 kernel: raid6: .... xor() 4716 MB/s, rmw enabled
Nov 4 04:19:28.127973 kernel: raid6: using neon recovery algorithm
Nov 4 04:19:28.127992 kernel: xor: measuring software checksum speed
Nov 4 04:19:28.128011 kernel: 8regs : 12936 MB/sec
Nov 4 04:19:28.128030 kernel: 32regs : 13010 MB/sec
Nov 4 04:19:28.128049 kernel: arm64_neon : 8847 MB/sec
Nov 4 04:19:28.128067 kernel: xor: using function: 32regs (13010 MB/sec)
Nov 4 04:19:28.128091 kernel: Btrfs loaded, zoned=no, fsverity=no
Nov 4 04:19:28.128110 kernel: BTRFS: device fsid 6c40df93-4adb-43f9-9606-1e6831c4440e devid 1 transid 37 /dev/mapper/usr (254:0) scanned by mount (222)
Nov 4 04:19:28.128130 kernel: BTRFS info (device dm-0): first mount of filesystem 6c40df93-4adb-43f9-9606-1e6831c4440e
Nov 4 04:19:28.128149 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Nov 4 04:19:28.128168 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Nov 4 04:19:28.128188 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Nov 4 04:19:28.128207 kernel: BTRFS info (device dm-0): enabling free space tree
Nov 4 04:19:28.128230 kernel: loop: module loaded
Nov 4 04:19:28.128249 kernel: loop0: detected capacity change from 0 to 91480
Nov 4 04:19:28.128268 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Nov 4 04:19:28.128290 systemd[1]: Successfully made /usr/ read-only.
Nov 4 04:19:28.128316 systemd[1]: systemd 257.7 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Nov 4 04:19:28.128338 systemd[1]: Detected virtualization amazon.
Nov 4 04:19:28.128363 systemd[1]: Detected architecture arm64.
Nov 4 04:19:28.128384 systemd[1]: Running in initrd.
Nov 4 04:19:28.128433 systemd[1]: No hostname configured, using default hostname.
Nov 4 04:19:28.128459 systemd[1]: Hostname set to .
Nov 4 04:19:28.128480 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID.
Nov 4 04:19:28.128500 systemd[1]: Queued start job for default target initrd.target.
Nov 4 04:19:28.128537 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr.
Nov 4 04:19:28.128562 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 4 04:19:28.128584 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 4 04:19:28.128606 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Nov 4 04:19:28.128628 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Nov 4 04:19:28.128654 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Nov 4 04:19:28.128677 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Nov 4 04:19:28.128699 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 4 04:19:28.128720 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Nov 4 04:19:28.128742 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Nov 4 04:19:28.128763 systemd[1]: Reached target paths.target - Path Units.
Nov 4 04:19:28.128788 systemd[1]: Reached target slices.target - Slice Units.
Nov 4 04:19:28.128809 systemd[1]: Reached target swap.target - Swaps.
Nov 4 04:19:28.128830 systemd[1]: Reached target timers.target - Timer Units.
Nov 4 04:19:28.128852 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Nov 4 04:19:28.128873 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Nov 4 04:19:28.128895 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Nov 4 04:19:28.128916 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Nov 4 04:19:28.128941 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Nov 4 04:19:28.128962 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Nov 4 04:19:28.129007 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 4 04:19:28.129031 systemd[1]: Reached target sockets.target - Socket Units.
Nov 4 04:19:28.129053 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Nov 4 04:19:28.129075 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Nov 4 04:19:28.129097 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Nov 4 04:19:28.129123 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Nov 4 04:19:28.129146 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Nov 4 04:19:28.129167 systemd[1]: Starting systemd-fsck-usr.service...
Nov 4 04:19:28.129192 systemd[1]: Starting systemd-journald.service - Journal Service...
Nov 4 04:19:28.129218 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Nov 4 04:19:28.129239 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 4 04:19:28.129262 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Nov 4 04:19:28.129288 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 4 04:19:28.129310 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Nov 4 04:19:28.129332 systemd[1]: Finished systemd-fsck-usr.service.
Nov 4 04:19:28.129353 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Nov 4 04:19:28.129452 systemd-journald[358]: Collecting audit messages is disabled.
Nov 4 04:19:28.129503 kernel: Bridge firewalling registered
Nov 4 04:19:28.129525 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Nov 4 04:19:28.129547 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Nov 4 04:19:28.129569 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 4 04:19:28.129590 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Nov 4 04:19:28.129616 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 4 04:19:28.129637 systemd-journald[358]: Journal started
Nov 4 04:19:28.129673 systemd-journald[358]: Runtime Journal (/run/log/journal/ec2c63d81c5fb9c05ff2b358c93054fd) is 8M, max 75.3M, 67.3M free.
Nov 4 04:19:28.089478 systemd-modules-load[360]: Inserted module 'br_netfilter'
Nov 4 04:19:28.152992 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Nov 4 04:19:28.155445 systemd[1]: Started systemd-journald.service - Journal Service.
Nov 4 04:19:28.165059 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Nov 4 04:19:28.179170 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Nov 4 04:19:28.184792 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Nov 4 04:19:28.200603 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 4 04:19:28.227200 systemd-tmpfiles[386]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Nov 4 04:19:28.236797 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 4 04:19:28.251903 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 4 04:19:28.269622 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Nov 4 04:19:28.338861 systemd-resolved[385]: Positive Trust Anchors:
Nov 4 04:19:28.338896 systemd-resolved[385]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Nov 4 04:19:28.338904 systemd-resolved[385]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16
Nov 4 04:19:28.338966 systemd-resolved[385]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Nov 4 04:19:28.371501 dracut-cmdline[399]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=184500f7d2eb6ab997bc068a700bccfd199e25e814087e9e73479b28edc9aa9c
Nov 4 04:19:28.584440 kernel: Loading iSCSI transport class v2.0-870.
Nov 4 04:19:28.638435 kernel: random: crng init done
Nov 4 04:19:28.639056 systemd-resolved[385]: Defaulting to hostname 'linux'.
Nov 4 04:19:28.643097 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Nov 4 04:19:28.649634 kernel: iscsi: registered transport (tcp)
Nov 4 04:19:28.649312 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Nov 4 04:19:28.716559 kernel: iscsi: registered transport (qla4xxx)
Nov 4 04:19:28.716645 kernel: QLogic iSCSI HBA Driver
Nov 4 04:19:28.756033 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Nov 4 04:19:28.784140 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Nov 4 04:19:28.787132 systemd[1]: Reached target network-pre.target - Preparation for Network.
Nov 4 04:19:28.867665 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Nov 4 04:19:28.871009 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Nov 4 04:19:28.873716 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Nov 4 04:19:28.943926 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Nov 4 04:19:28.954472 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 4 04:19:29.012464 systemd-udevd[637]: Using default interface naming scheme 'v257'.
Nov 4 04:19:29.033010 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 4 04:19:29.039545 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Nov 4 04:19:29.096364 dracut-pre-trigger[699]: rd.md=0: removing MD RAID activation
Nov 4 04:19:29.102231 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Nov 4 04:19:29.112673 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Nov 4 04:19:29.165130 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Nov 4 04:19:29.172269 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Nov 4 04:19:29.210376 systemd-networkd[750]: lo: Link UP
Nov 4 04:19:29.210395 systemd-networkd[750]: lo: Gained carrier
Nov 4 04:19:29.212706 systemd[1]: Started systemd-networkd.service - Network Configuration.
Nov 4 04:19:29.222466 systemd[1]: Reached target network.target - Network.
Nov 4 04:19:29.330984 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 4 04:19:29.340809 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Nov 4 04:19:29.580153 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 4 04:19:29.584690 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 4 04:19:29.589978 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Nov 4 04:19:29.602764 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 4 04:19:29.614441 kernel: nvme nvme0: using unchecked data buffer
Nov 4 04:19:29.614780 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Nov 4 04:19:29.614822 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012)
Nov 4 04:19:29.624228 kernel: ena 0000:00:05.0: ENA device version: 0.10
Nov 4 04:19:29.624699 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Nov 4 04:19:29.646443 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80114000, mac addr 06:a0:0a:3a:31:d9
Nov 4 04:19:29.649102 (udev-worker)[805]: Network interface NamePolicy= disabled on kernel command line.
Nov 4 04:19:29.668685 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 4 04:19:29.675805 systemd-networkd[750]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Nov 4 04:19:29.675819 systemd-networkd[750]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Nov 4 04:19:29.686897 systemd-networkd[750]: eth0: Link UP
Nov 4 04:19:29.688537 systemd-networkd[750]: eth0: Gained carrier
Nov 4 04:19:29.688560 systemd-networkd[750]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Nov 4 04:19:29.700511 systemd-networkd[750]: eth0: DHCPv4 address 172.31.28.40/20, gateway 172.31.16.1 acquired from 172.31.16.1
Nov 4 04:19:29.800137 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Nov 4 04:19:29.807856 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Nov 4 04:19:29.844651 disk-uuid[879]: Primary Header is updated.
Nov 4 04:19:29.844651 disk-uuid[879]: Secondary Entries is updated.
Nov 4 04:19:29.844651 disk-uuid[879]: Secondary Header is updated.
Nov 4 04:19:29.883329 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Nov 4 04:19:29.914610 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Nov 4 04:19:29.963139 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Nov 4 04:19:30.287490 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Nov 4 04:19:30.307469 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Nov 4 04:19:30.312035 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 4 04:19:30.317073 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Nov 4 04:19:30.321005 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Nov 4 04:19:30.359090 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Nov 4 04:19:30.982828 disk-uuid[891]: Warning: The kernel is still using the old partition table.
Nov 4 04:19:30.982828 disk-uuid[891]: The new table will be used at the next reboot or after you
Nov 4 04:19:30.982828 disk-uuid[891]: run partprobe(8) or kpartx(8)
Nov 4 04:19:30.982828 disk-uuid[891]: The operation has completed successfully.
Nov 4 04:19:30.997967 systemd[1]: disk-uuid.service: Deactivated successfully.
Nov 4 04:19:30.998242 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Nov 4 04:19:31.007648 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Nov 4 04:19:31.068452 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1095)
Nov 4 04:19:31.073275 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 4270646e-f1e7-4973-b114-3d717a76cfde
Nov 4 04:19:31.073325 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Nov 4 04:19:31.112030 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Nov 4 04:19:31.112113 kernel: BTRFS info (device nvme0n1p6): enabling free space tree
Nov 4 04:19:31.121454 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 4270646e-f1e7-4973-b114-3d717a76cfde
Nov 4 04:19:31.123764 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Nov 4 04:19:31.130213 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Nov 4 04:19:31.465623 systemd-networkd[750]: eth0: Gained IPv6LL
Nov 4 04:19:32.582196 ignition[1114]: Ignition 2.22.0
Nov 4 04:19:32.582464 ignition[1114]: Stage: fetch-offline
Nov 4 04:19:32.583304 ignition[1114]: no configs at "/usr/lib/ignition/base.d"
Nov 4 04:19:32.583330 ignition[1114]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Nov 4 04:19:32.584248 ignition[1114]: Ignition finished successfully
Nov 4 04:19:32.595137 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Nov 4 04:19:32.600562 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Nov 4 04:19:32.646220 ignition[1124]: Ignition 2.22.0
Nov 4 04:19:32.646763 ignition[1124]: Stage: fetch
Nov 4 04:19:32.647297 ignition[1124]: no configs at "/usr/lib/ignition/base.d"
Nov 4 04:19:32.647318 ignition[1124]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Nov 4 04:19:32.647506 ignition[1124]: PUT http://169.254.169.254/latest/api/token: attempt #1
Nov 4 04:19:32.677130 ignition[1124]: PUT result: OK
Nov 4 04:19:32.681804 ignition[1124]: parsed url from cmdline: ""
Nov 4 04:19:32.681821 ignition[1124]: no config URL provided
Nov 4 04:19:32.681836 ignition[1124]: reading system config file "/usr/lib/ignition/user.ign"
Nov 4 04:19:32.682137 ignition[1124]: no config at "/usr/lib/ignition/user.ign"
Nov 4 04:19:32.682171 ignition[1124]: PUT http://169.254.169.254/latest/api/token: attempt #1
Nov 4 04:19:32.690807 ignition[1124]: PUT result: OK
Nov 4 04:19:32.690908 ignition[1124]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Nov 4 04:19:32.695166 ignition[1124]: GET result: OK
Nov 4 04:19:32.695468 ignition[1124]: parsing config with SHA512: 02434844bc981c311e5de515409353b25b4ea029609fa5b0344f9e23bd1e40668b919e3d76f4cd2e78621c33f2e75e0a8321dadbb026d5739806e9d5256b3560
Nov 4 04:19:32.706817 unknown[1124]: fetched base config from "system"
Nov 4 04:19:32.707185 unknown[1124]: fetched base config from "system"
Nov 4 04:19:32.707913 ignition[1124]: fetch: fetch complete
Nov 4 04:19:32.707214 unknown[1124]: fetched user config from "aws"
Nov 4 04:19:32.707924 ignition[1124]: fetch: fetch passed
Nov 4 04:19:32.708034 ignition[1124]: Ignition finished successfully
Nov 4 04:19:32.721564 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Nov 4 04:19:32.727920 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Nov 4 04:19:32.780232 ignition[1130]: Ignition 2.22.0
Nov 4 04:19:32.780758 ignition[1130]: Stage: kargs
Nov 4 04:19:32.781317 ignition[1130]: no configs at "/usr/lib/ignition/base.d"
Nov 4 04:19:32.781339 ignition[1130]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Nov 4 04:19:32.781549 ignition[1130]: PUT http://169.254.169.254/latest/api/token: attempt #1
Nov 4 04:19:32.790346 ignition[1130]: PUT result: OK
Nov 4 04:19:32.795630 ignition[1130]: kargs: kargs passed
Nov 4 04:19:32.795724 ignition[1130]: Ignition finished successfully
Nov 4 04:19:32.801030 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Nov 4 04:19:32.807492 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Nov 4 04:19:32.873901 ignition[1137]: Ignition 2.22.0
Nov 4 04:19:32.874395 ignition[1137]: Stage: disks
Nov 4 04:19:32.874966 ignition[1137]: no configs at "/usr/lib/ignition/base.d"
Nov 4 04:19:32.874987 ignition[1137]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Nov 4 04:19:32.875125 ignition[1137]: PUT http://169.254.169.254/latest/api/token: attempt #1
Nov 4 04:19:32.884208 ignition[1137]: PUT result: OK
Nov 4 04:19:32.889118 ignition[1137]: disks: disks passed
Nov 4 04:19:32.889221 ignition[1137]: Ignition finished successfully
Nov 4 04:19:32.893447 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Nov 4 04:19:32.898752 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Nov 4 04:19:32.901835 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Nov 4 04:19:32.906694 systemd[1]: Reached target local-fs.target - Local File Systems.
Nov 4 04:19:32.909997 systemd[1]: Reached target sysinit.target - System Initialization.
Nov 4 04:19:32.917229 systemd[1]: Reached target basic.target - Basic System.
Nov 4 04:19:32.924335 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Nov 4 04:19:33.039167 systemd-fsck[1145]: ROOT: clean, 15/1631200 files, 112378/1617920 blocks
Nov 4 04:19:33.045681 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Nov 4 04:19:33.055593 systemd[1]: Mounting sysroot.mount - /sysroot...
Nov 4 04:19:33.310427 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 7cfda2e2-a28a-4bc0-b163-12cbeed348dc r/w with ordered data mode. Quota mode: none.
Nov 4 04:19:33.311471 systemd[1]: Mounted sysroot.mount - /sysroot.
Nov 4 04:19:33.312291 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Nov 4 04:19:33.364512 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Nov 4 04:19:33.368894 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Nov 4 04:19:33.373343 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Nov 4 04:19:33.377591 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Nov 4 04:19:33.380079 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Nov 4 04:19:33.402362 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Nov 4 04:19:33.408914 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Nov 4 04:19:33.432462 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1164)
Nov 4 04:19:33.432527 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 4270646e-f1e7-4973-b114-3d717a76cfde
Nov 4 04:19:33.434791 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Nov 4 04:19:33.441937 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Nov 4 04:19:33.442012 kernel: BTRFS info (device nvme0n1p6): enabling free space tree
Nov 4 04:19:33.444084 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Nov 4 04:19:34.555668 initrd-setup-root[1188]: cut: /sysroot/etc/passwd: No such file or directory
Nov 4 04:19:34.593534 initrd-setup-root[1195]: cut: /sysroot/etc/group: No such file or directory
Nov 4 04:19:34.602805 initrd-setup-root[1202]: cut: /sysroot/etc/shadow: No such file or directory
Nov 4 04:19:34.610702 initrd-setup-root[1209]: cut: /sysroot/etc/gshadow: No such file or directory
Nov 4 04:19:35.304460 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Nov 4 04:19:35.307647 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Nov 4 04:19:35.319077 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Nov 4 04:19:35.341083 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Nov 4 04:19:35.344046 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 4270646e-f1e7-4973-b114-3d717a76cfde
Nov 4 04:19:35.398470 ignition[1277]: INFO : Ignition 2.22.0
Nov 4 04:19:35.398470 ignition[1277]: INFO : Stage: mount
Nov 4 04:19:35.398470 ignition[1277]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 4 04:19:35.398470 ignition[1277]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Nov 4 04:19:35.398470 ignition[1277]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Nov 4 04:19:35.404131 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Nov 4 04:19:35.414087 ignition[1277]: INFO : PUT result: OK
Nov 4 04:19:35.419185 ignition[1277]: INFO : mount: mount passed
Nov 4 04:19:35.421019 ignition[1277]: INFO : Ignition finished successfully
Nov 4 04:19:35.424690 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Nov 4 04:19:35.431323 systemd[1]: Starting ignition-files.service - Ignition (files)...
Nov 4 04:19:35.464277 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Nov 4 04:19:35.516446 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1289)
Nov 4 04:19:35.520671 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 4270646e-f1e7-4973-b114-3d717a76cfde
Nov 4 04:19:35.520719 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Nov 4 04:19:35.527772 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Nov 4 04:19:35.527827 kernel: BTRFS info (device nvme0n1p6): enabling free space tree
Nov 4 04:19:35.531663 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Nov 4 04:19:35.584835 ignition[1306]: INFO : Ignition 2.22.0
Nov 4 04:19:35.584835 ignition[1306]: INFO : Stage: files
Nov 4 04:19:35.589471 ignition[1306]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 4 04:19:35.589471 ignition[1306]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Nov 4 04:19:35.589471 ignition[1306]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Nov 4 04:19:35.596779 ignition[1306]: INFO : PUT result: OK
Nov 4 04:19:35.601777 ignition[1306]: DEBUG : files: compiled without relabeling support, skipping
Nov 4 04:19:35.653323 ignition[1306]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Nov 4 04:19:35.653323 ignition[1306]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Nov 4 04:19:35.665530 ignition[1306]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Nov 4 04:19:35.669085 ignition[1306]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Nov 4 04:19:35.672542 unknown[1306]: wrote ssh authorized keys file for user: core
Nov 4 04:19:35.675077 ignition[1306]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Nov 4 04:19:35.713430 ignition[1306]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Nov 4 04:19:35.713430 ignition[1306]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1
Nov 4 04:19:35.793705 ignition[1306]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Nov 4 04:19:35.970490 ignition[1306]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Nov 4 04:19:35.970490 ignition[1306]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Nov 4 04:19:35.970490 ignition[1306]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Nov 4 04:19:35.970490 ignition[1306]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Nov 4 04:19:35.970490 ignition[1306]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Nov 4 04:19:35.970490 ignition[1306]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Nov 4 04:19:35.970490 ignition[1306]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Nov 4 04:19:35.970490 ignition[1306]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Nov 4 04:19:36.002609 ignition[1306]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Nov 4 04:19:36.002609 ignition[1306]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Nov 4 04:19:36.002609 ignition[1306]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Nov 4 04:19:36.014573 ignition[1306]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Nov 4 04:19:36.020252 ignition[1306]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Nov 4 04:19:36.025838 ignition[1306]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Nov 4 04:19:36.030794 ignition[1306]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-arm64.raw: attempt #1
Nov 4 04:19:36.484051 ignition[1306]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Nov 4 04:19:36.925793 ignition[1306]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Nov 4 04:19:36.925793 ignition[1306]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Nov 4 04:19:36.933490 ignition[1306]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Nov 4 04:19:36.940390 ignition[1306]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Nov 4 04:19:36.940390 ignition[1306]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Nov 4 04:19:36.940390 ignition[1306]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Nov 4 04:19:36.940390 ignition[1306]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Nov 4 04:19:36.954055 ignition[1306]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Nov 4 04:19:36.954055 ignition[1306]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Nov 4 04:19:36.954055 ignition[1306]: INFO : files: files passed
Nov 4 04:19:36.954055 ignition[1306]: INFO : Ignition finished successfully
Nov 4 04:19:36.969512 systemd[1]: Finished ignition-files.service - Ignition (files).
Nov 4 04:19:36.974367 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Nov 4 04:19:36.982672 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Nov 4 04:19:36.998320 systemd[1]: ignition-quench.service: Deactivated successfully.
Nov 4 04:19:37.002481 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Nov 4 04:19:37.021832 initrd-setup-root-after-ignition[1337]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Nov 4 04:19:37.021832 initrd-setup-root-after-ignition[1337]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Nov 4 04:19:37.030473 initrd-setup-root-after-ignition[1341]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Nov 4 04:19:37.036005 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Nov 4 04:19:37.039816 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Nov 4 04:19:37.044530 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Nov 4 04:19:37.134608 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Nov 4 04:19:37.136452 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Nov 4 04:19:37.140695 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Nov 4 04:19:37.144793 systemd[1]: Reached target initrd.target - Initrd Default Target. Nov 4 04:19:37.151903 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Nov 4 04:19:37.153360 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Nov 4 04:19:37.195747 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 4 04:19:37.197585 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Nov 4 04:19:37.236624 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Nov 4 04:19:37.237118 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Nov 4 04:19:37.244219 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 4 04:19:37.244596 systemd[1]: Stopped target timers.target - Timer Units. Nov 4 04:19:37.252108 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Nov 4 04:19:37.252783 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 4 04:19:37.258895 systemd[1]: Stopped target initrd.target - Initrd Default Target. Nov 4 04:19:37.264051 systemd[1]: Stopped target basic.target - Basic System. Nov 4 04:19:37.265942 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Nov 4 04:19:37.269852 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Nov 4 04:19:37.274516 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Nov 4 04:19:37.279699 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Nov 4 04:19:37.284460 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Nov 4 04:19:37.289185 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Nov 4 04:19:37.293616 systemd[1]: Stopped target sysinit.target - System Initialization. Nov 4 04:19:37.298936 systemd[1]: Stopped target local-fs.target - Local File Systems. Nov 4 04:19:37.303297 systemd[1]: Stopped target swap.target - Swaps. Nov 4 04:19:37.307470 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Nov 4 04:19:37.307724 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Nov 4 04:19:37.317776 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Nov 4 04:19:37.320759 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 4 04:19:37.325325 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Nov 4 04:19:37.325534 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 4 04:19:37.330855 systemd[1]: dracut-initqueue.service: Deactivated successfully. Nov 4 04:19:37.331100 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Nov 4 04:19:37.343393 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Nov 4 04:19:37.346160 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 4 04:19:37.352000 systemd[1]: ignition-files.service: Deactivated successfully. Nov 4 04:19:37.352210 systemd[1]: Stopped ignition-files.service - Ignition (files). Nov 4 04:19:37.360280 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... 
Nov 4 04:19:37.364358 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Nov 4 04:19:37.364823 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Nov 4 04:19:37.377010 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Nov 4 04:19:37.380322 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Nov 4 04:19:37.384971 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 4 04:19:37.392259 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Nov 4 04:19:37.392569 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Nov 4 04:19:37.395639 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Nov 4 04:19:37.395898 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Nov 4 04:19:37.424132 systemd[1]: initrd-cleanup.service: Deactivated successfully. Nov 4 04:19:37.426862 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Nov 4 04:19:37.458100 systemd[1]: sysroot-boot.mount: Deactivated successfully. Nov 4 04:19:37.463992 ignition[1361]: INFO : Ignition 2.22.0 Nov 4 04:19:37.463992 ignition[1361]: INFO : Stage: umount Nov 4 04:19:37.469462 ignition[1361]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 4 04:19:37.469462 ignition[1361]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Nov 4 04:19:37.469462 ignition[1361]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Nov 4 04:19:37.477312 ignition[1361]: INFO : PUT result: OK Nov 4 04:19:37.486525 ignition[1361]: INFO : umount: umount passed Nov 4 04:19:37.489457 ignition[1361]: INFO : Ignition finished successfully Nov 4 04:19:37.494895 systemd[1]: ignition-mount.service: Deactivated successfully. Nov 4 04:19:37.497251 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Nov 4 04:19:37.497642 systemd[1]: ignition-disks.service: Deactivated successfully. Nov 4 04:19:37.497751 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Nov 4 04:19:37.510317 systemd[1]: ignition-kargs.service: Deactivated successfully. Nov 4 04:19:37.510906 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Nov 4 04:19:37.515348 systemd[1]: ignition-fetch.service: Deactivated successfully. Nov 4 04:19:37.515596 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Nov 4 04:19:37.524386 systemd[1]: Stopped target network.target - Network. Nov 4 04:19:37.527478 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Nov 4 04:19:37.529534 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Nov 4 04:19:37.536332 systemd[1]: Stopped target paths.target - Path Units. Nov 4 04:19:37.542264 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Nov 4 04:19:37.544788 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 4 04:19:37.547936 systemd[1]: Stopped target slices.target - Slice Units. Nov 4 04:19:37.550506 systemd[1]: Stopped target sockets.target - Socket Units. Nov 4 04:19:37.556289 systemd[1]: iscsid.socket: Deactivated successfully. Nov 4 04:19:37.556370 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Nov 4 04:19:37.558873 systemd[1]: iscsiuio.socket: Deactivated successfully. Nov 4 04:19:37.558944 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 4 04:19:37.568653 systemd[1]: ignition-setup.service: Deactivated successfully. 
Nov 4 04:19:37.568772 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Nov 4 04:19:37.578600 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Nov 4 04:19:37.578706 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Nov 4 04:19:37.583036 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Nov 4 04:19:37.587568 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Nov 4 04:19:37.614092 systemd[1]: systemd-networkd.service: Deactivated successfully. Nov 4 04:19:37.615796 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Nov 4 04:19:37.625987 systemd[1]: systemd-resolved.service: Deactivated successfully. Nov 4 04:19:37.626398 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Nov 4 04:19:37.637964 systemd[1]: sysroot-boot.service: Deactivated successfully. Nov 4 04:19:37.638309 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Nov 4 04:19:37.645967 systemd[1]: Stopped target network-pre.target - Preparation for Network. Nov 4 04:19:37.651192 systemd[1]: systemd-networkd.socket: Deactivated successfully. Nov 4 04:19:37.651294 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Nov 4 04:19:37.656311 systemd[1]: initrd-setup-root.service: Deactivated successfully. Nov 4 04:19:37.658243 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Nov 4 04:19:37.666215 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Nov 4 04:19:37.670248 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Nov 4 04:19:37.670369 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 4 04:19:37.674058 systemd[1]: systemd-sysctl.service: Deactivated successfully. Nov 4 04:19:37.674152 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Nov 4 04:19:37.696181 systemd[1]: systemd-modules-load.service: Deactivated successfully. Nov 4 04:19:37.696285 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Nov 4 04:19:37.699476 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 4 04:19:37.733673 systemd[1]: systemd-udevd.service: Deactivated successfully. Nov 4 04:19:37.735944 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 4 04:19:37.744761 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Nov 4 04:19:37.744879 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Nov 4 04:19:37.753549 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Nov 4 04:19:37.754559 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Nov 4 04:19:37.761622 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Nov 4 04:19:37.762199 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Nov 4 04:19:37.769970 systemd[1]: dracut-cmdline.service: Deactivated successfully. Nov 4 04:19:37.770078 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Nov 4 04:19:37.772822 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Nov 4 04:19:37.772911 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 4 04:19:37.782302 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Nov 4 04:19:37.793033 systemd[1]: systemd-network-generator.service: Deactivated successfully. 
Nov 4 04:19:37.793305 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Nov 4 04:19:37.802435 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Nov 4 04:19:37.803391 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 4 04:19:37.810518 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 4 04:19:37.810623 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 4 04:19:37.814815 systemd[1]: network-cleanup.service: Deactivated successfully. Nov 4 04:19:37.827006 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Nov 4 04:19:37.841965 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Nov 4 04:19:37.842392 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Nov 4 04:19:37.851681 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Nov 4 04:19:37.856784 systemd[1]: Starting initrd-switch-root.service - Switch Root... Nov 4 04:19:37.891699 systemd[1]: Switching root. Nov 4 04:19:37.965479 systemd-journald[358]: Journal stopped Nov 4 04:19:41.777566 systemd-journald[358]: Received SIGTERM from PID 1 (systemd). Nov 4 04:19:41.777701 kernel: SELinux: policy capability network_peer_controls=1 Nov 4 04:19:41.777746 kernel: SELinux: policy capability open_perms=1 Nov 4 04:19:41.777783 kernel: SELinux: policy capability extended_socket_class=1 Nov 4 04:19:41.777816 kernel: SELinux: policy capability always_check_network=0 Nov 4 04:19:41.777846 kernel: SELinux: policy capability cgroup_seclabel=1 Nov 4 04:19:41.777879 kernel: SELinux: policy capability nnp_nosuid_transition=1 Nov 4 04:19:41.777910 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Nov 4 04:19:41.777939 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Nov 4 04:19:41.777978 kernel: SELinux: policy capability userspace_initial_context=0 Nov 4 04:19:41.778011 kernel: audit: type=1403 audit(1762229978.678:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Nov 4 04:19:41.778050 systemd[1]: Successfully loaded SELinux policy in 119.382ms. Nov 4 04:19:41.778102 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 15.687ms. Nov 4 04:19:41.778138 systemd[1]: systemd 257.7 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Nov 4 04:19:41.778171 systemd[1]: Detected virtualization amazon. Nov 4 04:19:41.778200 systemd[1]: Detected architecture arm64. Nov 4 04:19:41.778232 systemd[1]: Detected first boot. Nov 4 04:19:41.778268 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Nov 4 04:19:41.778298 zram_generator::config[1405]: No configuration found. Nov 4 04:19:41.778337 kernel: NET: Registered PF_VSOCK protocol family Nov 4 04:19:41.778370 systemd[1]: Populated /etc with preset unit settings. Nov 4 04:19:41.778424 systemd[1]: initrd-switch-root.service: Deactivated successfully. Nov 4 04:19:41.778487 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Nov 4 04:19:41.778525 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Nov 4 04:19:41.778561 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. 
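Among the first-boot lines above, "Initializing machine ID from SMBIOS/DMI UUID" means systemd seeds /etc/machine-id from the firmware-provided product UUID rather than generating a random one. A rough sketch of that derivation, assuming the UUID is exposed at /sys/class/dmi/id/product_uuid and simplifying systemd's actual handling to "strip dashes, lowercase":

```python
# Simplified illustration (not systemd's code): derive a 32-hex-character machine ID
# from the DMI product UUID that EC2 exposes to the guest.
# Reading product_uuid typically requires root.
from pathlib import Path

def machine_id_from_dmi(path: str = "/sys/class/dmi/id/product_uuid") -> str:
    uuid = Path(path).read_text().strip()
    return uuid.replace("-", "").lower()  # same shape as /etc/machine-id

if __name__ == "__main__":
    print(machine_id_from_dmi())
```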
Nov 4 04:19:41.778594 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Nov 4 04:19:41.778627 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Nov 4 04:19:41.778658 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Nov 4 04:19:41.778688 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Nov 4 04:19:41.778720 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Nov 4 04:19:41.778756 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Nov 4 04:19:41.778786 systemd[1]: Created slice user.slice - User and Session Slice. Nov 4 04:19:41.778815 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 4 04:19:41.778846 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 4 04:19:41.778877 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Nov 4 04:19:41.778908 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Nov 4 04:19:41.778937 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Nov 4 04:19:41.778971 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Nov 4 04:19:41.779000 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Nov 4 04:19:41.779032 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 4 04:19:41.779065 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 4 04:19:41.779094 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Nov 4 04:19:41.779127 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Nov 4 04:19:41.779159 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Nov 4 04:19:41.779198 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Nov 4 04:19:41.779228 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 4 04:19:41.779260 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 4 04:19:41.779292 systemd[1]: Reached target slices.target - Slice Units. Nov 4 04:19:41.779326 systemd[1]: Reached target swap.target - Swaps. Nov 4 04:19:41.779355 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Nov 4 04:19:41.779385 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Nov 4 04:19:41.779527 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Nov 4 04:19:41.779563 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Nov 4 04:19:41.779597 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 4 04:19:41.779626 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Nov 4 04:19:41.779656 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Nov 4 04:19:41.779686 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Nov 4 04:19:41.779715 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Nov 4 04:19:41.779752 systemd[1]: Mounting media.mount - External Media Directory... Nov 4 04:19:41.779783 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... 
Nov 4 04:19:41.779815 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Nov 4 04:19:41.779862 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Nov 4 04:19:41.779902 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Nov 4 04:19:41.779933 systemd[1]: Reached target machines.target - Containers. Nov 4 04:19:41.779970 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Nov 4 04:19:41.780003 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 4 04:19:41.780035 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Nov 4 04:19:41.780067 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Nov 4 04:19:41.780098 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 4 04:19:41.780128 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 4 04:19:41.780157 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 4 04:19:41.780195 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Nov 4 04:19:41.780225 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 4 04:19:41.780255 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Nov 4 04:19:41.780284 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Nov 4 04:19:41.780315 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Nov 4 04:19:41.780346 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Nov 4 04:19:41.780375 systemd[1]: Stopped systemd-fsck-usr.service. Nov 4 04:19:41.780456 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 4 04:19:41.780491 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 4 04:19:41.780525 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Nov 4 04:19:41.780555 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Nov 4 04:19:41.780591 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Nov 4 04:19:41.780624 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Nov 4 04:19:41.780653 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Nov 4 04:19:41.780684 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Nov 4 04:19:41.780714 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Nov 4 04:19:41.780743 systemd[1]: Mounted media.mount - External Media Directory. Nov 4 04:19:41.780772 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Nov 4 04:19:41.780806 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Nov 4 04:19:41.780835 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Nov 4 04:19:41.780868 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. 
Nov 4 04:19:41.780901 systemd[1]: modprobe@configfs.service: Deactivated successfully. Nov 4 04:19:41.780931 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Nov 4 04:19:41.780960 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 4 04:19:41.781032 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 4 04:19:41.781075 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 4 04:19:41.781105 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 4 04:19:41.781137 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 4 04:19:41.781165 kernel: fuse: init (API version 7.41) Nov 4 04:19:41.781197 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 4 04:19:41.781227 systemd[1]: modprobe@fuse.service: Deactivated successfully. Nov 4 04:19:41.781256 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Nov 4 04:19:41.781288 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 4 04:19:41.781323 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Nov 4 04:19:41.781353 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket. Nov 4 04:19:41.781382 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Nov 4 04:19:41.781520 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Nov 4 04:19:41.781554 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Nov 4 04:19:41.781587 systemd[1]: Reached target local-fs.target - Local File Systems. Nov 4 04:19:41.781619 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Nov 4 04:19:41.781649 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 4 04:19:41.781679 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Nov 4 04:19:41.781711 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 4 04:19:41.781747 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Nov 4 04:19:41.781778 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 4 04:19:41.781808 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 4 04:19:41.781840 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Nov 4 04:19:41.781870 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Nov 4 04:19:41.781904 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Nov 4 04:19:41.781989 systemd-journald[1487]: Collecting audit messages is disabled. Nov 4 04:19:41.782042 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Nov 4 04:19:41.782075 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Nov 4 04:19:41.782108 systemd-journald[1487]: Journal started Nov 4 04:19:41.782156 systemd-journald[1487]: Runtime Journal (/run/log/journal/ec2c63d81c5fb9c05ff2b358c93054fd) is 8M, max 75.3M, 67.3M free. 
Nov 4 04:19:41.034106 systemd[1]: Queued start job for default target multi-user.target. Nov 4 04:19:41.049126 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Nov 4 04:19:41.050040 systemd[1]: systemd-journald.service: Deactivated successfully. Nov 4 04:19:41.786881 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Nov 4 04:19:41.791163 kernel: ACPI: bus type drm_connector registered Nov 4 04:19:41.791204 systemd[1]: Started systemd-journald.service - Journal Service. Nov 4 04:19:41.805504 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Nov 4 04:19:41.809134 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 4 04:19:41.809826 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 4 04:19:41.842185 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Nov 4 04:19:41.845438 systemd[1]: Reached target network-pre.target - Preparation for Network. Nov 4 04:19:41.849867 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Nov 4 04:19:41.854744 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Nov 4 04:19:41.861807 systemd[1]: Starting systemd-sysusers.service - Create System Users... Nov 4 04:19:41.912829 kernel: loop1: detected capacity change from 0 to 109736 Nov 4 04:19:41.915274 systemd-journald[1487]: Time spent on flushing to /var/log/journal/ec2c63d81c5fb9c05ff2b358c93054fd is 76.331ms for 916 entries. Nov 4 04:19:41.915274 systemd-journald[1487]: System Journal (/var/log/journal/ec2c63d81c5fb9c05ff2b358c93054fd) is 8M, max 588.1M, 580.1M free. Nov 4 04:19:42.025304 systemd-journald[1487]: Received client request to flush runtime journal. Nov 4 04:19:41.927631 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 4 04:19:41.932536 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Nov 4 04:19:42.011685 systemd[1]: Finished systemd-sysusers.service - Create System Users. Nov 4 04:19:42.017702 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Nov 4 04:19:42.021978 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 4 04:19:42.029975 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Nov 4 04:19:42.052531 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Nov 4 04:19:42.074545 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 4 04:19:42.093609 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Nov 4 04:19:42.149875 systemd-tmpfiles[1553]: ACLs are not supported, ignoring. Nov 4 04:19:42.149918 systemd-tmpfiles[1553]: ACLs are not supported, ignoring. Nov 4 04:19:42.166704 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 4 04:19:42.192555 systemd[1]: Started systemd-userdbd.service - User Database Manager. Nov 4 04:19:42.257449 kernel: loop2: detected capacity change from 0 to 100192 Nov 4 04:19:42.378473 systemd-resolved[1552]: Positive Trust Anchors: Nov 4 04:19:42.378970 systemd-resolved[1552]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 4 04:19:42.379061 systemd-resolved[1552]: . 
IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Nov 4 04:19:42.379198 systemd-resolved[1552]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 4 04:19:42.392449 systemd-resolved[1552]: Defaulting to hostname 'linux'. Nov 4 04:19:42.394871 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 4 04:19:42.397552 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Nov 4 04:19:42.628479 kernel: loop3: detected capacity change from 0 to 61504 Nov 4 04:19:42.773144 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Nov 4 04:19:42.781541 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 4 04:19:42.812440 kernel: loop4: detected capacity change from 0 to 211168 Nov 4 04:19:42.844602 systemd-udevd[1569]: Using default interface naming scheme 'v257'. Nov 4 04:19:42.851457 kernel: loop5: detected capacity change from 0 to 109736 Nov 4 04:19:42.865490 kernel: loop6: detected capacity change from 0 to 100192 Nov 4 04:19:42.878503 kernel: loop7: detected capacity change from 0 to 61504 Nov 4 04:19:42.893456 kernel: loop1: detected capacity change from 0 to 211168 Nov 4 04:19:42.915530 (sd-merge)[1572]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw', 'oem-ami.raw'. Nov 4 04:19:42.922161 (sd-merge)[1572]: Merged extensions into '/usr'. Nov 4 04:19:42.930113 systemd[1]: Reload requested from client PID 1521 ('systemd-sysext') (unit systemd-sysext.service)... Nov 4 04:19:42.930146 systemd[1]: Reloading... Nov 4 04:19:43.098446 zram_generator::config[1608]: No configuration found. Nov 4 04:19:43.134569 (udev-worker)[1602]: Network interface NamePolicy= disabled on kernel command line. Nov 4 04:19:43.707382 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Nov 4 04:19:43.708650 systemd[1]: Reloading finished in 777 ms. Nov 4 04:19:43.751768 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 4 04:19:43.755480 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Nov 4 04:19:43.828784 systemd[1]: Starting ensure-sysext.service... Nov 4 04:19:43.836140 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 4 04:19:43.843704 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 4 04:19:43.850784 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 4 04:19:43.880021 systemd[1]: Reload requested from client PID 1732 ('systemctl') (unit ensure-sysext.service)... Nov 4 04:19:43.880055 systemd[1]: Reloading... Nov 4 04:19:43.962287 systemd-tmpfiles[1737]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Nov 4 04:19:43.963084 systemd-tmpfiles[1737]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. 
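The positive trust anchors systemd-resolved lists just above are the root zone's DNSSEC DS records: key tags 20326 and 38696, algorithm 8 (RSA/SHA-256), digest type 2 (SHA-256). A tiny parser for that presentation format, included only to make the fields explicit:

```python
# Parse a DS record in presentation format, e.g. ". IN DS 20326 8 2 <sha256-digest>".
from typing import NamedTuple

class DSRecord(NamedTuple):
    key_tag: int
    algorithm: int
    digest_type: int
    digest: str

def parse_ds(line: str) -> DSRecord:
    owner, klass, rtype, key_tag, alg, dtype, digest = line.split()
    assert (klass, rtype) == ("IN", "DS"), "not a DS record"
    return DSRecord(int(key_tag), int(alg), int(dtype), digest.lower())

# The second anchor from the log above:
print(parse_ds(". IN DS 38696 8 2 "
               "683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16"))
```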
Nov 4 04:19:43.963984 systemd-tmpfiles[1737]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Nov 4 04:19:43.964976 systemd-tmpfiles[1737]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Nov 4 04:19:43.967793 systemd-tmpfiles[1737]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Nov 4 04:19:43.968556 systemd-tmpfiles[1737]: ACLs are not supported, ignoring. Nov 4 04:19:43.968796 systemd-tmpfiles[1737]: ACLs are not supported, ignoring. Nov 4 04:19:43.982526 systemd-tmpfiles[1737]: Detected autofs mount point /boot during canonicalization of boot. Nov 4 04:19:43.982894 systemd-tmpfiles[1737]: Skipping /boot Nov 4 04:19:44.007950 systemd-tmpfiles[1737]: Detected autofs mount point /boot during canonicalization of boot. Nov 4 04:19:44.007973 systemd-tmpfiles[1737]: Skipping /boot Nov 4 04:19:44.123445 zram_generator::config[1813]: No configuration found. Nov 4 04:19:44.575203 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Nov 4 04:19:44.579358 systemd[1]: Reloading finished in 698 ms. Nov 4 04:19:44.622310 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 4 04:19:44.628275 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 4 04:19:44.693842 systemd[1]: Starting audit-rules.service - Load Audit Rules... Nov 4 04:19:44.700861 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Nov 4 04:19:44.704267 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 4 04:19:44.711214 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Nov 4 04:19:44.719693 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 4 04:19:44.730022 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 4 04:19:44.740591 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 4 04:19:44.743201 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 4 04:19:44.746202 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Nov 4 04:19:44.752040 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 4 04:19:44.754944 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Nov 4 04:19:44.761988 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Nov 4 04:19:44.772482 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 4 04:19:44.772867 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 4 04:19:44.773078 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). 
Nov 4 04:19:44.784335 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 4 04:19:44.794123 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 4 04:19:44.796806 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 4 04:19:44.797066 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 4 04:19:44.797380 systemd[1]: Reached target time-set.target - System Time Set. Nov 4 04:19:44.816197 systemd[1]: Finished ensure-sysext.service. Nov 4 04:19:44.834335 systemd-networkd[1736]: lo: Link UP Nov 4 04:19:44.834860 systemd-networkd[1736]: lo: Gained carrier Nov 4 04:19:44.838520 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 4 04:19:44.844782 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 4 04:19:44.849798 systemd-networkd[1736]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Nov 4 04:19:44.849813 systemd-networkd[1736]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 4 04:19:44.860798 systemd-networkd[1736]: eth0: Link UP Nov 4 04:19:44.862932 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 4 04:19:44.867264 systemd-networkd[1736]: eth0: Gained carrier Nov 4 04:19:44.867316 systemd-networkd[1736]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Nov 4 04:19:44.870253 systemd[1]: Reached target network.target - Network. Nov 4 04:19:44.877086 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Nov 4 04:19:44.885021 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Nov 4 04:19:44.889693 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 4 04:19:44.890545 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 4 04:19:44.895055 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 4 04:19:44.896309 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 4 04:19:44.897146 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 4 04:19:44.898569 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 4 04:19:44.906223 systemd-networkd[1736]: eth0: DHCPv4 address 172.31.28.40/20, gateway 172.31.16.1 acquired from 172.31.16.1 Nov 4 04:19:44.917095 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 4 04:19:44.917385 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 4 04:19:44.928572 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Nov 4 04:19:44.940754 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Nov 4 04:19:44.964047 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. 
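The DHCPv4 lease systemd-networkd reports above (172.31.28.40/20, gateway 172.31.16.1, acquired from 172.31.16.1) places eth0 in a 4096-address subnet. A quick standard-library check of that arithmetic:

```python
# Verify the subnet implied by the logged lease 172.31.28.40/20 and its gateway.
import ipaddress

iface = ipaddress.ip_interface("172.31.28.40/20")
gateway = ipaddress.ip_address("172.31.16.1")

print(iface.network)                # 172.31.16.0/20
print(iface.network.num_addresses)  # 4096
print(gateway in iface.network)     # True: the gateway is on-link
```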
Nov 4 04:19:44.984664 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Nov 4 04:19:45.037376 augenrules[1915]: No rules Nov 4 04:19:45.039869 systemd[1]: audit-rules.service: Deactivated successfully. Nov 4 04:19:45.040319 systemd[1]: Finished audit-rules.service - Load Audit Rules. Nov 4 04:19:45.137278 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Nov 4 04:19:45.141750 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Nov 4 04:19:46.313620 systemd-networkd[1736]: eth0: Gained IPv6LL Nov 4 04:19:46.318552 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Nov 4 04:19:46.321824 systemd[1]: Reached target network-online.target - Network is Online. Nov 4 04:19:47.660428 ldconfig[1872]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Nov 4 04:19:47.672512 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Nov 4 04:19:47.677700 systemd[1]: Starting systemd-update-done.service - Update is Completed... Nov 4 04:19:47.704788 systemd[1]: Finished systemd-update-done.service - Update is Completed. Nov 4 04:19:47.708003 systemd[1]: Reached target sysinit.target - System Initialization. Nov 4 04:19:47.710732 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Nov 4 04:19:47.713609 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Nov 4 04:19:47.717020 systemd[1]: Started logrotate.timer - Daily rotation of log files. Nov 4 04:19:47.724198 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Nov 4 04:19:47.727419 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Nov 4 04:19:47.730308 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Nov 4 04:19:47.730368 systemd[1]: Reached target paths.target - Path Units. Nov 4 04:19:47.732480 systemd[1]: Reached target timers.target - Timer Units. Nov 4 04:19:47.735986 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Nov 4 04:19:47.741191 systemd[1]: Starting docker.socket - Docker Socket for the API... Nov 4 04:19:47.747595 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Nov 4 04:19:47.750770 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Nov 4 04:19:47.753835 systemd[1]: Reached target ssh-access.target - SSH Access Available. Nov 4 04:19:47.759718 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Nov 4 04:19:47.762662 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Nov 4 04:19:47.767185 systemd[1]: Listening on docker.socket - Docker Socket for the API. Nov 4 04:19:47.769988 systemd[1]: Reached target sockets.target - Socket Units. Nov 4 04:19:47.772447 systemd[1]: Reached target basic.target - Basic System. Nov 4 04:19:47.774715 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. 
Nov 4 04:19:47.774766 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Nov 4 04:19:47.776602 systemd[1]: Starting containerd.service - containerd container runtime... Nov 4 04:19:47.783698 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Nov 4 04:19:47.788888 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Nov 4 04:19:47.794101 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Nov 4 04:19:47.803068 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Nov 4 04:19:47.809009 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Nov 4 04:19:47.811458 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Nov 4 04:19:47.825801 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 4 04:19:47.833510 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Nov 4 04:19:47.839707 systemd[1]: Started ntpd.service - Network Time Service. Nov 4 04:19:47.846821 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Nov 4 04:19:47.857167 jq[1932]: false Nov 4 04:19:47.859129 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Nov 4 04:19:47.867723 systemd[1]: Starting setup-oem.service - Setup OEM... Nov 4 04:19:47.891768 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Nov 4 04:19:47.903144 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Nov 4 04:19:47.915342 systemd[1]: Starting systemd-logind.service - User Login Management... Nov 4 04:19:47.918595 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Nov 4 04:19:47.919481 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Nov 4 04:19:47.922664 systemd[1]: Starting update-engine.service - Update Engine... Nov 4 04:19:47.930773 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Nov 4 04:19:47.943930 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Nov 4 04:19:47.947371 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Nov 4 04:19:47.949694 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Nov 4 04:19:47.952266 extend-filesystems[1933]: Found /dev/nvme0n1p6 Nov 4 04:19:47.996609 extend-filesystems[1933]: Found /dev/nvme0n1p9 Nov 4 04:19:47.997744 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Nov 4 04:19:48.007772 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Nov 4 04:19:48.027018 extend-filesystems[1933]: Checking size of /dev/nvme0n1p9 Nov 4 04:19:48.035679 jq[1949]: true Nov 4 04:19:48.053514 systemd[1]: motdgen.service: Deactivated successfully. Nov 4 04:19:48.056699 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. 
Nov 4 04:19:48.130789 tar[1954]: linux-arm64/LICENSE Nov 4 04:19:48.135695 tar[1954]: linux-arm64/helm Nov 4 04:19:48.151326 extend-filesystems[1933]: Resized partition /dev/nvme0n1p9 Nov 4 04:19:48.181188 dbus-daemon[1930]: [system] SELinux support is enabled Nov 4 04:19:48.181587 systemd[1]: Started dbus.service - D-Bus System Message Bus. Nov 4 04:19:48.193049 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Nov 4 04:19:48.193131 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Nov 4 04:19:48.197623 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Nov 4 04:19:48.197663 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Nov 4 04:19:48.211769 jq[1976]: true Nov 4 04:19:48.212892 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Nov 4 04:19:48.224923 dbus-daemon[1930]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1736 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Nov 4 04:19:48.243104 ntpd[1936]: 4 Nov 04:19:48 ntpd[1936]: ntpd 4.2.8p18@1.4062-o Tue Nov 4 02:33:16 UTC 2025 (1): Starting Nov 4 04:19:48.243104 ntpd[1936]: 4 Nov 04:19:48 ntpd[1936]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Nov 4 04:19:48.243104 ntpd[1936]: 4 Nov 04:19:48 ntpd[1936]: ---------------------------------------------------- Nov 4 04:19:48.243104 ntpd[1936]: 4 Nov 04:19:48 ntpd[1936]: ntp-4 is maintained by Network Time Foundation, Nov 4 04:19:48.243104 ntpd[1936]: 4 Nov 04:19:48 ntpd[1936]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Nov 4 04:19:48.243104 ntpd[1936]: 4 Nov 04:19:48 ntpd[1936]: corporation. Support and training for ntp-4 are Nov 4 04:19:48.243104 ntpd[1936]: 4 Nov 04:19:48 ntpd[1936]: available at https://www.nwtime.org/support Nov 4 04:19:48.243104 ntpd[1936]: 4 Nov 04:19:48 ntpd[1936]: ---------------------------------------------------- Nov 4 04:19:48.241129 ntpd[1936]: ntpd 4.2.8p18@1.4062-o Tue Nov 4 02:33:16 UTC 2025 (1): Starting Nov 4 04:19:48.243450 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Nov 4 04:19:48.241227 ntpd[1936]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Nov 4 04:19:48.250158 systemd[1]: Finished setup-oem.service - Setup OEM. Nov 4 04:19:48.241246 ntpd[1936]: ---------------------------------------------------- Nov 4 04:19:48.241262 ntpd[1936]: ntp-4 is maintained by Network Time Foundation, Nov 4 04:19:48.261107 extend-filesystems[2003]: resize2fs 1.47.3 (8-Jul-2025) Nov 4 04:19:48.241278 ntpd[1936]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Nov 4 04:19:48.284166 ntpd[1936]: 4 Nov 04:19:48 ntpd[1936]: proto: precision = 0.108 usec (-23) Nov 4 04:19:48.284221 update_engine[1948]: I20251104 04:19:48.264890 1948 main.cc:92] Flatcar Update Engine starting Nov 4 04:19:48.266883 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Nov 4 04:19:48.241295 ntpd[1936]: corporation. 
Support and training for ntp-4 are Nov 4 04:19:48.241310 ntpd[1936]: available at https://www.nwtime.org/support Nov 4 04:19:48.241326 ntpd[1936]: ---------------------------------------------------- Nov 4 04:19:48.271177 ntpd[1936]: proto: precision = 0.108 usec (-23) Nov 4 04:19:48.290300 ntpd[1936]: basedate set to 2025-10-23 Nov 4 04:19:48.290584 ntpd[1936]: 4 Nov 04:19:48 ntpd[1936]: basedate set to 2025-10-23 Nov 4 04:19:48.290584 ntpd[1936]: 4 Nov 04:19:48 ntpd[1936]: gps base set to 2025-10-26 (week 2390) Nov 4 04:19:48.290337 ntpd[1936]: gps base set to 2025-10-26 (week 2390) Nov 4 04:19:48.291756 systemd[1]: Started update-engine.service - Update Engine. Nov 4 04:19:48.303682 ntpd[1936]: 4 Nov 04:19:48 ntpd[1936]: Listen and drop on 0 v6wildcard [::]:123 Nov 4 04:19:48.303682 ntpd[1936]: 4 Nov 04:19:48 ntpd[1936]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Nov 4 04:19:48.303682 ntpd[1936]: 4 Nov 04:19:48 ntpd[1936]: Listen normally on 2 lo 127.0.0.1:123 Nov 4 04:19:48.303682 ntpd[1936]: 4 Nov 04:19:48 ntpd[1936]: Listen normally on 3 eth0 172.31.28.40:123 Nov 4 04:19:48.303682 ntpd[1936]: 4 Nov 04:19:48 ntpd[1936]: Listen normally on 4 lo [::1]:123 Nov 4 04:19:48.303682 ntpd[1936]: 4 Nov 04:19:48 ntpd[1936]: Listen normally on 5 eth0 [fe80::4a0:aff:fe3a:31d9%2]:123 Nov 4 04:19:48.303682 ntpd[1936]: 4 Nov 04:19:48 ntpd[1936]: Listening on routing socket on fd #22 for interface updates Nov 4 04:19:48.304019 update_engine[1948]: I20251104 04:19:48.297359 1948 update_check_scheduler.cc:74] Next update check in 10m16s Nov 4 04:19:48.299609 ntpd[1936]: Listen and drop on 0 v6wildcard [::]:123 Nov 4 04:19:48.299681 ntpd[1936]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Nov 4 04:19:48.299992 ntpd[1936]: Listen normally on 2 lo 127.0.0.1:123 Nov 4 04:19:48.300037 ntpd[1936]: Listen normally on 3 eth0 172.31.28.40:123 Nov 4 04:19:48.300083 ntpd[1936]: Listen normally on 4 lo [::1]:123 Nov 4 04:19:48.300129 ntpd[1936]: Listen normally on 5 eth0 [fe80::4a0:aff:fe3a:31d9%2]:123 Nov 4 04:19:48.300170 ntpd[1936]: Listening on routing socket on fd #22 for interface updates Nov 4 04:19:48.323436 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 1617920 to 2604027 blocks Nov 4 04:19:48.332862 systemd[1]: Started locksmithd.service - Cluster reboot manager. Nov 4 04:19:48.359243 ntpd[1936]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Nov 4 04:19:48.359478 ntpd[1936]: 4 Nov 04:19:48 ntpd[1936]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Nov 4 04:19:48.359478 ntpd[1936]: 4 Nov 04:19:48 ntpd[1936]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Nov 4 04:19:48.359306 ntpd[1936]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Nov 4 04:19:48.362444 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 2604027 Nov 4 04:19:48.400502 extend-filesystems[2003]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Nov 4 04:19:48.400502 extend-filesystems[2003]: old_desc_blocks = 1, new_desc_blocks = 2 Nov 4 04:19:48.400502 extend-filesystems[2003]: The filesystem on /dev/nvme0n1p9 is now 2604027 (4k) blocks long. Nov 4 04:19:48.414552 extend-filesystems[1933]: Resized filesystem in /dev/nvme0n1p9 Nov 4 04:19:48.408094 systemd[1]: extend-filesystems.service: Deactivated successfully. Nov 4 04:19:48.420936 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. 
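The extend-filesystems/resize2fs lines above grow the root ext4 filesystem online from 1,617,920 to 2,604,027 4 KiB blocks, i.e. from roughly 6.2 GiB to roughly 9.9 GiB. The arithmetic behind those figures:

```python
# Block counts taken from the resize2fs messages above; 4 KiB ext4 block size.
BLOCK = 4096
before_blocks, after_blocks = 1_617_920, 2_604_027

def gib(blocks: int) -> float:
    return blocks * BLOCK / 2**30

print(f"before: {gib(before_blocks):.2f} GiB")                 # ~6.17 GiB
print(f"after:  {gib(after_blocks):.2f} GiB")                  # ~9.93 GiB
print(f"gained: {gib(after_blocks - before_blocks):.2f} GiB")  # ~3.76 GiB
```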
Nov 4 04:19:48.467445 coreos-metadata[1929]: Nov 04 04:19:48.466 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Nov 4 04:19:48.474600 coreos-metadata[1929]: Nov 04 04:19:48.471 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Nov 4 04:19:48.479781 coreos-metadata[1929]: Nov 04 04:19:48.479 INFO Fetch successful Nov 4 04:19:48.479781 coreos-metadata[1929]: Nov 04 04:19:48.479 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Nov 4 04:19:48.484105 coreos-metadata[1929]: Nov 04 04:19:48.484 INFO Fetch successful Nov 4 04:19:48.484105 coreos-metadata[1929]: Nov 04 04:19:48.484 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Nov 4 04:19:48.488766 coreos-metadata[1929]: Nov 04 04:19:48.488 INFO Fetch successful Nov 4 04:19:48.488766 coreos-metadata[1929]: Nov 04 04:19:48.488 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Nov 4 04:19:48.491513 coreos-metadata[1929]: Nov 04 04:19:48.491 INFO Fetch successful Nov 4 04:19:48.491513 coreos-metadata[1929]: Nov 04 04:19:48.491 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Nov 4 04:19:48.493694 coreos-metadata[1929]: Nov 04 04:19:48.493 INFO Fetch failed with 404: resource not found Nov 4 04:19:48.493887 coreos-metadata[1929]: Nov 04 04:19:48.493 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Nov 4 04:19:48.495261 coreos-metadata[1929]: Nov 04 04:19:48.495 INFO Fetch successful Nov 4 04:19:48.495261 coreos-metadata[1929]: Nov 04 04:19:48.495 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Nov 4 04:19:48.497255 coreos-metadata[1929]: Nov 04 04:19:48.497 INFO Fetch successful Nov 4 04:19:48.497255 coreos-metadata[1929]: Nov 04 04:19:48.497 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Nov 4 04:19:48.497929 coreos-metadata[1929]: Nov 04 04:19:48.497 INFO Fetch successful Nov 4 04:19:48.497929 coreos-metadata[1929]: Nov 04 04:19:48.497 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Nov 4 04:19:48.503524 coreos-metadata[1929]: Nov 04 04:19:48.503 INFO Fetch successful Nov 4 04:19:48.503524 coreos-metadata[1929]: Nov 04 04:19:48.503 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Nov 4 04:19:48.503524 coreos-metadata[1929]: Nov 04 04:19:48.503 INFO Fetch successful Nov 4 04:19:48.632559 bash[2027]: Updated "/home/core/.ssh/authorized_keys" Nov 4 04:19:48.636243 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Nov 4 04:19:48.651781 systemd[1]: Starting sshkeys.service... Nov 4 04:19:48.694597 amazon-ssm-agent[2004]: Initializing new seelog logger Nov 4 04:19:48.695142 amazon-ssm-agent[2004]: New Seelog Logger Creation Complete Nov 4 04:19:48.695142 amazon-ssm-agent[2004]: 2025/11/04 04:19:48 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Nov 4 04:19:48.695142 amazon-ssm-agent[2004]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Nov 4 04:19:48.698180 amazon-ssm-agent[2004]: 2025/11/04 04:19:48 processing appconfig overrides Nov 4 04:19:48.706303 amazon-ssm-agent[2004]: 2025/11/04 04:19:48 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Nov 4 04:19:48.706303 amazon-ssm-agent[2004]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. 
Nov 4 04:19:48.706303 amazon-ssm-agent[2004]: 2025/11/04 04:19:48 processing appconfig overrides Nov 4 04:19:48.706303 amazon-ssm-agent[2004]: 2025-11-04 04:19:48.6990 INFO Proxy environment variables: Nov 4 04:19:48.706303 amazon-ssm-agent[2004]: 2025/11/04 04:19:48 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Nov 4 04:19:48.706303 amazon-ssm-agent[2004]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Nov 4 04:19:48.706303 amazon-ssm-agent[2004]: 2025/11/04 04:19:48 processing appconfig overrides Nov 4 04:19:48.725009 amazon-ssm-agent[2004]: 2025/11/04 04:19:48 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Nov 4 04:19:48.725009 amazon-ssm-agent[2004]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Nov 4 04:19:48.725556 amazon-ssm-agent[2004]: 2025/11/04 04:19:48 processing appconfig overrides Nov 4 04:19:48.755389 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Nov 4 04:19:48.764240 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Nov 4 04:19:48.775859 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Nov 4 04:19:48.783163 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Nov 4 04:19:48.795057 systemd-logind[1946]: Watching system buttons on /dev/input/event0 (Power Button) Nov 4 04:19:48.795120 systemd-logind[1946]: Watching system buttons on /dev/input/event1 (Sleep Button) Nov 4 04:19:48.800647 systemd-logind[1946]: New seat seat0. Nov 4 04:19:48.802775 amazon-ssm-agent[2004]: 2025-11-04 04:19:48.6991 INFO https_proxy: Nov 4 04:19:48.812208 systemd[1]: Started systemd-logind.service - User Login Management. Nov 4 04:19:48.905008 amazon-ssm-agent[2004]: 2025-11-04 04:19:48.6991 INFO http_proxy: Nov 4 04:19:49.006227 amazon-ssm-agent[2004]: 2025-11-04 04:19:48.6991 INFO no_proxy: Nov 4 04:19:49.106701 amazon-ssm-agent[2004]: 2025-11-04 04:19:48.6993 INFO Checking if agent identity type OnPrem can be assumed Nov 4 04:19:49.165365 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Nov 4 04:19:49.171220 dbus-daemon[1930]: [system] Successfully activated service 'org.freedesktop.hostname1' Nov 4 04:19:49.177095 dbus-daemon[1930]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.7' (uid=0 pid=2002 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Nov 4 04:19:49.191195 systemd[1]: Starting polkit.service - Authorization Manager... 
Nov 4 04:19:49.223525 amazon-ssm-agent[2004]: 2025-11-04 04:19:48.7006 INFO Checking if agent identity type EC2 can be assumed Nov 4 04:19:49.244639 locksmithd[2006]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Nov 4 04:19:49.314311 coreos-metadata[2059]: Nov 04 04:19:49.313 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Nov 4 04:19:49.316149 coreos-metadata[2059]: Nov 04 04:19:49.315 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Nov 4 04:19:49.320305 coreos-metadata[2059]: Nov 04 04:19:49.317 INFO Fetch successful Nov 4 04:19:49.320305 coreos-metadata[2059]: Nov 04 04:19:49.317 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Nov 4 04:19:49.321172 coreos-metadata[2059]: Nov 04 04:19:49.320 INFO Fetch successful Nov 4 04:19:49.323422 amazon-ssm-agent[2004]: 2025-11-04 04:19:49.2496 INFO Agent will take identity from EC2 Nov 4 04:19:49.325102 unknown[2059]: wrote ssh authorized keys file for user: core Nov 4 04:19:49.345521 containerd[1975]: time="2025-11-04T04:19:49Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Nov 4 04:19:49.353557 containerd[1975]: time="2025-11-04T04:19:49.351980695Z" level=info msg="starting containerd" revision=75cb2b7193e4e490e9fbdc236c0e811ccaba3376 version=v2.1.4 Nov 4 04:19:49.422849 amazon-ssm-agent[2004]: 2025-11-04 04:19:49.2522 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.3.0.0 Nov 4 04:19:49.455995 containerd[1975]: time="2025-11-04T04:19:49.453908659Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="13.836µs" Nov 4 04:19:49.455995 containerd[1975]: time="2025-11-04T04:19:49.453963427Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Nov 4 04:19:49.455995 containerd[1975]: time="2025-11-04T04:19:49.454031311Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Nov 4 04:19:49.455995 containerd[1975]: time="2025-11-04T04:19:49.454059223Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Nov 4 04:19:49.455995 containerd[1975]: time="2025-11-04T04:19:49.454323739Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Nov 4 04:19:49.455995 containerd[1975]: time="2025-11-04T04:19:49.454355527Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Nov 4 04:19:49.459193 containerd[1975]: time="2025-11-04T04:19:49.457993735Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Nov 4 04:19:49.459306 update-ssh-keys[2154]: Updated "/home/core/.ssh/authorized_keys" Nov 4 04:19:49.464093 containerd[1975]: time="2025-11-04T04:19:49.458044939Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Nov 4 04:19:49.464093 containerd[1975]: time="2025-11-04T04:19:49.461179543Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs 
type=io.containerd.snapshotter.v1 Nov 4 04:19:49.464093 containerd[1975]: time="2025-11-04T04:19:49.461219863Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Nov 4 04:19:49.464093 containerd[1975]: time="2025-11-04T04:19:49.461260807Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Nov 4 04:19:49.464093 containerd[1975]: time="2025-11-04T04:19:49.461283871Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Nov 4 04:19:49.463525 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Nov 4 04:19:49.476254 containerd[1975]: time="2025-11-04T04:19:49.470187920Z" level=info msg="skip loading plugin" error="EROFS unsupported, please `modprobe erofs`: skip plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Nov 4 04:19:49.476254 containerd[1975]: time="2025-11-04T04:19:49.470240612Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Nov 4 04:19:49.480882 containerd[1975]: time="2025-11-04T04:19:49.478505792Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Nov 4 04:19:49.480882 containerd[1975]: time="2025-11-04T04:19:49.478951568Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Nov 4 04:19:49.480882 containerd[1975]: time="2025-11-04T04:19:49.479010644Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Nov 4 04:19:49.480882 containerd[1975]: time="2025-11-04T04:19:49.479035868Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Nov 4 04:19:49.480882 containerd[1975]: time="2025-11-04T04:19:49.479104280Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Nov 4 04:19:49.479726 systemd[1]: Finished sshkeys.service. 
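The containerd plugin loader above skips several optional snapshotters and says why: btrfs (the backing filesystem is ext4), devmapper (not configured), erofs (kernel module not loaded), zfs (its state directory does not exist). These are informational skips rather than failures; the default overlayfs snapshotter is what the CRI plugin ends up using. If the erofs paths were ever wanted, the log's own hint is the starting point; a quick check, assuming erofs-utils is even available on the image:

  # Load the module the erofs snapshotter asks for (optional)
  sudo modprobe erofs
  # The erofs differ additionally needs mkfs.erofs on $PATH
  command -v mkfs.erofs || echo "mkfs.erofs not installed"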
Nov 4 04:19:49.490178 containerd[1975]: time="2025-11-04T04:19:49.489951704Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Nov 4 04:19:49.490178 containerd[1975]: time="2025-11-04T04:19:49.490132256Z" level=info msg="metadata content store policy set" policy=shared Nov 4 04:19:49.501851 containerd[1975]: time="2025-11-04T04:19:49.500380208Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Nov 4 04:19:49.501851 containerd[1975]: time="2025-11-04T04:19:49.500514164Z" level=info msg="loading plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Nov 4 04:19:49.501851 containerd[1975]: time="2025-11-04T04:19:49.500762396Z" level=info msg="skip loading plugin" error="could not find mkfs.erofs: exec: \"mkfs.erofs\": executable file not found in $PATH: skip plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Nov 4 04:19:49.501851 containerd[1975]: time="2025-11-04T04:19:49.500790296Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Nov 4 04:19:49.501851 containerd[1975]: time="2025-11-04T04:19:49.500818928Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Nov 4 04:19:49.501851 containerd[1975]: time="2025-11-04T04:19:49.500847824Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Nov 4 04:19:49.501851 containerd[1975]: time="2025-11-04T04:19:49.500875160Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Nov 4 04:19:49.501851 containerd[1975]: time="2025-11-04T04:19:49.500900288Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Nov 4 04:19:49.501851 containerd[1975]: time="2025-11-04T04:19:49.500929184Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Nov 4 04:19:49.501851 containerd[1975]: time="2025-11-04T04:19:49.501000032Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Nov 4 04:19:49.501851 containerd[1975]: time="2025-11-04T04:19:49.501027980Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Nov 4 04:19:49.501851 containerd[1975]: time="2025-11-04T04:19:49.501062480Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Nov 4 04:19:49.501851 containerd[1975]: time="2025-11-04T04:19:49.501100520Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Nov 4 04:19:49.501851 containerd[1975]: time="2025-11-04T04:19:49.501129512Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Nov 4 04:19:49.502526 containerd[1975]: time="2025-11-04T04:19:49.501358628Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Nov 4 04:19:49.502526 containerd[1975]: time="2025-11-04T04:19:49.501394640Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Nov 4 04:19:49.502526 containerd[1975]: time="2025-11-04T04:19:49.501449816Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Nov 4 04:19:49.502526 containerd[1975]: time="2025-11-04T04:19:49.501478124Z" level=info msg="loading 
plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Nov 4 04:19:49.502526 containerd[1975]: time="2025-11-04T04:19:49.501503420Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Nov 4 04:19:49.502526 containerd[1975]: time="2025-11-04T04:19:49.501528092Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Nov 4 04:19:49.502526 containerd[1975]: time="2025-11-04T04:19:49.501554984Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Nov 4 04:19:49.502526 containerd[1975]: time="2025-11-04T04:19:49.501590768Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Nov 4 04:19:49.502526 containerd[1975]: time="2025-11-04T04:19:49.501619304Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Nov 4 04:19:49.502526 containerd[1975]: time="2025-11-04T04:19:49.501646316Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Nov 4 04:19:49.502526 containerd[1975]: time="2025-11-04T04:19:49.501681980Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Nov 4 04:19:49.502526 containerd[1975]: time="2025-11-04T04:19:49.501728588Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Nov 4 04:19:49.517163 containerd[1975]: time="2025-11-04T04:19:49.516620228Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Nov 4 04:19:49.521544 containerd[1975]: time="2025-11-04T04:19:49.517342916Z" level=info msg="Start snapshots syncer" Nov 4 04:19:49.521544 containerd[1975]: time="2025-11-04T04:19:49.517399052Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Nov 4 04:19:49.521544 containerd[1975]: time="2025-11-04T04:19:49.518043980Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"cgroupWritable\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"\",\"binDirs\":[\"/opt/cni/bin\"],\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogLineSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Nov 4 04:19:49.521853 containerd[1975]: time="2025-11-04T04:19:49.518138204Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Nov 4 04:19:49.521853 containerd[1975]: time="2025-11-04T04:19:49.518295620Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Nov 4 04:19:49.521853 containerd[1975]: time="2025-11-04T04:19:49.518554052Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Nov 4 04:19:49.521853 containerd[1975]: time="2025-11-04T04:19:49.518616008Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Nov 4 04:19:49.521853 containerd[1975]: time="2025-11-04T04:19:49.518656472Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Nov 4 04:19:49.521853 containerd[1975]: time="2025-11-04T04:19:49.518694068Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Nov 4 04:19:49.521853 containerd[1975]: time="2025-11-04T04:19:49.518735144Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Nov 4 04:19:49.521853 containerd[1975]: time="2025-11-04T04:19:49.518764472Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Nov 4 04:19:49.521853 containerd[1975]: time="2025-11-04T04:19:49.518802344Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Nov 4 04:19:49.521853 containerd[1975]: time="2025-11-04T04:19:49.518838800Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Nov 4 04:19:49.521853 
containerd[1975]: time="2025-11-04T04:19:49.518876552Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Nov 4 04:19:49.521853 containerd[1975]: time="2025-11-04T04:19:49.518949464Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Nov 4 04:19:49.521853 containerd[1975]: time="2025-11-04T04:19:49.518986076Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Nov 4 04:19:49.521853 containerd[1975]: time="2025-11-04T04:19:49.519017984Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Nov 4 04:19:49.522465 containerd[1975]: time="2025-11-04T04:19:49.519052940Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Nov 4 04:19:49.522465 containerd[1975]: time="2025-11-04T04:19:49.519079796Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Nov 4 04:19:49.522465 containerd[1975]: time="2025-11-04T04:19:49.519114356Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Nov 4 04:19:49.522465 containerd[1975]: time="2025-11-04T04:19:49.519152012Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Nov 4 04:19:49.522465 containerd[1975]: time="2025-11-04T04:19:49.519189260Z" level=info msg="runtime interface created" Nov 4 04:19:49.522465 containerd[1975]: time="2025-11-04T04:19:49.519204080Z" level=info msg="created NRI interface" Nov 4 04:19:49.522465 containerd[1975]: time="2025-11-04T04:19:49.519228224Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Nov 4 04:19:49.522465 containerd[1975]: time="2025-11-04T04:19:49.519264116Z" level=info msg="Connect containerd service" Nov 4 04:19:49.522465 containerd[1975]: time="2025-11-04T04:19:49.519317144Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Nov 4 04:19:49.526820 amazon-ssm-agent[2004]: 2025-11-04 04:19:49.2522 INFO [amazon-ssm-agent] OS: linux, Arch: arm64 Nov 4 04:19:49.544779 containerd[1975]: time="2025-11-04T04:19:49.543282764Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 4 04:19:49.626381 amazon-ssm-agent[2004]: 2025-11-04 04:19:49.2522 INFO [amazon-ssm-agent] Starting Core Agent Nov 4 04:19:49.734641 amazon-ssm-agent[2004]: 2025-11-04 04:19:49.2522 INFO [amazon-ssm-agent] Registrar detected. 
Attempting registration Nov 4 04:19:49.807745 polkitd[2136]: Started polkitd version 126 Nov 4 04:19:49.834429 amazon-ssm-agent[2004]: 2025-11-04 04:19:49.2522 INFO [Registrar] Starting registrar module Nov 4 04:19:49.880683 polkitd[2136]: Loading rules from directory /etc/polkit-1/rules.d Nov 4 04:19:49.882736 polkitd[2136]: Loading rules from directory /run/polkit-1/rules.d Nov 4 04:19:49.882828 polkitd[2136]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Nov 4 04:19:49.895560 polkitd[2136]: Loading rules from directory /usr/local/share/polkit-1/rules.d Nov 4 04:19:49.895649 polkitd[2136]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Nov 4 04:19:49.895735 polkitd[2136]: Loading rules from directory /usr/share/polkit-1/rules.d Nov 4 04:19:49.902800 polkitd[2136]: Finished loading, compiling and executing 2 rules Nov 4 04:19:49.914090 dbus-daemon[1930]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Nov 4 04:19:49.917199 polkitd[2136]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Nov 4 04:19:49.921613 systemd[1]: Started polkit.service - Authorization Manager. Nov 4 04:19:49.934421 amazon-ssm-agent[2004]: 2025-11-04 04:19:49.2696 INFO [EC2Identity] Checking disk for registration info Nov 4 04:19:50.008995 systemd-hostnamed[2002]: Hostname set to (transient) Nov 4 04:19:50.009033 systemd-resolved[1552]: System hostname changed to 'ip-172-31-28-40'. Nov 4 04:19:50.034956 amazon-ssm-agent[2004]: 2025-11-04 04:19:49.2697 INFO [EC2Identity] No registration info found for ec2 instance, attempting registration Nov 4 04:19:50.141497 amazon-ssm-agent[2004]: 2025-11-04 04:19:49.2697 INFO [EC2Identity] Generating registration keypair Nov 4 04:19:50.266645 containerd[1975]: time="2025-11-04T04:19:50.266246587Z" level=info msg="Start subscribing containerd event" Nov 4 04:19:50.266645 containerd[1975]: time="2025-11-04T04:19:50.266339839Z" level=info msg="Start recovering state" Nov 4 04:19:50.267053 containerd[1975]: time="2025-11-04T04:19:50.266850163Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Nov 4 04:19:50.268085 containerd[1975]: time="2025-11-04T04:19:50.267533371Z" level=info msg="Start event monitor" Nov 4 04:19:50.268085 containerd[1975]: time="2025-11-04T04:19:50.267585775Z" level=info msg="Start cni network conf syncer for default" Nov 4 04:19:50.268085 containerd[1975]: time="2025-11-04T04:19:50.267606427Z" level=info msg="Start streaming server" Nov 4 04:19:50.268085 containerd[1975]: time="2025-11-04T04:19:50.267625819Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Nov 4 04:19:50.268085 containerd[1975]: time="2025-11-04T04:19:50.267641947Z" level=info msg="runtime interface starting up..." Nov 4 04:19:50.268085 containerd[1975]: time="2025-11-04T04:19:50.267659431Z" level=info msg="starting plugins..." Nov 4 04:19:50.268085 containerd[1975]: time="2025-11-04T04:19:50.267690835Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Nov 4 04:19:50.276062 containerd[1975]: time="2025-11-04T04:19:50.269581231Z" level=info msg=serving... address=/run/containerd/containerd.sock Nov 4 04:19:50.276062 containerd[1975]: time="2025-11-04T04:19:50.273257648Z" level=info msg="containerd successfully booted in 0.930003s" Nov 4 04:19:50.270171 systemd[1]: Started containerd.service - containerd container runtime. 
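containerd comes up cleanly except for one CRI-plugin error carried over from the startup above: no CNI network config in /etc/cni/net.d, so pod networking stays uninitialized until something writes a conflist there (in a kubeadm-style bootstrap that is normally the CNI add-on installed after the control plane is up). For illustration only, a minimal bridge/host-local conflist of the shape containerd accepts; the name and subnet are invented, and the plugin binaries are expected under /opt/cni/bin per the binDirs value in the config dump above:

  sudo mkdir -p /etc/cni/net.d
  sudo tee /etc/cni/net.d/10-example.conflist <<'EOF'
  {
    "cniVersion": "1.0.0",
    "name": "example-net",
    "plugins": [
      {
        "type": "bridge",
        "bridge": "cni0",
        "isGateway": true,
        "ipMasq": true,
        "ipam": { "type": "host-local", "subnet": "10.88.0.0/16" }
      },
      { "type": "portmap", "capabilities": { "portMappings": true } }
    ]
  }
  EOF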
Nov 4 04:19:50.487241 amazon-ssm-agent[2004]: 2025-11-04 04:19:50.4866 INFO [EC2Identity] Checking write access before registering Nov 4 04:19:50.509434 tar[1954]: linux-arm64/README.md Nov 4 04:19:50.550426 amazon-ssm-agent[2004]: 2025/11/04 04:19:50 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Nov 4 04:19:50.550426 amazon-ssm-agent[2004]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Nov 4 04:19:50.550426 amazon-ssm-agent[2004]: 2025/11/04 04:19:50 processing appconfig overrides Nov 4 04:19:50.555873 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Nov 4 04:19:50.587626 amazon-ssm-agent[2004]: 2025-11-04 04:19:50.4880 INFO [EC2Identity] Registering EC2 instance with Systems Manager Nov 4 04:19:50.604705 amazon-ssm-agent[2004]: 2025-11-04 04:19:50.5492 INFO [EC2Identity] EC2 registration was successful. Nov 4 04:19:50.604705 amazon-ssm-agent[2004]: 2025-11-04 04:19:50.5493 INFO [amazon-ssm-agent] Registration attempted. Resuming core agent startup. Nov 4 04:19:50.604880 amazon-ssm-agent[2004]: 2025-11-04 04:19:50.5494 INFO [CredentialRefresher] credentialRefresher has started Nov 4 04:19:50.604880 amazon-ssm-agent[2004]: 2025-11-04 04:19:50.5494 INFO [CredentialRefresher] Starting credentials refresher loop Nov 4 04:19:50.604880 amazon-ssm-agent[2004]: 2025-11-04 04:19:50.6042 INFO EC2RoleProvider Successfully connected with instance profile role credentials Nov 4 04:19:50.604880 amazon-ssm-agent[2004]: 2025-11-04 04:19:50.6046 INFO [CredentialRefresher] Credentials ready Nov 4 04:19:50.687974 amazon-ssm-agent[2004]: 2025-11-04 04:19:50.6048 INFO [CredentialRefresher] Next credential rotation will be in 29.9999916812 minutes Nov 4 04:19:50.883486 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 4 04:19:50.897300 (kubelet)[2205]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 4 04:19:51.306880 sshd_keygen[1980]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Nov 4 04:19:51.352818 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Nov 4 04:19:51.360145 systemd[1]: Starting issuegen.service - Generate /run/issue... Nov 4 04:19:51.390203 systemd[1]: issuegen.service: Deactivated successfully. Nov 4 04:19:51.390776 systemd[1]: Finished issuegen.service - Generate /run/issue. Nov 4 04:19:51.396321 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Nov 4 04:19:51.433022 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Nov 4 04:19:51.442953 systemd[1]: Started getty@tty1.service - Getty on tty1. Nov 4 04:19:51.449822 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Nov 4 04:19:51.454711 systemd[1]: Reached target getty.target - Login Prompts. Nov 4 04:19:51.457492 systemd[1]: Reached target multi-user.target - Multi-User System. Nov 4 04:19:51.461557 systemd[1]: Startup finished in 4.212s (kernel) + 11.615s (initrd) + 12.903s (userspace) = 28.731s. 
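The "Startup finished in 4.212s (kernel) + 11.615s (initrd) + 12.903s (userspace) = 28.731s" line is systemd's own summary, logged once the default target is reached. The same breakdown, plus per-unit costs, can be pulled after the fact with standard systemd tooling; nothing host-specific is assumed here:

  systemd-analyze                                   # kernel/initrd/userspace totals, as logged above
  systemd-analyze blame                             # per-unit startup time, slowest first
  systemd-analyze critical-chain multi-user.target  # the chain that gated the target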
Nov 4 04:19:51.633371 amazon-ssm-agent[2004]: 2025-11-04 04:19:51.6331 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Nov 4 04:19:51.734752 amazon-ssm-agent[2004]: 2025-11-04 04:19:51.6367 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2231) started Nov 4 04:19:51.834895 amazon-ssm-agent[2004]: 2025-11-04 04:19:51.6367 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Nov 4 04:19:51.885814 kubelet[2205]: E1104 04:19:51.885651 2205 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 4 04:19:51.892030 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 4 04:19:51.892368 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 4 04:19:51.893906 systemd[1]: kubelet.service: Consumed 1.488s CPU time, 260.2M memory peak. Nov 4 04:19:54.856413 systemd-resolved[1552]: Clock change detected. Flushing caches. Nov 4 04:19:54.873912 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Nov 4 04:19:54.876511 systemd[1]: Started sshd@0-172.31.28.40:22-147.75.109.163:33886.service - OpenSSH per-connection server daemon (147.75.109.163:33886). Nov 4 04:19:55.196860 sshd[2247]: Accepted publickey for core from 147.75.109.163 port 33886 ssh2: RSA SHA256:qS1XafufLHAd70xqW4Vvg5dQ+JBmoPq0koj1S4P4qJk Nov 4 04:19:55.200194 sshd-session[2247]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 04:19:55.225412 systemd-logind[1946]: New session 1 of user core. Nov 4 04:19:55.227460 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Nov 4 04:19:55.230520 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Nov 4 04:19:55.266244 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Nov 4 04:19:55.271255 systemd[1]: Starting user@500.service - User Manager for UID 500... Nov 4 04:19:55.289200 (systemd)[2252]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Nov 4 04:19:55.293714 systemd-logind[1946]: New session c1 of user core. Nov 4 04:19:55.567167 systemd[2252]: Queued start job for default target default.target. Nov 4 04:19:55.579420 systemd[2252]: Created slice app.slice - User Application Slice. Nov 4 04:19:55.579481 systemd[2252]: Reached target paths.target - Paths. Nov 4 04:19:55.579569 systemd[2252]: Reached target timers.target - Timers. Nov 4 04:19:55.581966 systemd[2252]: Starting dbus.socket - D-Bus User Message Bus Socket... Nov 4 04:19:55.602256 systemd[2252]: Listening on dbus.socket - D-Bus User Message Bus Socket. Nov 4 04:19:55.602533 systemd[2252]: Reached target sockets.target - Sockets. Nov 4 04:19:55.602628 systemd[2252]: Reached target basic.target - Basic System. Nov 4 04:19:55.602709 systemd[2252]: Reached target default.target - Main User Target. Nov 4 04:19:55.602789 systemd[2252]: Startup finished in 297ms. Nov 4 04:19:55.602923 systemd[1]: Started user@500.service - User Manager for UID 500. Nov 4 04:19:55.614583 systemd[1]: Started session-1.scope - Session 1 of User core. 
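Two things stand out in this stretch: the first kubelet start attempt exits immediately because /var/lib/kubelet/config.yaml does not exist yet (more on that below, where the restart loop becomes visible), and the sshd entries identify the accepted key only by its SHA256 fingerprint. That fingerprint can be matched against the authorized_keys file the metadata agent wrote earlier; the path is the one from the "Updated /home/core/.ssh/authorized_keys" entries above:

  # Print SHA256 fingerprints of the installed keys for comparison with the sshd log
  ssh-keygen -lf /home/core/.ssh/authorized_keys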
Nov 4 04:19:55.705195 systemd[1]: Started sshd@1-172.31.28.40:22-147.75.109.163:33890.service - OpenSSH per-connection server daemon (147.75.109.163:33890). Nov 4 04:19:55.899249 sshd[2263]: Accepted publickey for core from 147.75.109.163 port 33890 ssh2: RSA SHA256:qS1XafufLHAd70xqW4Vvg5dQ+JBmoPq0koj1S4P4qJk Nov 4 04:19:55.902376 sshd-session[2263]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 04:19:55.911409 systemd-logind[1946]: New session 2 of user core. Nov 4 04:19:55.922558 systemd[1]: Started session-2.scope - Session 2 of User core. Nov 4 04:19:55.987349 sshd[2266]: Connection closed by 147.75.109.163 port 33890 Nov 4 04:19:55.987575 sshd-session[2263]: pam_unix(sshd:session): session closed for user core Nov 4 04:19:55.993701 systemd[1]: sshd@1-172.31.28.40:22-147.75.109.163:33890.service: Deactivated successfully. Nov 4 04:19:55.997036 systemd[1]: session-2.scope: Deactivated successfully. Nov 4 04:19:55.998878 systemd-logind[1946]: Session 2 logged out. Waiting for processes to exit. Nov 4 04:19:56.002867 systemd-logind[1946]: Removed session 2. Nov 4 04:19:56.024769 systemd[1]: Started sshd@2-172.31.28.40:22-147.75.109.163:33904.service - OpenSSH per-connection server daemon (147.75.109.163:33904). Nov 4 04:19:56.202544 sshd[2272]: Accepted publickey for core from 147.75.109.163 port 33904 ssh2: RSA SHA256:qS1XafufLHAd70xqW4Vvg5dQ+JBmoPq0koj1S4P4qJk Nov 4 04:19:56.204941 sshd-session[2272]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 04:19:56.213399 systemd-logind[1946]: New session 3 of user core. Nov 4 04:19:56.220587 systemd[1]: Started session-3.scope - Session 3 of User core. Nov 4 04:19:56.277358 sshd[2275]: Connection closed by 147.75.109.163 port 33904 Nov 4 04:19:56.277361 sshd-session[2272]: pam_unix(sshd:session): session closed for user core Nov 4 04:19:56.283334 systemd-logind[1946]: Session 3 logged out. Waiting for processes to exit. Nov 4 04:19:56.285862 systemd[1]: sshd@2-172.31.28.40:22-147.75.109.163:33904.service: Deactivated successfully. Nov 4 04:19:56.288733 systemd[1]: session-3.scope: Deactivated successfully. Nov 4 04:19:56.291915 systemd-logind[1946]: Removed session 3. Nov 4 04:19:56.314367 systemd[1]: Started sshd@3-172.31.28.40:22-147.75.109.163:33908.service - OpenSSH per-connection server daemon (147.75.109.163:33908). Nov 4 04:19:56.494985 sshd[2281]: Accepted publickey for core from 147.75.109.163 port 33908 ssh2: RSA SHA256:qS1XafufLHAd70xqW4Vvg5dQ+JBmoPq0koj1S4P4qJk Nov 4 04:19:56.497735 sshd-session[2281]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 04:19:56.507406 systemd-logind[1946]: New session 4 of user core. Nov 4 04:19:56.514565 systemd[1]: Started session-4.scope - Session 4 of User core. Nov 4 04:19:56.578450 sshd[2284]: Connection closed by 147.75.109.163 port 33908 Nov 4 04:19:56.579270 sshd-session[2281]: pam_unix(sshd:session): session closed for user core Nov 4 04:19:56.586060 systemd[1]: sshd@3-172.31.28.40:22-147.75.109.163:33908.service: Deactivated successfully. Nov 4 04:19:56.589394 systemd[1]: session-4.scope: Deactivated successfully. Nov 4 04:19:56.591108 systemd-logind[1946]: Session 4 logged out. Waiting for processes to exit. Nov 4 04:19:56.593975 systemd-logind[1946]: Removed session 4. Nov 4 04:19:56.617766 systemd[1]: Started sshd@4-172.31.28.40:22-147.75.109.163:33910.service - OpenSSH per-connection server daemon (147.75.109.163:33910). 
Nov 4 04:19:56.810852 sshd[2290]: Accepted publickey for core from 147.75.109.163 port 33910 ssh2: RSA SHA256:qS1XafufLHAd70xqW4Vvg5dQ+JBmoPq0koj1S4P4qJk Nov 4 04:19:56.813345 sshd-session[2290]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 04:19:56.822946 systemd-logind[1946]: New session 5 of user core. Nov 4 04:19:56.829578 systemd[1]: Started session-5.scope - Session 5 of User core. Nov 4 04:19:57.011028 sudo[2294]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Nov 4 04:19:57.011655 sudo[2294]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 4 04:19:57.027153 sudo[2294]: pam_unix(sudo:session): session closed for user root Nov 4 04:19:57.051396 sshd[2293]: Connection closed by 147.75.109.163 port 33910 Nov 4 04:19:57.052392 sshd-session[2290]: pam_unix(sshd:session): session closed for user core Nov 4 04:19:57.060499 systemd[1]: sshd@4-172.31.28.40:22-147.75.109.163:33910.service: Deactivated successfully. Nov 4 04:19:57.063434 systemd[1]: session-5.scope: Deactivated successfully. Nov 4 04:19:57.065160 systemd-logind[1946]: Session 5 logged out. Waiting for processes to exit. Nov 4 04:19:57.068179 systemd-logind[1946]: Removed session 5. Nov 4 04:19:57.090158 systemd[1]: Started sshd@5-172.31.28.40:22-147.75.109.163:33926.service - OpenSSH per-connection server daemon (147.75.109.163:33926). Nov 4 04:19:57.280347 sshd[2300]: Accepted publickey for core from 147.75.109.163 port 33926 ssh2: RSA SHA256:qS1XafufLHAd70xqW4Vvg5dQ+JBmoPq0koj1S4P4qJk Nov 4 04:19:57.282779 sshd-session[2300]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 04:19:57.290839 systemd-logind[1946]: New session 6 of user core. Nov 4 04:19:57.304564 systemd[1]: Started session-6.scope - Session 6 of User core. Nov 4 04:19:57.347514 sudo[2305]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Nov 4 04:19:57.348100 sudo[2305]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 4 04:19:57.355906 sudo[2305]: pam_unix(sudo:session): session closed for user root Nov 4 04:19:57.367626 sudo[2304]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Nov 4 04:19:57.368191 sudo[2304]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 4 04:19:57.385225 systemd[1]: Starting audit-rules.service - Load Audit Rules... Nov 4 04:19:57.445149 augenrules[2327]: No rules Nov 4 04:19:57.447437 systemd[1]: audit-rules.service: Deactivated successfully. Nov 4 04:19:57.448067 systemd[1]: Finished audit-rules.service - Load Audit Rules. Nov 4 04:19:57.450865 sudo[2304]: pam_unix(sudo:session): session closed for user root Nov 4 04:19:57.473450 sshd[2303]: Connection closed by 147.75.109.163 port 33926 Nov 4 04:19:57.473907 sshd-session[2300]: pam_unix(sshd:session): session closed for user core Nov 4 04:19:57.481034 systemd[1]: sshd@5-172.31.28.40:22-147.75.109.163:33926.service: Deactivated successfully. Nov 4 04:19:57.485152 systemd[1]: session-6.scope: Deactivated successfully. Nov 4 04:19:57.486931 systemd-logind[1946]: Session 6 logged out. Waiting for processes to exit. Nov 4 04:19:57.489203 systemd-logind[1946]: Removed session 6. Nov 4 04:19:57.514180 systemd[1]: Started sshd@6-172.31.28.40:22-147.75.109.163:33936.service - OpenSSH per-connection server daemon (147.75.109.163:33936). 
Nov 4 04:19:57.707111 sshd[2336]: Accepted publickey for core from 147.75.109.163 port 33936 ssh2: RSA SHA256:qS1XafufLHAd70xqW4Vvg5dQ+JBmoPq0koj1S4P4qJk Nov 4 04:19:57.709423 sshd-session[2336]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 04:19:57.718419 systemd-logind[1946]: New session 7 of user core. Nov 4 04:19:57.727564 systemd[1]: Started session-7.scope - Session 7 of User core. Nov 4 04:19:57.772138 sudo[2340]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Nov 4 04:19:57.772766 sudo[2340]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 4 04:19:59.006733 systemd[1]: Starting docker.service - Docker Application Container Engine... Nov 4 04:19:59.021773 (dockerd)[2357]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Nov 4 04:20:00.056792 dockerd[2357]: time="2025-11-04T04:20:00.056697713Z" level=info msg="Starting up" Nov 4 04:20:00.058434 dockerd[2357]: time="2025-11-04T04:20:00.058374041Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Nov 4 04:20:00.078615 dockerd[2357]: time="2025-11-04T04:20:00.078475229Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Nov 4 04:20:00.116011 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport1272718159-merged.mount: Deactivated successfully. Nov 4 04:20:00.153035 dockerd[2357]: time="2025-11-04T04:20:00.152773398Z" level=info msg="Loading containers: start." Nov 4 04:20:00.212652 kernel: Initializing XFRM netlink socket Nov 4 04:20:00.716153 (udev-worker)[2378]: Network interface NamePolicy= disabled on kernel command line. Nov 4 04:20:00.788697 systemd-networkd[1736]: docker0: Link UP Nov 4 04:20:00.800058 dockerd[2357]: time="2025-11-04T04:20:00.799893393Z" level=info msg="Loading containers: done." Nov 4 04:20:00.825408 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3191635847-merged.mount: Deactivated successfully. Nov 4 04:20:00.834589 dockerd[2357]: time="2025-11-04T04:20:00.834513261Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Nov 4 04:20:00.834808 dockerd[2357]: time="2025-11-04T04:20:00.834653421Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Nov 4 04:20:00.835010 dockerd[2357]: time="2025-11-04T04:20:00.834958725Z" level=info msg="Initializing buildkit" Nov 4 04:20:00.885033 dockerd[2357]: time="2025-11-04T04:20:00.884976057Z" level=info msg="Completed buildkit initialization" Nov 4 04:20:00.895853 dockerd[2357]: time="2025-11-04T04:20:00.895765125Z" level=info msg="Daemon has completed initialization" Nov 4 04:20:00.896157 systemd[1]: Started docker.service - Docker Application Container Engine. Nov 4 04:20:00.897257 dockerd[2357]: time="2025-11-04T04:20:00.896837290Z" level=info msg="API listen on /run/docker.sock" Nov 4 04:20:01.661712 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Nov 4 04:20:01.664525 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 4 04:20:02.015626 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Nov 4 04:20:02.033004 (kubelet)[2575]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 4 04:20:02.074223 containerd[1975]: time="2025-11-04T04:20:02.074174347Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\"" Nov 4 04:20:02.116542 kubelet[2575]: E1104 04:20:02.116448 2575 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 4 04:20:02.129539 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 4 04:20:02.129847 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 4 04:20:02.134427 systemd[1]: kubelet.service: Consumed 310ms CPU time, 105.1M memory peak. Nov 4 04:20:02.994689 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1433339760.mount: Deactivated successfully. Nov 4 04:20:04.292158 containerd[1975]: time="2025-11-04T04:20:04.291462046Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 04:20:04.293455 containerd[1975]: time="2025-11-04T04:20:04.293358658Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.5: active requests=0, bytes read=26569173" Nov 4 04:20:04.295899 containerd[1975]: time="2025-11-04T04:20:04.295825054Z" level=info msg="ImageCreate event name:\"sha256:6a7fd297b49102b08dc3d8d4fd7f1538bcf21d3131eae8bf62ba26ce3283237f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 04:20:04.302884 containerd[1975]: time="2025-11-04T04:20:04.302783446Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 04:20:04.304996 containerd[1975]: time="2025-11-04T04:20:04.304617490Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.5\" with image id \"sha256:6a7fd297b49102b08dc3d8d4fd7f1538bcf21d3131eae8bf62ba26ce3283237f\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\", size \"27386827\" in 2.229531851s" Nov 4 04:20:04.304996 containerd[1975]: time="2025-11-04T04:20:04.304671586Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\" returns image reference \"sha256:6a7fd297b49102b08dc3d8d4fd7f1538bcf21d3131eae8bf62ba26ce3283237f\"" Nov 4 04:20:04.307444 containerd[1975]: time="2025-11-04T04:20:04.307391422Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\"" Nov 4 04:20:05.774581 containerd[1975]: time="2025-11-04T04:20:05.774513506Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 04:20:05.776279 containerd[1975]: time="2025-11-04T04:20:05.776206370Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.5: active requests=0, bytes read=23539763" Nov 4 04:20:05.777374 containerd[1975]: time="2025-11-04T04:20:05.777055082Z" level=info msg="ImageCreate event name:\"sha256:2dd4c25a937008b7b8a6cdca70d816403b5078b51550926721b7a7762139cd23\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 04:20:05.781776 containerd[1975]: time="2025-11-04T04:20:05.781695854Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 04:20:05.784383 containerd[1975]: time="2025-11-04T04:20:05.783720866Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.5\" with image id \"sha256:2dd4c25a937008b7b8a6cdca70d816403b5078b51550926721b7a7762139cd23\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\", size \"25135832\" in 1.47603176s" Nov 4 04:20:05.784383 containerd[1975]: time="2025-11-04T04:20:05.783778082Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\" returns image reference \"sha256:2dd4c25a937008b7b8a6cdca70d816403b5078b51550926721b7a7762139cd23\"" Nov 4 04:20:05.784807 containerd[1975]: time="2025-11-04T04:20:05.784773002Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\"" Nov 4 04:20:06.997349 containerd[1975]: time="2025-11-04T04:20:06.995822284Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 04:20:06.998046 containerd[1975]: time="2025-11-04T04:20:06.997982572Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.5: active requests=0, bytes read=0" Nov 4 04:20:06.998375 containerd[1975]: time="2025-11-04T04:20:06.998307880Z" level=info msg="ImageCreate event name:\"sha256:5e600beaed8620718e0650dd2721266869ce1d737488c004a869333273e6ec15\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 04:20:07.002895 containerd[1975]: time="2025-11-04T04:20:07.002833848Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 04:20:07.005072 containerd[1975]: time="2025-11-04T04:20:07.005026968Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.5\" with image id \"sha256:5e600beaed8620718e0650dd2721266869ce1d737488c004a869333273e6ec15\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\", size \"19883910\" in 1.220107302s" Nov 4 04:20:07.005209 containerd[1975]: time="2025-11-04T04:20:07.005182236Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\" returns image reference \"sha256:5e600beaed8620718e0650dd2721266869ce1d737488c004a869333273e6ec15\"" Nov 4 04:20:07.005887 containerd[1975]: time="2025-11-04T04:20:07.005849160Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\"" Nov 4 04:20:08.178980 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2074617708.mount: Deactivated successfully. 
Nov 4 04:20:08.756731 containerd[1975]: time="2025-11-04T04:20:08.756648857Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 04:20:08.758835 containerd[1975]: time="2025-11-04T04:20:08.758506913Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.5: active requests=0, bytes read=0" Nov 4 04:20:08.760169 containerd[1975]: time="2025-11-04T04:20:08.760111217Z" level=info msg="ImageCreate event name:\"sha256:021a8d45ab0c346664e47d95595ff5180ce90a22a681ea27904c65ae90788e70\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 04:20:08.764422 containerd[1975]: time="2025-11-04T04:20:08.764353289Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 04:20:08.765834 containerd[1975]: time="2025-11-04T04:20:08.765767393Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.5\" with image id \"sha256:021a8d45ab0c346664e47d95595ff5180ce90a22a681ea27904c65ae90788e70\", repo tag \"registry.k8s.io/kube-proxy:v1.33.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\", size \"28239125\" in 1.759733433s" Nov 4 04:20:08.765936 containerd[1975]: time="2025-11-04T04:20:08.765829181Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\" returns image reference \"sha256:021a8d45ab0c346664e47d95595ff5180ce90a22a681ea27904c65ae90788e70\"" Nov 4 04:20:08.766618 containerd[1975]: time="2025-11-04T04:20:08.766485149Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Nov 4 04:20:09.287702 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3126226328.mount: Deactivated successfully. 
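These PullImage/ImageCreate entries are containerd's CRI image service fetching the Kubernetes control-plane images; the caller is not shown in this log, though with a kubeadm-style install.sh as run above, a pre-pull during initialization is the usual source. The images land in containerd's k8s.io namespace rather than in any Docker image store, so they are listed with CRI or containerd tooling, assuming crictl is installed on the host:

  crictl images            # CRI view of what was just pulled
  ctr -n k8s.io images ls  # containerd view, scoped to the CRI plugin's namespace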
Nov 4 04:20:10.448355 containerd[1975]: time="2025-11-04T04:20:10.448031453Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 04:20:10.451555 containerd[1975]: time="2025-11-04T04:20:10.451471181Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=18338344" Nov 4 04:20:10.454052 containerd[1975]: time="2025-11-04T04:20:10.453952757Z" level=info msg="ImageCreate event name:\"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 04:20:10.461341 containerd[1975]: time="2025-11-04T04:20:10.459782753Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 04:20:10.461950 containerd[1975]: time="2025-11-04T04:20:10.461902745Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"19148915\" in 1.695050744s" Nov 4 04:20:10.462068 containerd[1975]: time="2025-11-04T04:20:10.462041273Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\"" Nov 4 04:20:10.463443 containerd[1975]: time="2025-11-04T04:20:10.463381049Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Nov 4 04:20:10.947303 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1838577280.mount: Deactivated successfully. 
Nov 4 04:20:10.962090 containerd[1975]: time="2025-11-04T04:20:10.961997035Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 4 04:20:10.965996 containerd[1975]: time="2025-11-04T04:20:10.965889680Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Nov 4 04:20:10.968106 containerd[1975]: time="2025-11-04T04:20:10.968043260Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 4 04:20:10.974367 containerd[1975]: time="2025-11-04T04:20:10.974259824Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 4 04:20:10.975816 containerd[1975]: time="2025-11-04T04:20:10.975577556Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 512.137587ms" Nov 4 04:20:10.975816 containerd[1975]: time="2025-11-04T04:20:10.975638432Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Nov 4 04:20:10.976368 containerd[1975]: time="2025-11-04T04:20:10.976300760Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Nov 4 04:20:11.745407 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2262444800.mount: Deactivated successfully. Nov 4 04:20:12.162106 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Nov 4 04:20:12.168719 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 4 04:20:12.549076 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 4 04:20:12.562783 (kubelet)[2768]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 4 04:20:12.639889 kubelet[2768]: E1104 04:20:12.639786 2768 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 4 04:20:12.645137 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 4 04:20:12.645905 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 4 04:20:12.647186 systemd[1]: kubelet.service: Consumed 317ms CPU time, 107.5M memory peak. 
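Every kubelet start attempt so far has failed the same way: /var/lib/kubelet/config.yaml is missing, the process exits with status 1, and systemd schedules another restart, so the counter keeps climbing. That config file is normally written by kubeadm when the node is initialized or joined, at which point the unit settles; until then the loop is expected rather than a fault in the kubelet itself. The checks below use only standard systemd and coreutils commands:

  systemctl status kubelet --no-pager      # restart counter and last exit status
  journalctl -u kubelet -n 20 --no-pager   # the "failed to load Kubelet config file" error
  ls -l /var/lib/kubelet/config.yaml       # absent until kubeadm writes it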
Nov 4 04:20:17.727354 containerd[1975]: time="2025-11-04T04:20:17.726798013Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 04:20:17.729503 containerd[1975]: time="2025-11-04T04:20:17.729416353Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=57926377" Nov 4 04:20:17.730501 containerd[1975]: time="2025-11-04T04:20:17.730396597Z" level=info msg="ImageCreate event name:\"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 04:20:17.737174 containerd[1975]: time="2025-11-04T04:20:17.737076241Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 04:20:17.739535 containerd[1975]: time="2025-11-04T04:20:17.739286305Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"70026017\" in 6.762910137s" Nov 4 04:20:17.739535 containerd[1975]: time="2025-11-04T04:20:17.739358917Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\"" Nov 4 04:20:19.648581 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Nov 4 04:20:22.662516 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Nov 4 04:20:22.667648 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 4 04:20:23.024657 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 4 04:20:23.041992 (kubelet)[2814]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 4 04:20:23.115904 kubelet[2814]: E1104 04:20:23.115833 2814 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 4 04:20:23.121830 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 4 04:20:23.122700 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 4 04:20:23.123661 systemd[1]: kubelet.service: Consumed 295ms CPU time, 105.2M memory peak. Nov 4 04:20:24.805483 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 4 04:20:24.806482 systemd[1]: kubelet.service: Consumed 295ms CPU time, 105.2M memory peak. Nov 4 04:20:24.816698 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 4 04:20:24.862528 systemd[1]: Reload requested from client PID 2828 ('systemctl') (unit session-7.scope)... Nov 4 04:20:24.862781 systemd[1]: Reloading... Nov 4 04:20:25.120374 zram_generator::config[2876]: No configuration found. Nov 4 04:20:25.578394 systemd[1]: Reloading finished in 714 ms. 
Nov 4 04:20:25.666896 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Nov 4 04:20:25.667253 systemd[1]: kubelet.service: Failed with result 'signal'. Nov 4 04:20:25.669398 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 4 04:20:25.669471 systemd[1]: kubelet.service: Consumed 224ms CPU time, 95M memory peak. Nov 4 04:20:25.672402 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 4 04:20:26.002835 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 4 04:20:26.015475 (kubelet)[2936]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 4 04:20:26.094585 kubelet[2936]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 4 04:20:26.094585 kubelet[2936]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 4 04:20:26.095082 kubelet[2936]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 4 04:20:26.095082 kubelet[2936]: I1104 04:20:26.094758 2936 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 4 04:20:27.180003 kubelet[2936]: I1104 04:20:27.179932 2936 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Nov 4 04:20:27.180003 kubelet[2936]: I1104 04:20:27.179980 2936 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 4 04:20:27.180625 kubelet[2936]: I1104 04:20:27.180393 2936 server.go:956] "Client rotation is on, will bootstrap in background" Nov 4 04:20:27.223203 kubelet[2936]: E1104 04:20:27.223133 2936 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.31.28.40:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.28.40:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Nov 4 04:20:27.225283 kubelet[2936]: I1104 04:20:27.225052 2936 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 4 04:20:27.238174 kubelet[2936]: I1104 04:20:27.238109 2936 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Nov 4 04:20:27.244548 kubelet[2936]: I1104 04:20:27.244494 2936 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Nov 4 04:20:27.245179 kubelet[2936]: I1104 04:20:27.245109 2936 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 4 04:20:27.245647 kubelet[2936]: I1104 04:20:27.245171 2936 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-28-40","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 4 04:20:27.245848 kubelet[2936]: I1104 04:20:27.245787 2936 topology_manager.go:138] "Creating topology manager with none policy" Nov 4 04:20:27.245848 kubelet[2936]: I1104 04:20:27.245811 2936 container_manager_linux.go:303] "Creating device plugin manager" Nov 4 04:20:27.247574 kubelet[2936]: I1104 04:20:27.247516 2936 state_mem.go:36] "Initialized new in-memory state store" Nov 4 04:20:27.256637 kubelet[2936]: I1104 04:20:27.256571 2936 kubelet.go:480] "Attempting to sync node with API server" Nov 4 04:20:27.256783 kubelet[2936]: I1104 04:20:27.256654 2936 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 4 04:20:27.258401 kubelet[2936]: I1104 04:20:27.258354 2936 kubelet.go:386] "Adding apiserver pod source" Nov 4 04:20:27.261088 kubelet[2936]: I1104 04:20:27.260961 2936 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 4 04:20:27.263383 kubelet[2936]: I1104 04:20:27.262868 2936 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.1.4" apiVersion="v1" Nov 4 04:20:27.263842 kubelet[2936]: E1104 04:20:27.263805 2936 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.28.40:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.28.40:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Nov 4 04:20:27.264157 kubelet[2936]: E1104 04:20:27.264123 2936 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get 
\"https://172.31.28.40:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-28-40&limit=500&resourceVersion=0\": dial tcp 172.31.28.40:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Nov 4 04:20:27.265409 kubelet[2936]: I1104 04:20:27.265376 2936 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Nov 4 04:20:27.265853 kubelet[2936]: W1104 04:20:27.265831 2936 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Nov 4 04:20:27.273374 kubelet[2936]: I1104 04:20:27.273343 2936 watchdog_linux.go:99] "Systemd watchdog is not enabled" Nov 4 04:20:27.273798 kubelet[2936]: I1104 04:20:27.273564 2936 server.go:1289] "Started kubelet" Nov 4 04:20:27.276949 kubelet[2936]: I1104 04:20:27.276884 2936 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Nov 4 04:20:27.278618 kubelet[2936]: I1104 04:20:27.278549 2936 server.go:317] "Adding debug handlers to kubelet server" Nov 4 04:20:27.280471 kubelet[2936]: I1104 04:20:27.280278 2936 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 4 04:20:27.284954 kubelet[2936]: E1104 04:20:27.281359 2936 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.28.40:6443/api/v1/namespaces/default/events\": dial tcp 172.31.28.40:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-28-40.1874b2df842a4fe5 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-28-40,UID:ip-172-31-28-40,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-28-40,},FirstTimestamp:2025-11-04 04:20:27.273523173 +0000 UTC m=+1.250676884,LastTimestamp:2025-11-04 04:20:27.273523173 +0000 UTC m=+1.250676884,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-28-40,}" Nov 4 04:20:27.285383 kubelet[2936]: I1104 04:20:27.285270 2936 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 4 04:20:27.289348 kubelet[2936]: I1104 04:20:27.288462 2936 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 4 04:20:27.290115 kubelet[2936]: I1104 04:20:27.290081 2936 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 4 04:20:27.296228 kubelet[2936]: E1104 04:20:27.294990 2936 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-28-40\" not found" Nov 4 04:20:27.296228 kubelet[2936]: I1104 04:20:27.295045 2936 volume_manager.go:297] "Starting Kubelet Volume Manager" Nov 4 04:20:27.296765 kubelet[2936]: I1104 04:20:27.296724 2936 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Nov 4 04:20:27.296972 kubelet[2936]: I1104 04:20:27.296953 2936 reconciler.go:26] "Reconciler: start to sync state" Nov 4 04:20:27.298949 kubelet[2936]: E1104 04:20:27.298889 2936 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://172.31.28.40:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-28-40?timeout=10s\": dial tcp 172.31.28.40:6443: connect: connection refused" interval="200ms" Nov 4 04:20:27.300710 kubelet[2936]: E1104 04:20:27.299190 2936 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.28.40:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.28.40:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Nov 4 04:20:27.300911 kubelet[2936]: I1104 04:20:27.299727 2936 factory.go:223] Registration of the systemd container factory successfully Nov 4 04:20:27.301182 kubelet[2936]: I1104 04:20:27.301145 2936 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 4 04:20:27.304093 kubelet[2936]: I1104 04:20:27.304059 2936 factory.go:223] Registration of the containerd container factory successfully Nov 4 04:20:27.305645 kubelet[2936]: E1104 04:20:27.305607 2936 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 4 04:20:27.344285 kubelet[2936]: I1104 04:20:27.344130 2936 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 4 04:20:27.344285 kubelet[2936]: I1104 04:20:27.344278 2936 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 4 04:20:27.344714 kubelet[2936]: I1104 04:20:27.344526 2936 state_mem.go:36] "Initialized new in-memory state store" Nov 4 04:20:27.351598 kubelet[2936]: I1104 04:20:27.351534 2936 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Nov 4 04:20:27.355179 kubelet[2936]: I1104 04:20:27.355008 2936 policy_none.go:49] "None policy: Start" Nov 4 04:20:27.355179 kubelet[2936]: I1104 04:20:27.355063 2936 memory_manager.go:186] "Starting memorymanager" policy="None" Nov 4 04:20:27.355179 kubelet[2936]: I1104 04:20:27.355087 2936 state_mem.go:35] "Initializing new in-memory state store" Nov 4 04:20:27.355651 kubelet[2936]: I1104 04:20:27.355624 2936 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Nov 4 04:20:27.355762 kubelet[2936]: I1104 04:20:27.355744 2936 status_manager.go:230] "Starting to sync pod status with apiserver" Nov 4 04:20:27.357049 kubelet[2936]: I1104 04:20:27.357006 2936 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Nov 4 04:20:27.358817 kubelet[2936]: I1104 04:20:27.357452 2936 kubelet.go:2436] "Starting kubelet main sync loop" Nov 4 04:20:27.358817 kubelet[2936]: E1104 04:20:27.357525 2936 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 4 04:20:27.360986 kubelet[2936]: E1104 04:20:27.360687 2936 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.28.40:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.28.40:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Nov 4 04:20:27.371161 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. 
Nov 4 04:20:27.387774 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Nov 4 04:20:27.395158 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Nov 4 04:20:27.396742 kubelet[2936]: E1104 04:20:27.396244 2936 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-28-40\" not found" Nov 4 04:20:27.408275 kubelet[2936]: E1104 04:20:27.408225 2936 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Nov 4 04:20:27.409384 kubelet[2936]: I1104 04:20:27.409204 2936 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 4 04:20:27.409600 kubelet[2936]: I1104 04:20:27.409549 2936 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 4 04:20:27.409999 kubelet[2936]: I1104 04:20:27.409962 2936 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 4 04:20:27.413542 kubelet[2936]: E1104 04:20:27.412894 2936 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Nov 4 04:20:27.413542 kubelet[2936]: E1104 04:20:27.412987 2936 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-28-40\" not found" Nov 4 04:20:27.478844 systemd[1]: Created slice kubepods-burstable-pod49eeb7fdfaf5cb1bb5eb4fefe36f418c.slice - libcontainer container kubepods-burstable-pod49eeb7fdfaf5cb1bb5eb4fefe36f418c.slice. Nov 4 04:20:27.499386 kubelet[2936]: E1104 04:20:27.498866 2936 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-28-40\" not found" node="ip-172-31-28-40" Nov 4 04:20:27.501991 kubelet[2936]: E1104 04:20:27.501864 2936 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.28.40:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-28-40?timeout=10s\": dial tcp 172.31.28.40:6443: connect: connection refused" interval="400ms" Nov 4 04:20:27.506883 systemd[1]: Created slice kubepods-burstable-podd1d5a477c93a45f7607fbe4ee59ce47a.slice - libcontainer container kubepods-burstable-podd1d5a477c93a45f7607fbe4ee59ce47a.slice. Nov 4 04:20:27.513330 kubelet[2936]: E1104 04:20:27.513268 2936 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-28-40\" not found" node="ip-172-31-28-40" Nov 4 04:20:27.515111 kubelet[2936]: I1104 04:20:27.515060 2936 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-28-40" Nov 4 04:20:27.516702 kubelet[2936]: E1104 04:20:27.515871 2936 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.28.40:6443/api/v1/nodes\": dial tcp 172.31.28.40:6443: connect: connection refused" node="ip-172-31-28-40" Nov 4 04:20:27.518274 systemd[1]: Created slice kubepods-burstable-pod2bfcadd3042b6bb7b351f20a53124bd8.slice - libcontainer container kubepods-burstable-pod2bfcadd3042b6bb7b351f20a53124bd8.slice. 
Nov 4 04:20:27.521823 kubelet[2936]: E1104 04:20:27.521781 2936 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-28-40\" not found" node="ip-172-31-28-40" Nov 4 04:20:27.599112 kubelet[2936]: I1104 04:20:27.599046 2936 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d1d5a477c93a45f7607fbe4ee59ce47a-k8s-certs\") pod \"kube-controller-manager-ip-172-31-28-40\" (UID: \"d1d5a477c93a45f7607fbe4ee59ce47a\") " pod="kube-system/kube-controller-manager-ip-172-31-28-40" Nov 4 04:20:27.599112 kubelet[2936]: I1104 04:20:27.599112 2936 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d1d5a477c93a45f7607fbe4ee59ce47a-kubeconfig\") pod \"kube-controller-manager-ip-172-31-28-40\" (UID: \"d1d5a477c93a45f7607fbe4ee59ce47a\") " pod="kube-system/kube-controller-manager-ip-172-31-28-40" Nov 4 04:20:27.599311 kubelet[2936]: I1104 04:20:27.599155 2936 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d1d5a477c93a45f7607fbe4ee59ce47a-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-28-40\" (UID: \"d1d5a477c93a45f7607fbe4ee59ce47a\") " pod="kube-system/kube-controller-manager-ip-172-31-28-40" Nov 4 04:20:27.599311 kubelet[2936]: I1104 04:20:27.599198 2936 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2bfcadd3042b6bb7b351f20a53124bd8-kubeconfig\") pod \"kube-scheduler-ip-172-31-28-40\" (UID: \"2bfcadd3042b6bb7b351f20a53124bd8\") " pod="kube-system/kube-scheduler-ip-172-31-28-40" Nov 4 04:20:27.599311 kubelet[2936]: I1104 04:20:27.599234 2936 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/49eeb7fdfaf5cb1bb5eb4fefe36f418c-ca-certs\") pod \"kube-apiserver-ip-172-31-28-40\" (UID: \"49eeb7fdfaf5cb1bb5eb4fefe36f418c\") " pod="kube-system/kube-apiserver-ip-172-31-28-40" Nov 4 04:20:27.599311 kubelet[2936]: I1104 04:20:27.599266 2936 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/49eeb7fdfaf5cb1bb5eb4fefe36f418c-k8s-certs\") pod \"kube-apiserver-ip-172-31-28-40\" (UID: \"49eeb7fdfaf5cb1bb5eb4fefe36f418c\") " pod="kube-system/kube-apiserver-ip-172-31-28-40" Nov 4 04:20:27.599311 kubelet[2936]: I1104 04:20:27.599301 2936 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/49eeb7fdfaf5cb1bb5eb4fefe36f418c-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-28-40\" (UID: \"49eeb7fdfaf5cb1bb5eb4fefe36f418c\") " pod="kube-system/kube-apiserver-ip-172-31-28-40" Nov 4 04:20:27.599593 kubelet[2936]: I1104 04:20:27.599365 2936 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d1d5a477c93a45f7607fbe4ee59ce47a-ca-certs\") pod \"kube-controller-manager-ip-172-31-28-40\" (UID: \"d1d5a477c93a45f7607fbe4ee59ce47a\") " pod="kube-system/kube-controller-manager-ip-172-31-28-40" Nov 4 04:20:27.599593 kubelet[2936]: I1104 
04:20:27.599405 2936 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d1d5a477c93a45f7607fbe4ee59ce47a-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-28-40\" (UID: \"d1d5a477c93a45f7607fbe4ee59ce47a\") " pod="kube-system/kube-controller-manager-ip-172-31-28-40" Nov 4 04:20:27.719486 kubelet[2936]: I1104 04:20:27.719358 2936 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-28-40" Nov 4 04:20:27.719967 kubelet[2936]: E1104 04:20:27.719901 2936 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.28.40:6443/api/v1/nodes\": dial tcp 172.31.28.40:6443: connect: connection refused" node="ip-172-31-28-40" Nov 4 04:20:27.800911 containerd[1975]: time="2025-11-04T04:20:27.800593559Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-28-40,Uid:49eeb7fdfaf5cb1bb5eb4fefe36f418c,Namespace:kube-system,Attempt:0,}" Nov 4 04:20:27.818001 containerd[1975]: time="2025-11-04T04:20:27.817706531Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-28-40,Uid:d1d5a477c93a45f7607fbe4ee59ce47a,Namespace:kube-system,Attempt:0,}" Nov 4 04:20:27.824104 containerd[1975]: time="2025-11-04T04:20:27.823720271Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-28-40,Uid:2bfcadd3042b6bb7b351f20a53124bd8,Namespace:kube-system,Attempt:0,}" Nov 4 04:20:27.867121 containerd[1975]: time="2025-11-04T04:20:27.866931215Z" level=info msg="connecting to shim 7a50847b55a05f148e88f70055a9a724e130e001c88311d322ddd038724ddb41" address="unix:///run/containerd/s/6c1a7057c345db2162a318d518e27ccb393e280828c9e63a15caaa7cebf01668" namespace=k8s.io protocol=ttrpc version=3 Nov 4 04:20:27.903066 kubelet[2936]: E1104 04:20:27.902975 2936 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.28.40:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-28-40?timeout=10s\": dial tcp 172.31.28.40:6443: connect: connection refused" interval="800ms" Nov 4 04:20:27.917743 containerd[1975]: time="2025-11-04T04:20:27.917666088Z" level=info msg="connecting to shim 0f4ab87a18c31946d7fe666265bc155962c0c2bb356a6cb0b402d8a7c0839d76" address="unix:///run/containerd/s/5f952bc2f511aeb42617d87a843a1fe11529f485cdcec7cfd53d7c1cd316be60" namespace=k8s.io protocol=ttrpc version=3 Nov 4 04:20:27.950655 containerd[1975]: time="2025-11-04T04:20:27.950255232Z" level=info msg="connecting to shim 4e1e2a212c7a9232725d67660562ce5cb20bd525bc637a77be68270cc62d94ee" address="unix:///run/containerd/s/de60f0290a06605075d8c84b21f14ac80034dfc2947521a6899c5c7e91cee264" namespace=k8s.io protocol=ttrpc version=3 Nov 4 04:20:27.963017 systemd[1]: Started cri-containerd-7a50847b55a05f148e88f70055a9a724e130e001c88311d322ddd038724ddb41.scope - libcontainer container 7a50847b55a05f148e88f70055a9a724e130e001c88311d322ddd038724ddb41. Nov 4 04:20:27.989632 systemd[1]: Started cri-containerd-0f4ab87a18c31946d7fe666265bc155962c0c2bb356a6cb0b402d8a7c0839d76.scope - libcontainer container 0f4ab87a18c31946d7fe666265bc155962c0c2bb356a6cb0b402d8a7c0839d76. Nov 4 04:20:28.041650 systemd[1]: Started cri-containerd-4e1e2a212c7a9232725d67660562ce5cb20bd525bc637a77be68270cc62d94ee.scope - libcontainer container 4e1e2a212c7a9232725d67660562ce5cb20bd525bc637a77be68270cc62d94ee. 
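The repeated "Failed to ensure lease exists, will retry" errors above back off from 200ms to 400ms to 800ms while the API server at 172.31.28.40:6443 refuses connections. A toy loop reproducing that doubling pattern; the 7s ceiling is an assumption for illustration and does not appear in the log.

```python
# The log shows lease-creation retries at 200ms, 400ms and then 800ms.
# Toy reproduction of the doubling; the 7s cap is an assumption, not logged.
interval = 0.2
for attempt in range(1, 6):
    print(f"attempt {attempt}: next retry in {interval:.1f}s")
    interval = min(interval * 2, 7.0)
```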
Nov 4 04:20:28.127545 kubelet[2936]: I1104 04:20:28.127045 2936 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-28-40" Nov 4 04:20:28.129242 kubelet[2936]: E1104 04:20:28.129078 2936 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.28.40:6443/api/v1/nodes\": dial tcp 172.31.28.40:6443: connect: connection refused" node="ip-172-31-28-40" Nov 4 04:20:28.138982 containerd[1975]: time="2025-11-04T04:20:28.138873105Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-28-40,Uid:49eeb7fdfaf5cb1bb5eb4fefe36f418c,Namespace:kube-system,Attempt:0,} returns sandbox id \"7a50847b55a05f148e88f70055a9a724e130e001c88311d322ddd038724ddb41\"" Nov 4 04:20:28.148687 containerd[1975]: time="2025-11-04T04:20:28.148222413Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-28-40,Uid:d1d5a477c93a45f7607fbe4ee59ce47a,Namespace:kube-system,Attempt:0,} returns sandbox id \"0f4ab87a18c31946d7fe666265bc155962c0c2bb356a6cb0b402d8a7c0839d76\"" Nov 4 04:20:28.158353 containerd[1975]: time="2025-11-04T04:20:28.158129685Z" level=info msg="CreateContainer within sandbox \"7a50847b55a05f148e88f70055a9a724e130e001c88311d322ddd038724ddb41\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Nov 4 04:20:28.162411 containerd[1975]: time="2025-11-04T04:20:28.162362397Z" level=info msg="CreateContainer within sandbox \"0f4ab87a18c31946d7fe666265bc155962c0c2bb356a6cb0b402d8a7c0839d76\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Nov 4 04:20:28.176992 containerd[1975]: time="2025-11-04T04:20:28.176886381Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-28-40,Uid:2bfcadd3042b6bb7b351f20a53124bd8,Namespace:kube-system,Attempt:0,} returns sandbox id \"4e1e2a212c7a9232725d67660562ce5cb20bd525bc637a77be68270cc62d94ee\"" Nov 4 04:20:28.187462 containerd[1975]: time="2025-11-04T04:20:28.187298433Z" level=info msg="Container 6f8b69f53e78c6ca1258e2b739c14a0d844584504327dc4807dc99bd93eda75a: CDI devices from CRI Config.CDIDevices: []" Nov 4 04:20:28.188886 kubelet[2936]: E1104 04:20:28.188812 2936 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.28.40:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.28.40:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Nov 4 04:20:28.189858 containerd[1975]: time="2025-11-04T04:20:28.189144309Z" level=info msg="CreateContainer within sandbox \"4e1e2a212c7a9232725d67660562ce5cb20bd525bc637a77be68270cc62d94ee\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Nov 4 04:20:28.195283 containerd[1975]: time="2025-11-04T04:20:28.195219969Z" level=info msg="Container 57c7aab09110b23a0c6622f2d9528f3b793479b01527eb94018a5b004f5c289d: CDI devices from CRI Config.CDIDevices: []" Nov 4 04:20:28.209802 containerd[1975]: time="2025-11-04T04:20:28.209739393Z" level=info msg="CreateContainer within sandbox \"7a50847b55a05f148e88f70055a9a724e130e001c88311d322ddd038724ddb41\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"6f8b69f53e78c6ca1258e2b739c14a0d844584504327dc4807dc99bd93eda75a\"" Nov 4 04:20:28.211438 containerd[1975]: time="2025-11-04T04:20:28.211377369Z" level=info msg="StartContainer for \"6f8b69f53e78c6ca1258e2b739c14a0d844584504327dc4807dc99bd93eda75a\"" Nov 
4 04:20:28.214576 containerd[1975]: time="2025-11-04T04:20:28.214501605Z" level=info msg="connecting to shim 6f8b69f53e78c6ca1258e2b739c14a0d844584504327dc4807dc99bd93eda75a" address="unix:///run/containerd/s/6c1a7057c345db2162a318d518e27ccb393e280828c9e63a15caaa7cebf01668" protocol=ttrpc version=3 Nov 4 04:20:28.220052 containerd[1975]: time="2025-11-04T04:20:28.219940185Z" level=info msg="CreateContainer within sandbox \"0f4ab87a18c31946d7fe666265bc155962c0c2bb356a6cb0b402d8a7c0839d76\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"57c7aab09110b23a0c6622f2d9528f3b793479b01527eb94018a5b004f5c289d\"" Nov 4 04:20:28.221663 containerd[1975]: time="2025-11-04T04:20:28.220965693Z" level=info msg="StartContainer for \"57c7aab09110b23a0c6622f2d9528f3b793479b01527eb94018a5b004f5c289d\"" Nov 4 04:20:28.224109 containerd[1975]: time="2025-11-04T04:20:28.224042445Z" level=info msg="connecting to shim 57c7aab09110b23a0c6622f2d9528f3b793479b01527eb94018a5b004f5c289d" address="unix:///run/containerd/s/5f952bc2f511aeb42617d87a843a1fe11529f485cdcec7cfd53d7c1cd316be60" protocol=ttrpc version=3 Nov 4 04:20:28.227749 containerd[1975]: time="2025-11-04T04:20:28.227685765Z" level=info msg="Container 885507a79aa5443fa11ea672074f26d5ec26f7e5f246ecd6fe8de011d8d0d017: CDI devices from CRI Config.CDIDevices: []" Nov 4 04:20:28.250113 containerd[1975]: time="2025-11-04T04:20:28.250037133Z" level=info msg="CreateContainer within sandbox \"4e1e2a212c7a9232725d67660562ce5cb20bd525bc637a77be68270cc62d94ee\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"885507a79aa5443fa11ea672074f26d5ec26f7e5f246ecd6fe8de011d8d0d017\"" Nov 4 04:20:28.253185 containerd[1975]: time="2025-11-04T04:20:28.253127421Z" level=info msg="StartContainer for \"885507a79aa5443fa11ea672074f26d5ec26f7e5f246ecd6fe8de011d8d0d017\"" Nov 4 04:20:28.259978 kubelet[2936]: E1104 04:20:28.259866 2936 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.28.40:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-28-40&limit=500&resourceVersion=0\": dial tcp 172.31.28.40:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Nov 4 04:20:28.263297 containerd[1975]: time="2025-11-04T04:20:28.262883793Z" level=info msg="connecting to shim 885507a79aa5443fa11ea672074f26d5ec26f7e5f246ecd6fe8de011d8d0d017" address="unix:///run/containerd/s/de60f0290a06605075d8c84b21f14ac80034dfc2947521a6899c5c7e91cee264" protocol=ttrpc version=3 Nov 4 04:20:28.262962 systemd[1]: Started cri-containerd-6f8b69f53e78c6ca1258e2b739c14a0d844584504327dc4807dc99bd93eda75a.scope - libcontainer container 6f8b69f53e78c6ca1258e2b739c14a0d844584504327dc4807dc99bd93eda75a. Nov 4 04:20:28.288528 systemd[1]: Started cri-containerd-57c7aab09110b23a0c6622f2d9528f3b793479b01527eb94018a5b004f5c289d.scope - libcontainer container 57c7aab09110b23a0c6622f2d9528f3b793479b01527eb94018a5b004f5c289d. Nov 4 04:20:28.330904 systemd[1]: Started cri-containerd-885507a79aa5443fa11ea672074f26d5ec26f7e5f246ecd6fe8de011d8d0d017.scope - libcontainer container 885507a79aa5443fa11ea672074f26d5ec26f7e5f246ecd6fe8de011d8d0d017. 
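One detail worth noticing in the "connecting to shim" lines: each static pod's sandbox and the container later created inside it dial the same /run/containerd/s/... socket, which matches containerd running one shim per pod and reusing it for the pod's containers. The pairs below are copied (truncated) from the log.

```python
# Sandbox/container pairs and shim sockets copied (truncated) from the
# "connecting to shim" lines above: each container reuses its sandbox's shim.
shims = {
    "6c1a7057…f01668": ("sandbox 7a50847b…", "kube-apiserver container 6f8b69f5…"),
    "5f952bc2…16be60": ("sandbox 0f4ab87a…", "kube-controller-manager container 57c7aab0…"),
    "de60f029…cee264": ("sandbox 4e1e2a21…", "kube-scheduler container 885507a7…"),
}
for sock, (sandbox, container) in shims.items():
    print(f"/run/containerd/s/{sock}: {sandbox} + {container}")
```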
Nov 4 04:20:28.440007 containerd[1975]: time="2025-11-04T04:20:28.439941790Z" level=info msg="StartContainer for \"6f8b69f53e78c6ca1258e2b739c14a0d844584504327dc4807dc99bd93eda75a\" returns successfully" Nov 4 04:20:28.511177 containerd[1975]: time="2025-11-04T04:20:28.510918935Z" level=info msg="StartContainer for \"57c7aab09110b23a0c6622f2d9528f3b793479b01527eb94018a5b004f5c289d\" returns successfully" Nov 4 04:20:28.540039 containerd[1975]: time="2025-11-04T04:20:28.539977355Z" level=info msg="StartContainer for \"885507a79aa5443fa11ea672074f26d5ec26f7e5f246ecd6fe8de011d8d0d017\" returns successfully" Nov 4 04:20:28.931782 kubelet[2936]: I1104 04:20:28.931735 2936 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-28-40" Nov 4 04:20:29.431415 kubelet[2936]: E1104 04:20:29.431378 2936 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-28-40\" not found" node="ip-172-31-28-40" Nov 4 04:20:29.444065 kubelet[2936]: E1104 04:20:29.444019 2936 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-28-40\" not found" node="ip-172-31-28-40" Nov 4 04:20:29.461062 kubelet[2936]: E1104 04:20:29.461012 2936 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-28-40\" not found" node="ip-172-31-28-40" Nov 4 04:20:30.464699 kubelet[2936]: E1104 04:20:30.464654 2936 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-28-40\" not found" node="ip-172-31-28-40" Nov 4 04:20:30.465402 kubelet[2936]: E1104 04:20:30.465231 2936 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-28-40\" not found" node="ip-172-31-28-40" Nov 4 04:20:30.465402 kubelet[2936]: E1104 04:20:30.464305 2936 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-28-40\" not found" node="ip-172-31-28-40" Nov 4 04:20:31.465975 kubelet[2936]: E1104 04:20:31.465909 2936 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-28-40\" not found" node="ip-172-31-28-40" Nov 4 04:20:31.468788 kubelet[2936]: E1104 04:20:31.468740 2936 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-28-40\" not found" node="ip-172-31-28-40" Nov 4 04:20:32.267852 kubelet[2936]: I1104 04:20:32.267791 2936 apiserver.go:52] "Watching apiserver" Nov 4 04:20:32.280259 kubelet[2936]: E1104 04:20:32.280197 2936 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-28-40\" not found" node="ip-172-31-28-40" Nov 4 04:20:32.298117 kubelet[2936]: I1104 04:20:32.298026 2936 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Nov 4 04:20:32.381606 kubelet[2936]: I1104 04:20:32.380780 2936 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-28-40" Nov 4 04:20:32.399248 kubelet[2936]: I1104 04:20:32.399190 2936 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-28-40" Nov 4 04:20:32.466421 kubelet[2936]: I1104 04:20:32.465951 2936 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-28-40" Nov 4 04:20:32.532203 
kubelet[2936]: E1104 04:20:32.531829 2936 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-28-40\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ip-172-31-28-40" Nov 4 04:20:32.532203 kubelet[2936]: I1104 04:20:32.531873 2936 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-28-40" Nov 4 04:20:32.533427 kubelet[2936]: E1104 04:20:32.533357 2936 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-28-40\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ip-172-31-28-40" Nov 4 04:20:32.564184 kubelet[2936]: E1104 04:20:32.563887 2936 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-28-40\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ip-172-31-28-40" Nov 4 04:20:32.564184 kubelet[2936]: I1104 04:20:32.563929 2936 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-28-40" Nov 4 04:20:32.587574 kubelet[2936]: E1104 04:20:32.587527 2936 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-28-40\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ip-172-31-28-40" Nov 4 04:20:33.337462 update_engine[1948]: I20251104 04:20:33.337378 1948 update_attempter.cc:509] Updating boot flags... Nov 4 04:20:34.891057 systemd[1]: Reload requested from client PID 3324 ('systemctl') (unit session-7.scope)... Nov 4 04:20:34.891703 systemd[1]: Reloading... Nov 4 04:20:35.091368 zram_generator::config[3372]: No configuration found. Nov 4 04:20:35.570887 systemd[1]: Reloading finished in 678 ms. Nov 4 04:20:35.638077 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Nov 4 04:20:35.638769 kubelet[2936]: I1104 04:20:35.638378 2936 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 4 04:20:35.657055 systemd[1]: kubelet.service: Deactivated successfully. Nov 4 04:20:35.657566 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 4 04:20:35.657648 systemd[1]: kubelet.service: Consumed 2.003s CPU time, 128.2M memory peak. Nov 4 04:20:35.662548 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 4 04:20:36.035376 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 4 04:20:36.050213 (kubelet)[3429]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 4 04:20:36.145080 kubelet[3429]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 4 04:20:36.146243 kubelet[3429]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 4 04:20:36.146243 kubelet[3429]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
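All of the kubelet messages above carry the standard klog header, "Lmmdd hh:mm:ss.uuuuuu PID file:line] msg", where the leading letter is the severity (I, W, E, F). A minimal parser run against one line copied from the log.

```python
import re

# klog header: severity letter, month/day, time, PID, source file:line, message.
KLOG = re.compile(
    r"(?P<sev>[IWEF])(?P<month>\d{2})(?P<day>\d{2}) "
    r"(?P<time>\d{2}:\d{2}:\d{2}\.\d+) +(?P<pid>\d+) "
    r"(?P<src>[^ ]+:\d+)\] (?P<msg>.*)"
)

line = 'E1104 04:20:32.531829 2936 kubelet.go:3311] "Failed creating a mirror pod"'
m = KLOG.match(line)
print(m.group("sev"), m.group("time"), m.group("src"), "-", m.group("msg"))
```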
Nov 4 04:20:36.146243 kubelet[3429]: I1104 04:20:36.145404 3429 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 4 04:20:36.170356 kubelet[3429]: I1104 04:20:36.169895 3429 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Nov 4 04:20:36.170356 kubelet[3429]: I1104 04:20:36.169945 3429 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 4 04:20:36.171046 kubelet[3429]: I1104 04:20:36.170668 3429 server.go:956] "Client rotation is on, will bootstrap in background" Nov 4 04:20:36.173588 kubelet[3429]: I1104 04:20:36.173547 3429 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Nov 4 04:20:36.178274 kubelet[3429]: I1104 04:20:36.178231 3429 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 4 04:20:36.188044 kubelet[3429]: I1104 04:20:36.187988 3429 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Nov 4 04:20:36.197667 kubelet[3429]: I1104 04:20:36.197533 3429 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Nov 4 04:20:36.198127 kubelet[3429]: I1104 04:20:36.198088 3429 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 4 04:20:36.198509 kubelet[3429]: I1104 04:20:36.198219 3429 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-28-40","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 4 04:20:36.198754 kubelet[3429]: I1104 04:20:36.198730 3429 topology_manager.go:138] "Creating topology manager with none policy" Nov 4 04:20:36.198869 kubelet[3429]: I1104 04:20:36.198851 3429 container_manager_linux.go:303] "Creating device plugin manager" Nov 4 04:20:36.199028 kubelet[3429]: I1104 04:20:36.199010 3429 state_mem.go:36] "Initialized new in-memory state store" Nov 4 04:20:36.199449 kubelet[3429]: I1104 04:20:36.199412 
3429 kubelet.go:480] "Attempting to sync node with API server" Nov 4 04:20:36.200900 kubelet[3429]: I1104 04:20:36.200819 3429 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 4 04:20:36.201445 kubelet[3429]: I1104 04:20:36.201027 3429 kubelet.go:386] "Adding apiserver pod source" Nov 4 04:20:36.201445 kubelet[3429]: I1104 04:20:36.201061 3429 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 4 04:20:36.216994 kubelet[3429]: I1104 04:20:36.216936 3429 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.1.4" apiVersion="v1" Nov 4 04:20:36.218419 kubelet[3429]: I1104 04:20:36.218387 3429 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Nov 4 04:20:36.247168 kubelet[3429]: I1104 04:20:36.247114 3429 watchdog_linux.go:99] "Systemd watchdog is not enabled" Nov 4 04:20:36.247426 kubelet[3429]: I1104 04:20:36.247407 3429 server.go:1289] "Started kubelet" Nov 4 04:20:36.253699 kubelet[3429]: I1104 04:20:36.252971 3429 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 4 04:20:36.270591 kubelet[3429]: I1104 04:20:36.269333 3429 volume_manager.go:297] "Starting Kubelet Volume Manager" Nov 4 04:20:36.273858 kubelet[3429]: I1104 04:20:36.273822 3429 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Nov 4 04:20:36.275417 kubelet[3429]: I1104 04:20:36.270203 3429 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 4 04:20:36.275542 kubelet[3429]: I1104 04:20:36.275490 3429 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 4 04:20:36.275785 kubelet[3429]: I1104 04:20:36.275761 3429 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 4 04:20:36.277030 kubelet[3429]: I1104 04:20:36.270128 3429 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Nov 4 04:20:36.280745 kubelet[3429]: I1104 04:20:36.279643 3429 server.go:317] "Adding debug handlers to kubelet server" Nov 4 04:20:36.284348 kubelet[3429]: I1104 04:20:36.283749 3429 reconciler.go:26] "Reconciler: start to sync state" Nov 4 04:20:36.289382 kubelet[3429]: I1104 04:20:36.288634 3429 factory.go:223] Registration of the systemd container factory successfully Nov 4 04:20:36.290598 kubelet[3429]: I1104 04:20:36.289713 3429 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 4 04:20:36.299812 kubelet[3429]: I1104 04:20:36.299745 3429 factory.go:223] Registration of the containerd container factory successfully Nov 4 04:20:36.315426 kubelet[3429]: I1104 04:20:36.315361 3429 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Nov 4 04:20:36.317730 kubelet[3429]: I1104 04:20:36.317388 3429 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Nov 4 04:20:36.317730 kubelet[3429]: I1104 04:20:36.317435 3429 status_manager.go:230] "Starting to sync pod status with apiserver" Nov 4 04:20:36.317730 kubelet[3429]: I1104 04:20:36.317471 3429 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
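The "Registration of the crio container factory failed" line is expected here: there is no CRI-O socket on this containerd-based node, so only the containerd factory registers. A quick check of the two socket paths; only the crio path appears in the log, and the containerd path below is the usual default, included as an assumption.

```python
from pathlib import Path

# /var/run/crio/crio.sock comes from the log line above; the containerd
# default socket path is an assumption about this node.
for sock in ("/run/containerd/containerd.sock", "/var/run/crio/crio.sock"):
    print(sock, "->", "present" if Path(sock).exists() else "absent")
```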
Nov 4 04:20:36.317730 kubelet[3429]: I1104 04:20:36.317484 3429 kubelet.go:2436] "Starting kubelet main sync loop" Nov 4 04:20:36.317730 kubelet[3429]: E1104 04:20:36.317554 3429 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 4 04:20:36.335656 kubelet[3429]: E1104 04:20:36.335248 3429 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 4 04:20:36.420188 kubelet[3429]: E1104 04:20:36.420118 3429 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Nov 4 04:20:36.428538 kubelet[3429]: I1104 04:20:36.428492 3429 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 4 04:20:36.428689 kubelet[3429]: I1104 04:20:36.428527 3429 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 4 04:20:36.428689 kubelet[3429]: I1104 04:20:36.428592 3429 state_mem.go:36] "Initialized new in-memory state store" Nov 4 04:20:36.429157 kubelet[3429]: I1104 04:20:36.429104 3429 state_mem.go:88] "Updated default CPUSet" cpuSet="" Nov 4 04:20:36.429157 kubelet[3429]: I1104 04:20:36.429140 3429 state_mem.go:96] "Updated CPUSet assignments" assignments={} Nov 4 04:20:36.429275 kubelet[3429]: I1104 04:20:36.429174 3429 policy_none.go:49] "None policy: Start" Nov 4 04:20:36.429275 kubelet[3429]: I1104 04:20:36.429195 3429 memory_manager.go:186] "Starting memorymanager" policy="None" Nov 4 04:20:36.429275 kubelet[3429]: I1104 04:20:36.429216 3429 state_mem.go:35] "Initializing new in-memory state store" Nov 4 04:20:36.430084 kubelet[3429]: I1104 04:20:36.430037 3429 state_mem.go:75] "Updated machine memory state" Nov 4 04:20:36.446348 kubelet[3429]: E1104 04:20:36.445773 3429 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Nov 4 04:20:36.446348 kubelet[3429]: I1104 04:20:36.446043 3429 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 4 04:20:36.446348 kubelet[3429]: I1104 04:20:36.446062 3429 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 4 04:20:36.447245 kubelet[3429]: I1104 04:20:36.447220 3429 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 4 04:20:36.451108 kubelet[3429]: E1104 04:20:36.451074 3429 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Nov 4 04:20:36.578169 kubelet[3429]: I1104 04:20:36.576499 3429 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-28-40" Nov 4 04:20:36.601666 kubelet[3429]: I1104 04:20:36.601611 3429 kubelet_node_status.go:124] "Node was previously registered" node="ip-172-31-28-40" Nov 4 04:20:36.601847 kubelet[3429]: I1104 04:20:36.601730 3429 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-28-40" Nov 4 04:20:36.622947 kubelet[3429]: I1104 04:20:36.622880 3429 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-28-40" Nov 4 04:20:36.625594 kubelet[3429]: I1104 04:20:36.624579 3429 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-28-40" Nov 4 04:20:36.625594 kubelet[3429]: I1104 04:20:36.624847 3429 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-28-40" Nov 4 04:20:36.688166 kubelet[3429]: I1104 04:20:36.688091 3429 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/49eeb7fdfaf5cb1bb5eb4fefe36f418c-k8s-certs\") pod \"kube-apiserver-ip-172-31-28-40\" (UID: \"49eeb7fdfaf5cb1bb5eb4fefe36f418c\") " pod="kube-system/kube-apiserver-ip-172-31-28-40" Nov 4 04:20:36.688166 kubelet[3429]: I1104 04:20:36.688160 3429 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/49eeb7fdfaf5cb1bb5eb4fefe36f418c-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-28-40\" (UID: \"49eeb7fdfaf5cb1bb5eb4fefe36f418c\") " pod="kube-system/kube-apiserver-ip-172-31-28-40" Nov 4 04:20:36.688411 kubelet[3429]: I1104 04:20:36.688203 3429 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d1d5a477c93a45f7607fbe4ee59ce47a-ca-certs\") pod \"kube-controller-manager-ip-172-31-28-40\" (UID: \"d1d5a477c93a45f7607fbe4ee59ce47a\") " pod="kube-system/kube-controller-manager-ip-172-31-28-40" Nov 4 04:20:36.688411 kubelet[3429]: I1104 04:20:36.688243 3429 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d1d5a477c93a45f7607fbe4ee59ce47a-k8s-certs\") pod \"kube-controller-manager-ip-172-31-28-40\" (UID: \"d1d5a477c93a45f7607fbe4ee59ce47a\") " pod="kube-system/kube-controller-manager-ip-172-31-28-40" Nov 4 04:20:36.688411 kubelet[3429]: I1104 04:20:36.688279 3429 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2bfcadd3042b6bb7b351f20a53124bd8-kubeconfig\") pod \"kube-scheduler-ip-172-31-28-40\" (UID: \"2bfcadd3042b6bb7b351f20a53124bd8\") " pod="kube-system/kube-scheduler-ip-172-31-28-40" Nov 4 04:20:36.689577 kubelet[3429]: I1104 04:20:36.689484 3429 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/49eeb7fdfaf5cb1bb5eb4fefe36f418c-ca-certs\") pod \"kube-apiserver-ip-172-31-28-40\" (UID: \"49eeb7fdfaf5cb1bb5eb4fefe36f418c\") " pod="kube-system/kube-apiserver-ip-172-31-28-40" Nov 4 04:20:36.689869 kubelet[3429]: I1104 04:20:36.689614 3429 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d1d5a477c93a45f7607fbe4ee59ce47a-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-28-40\" (UID: \"d1d5a477c93a45f7607fbe4ee59ce47a\") " pod="kube-system/kube-controller-manager-ip-172-31-28-40" Nov 4 04:20:36.689869 kubelet[3429]: I1104 04:20:36.689662 3429 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d1d5a477c93a45f7607fbe4ee59ce47a-kubeconfig\") pod \"kube-controller-manager-ip-172-31-28-40\" (UID: \"d1d5a477c93a45f7607fbe4ee59ce47a\") " pod="kube-system/kube-controller-manager-ip-172-31-28-40" Nov 4 04:20:36.689869 kubelet[3429]: I1104 04:20:36.689701 3429 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d1d5a477c93a45f7607fbe4ee59ce47a-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-28-40\" (UID: \"d1d5a477c93a45f7607fbe4ee59ce47a\") " pod="kube-system/kube-controller-manager-ip-172-31-28-40" Nov 4 04:20:37.213218 kubelet[3429]: I1104 04:20:37.213161 3429 apiserver.go:52] "Watching apiserver" Nov 4 04:20:37.276701 kubelet[3429]: I1104 04:20:37.276612 3429 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Nov 4 04:20:37.380849 kubelet[3429]: I1104 04:20:37.380799 3429 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-28-40" Nov 4 04:20:37.394453 kubelet[3429]: E1104 04:20:37.394386 3429 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-28-40\" already exists" pod="kube-system/kube-scheduler-ip-172-31-28-40" Nov 4 04:20:37.483373 kubelet[3429]: I1104 04:20:37.480916 3429 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-28-40" podStartSLOduration=1.480894259 podStartE2EDuration="1.480894259s" podCreationTimestamp="2025-11-04 04:20:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-04 04:20:37.446065927 +0000 UTC m=+1.387350476" watchObservedRunningTime="2025-11-04 04:20:37.480894259 +0000 UTC m=+1.422178796" Nov 4 04:20:37.526759 kubelet[3429]: I1104 04:20:37.526667 3429 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-28-40" podStartSLOduration=1.526644307 podStartE2EDuration="1.526644307s" podCreationTimestamp="2025-11-04 04:20:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-04 04:20:37.480458983 +0000 UTC m=+1.421743532" watchObservedRunningTime="2025-11-04 04:20:37.526644307 +0000 UTC m=+1.467928856" Nov 4 04:20:37.557459 kubelet[3429]: I1104 04:20:37.557376 3429 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-28-40" podStartSLOduration=1.5573531 podStartE2EDuration="1.5573531s" podCreationTimestamp="2025-11-04 04:20:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-04 04:20:37.529018963 +0000 UTC m=+1.470303512" watchObservedRunningTime="2025-11-04 04:20:37.5573531 +0000 UTC m=+1.498637649" Nov 4 04:20:41.588514 
kubelet[3429]: I1104 04:20:41.588454 3429 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Nov 4 04:20:41.590933 containerd[1975]: time="2025-11-04T04:20:41.590715156Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Nov 4 04:20:41.593022 kubelet[3429]: I1104 04:20:41.592804 3429 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Nov 4 04:20:42.494424 systemd[1]: Created slice kubepods-besteffort-pod14473be2_1367_4557_9ce1_e0b763bfaee3.slice - libcontainer container kubepods-besteffort-pod14473be2_1367_4557_9ce1_e0b763bfaee3.slice. Nov 4 04:20:42.530636 kubelet[3429]: I1104 04:20:42.530503 3429 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/14473be2-1367-4557-9ce1-e0b763bfaee3-xtables-lock\") pod \"kube-proxy-82ssx\" (UID: \"14473be2-1367-4557-9ce1-e0b763bfaee3\") " pod="kube-system/kube-proxy-82ssx" Nov 4 04:20:42.530636 kubelet[3429]: I1104 04:20:42.530573 3429 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/14473be2-1367-4557-9ce1-e0b763bfaee3-lib-modules\") pod \"kube-proxy-82ssx\" (UID: \"14473be2-1367-4557-9ce1-e0b763bfaee3\") " pod="kube-system/kube-proxy-82ssx" Nov 4 04:20:42.531453 kubelet[3429]: I1104 04:20:42.530702 3429 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rpm8q\" (UniqueName: \"kubernetes.io/projected/14473be2-1367-4557-9ce1-e0b763bfaee3-kube-api-access-rpm8q\") pod \"kube-proxy-82ssx\" (UID: \"14473be2-1367-4557-9ce1-e0b763bfaee3\") " pod="kube-system/kube-proxy-82ssx" Nov 4 04:20:42.531453 kubelet[3429]: I1104 04:20:42.530804 3429 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/14473be2-1367-4557-9ce1-e0b763bfaee3-kube-proxy\") pod \"kube-proxy-82ssx\" (UID: \"14473be2-1367-4557-9ce1-e0b763bfaee3\") " pod="kube-system/kube-proxy-82ssx" Nov 4 04:20:42.602539 systemd[1]: Created slice kubepods-besteffort-pod90372ca8_7303_4f3e_9260_d35450b34fcd.slice - libcontainer container kubepods-besteffort-pod90372ca8_7303_4f3e_9260_d35450b34fcd.slice. 
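The slice names in the systemd lines above are derived mechanically from the pod's QoS class and UID, with dashes in the UID replaced by underscores (compare the kube-proxy UID 14473be2-1367-4557-9ce1-e0b763bfaee3 with kubepods-besteffort-pod14473be2_1367_4557_9ce1_e0b763bfaee3.slice). A small helper reproducing the names seen in the log.

```python
# Reproduces the cgroup slice names created above for the kube-proxy and
# tigera-operator pods (both BestEffort QoS in this log).
def pod_slice(uid: str, qos: str) -> str:
    return f"kubepods-{qos}-pod{uid.replace('-', '_')}.slice"

print(pod_slice("14473be2-1367-4557-9ce1-e0b763bfaee3", "besteffort"))  # kube-proxy-82ssx
print(pod_slice("90372ca8-7303-4f3e-9260-d35450b34fcd", "besteffort"))  # tigera-operator-7dcd859c48-x82p2
```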
Nov 4 04:20:42.632034 kubelet[3429]: I1104 04:20:42.631927 3429 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/90372ca8-7303-4f3e-9260-d35450b34fcd-var-lib-calico\") pod \"tigera-operator-7dcd859c48-x82p2\" (UID: \"90372ca8-7303-4f3e-9260-d35450b34fcd\") " pod="tigera-operator/tigera-operator-7dcd859c48-x82p2" Nov 4 04:20:42.632631 kubelet[3429]: I1104 04:20:42.632140 3429 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l9jds\" (UniqueName: \"kubernetes.io/projected/90372ca8-7303-4f3e-9260-d35450b34fcd-kube-api-access-l9jds\") pod \"tigera-operator-7dcd859c48-x82p2\" (UID: \"90372ca8-7303-4f3e-9260-d35450b34fcd\") " pod="tigera-operator/tigera-operator-7dcd859c48-x82p2" Nov 4 04:20:42.807800 containerd[1975]: time="2025-11-04T04:20:42.807566450Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-82ssx,Uid:14473be2-1367-4557-9ce1-e0b763bfaee3,Namespace:kube-system,Attempt:0,}" Nov 4 04:20:42.858906 containerd[1975]: time="2025-11-04T04:20:42.858666698Z" level=info msg="connecting to shim dbb98ffd5435639a39daed7fdc2ce00ee84fb42f0e47a123238f1a2e05497b18" address="unix:///run/containerd/s/e5da8e12ca9179c1c88b8075b481d41ade69ec439c38bbbf96b9dcb46876b4f8" namespace=k8s.io protocol=ttrpc version=3 Nov 4 04:20:42.902684 systemd[1]: Started cri-containerd-dbb98ffd5435639a39daed7fdc2ce00ee84fb42f0e47a123238f1a2e05497b18.scope - libcontainer container dbb98ffd5435639a39daed7fdc2ce00ee84fb42f0e47a123238f1a2e05497b18. Nov 4 04:20:42.912365 containerd[1975]: time="2025-11-04T04:20:42.912019814Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-x82p2,Uid:90372ca8-7303-4f3e-9260-d35450b34fcd,Namespace:tigera-operator,Attempt:0,}" Nov 4 04:20:42.968008 containerd[1975]: time="2025-11-04T04:20:42.967939586Z" level=info msg="connecting to shim 1797daf209c27cd1b915301f8b30b59224252c0e1e9275e430f5a89f0ce55278" address="unix:///run/containerd/s/38e8cebd1f22fbae54305ff90bfcd5a453230b6f6670b6dd5e4044de1e0f88b4" namespace=k8s.io protocol=ttrpc version=3 Nov 4 04:20:42.985895 containerd[1975]: time="2025-11-04T04:20:42.985840995Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-82ssx,Uid:14473be2-1367-4557-9ce1-e0b763bfaee3,Namespace:kube-system,Attempt:0,} returns sandbox id \"dbb98ffd5435639a39daed7fdc2ce00ee84fb42f0e47a123238f1a2e05497b18\"" Nov 4 04:20:42.996535 containerd[1975]: time="2025-11-04T04:20:42.996473127Z" level=info msg="CreateContainer within sandbox \"dbb98ffd5435639a39daed7fdc2ce00ee84fb42f0e47a123238f1a2e05497b18\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Nov 4 04:20:43.022379 containerd[1975]: time="2025-11-04T04:20:43.022264475Z" level=info msg="Container 424ecf4af68388d831062ec0681a472b2641e321991515966eaa618d04e5186b: CDI devices from CRI Config.CDIDevices: []" Nov 4 04:20:43.024453 systemd[1]: Started cri-containerd-1797daf209c27cd1b915301f8b30b59224252c0e1e9275e430f5a89f0ce55278.scope - libcontainer container 1797daf209c27cd1b915301f8b30b59224252c0e1e9275e430f5a89f0ce55278. 
Nov 4 04:20:43.046461 containerd[1975]: time="2025-11-04T04:20:43.046399859Z" level=info msg="CreateContainer within sandbox \"dbb98ffd5435639a39daed7fdc2ce00ee84fb42f0e47a123238f1a2e05497b18\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"424ecf4af68388d831062ec0681a472b2641e321991515966eaa618d04e5186b\"" Nov 4 04:20:43.049034 containerd[1975]: time="2025-11-04T04:20:43.048976631Z" level=info msg="StartContainer for \"424ecf4af68388d831062ec0681a472b2641e321991515966eaa618d04e5186b\"" Nov 4 04:20:43.052380 containerd[1975]: time="2025-11-04T04:20:43.052286435Z" level=info msg="connecting to shim 424ecf4af68388d831062ec0681a472b2641e321991515966eaa618d04e5186b" address="unix:///run/containerd/s/e5da8e12ca9179c1c88b8075b481d41ade69ec439c38bbbf96b9dcb46876b4f8" protocol=ttrpc version=3 Nov 4 04:20:43.100742 systemd[1]: Started cri-containerd-424ecf4af68388d831062ec0681a472b2641e321991515966eaa618d04e5186b.scope - libcontainer container 424ecf4af68388d831062ec0681a472b2641e321991515966eaa618d04e5186b. Nov 4 04:20:43.129896 containerd[1975]: time="2025-11-04T04:20:43.129774059Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-x82p2,Uid:90372ca8-7303-4f3e-9260-d35450b34fcd,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"1797daf209c27cd1b915301f8b30b59224252c0e1e9275e430f5a89f0ce55278\"" Nov 4 04:20:43.134911 containerd[1975]: time="2025-11-04T04:20:43.134835851Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Nov 4 04:20:43.208936 containerd[1975]: time="2025-11-04T04:20:43.208766880Z" level=info msg="StartContainer for \"424ecf4af68388d831062ec0681a472b2641e321991515966eaa618d04e5186b\" returns successfully" Nov 4 04:20:43.430862 kubelet[3429]: I1104 04:20:43.430292 3429 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-82ssx" podStartSLOduration=1.430272169 podStartE2EDuration="1.430272169s" podCreationTimestamp="2025-11-04 04:20:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-04 04:20:43.428694373 +0000 UTC m=+7.369978922" watchObservedRunningTime="2025-11-04 04:20:43.430272169 +0000 UTC m=+7.371556718" Nov 4 04:20:44.566186 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount8796402.mount: Deactivated successfully. 
Nov 4 04:20:45.701255 containerd[1975]: time="2025-11-04T04:20:45.700013800Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 04:20:45.702049 containerd[1975]: time="2025-11-04T04:20:45.701987044Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=20773434" Nov 4 04:20:45.704408 containerd[1975]: time="2025-11-04T04:20:45.704364016Z" level=info msg="ImageCreate event name:\"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 04:20:45.709955 containerd[1975]: time="2025-11-04T04:20:45.709907452Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 04:20:45.711776 containerd[1975]: time="2025-11-04T04:20:45.711673564Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"22147999\" in 2.576771317s" Nov 4 04:20:45.711993 containerd[1975]: time="2025-11-04T04:20:45.711962032Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\"" Nov 4 04:20:45.719563 containerd[1975]: time="2025-11-04T04:20:45.719486068Z" level=info msg="CreateContainer within sandbox \"1797daf209c27cd1b915301f8b30b59224252c0e1e9275e430f5a89f0ce55278\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Nov 4 04:20:45.737356 containerd[1975]: time="2025-11-04T04:20:45.736477672Z" level=info msg="Container 54802e43f681b104394bfe2cb1bec0682323924fe75895791a9b8d9bb1054710: CDI devices from CRI Config.CDIDevices: []" Nov 4 04:20:45.742427 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4288404398.mount: Deactivated successfully. Nov 4 04:20:45.752211 containerd[1975]: time="2025-11-04T04:20:45.752132272Z" level=info msg="CreateContainer within sandbox \"1797daf209c27cd1b915301f8b30b59224252c0e1e9275e430f5a89f0ce55278\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"54802e43f681b104394bfe2cb1bec0682323924fe75895791a9b8d9bb1054710\"" Nov 4 04:20:45.753401 containerd[1975]: time="2025-11-04T04:20:45.753296896Z" level=info msg="StartContainer for \"54802e43f681b104394bfe2cb1bec0682323924fe75895791a9b8d9bb1054710\"" Nov 4 04:20:45.757443 containerd[1975]: time="2025-11-04T04:20:45.757379356Z" level=info msg="connecting to shim 54802e43f681b104394bfe2cb1bec0682323924fe75895791a9b8d9bb1054710" address="unix:///run/containerd/s/38e8cebd1f22fbae54305ff90bfcd5a453230b6f6670b6dd5e4044de1e0f88b4" protocol=ttrpc version=3 Nov 4 04:20:45.800656 systemd[1]: Started cri-containerd-54802e43f681b104394bfe2cb1bec0682323924fe75895791a9b8d9bb1054710.scope - libcontainer container 54802e43f681b104394bfe2cb1bec0682323924fe75895791a9b8d9bb1054710. 
Nov 4 04:20:45.876914 containerd[1975]: time="2025-11-04T04:20:45.876857729Z" level=info msg="StartContainer for \"54802e43f681b104394bfe2cb1bec0682323924fe75895791a9b8d9bb1054710\" returns successfully" Nov 4 04:20:46.437977 kubelet[3429]: I1104 04:20:46.437816 3429 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-x82p2" podStartSLOduration=1.8587389989999998 podStartE2EDuration="4.437791024s" podCreationTimestamp="2025-11-04 04:20:42 +0000 UTC" firstStartedPulling="2025-11-04 04:20:43.133904615 +0000 UTC m=+7.075189140" lastFinishedPulling="2025-11-04 04:20:45.71295664 +0000 UTC m=+9.654241165" observedRunningTime="2025-11-04 04:20:46.437253904 +0000 UTC m=+10.378538453" watchObservedRunningTime="2025-11-04 04:20:46.437791024 +0000 UTC m=+10.379075561" Nov 4 04:20:52.731438 sudo[2340]: pam_unix(sudo:session): session closed for user root Nov 4 04:20:52.758723 sshd[2339]: Connection closed by 147.75.109.163 port 33936 Nov 4 04:20:52.760429 sshd-session[2336]: pam_unix(sshd:session): session closed for user core Nov 4 04:20:52.776675 systemd[1]: sshd@6-172.31.28.40:22-147.75.109.163:33936.service: Deactivated successfully. Nov 4 04:20:52.785092 systemd[1]: session-7.scope: Deactivated successfully. Nov 4 04:20:52.786722 systemd[1]: session-7.scope: Consumed 10.676s CPU time, 223.8M memory peak. Nov 4 04:20:52.791863 systemd-logind[1946]: Session 7 logged out. Waiting for processes to exit. Nov 4 04:20:52.799244 systemd-logind[1946]: Removed session 7. Nov 4 04:21:18.035007 systemd[1]: Created slice kubepods-besteffort-pod1222e635_1e71_4c68_94e2_f57733d67205.slice - libcontainer container kubepods-besteffort-pod1222e635_1e71_4c68_94e2_f57733d67205.slice. Nov 4 04:21:18.077094 kubelet[3429]: I1104 04:21:18.077027 3429 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/1222e635-1e71-4c68-94e2-f57733d67205-typha-certs\") pod \"calico-typha-657b47dfbc-fqrs8\" (UID: \"1222e635-1e71-4c68-94e2-f57733d67205\") " pod="calico-system/calico-typha-657b47dfbc-fqrs8" Nov 4 04:21:18.077935 kubelet[3429]: I1104 04:21:18.077109 3429 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1222e635-1e71-4c68-94e2-f57733d67205-tigera-ca-bundle\") pod \"calico-typha-657b47dfbc-fqrs8\" (UID: \"1222e635-1e71-4c68-94e2-f57733d67205\") " pod="calico-system/calico-typha-657b47dfbc-fqrs8" Nov 4 04:21:18.077935 kubelet[3429]: I1104 04:21:18.077153 3429 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-npjd5\" (UniqueName: \"kubernetes.io/projected/1222e635-1e71-4c68-94e2-f57733d67205-kube-api-access-npjd5\") pod \"calico-typha-657b47dfbc-fqrs8\" (UID: \"1222e635-1e71-4c68-94e2-f57733d67205\") " pod="calico-system/calico-typha-657b47dfbc-fqrs8" Nov 4 04:21:18.259153 systemd[1]: Created slice kubepods-besteffort-pod00481659_2154_471a_85d3_956bc083e0e6.slice - libcontainer container kubepods-besteffort-pod00481659_2154_471a_85d3_956bc083e0e6.slice. 
Nov 4 04:21:18.279456 kubelet[3429]: I1104 04:21:18.279406 3429 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/00481659-2154-471a-85d3-956bc083e0e6-var-run-calico\") pod \"calico-node-hbvcf\" (UID: \"00481659-2154-471a-85d3-956bc083e0e6\") " pod="calico-system/calico-node-hbvcf" Nov 4 04:21:18.279703 kubelet[3429]: I1104 04:21:18.279675 3429 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bf8m8\" (UniqueName: \"kubernetes.io/projected/00481659-2154-471a-85d3-956bc083e0e6-kube-api-access-bf8m8\") pod \"calico-node-hbvcf\" (UID: \"00481659-2154-471a-85d3-956bc083e0e6\") " pod="calico-system/calico-node-hbvcf" Nov 4 04:21:18.279847 kubelet[3429]: I1104 04:21:18.279821 3429 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/00481659-2154-471a-85d3-956bc083e0e6-cni-bin-dir\") pod \"calico-node-hbvcf\" (UID: \"00481659-2154-471a-85d3-956bc083e0e6\") " pod="calico-system/calico-node-hbvcf" Nov 4 04:21:18.280001 kubelet[3429]: I1104 04:21:18.279977 3429 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/00481659-2154-471a-85d3-956bc083e0e6-cni-net-dir\") pod \"calico-node-hbvcf\" (UID: \"00481659-2154-471a-85d3-956bc083e0e6\") " pod="calico-system/calico-node-hbvcf" Nov 4 04:21:18.280142 kubelet[3429]: I1104 04:21:18.280116 3429 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/00481659-2154-471a-85d3-956bc083e0e6-lib-modules\") pod \"calico-node-hbvcf\" (UID: \"00481659-2154-471a-85d3-956bc083e0e6\") " pod="calico-system/calico-node-hbvcf" Nov 4 04:21:18.280294 kubelet[3429]: I1104 04:21:18.280270 3429 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/00481659-2154-471a-85d3-956bc083e0e6-xtables-lock\") pod \"calico-node-hbvcf\" (UID: \"00481659-2154-471a-85d3-956bc083e0e6\") " pod="calico-system/calico-node-hbvcf" Nov 4 04:21:18.280455 kubelet[3429]: I1104 04:21:18.280432 3429 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/00481659-2154-471a-85d3-956bc083e0e6-policysync\") pod \"calico-node-hbvcf\" (UID: \"00481659-2154-471a-85d3-956bc083e0e6\") " pod="calico-system/calico-node-hbvcf" Nov 4 04:21:18.280595 kubelet[3429]: I1104 04:21:18.280572 3429 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/00481659-2154-471a-85d3-956bc083e0e6-cni-log-dir\") pod \"calico-node-hbvcf\" (UID: \"00481659-2154-471a-85d3-956bc083e0e6\") " pod="calico-system/calico-node-hbvcf" Nov 4 04:21:18.281899 kubelet[3429]: I1104 04:21:18.281861 3429 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/00481659-2154-471a-85d3-956bc083e0e6-node-certs\") pod \"calico-node-hbvcf\" (UID: \"00481659-2154-471a-85d3-956bc083e0e6\") " pod="calico-system/calico-node-hbvcf" Nov 4 04:21:18.282109 kubelet[3429]: I1104 04:21:18.282085 3429 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/00481659-2154-471a-85d3-956bc083e0e6-tigera-ca-bundle\") pod \"calico-node-hbvcf\" (UID: \"00481659-2154-471a-85d3-956bc083e0e6\") " pod="calico-system/calico-node-hbvcf" Nov 4 04:21:18.282271 kubelet[3429]: I1104 04:21:18.282239 3429 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/00481659-2154-471a-85d3-956bc083e0e6-var-lib-calico\") pod \"calico-node-hbvcf\" (UID: \"00481659-2154-471a-85d3-956bc083e0e6\") " pod="calico-system/calico-node-hbvcf" Nov 4 04:21:18.282473 kubelet[3429]: I1104 04:21:18.282422 3429 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/00481659-2154-471a-85d3-956bc083e0e6-flexvol-driver-host\") pod \"calico-node-hbvcf\" (UID: \"00481659-2154-471a-85d3-956bc083e0e6\") " pod="calico-system/calico-node-hbvcf" Nov 4 04:21:18.346293 containerd[1975]: time="2025-11-04T04:21:18.344378038Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-657b47dfbc-fqrs8,Uid:1222e635-1e71-4c68-94e2-f57733d67205,Namespace:calico-system,Attempt:0,}" Nov 4 04:21:18.378860 kubelet[3429]: E1104 04:21:18.377347 3429 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bjdkx" podUID="27fda10a-3169-4bf6-a620-503cc9dcb069" Nov 4 04:21:18.393791 kubelet[3429]: E1104 04:21:18.393627 3429 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:21:18.393791 kubelet[3429]: W1104 04:21:18.393664 3429 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:21:18.393791 kubelet[3429]: E1104 04:21:18.393709 3429 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 04:21:18.426387 kubelet[3429]: E1104 04:21:18.424078 3429 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:21:18.426387 kubelet[3429]: W1104 04:21:18.424119 3429 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:21:18.426387 kubelet[3429]: E1104 04:21:18.424159 3429 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 04:21:18.441246 kubelet[3429]: E1104 04:21:18.441185 3429 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:21:18.441676 kubelet[3429]: W1104 04:21:18.441395 3429 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:21:18.441676 kubelet[3429]: E1104 04:21:18.441431 3429 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 04:21:18.443859 kubelet[3429]: E1104 04:21:18.443745 3429 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:21:18.444282 kubelet[3429]: W1104 04:21:18.444087 3429 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:21:18.444282 kubelet[3429]: E1104 04:21:18.444180 3429 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 04:21:18.444854 kubelet[3429]: E1104 04:21:18.444821 3429 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:21:18.445086 kubelet[3429]: W1104 04:21:18.444966 3429 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:21:18.445086 kubelet[3429]: E1104 04:21:18.445002 3429 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 04:21:18.445629 kubelet[3429]: E1104 04:21:18.445483 3429 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:21:18.445629 kubelet[3429]: W1104 04:21:18.445507 3429 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:21:18.445629 kubelet[3429]: E1104 04:21:18.445530 3429 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 04:21:18.446123 kubelet[3429]: E1104 04:21:18.446096 3429 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:21:18.446371 kubelet[3429]: W1104 04:21:18.446219 3429 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:21:18.446371 kubelet[3429]: E1104 04:21:18.446252 3429 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 04:21:18.446809 kubelet[3429]: E1104 04:21:18.446786 3429 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:21:18.447024 kubelet[3429]: W1104 04:21:18.446914 3429 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:21:18.447024 kubelet[3429]: E1104 04:21:18.446946 3429 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 04:21:18.448668 kubelet[3429]: E1104 04:21:18.448634 3429 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:21:18.448980 kubelet[3429]: W1104 04:21:18.448847 3429 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:21:18.448980 kubelet[3429]: E1104 04:21:18.448885 3429 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 04:21:18.449423 kubelet[3429]: E1104 04:21:18.449397 3429 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:21:18.449687 kubelet[3429]: W1104 04:21:18.449552 3429 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:21:18.449687 kubelet[3429]: E1104 04:21:18.449586 3429 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 04:21:18.450347 kubelet[3429]: E1104 04:21:18.450146 3429 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:21:18.450347 kubelet[3429]: W1104 04:21:18.450175 3429 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:21:18.450347 kubelet[3429]: E1104 04:21:18.450202 3429 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 04:21:18.450877 kubelet[3429]: E1104 04:21:18.450847 3429 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:21:18.451076 kubelet[3429]: W1104 04:21:18.450963 3429 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:21:18.451076 kubelet[3429]: E1104 04:21:18.450996 3429 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 04:21:18.452223 kubelet[3429]: E1104 04:21:18.452070 3429 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:21:18.452223 kubelet[3429]: W1104 04:21:18.452100 3429 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:21:18.452223 kubelet[3429]: E1104 04:21:18.452129 3429 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 04:21:18.453010 kubelet[3429]: E1104 04:21:18.452974 3429 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:21:18.453474 kubelet[3429]: W1104 04:21:18.453156 3429 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:21:18.453474 kubelet[3429]: E1104 04:21:18.453203 3429 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 04:21:18.456754 containerd[1975]: time="2025-11-04T04:21:18.456675203Z" level=info msg="connecting to shim f193708d508789cf41574de9f82e815389a821eb3e5decc6bec424177a90939a" address="unix:///run/containerd/s/6c1eebf7b0df00013ab1208cb64352ad4f12cd98428bacc08894dcb56f518ff2" namespace=k8s.io protocol=ttrpc version=3 Nov 4 04:21:18.457291 kubelet[3429]: E1104 04:21:18.457118 3429 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:21:18.457291 kubelet[3429]: W1104 04:21:18.457150 3429 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:21:18.457291 kubelet[3429]: E1104 04:21:18.457200 3429 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 04:21:18.459466 kubelet[3429]: E1104 04:21:18.459396 3429 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:21:18.459865 kubelet[3429]: W1104 04:21:18.459558 3429 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:21:18.459865 kubelet[3429]: E1104 04:21:18.459595 3429 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 04:21:18.460613 kubelet[3429]: E1104 04:21:18.460433 3429 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:21:18.460613 kubelet[3429]: W1104 04:21:18.460487 3429 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:21:18.460613 kubelet[3429]: E1104 04:21:18.460514 3429 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 04:21:18.461140 kubelet[3429]: E1104 04:21:18.461115 3429 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:21:18.461389 kubelet[3429]: W1104 04:21:18.461281 3429 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:21:18.461584 kubelet[3429]: E1104 04:21:18.461355 3429 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 04:21:18.461970 kubelet[3429]: E1104 04:21:18.461943 3429 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:21:18.462294 kubelet[3429]: W1104 04:21:18.462163 3429 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:21:18.462294 kubelet[3429]: E1104 04:21:18.462202 3429 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 04:21:18.462758 kubelet[3429]: E1104 04:21:18.462734 3429 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:21:18.463017 kubelet[3429]: W1104 04:21:18.462892 3429 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:21:18.463017 kubelet[3429]: E1104 04:21:18.462930 3429 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 04:21:18.464713 kubelet[3429]: E1104 04:21:18.464680 3429 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:21:18.465018 kubelet[3429]: W1104 04:21:18.464882 3429 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:21:18.465018 kubelet[3429]: E1104 04:21:18.464921 3429 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 04:21:18.465663 kubelet[3429]: E1104 04:21:18.465495 3429 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:21:18.465663 kubelet[3429]: W1104 04:21:18.465524 3429 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:21:18.465663 kubelet[3429]: E1104 04:21:18.465551 3429 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 04:21:18.487922 kubelet[3429]: E1104 04:21:18.487884 3429 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:21:18.489496 kubelet[3429]: W1104 04:21:18.489386 3429 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:21:18.489496 kubelet[3429]: E1104 04:21:18.489451 3429 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 04:21:18.490015 kubelet[3429]: I1104 04:21:18.489739 3429 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/27fda10a-3169-4bf6-a620-503cc9dcb069-kubelet-dir\") pod \"csi-node-driver-bjdkx\" (UID: \"27fda10a-3169-4bf6-a620-503cc9dcb069\") " pod="calico-system/csi-node-driver-bjdkx" Nov 4 04:21:18.493354 kubelet[3429]: E1104 04:21:18.491530 3429 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:21:18.495837 kubelet[3429]: W1104 04:21:18.493567 3429 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:21:18.496144 kubelet[3429]: E1104 04:21:18.496100 3429 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 04:21:18.496496 kubelet[3429]: I1104 04:21:18.496367 3429 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/27fda10a-3169-4bf6-a620-503cc9dcb069-socket-dir\") pod \"csi-node-driver-bjdkx\" (UID: \"27fda10a-3169-4bf6-a620-503cc9dcb069\") " pod="calico-system/csi-node-driver-bjdkx" Nov 4 04:21:18.497753 kubelet[3429]: E1104 04:21:18.497712 3429 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:21:18.498202 kubelet[3429]: W1104 04:21:18.497946 3429 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:21:18.498202 kubelet[3429]: E1104 04:21:18.497990 3429 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 04:21:18.499418 kubelet[3429]: I1104 04:21:18.498344 3429 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/27fda10a-3169-4bf6-a620-503cc9dcb069-varrun\") pod \"csi-node-driver-bjdkx\" (UID: \"27fda10a-3169-4bf6-a620-503cc9dcb069\") " pod="calico-system/csi-node-driver-bjdkx" Nov 4 04:21:18.499663 kubelet[3429]: E1104 04:21:18.499632 3429 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:21:18.499928 kubelet[3429]: W1104 04:21:18.499849 3429 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:21:18.499928 kubelet[3429]: E1104 04:21:18.499893 3429 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 04:21:18.500690 kubelet[3429]: E1104 04:21:18.500659 3429 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:21:18.500997 kubelet[3429]: W1104 04:21:18.500846 3429 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:21:18.500997 kubelet[3429]: E1104 04:21:18.500885 3429 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 04:21:18.501599 kubelet[3429]: E1104 04:21:18.501569 3429 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:21:18.501797 kubelet[3429]: W1104 04:21:18.501726 3429 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:21:18.501797 kubelet[3429]: E1104 04:21:18.501763 3429 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 04:21:18.502851 kubelet[3429]: E1104 04:21:18.502815 3429 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:21:18.503176 kubelet[3429]: W1104 04:21:18.503016 3429 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:21:18.503176 kubelet[3429]: E1104 04:21:18.503057 3429 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 04:21:18.503542 kubelet[3429]: I1104 04:21:18.503496 3429 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n7p6p\" (UniqueName: \"kubernetes.io/projected/27fda10a-3169-4bf6-a620-503cc9dcb069-kube-api-access-n7p6p\") pod \"csi-node-driver-bjdkx\" (UID: \"27fda10a-3169-4bf6-a620-503cc9dcb069\") " pod="calico-system/csi-node-driver-bjdkx" Nov 4 04:21:18.504622 kubelet[3429]: E1104 04:21:18.504518 3429 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:21:18.504622 kubelet[3429]: W1104 04:21:18.504553 3429 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:21:18.504622 kubelet[3429]: E1104 04:21:18.504586 3429 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 04:21:18.505497 kubelet[3429]: E1104 04:21:18.505400 3429 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:21:18.505497 kubelet[3429]: W1104 04:21:18.505432 3429 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:21:18.505497 kubelet[3429]: E1104 04:21:18.505463 3429 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 04:21:18.506533 kubelet[3429]: E1104 04:21:18.506484 3429 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:21:18.507028 kubelet[3429]: W1104 04:21:18.506692 3429 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:21:18.507028 kubelet[3429]: E1104 04:21:18.506734 3429 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 04:21:18.509154 kubelet[3429]: I1104 04:21:18.507575 3429 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/27fda10a-3169-4bf6-a620-503cc9dcb069-registration-dir\") pod \"csi-node-driver-bjdkx\" (UID: \"27fda10a-3169-4bf6-a620-503cc9dcb069\") " pod="calico-system/csi-node-driver-bjdkx" Nov 4 04:21:18.509544 kubelet[3429]: E1104 04:21:18.509512 3429 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:21:18.509680 kubelet[3429]: W1104 04:21:18.509651 3429 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:21:18.509799 kubelet[3429]: E1104 04:21:18.509776 3429 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 04:21:18.510446 kubelet[3429]: E1104 04:21:18.510408 3429 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:21:18.510762 kubelet[3429]: W1104 04:21:18.510616 3429 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:21:18.510762 kubelet[3429]: E1104 04:21:18.510655 3429 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 04:21:18.511871 kubelet[3429]: E1104 04:21:18.511818 3429 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:21:18.511871 kubelet[3429]: W1104 04:21:18.511867 3429 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:21:18.512134 kubelet[3429]: E1104 04:21:18.511904 3429 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 04:21:18.514605 kubelet[3429]: E1104 04:21:18.514539 3429 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:21:18.514605 kubelet[3429]: W1104 04:21:18.514578 3429 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:21:18.515047 kubelet[3429]: E1104 04:21:18.514614 3429 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 04:21:18.518417 kubelet[3429]: E1104 04:21:18.517832 3429 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:21:18.518417 kubelet[3429]: W1104 04:21:18.517873 3429 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:21:18.518417 kubelet[3429]: E1104 04:21:18.517907 3429 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 04:21:18.525881 kubelet[3429]: E1104 04:21:18.525846 3429 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:21:18.526166 kubelet[3429]: W1104 04:21:18.526140 3429 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:21:18.527387 kubelet[3429]: E1104 04:21:18.526398 3429 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 04:21:18.572117 containerd[1975]: time="2025-11-04T04:21:18.571853567Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-hbvcf,Uid:00481659-2154-471a-85d3-956bc083e0e6,Namespace:calico-system,Attempt:0,}" Nov 4 04:21:18.575744 systemd[1]: Started cri-containerd-f193708d508789cf41574de9f82e815389a821eb3e5decc6bec424177a90939a.scope - libcontainer container f193708d508789cf41574de9f82e815389a821eb3e5decc6bec424177a90939a. Nov 4 04:21:18.612358 kubelet[3429]: E1104 04:21:18.611978 3429 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:21:18.612358 kubelet[3429]: W1104 04:21:18.612011 3429 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:21:18.612358 kubelet[3429]: E1104 04:21:18.612041 3429 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 04:21:18.613932 kubelet[3429]: E1104 04:21:18.613904 3429 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:21:18.614278 kubelet[3429]: W1104 04:21:18.614150 3429 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:21:18.614278 kubelet[3429]: E1104 04:21:18.614185 3429 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 04:21:18.614765 kubelet[3429]: E1104 04:21:18.614703 3429 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:21:18.614765 kubelet[3429]: W1104 04:21:18.614737 3429 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:21:18.615054 kubelet[3429]: E1104 04:21:18.614788 3429 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 04:21:18.617380 kubelet[3429]: E1104 04:21:18.616354 3429 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:21:18.617380 kubelet[3429]: W1104 04:21:18.616390 3429 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:21:18.617380 kubelet[3429]: E1104 04:21:18.616443 3429 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 04:21:18.619585 kubelet[3429]: E1104 04:21:18.619533 3429 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:21:18.619993 kubelet[3429]: W1104 04:21:18.619576 3429 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:21:18.619993 kubelet[3429]: E1104 04:21:18.619640 3429 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 04:21:18.620175 kubelet[3429]: E1104 04:21:18.620133 3429 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:21:18.620175 kubelet[3429]: W1104 04:21:18.620153 3429 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:21:18.620485 kubelet[3429]: E1104 04:21:18.620174 3429 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 04:21:18.620630 kubelet[3429]: E1104 04:21:18.620536 3429 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:21:18.620630 kubelet[3429]: W1104 04:21:18.620553 3429 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:21:18.620630 kubelet[3429]: E1104 04:21:18.620573 3429 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 04:21:18.622275 kubelet[3429]: E1104 04:21:18.622173 3429 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:21:18.622275 kubelet[3429]: W1104 04:21:18.622214 3429 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:21:18.622275 kubelet[3429]: E1104 04:21:18.622252 3429 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 04:21:18.623518 kubelet[3429]: E1104 04:21:18.622742 3429 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:21:18.623518 kubelet[3429]: W1104 04:21:18.622763 3429 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:21:18.623518 kubelet[3429]: E1104 04:21:18.622786 3429 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 04:21:18.623518 kubelet[3429]: E1104 04:21:18.623161 3429 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:21:18.623518 kubelet[3429]: W1104 04:21:18.623181 3429 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:21:18.623518 kubelet[3429]: E1104 04:21:18.623204 3429 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 04:21:18.624643 kubelet[3429]: E1104 04:21:18.624477 3429 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:21:18.624643 kubelet[3429]: W1104 04:21:18.624510 3429 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:21:18.624643 kubelet[3429]: E1104 04:21:18.624541 3429 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 04:21:18.625881 kubelet[3429]: E1104 04:21:18.625608 3429 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:21:18.625881 kubelet[3429]: W1104 04:21:18.625641 3429 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:21:18.625881 kubelet[3429]: E1104 04:21:18.625672 3429 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 04:21:18.626409 kubelet[3429]: E1104 04:21:18.626367 3429 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:21:18.626674 kubelet[3429]: W1104 04:21:18.626498 3429 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:21:18.626674 kubelet[3429]: E1104 04:21:18.626557 3429 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 04:21:18.628441 kubelet[3429]: E1104 04:21:18.627547 3429 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:21:18.628921 kubelet[3429]: W1104 04:21:18.628617 3429 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:21:18.628921 kubelet[3429]: E1104 04:21:18.628670 3429 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 04:21:18.629229 kubelet[3429]: E1104 04:21:18.629205 3429 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:21:18.629690 kubelet[3429]: W1104 04:21:18.629310 3429 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:21:18.629690 kubelet[3429]: E1104 04:21:18.629387 3429 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 04:21:18.630214 kubelet[3429]: E1104 04:21:18.629999 3429 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:21:18.630214 kubelet[3429]: W1104 04:21:18.630070 3429 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:21:18.630214 kubelet[3429]: E1104 04:21:18.630101 3429 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 04:21:18.630896 kubelet[3429]: E1104 04:21:18.630866 3429 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:21:18.631096 kubelet[3429]: W1104 04:21:18.631069 3429 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:21:18.631213 kubelet[3429]: E1104 04:21:18.631190 3429 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 04:21:18.633289 kubelet[3429]: E1104 04:21:18.632865 3429 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:21:18.633289 kubelet[3429]: W1104 04:21:18.632986 3429 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:21:18.633289 kubelet[3429]: E1104 04:21:18.633022 3429 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 04:21:18.638682 kubelet[3429]: E1104 04:21:18.637965 3429 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:21:18.638682 kubelet[3429]: W1104 04:21:18.638000 3429 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:21:18.638682 kubelet[3429]: E1104 04:21:18.638033 3429 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 04:21:18.640664 kubelet[3429]: E1104 04:21:18.640472 3429 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:21:18.642550 kubelet[3429]: W1104 04:21:18.642484 3429 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:21:18.643716 kubelet[3429]: E1104 04:21:18.643172 3429 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 04:21:18.645378 kubelet[3429]: E1104 04:21:18.645293 3429 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:21:18.646600 kubelet[3429]: W1104 04:21:18.645465 3429 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:21:18.646600 kubelet[3429]: E1104 04:21:18.645502 3429 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 04:21:18.647504 kubelet[3429]: E1104 04:21:18.647386 3429 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:21:18.648497 kubelet[3429]: W1104 04:21:18.648401 3429 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:21:18.648497 kubelet[3429]: E1104 04:21:18.648496 3429 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 04:21:18.649338 kubelet[3429]: E1104 04:21:18.649238 3429 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:21:18.649706 kubelet[3429]: W1104 04:21:18.649304 3429 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:21:18.649706 kubelet[3429]: E1104 04:21:18.649374 3429 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 04:21:18.651129 kubelet[3429]: E1104 04:21:18.651076 3429 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:21:18.652676 kubelet[3429]: W1104 04:21:18.652487 3429 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:21:18.652676 kubelet[3429]: E1104 04:21:18.652677 3429 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 04:21:18.654708 kubelet[3429]: E1104 04:21:18.654652 3429 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:21:18.654857 kubelet[3429]: W1104 04:21:18.654721 3429 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:21:18.654967 kubelet[3429]: E1104 04:21:18.654759 3429 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 04:21:18.670354 containerd[1975]: time="2025-11-04T04:21:18.669656760Z" level=info msg="connecting to shim 76d0e207d3726a3dbbc7da922b8887846443763d1ef4443108d2a28bdcbb7bbe" address="unix:///run/containerd/s/18714f3ff3babd0989b1151c89d30d991ccdb9e1c75b0e8ff084a8dbaa0d308a" namespace=k8s.io protocol=ttrpc version=3 Nov 4 04:21:18.690485 kubelet[3429]: E1104 04:21:18.689719 3429 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 04:21:18.690485 kubelet[3429]: W1104 04:21:18.690478 3429 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 04:21:18.690718 kubelet[3429]: E1104 04:21:18.690558 3429 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 04:21:18.752669 systemd[1]: Started cri-containerd-76d0e207d3726a3dbbc7da922b8887846443763d1ef4443108d2a28bdcbb7bbe.scope - libcontainer container 76d0e207d3726a3dbbc7da922b8887846443763d1ef4443108d2a28bdcbb7bbe. Nov 4 04:21:18.920733 containerd[1975]: time="2025-11-04T04:21:18.920671885Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-hbvcf,Uid:00481659-2154-471a-85d3-956bc083e0e6,Namespace:calico-system,Attempt:0,} returns sandbox id \"76d0e207d3726a3dbbc7da922b8887846443763d1ef4443108d2a28bdcbb7bbe\"" Nov 4 04:21:18.925255 containerd[1975]: time="2025-11-04T04:21:18.925194973Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Nov 4 04:21:18.970070 containerd[1975]: time="2025-11-04T04:21:18.970013617Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-657b47dfbc-fqrs8,Uid:1222e635-1e71-4c68-94e2-f57733d67205,Namespace:calico-system,Attempt:0,} returns sandbox id \"f193708d508789cf41574de9f82e815389a821eb3e5decc6bec424177a90939a\"" Nov 4 04:21:20.107092 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1881478930.mount: Deactivated successfully. 
Nov 4 04:21:20.288044 containerd[1975]: time="2025-11-04T04:21:20.287860596Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 04:21:20.290471 containerd[1975]: time="2025-11-04T04:21:20.290400588Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=2517" Nov 4 04:21:20.292593 containerd[1975]: time="2025-11-04T04:21:20.292527252Z" level=info msg="ImageCreate event name:\"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 04:21:20.298439 containerd[1975]: time="2025-11-04T04:21:20.298356204Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 04:21:20.300780 containerd[1975]: time="2025-11-04T04:21:20.299454948Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5636392\" in 1.374197179s" Nov 4 04:21:20.300780 containerd[1975]: time="2025-11-04T04:21:20.299530032Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\"" Nov 4 04:21:20.300982 containerd[1975]: time="2025-11-04T04:21:20.300848304Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Nov 4 04:21:20.310270 containerd[1975]: time="2025-11-04T04:21:20.310220544Z" level=info msg="CreateContainer within sandbox \"76d0e207d3726a3dbbc7da922b8887846443763d1ef4443108d2a28bdcbb7bbe\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Nov 4 04:21:20.318790 kubelet[3429]: E1104 04:21:20.318716 3429 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bjdkx" podUID="27fda10a-3169-4bf6-a620-503cc9dcb069" Nov 4 04:21:20.332303 containerd[1975]: time="2025-11-04T04:21:20.332237124Z" level=info msg="Container 27bf3f3c35ed7e0867f668367acbf48ef74b0b347f700a65a421291ac4aa43b5: CDI devices from CRI Config.CDIDevices: []" Nov 4 04:21:20.345608 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3217523359.mount: Deactivated successfully. 
Nov 4 04:21:20.359207 containerd[1975]: time="2025-11-04T04:21:20.358622100Z" level=info msg="CreateContainer within sandbox \"76d0e207d3726a3dbbc7da922b8887846443763d1ef4443108d2a28bdcbb7bbe\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"27bf3f3c35ed7e0867f668367acbf48ef74b0b347f700a65a421291ac4aa43b5\"" Nov 4 04:21:20.360296 containerd[1975]: time="2025-11-04T04:21:20.360172176Z" level=info msg="StartContainer for \"27bf3f3c35ed7e0867f668367acbf48ef74b0b347f700a65a421291ac4aa43b5\"" Nov 4 04:21:20.363376 containerd[1975]: time="2025-11-04T04:21:20.363269544Z" level=info msg="connecting to shim 27bf3f3c35ed7e0867f668367acbf48ef74b0b347f700a65a421291ac4aa43b5" address="unix:///run/containerd/s/18714f3ff3babd0989b1151c89d30d991ccdb9e1c75b0e8ff084a8dbaa0d308a" protocol=ttrpc version=3 Nov 4 04:21:20.407622 systemd[1]: Started cri-containerd-27bf3f3c35ed7e0867f668367acbf48ef74b0b347f700a65a421291ac4aa43b5.scope - libcontainer container 27bf3f3c35ed7e0867f668367acbf48ef74b0b347f700a65a421291ac4aa43b5. Nov 4 04:21:20.490907 containerd[1975]: time="2025-11-04T04:21:20.490838965Z" level=info msg="StartContainer for \"27bf3f3c35ed7e0867f668367acbf48ef74b0b347f700a65a421291ac4aa43b5\" returns successfully" Nov 4 04:21:20.520004 systemd[1]: cri-containerd-27bf3f3c35ed7e0867f668367acbf48ef74b0b347f700a65a421291ac4aa43b5.scope: Deactivated successfully. Nov 4 04:21:20.527365 containerd[1975]: time="2025-11-04T04:21:20.527261545Z" level=info msg="received exit event container_id:\"27bf3f3c35ed7e0867f668367acbf48ef74b0b347f700a65a421291ac4aa43b5\" id:\"27bf3f3c35ed7e0867f668367acbf48ef74b0b347f700a65a421291ac4aa43b5\" pid:4027 exited_at:{seconds:1762230080 nanos:524977453}" Nov 4 04:21:20.588682 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-27bf3f3c35ed7e0867f668367acbf48ef74b0b347f700a65a421291ac4aa43b5-rootfs.mount: Deactivated successfully. 
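The flexvol-driver container that just ran and exited is Calico's pod2daemon-flexvol init step, which installs the uds binary that the FlexVolume probes above were failing to find; that is consistent with those errors not recurring later in this log. A rough way to confirm the install on the host, assuming the path quoted in the earlier errors (a sketch, not part of Calico):

package main

import (
	"fmt"
	"os"
)

func main() {
	// Path taken from the driver-call errors earlier in the log; the
	// pod2daemon-flexvol init container is expected to drop an executable
	// "uds" binary here, after which kubelet's FlexVolume probe can succeed.
	const uds = "/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds"
	if info, err := os.Stat(uds); err == nil && info.Mode()&0o111 != 0 {
		fmt.Println("flexvol driver installed:", uds)
	} else {
		fmt.Println("flexvol driver not installed yet:", err)
	}
}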
Nov 4 04:21:22.318729 kubelet[3429]: E1104 04:21:22.318089 3429 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bjdkx" podUID="27fda10a-3169-4bf6-a620-503cc9dcb069" Nov 4 04:21:23.641251 containerd[1975]: time="2025-11-04T04:21:23.640682032Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 04:21:23.642735 containerd[1975]: time="2025-11-04T04:21:23.642674669Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=31716861" Nov 4 04:21:23.643655 containerd[1975]: time="2025-11-04T04:21:23.643616561Z" level=info msg="ImageCreate event name:\"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 04:21:23.651135 containerd[1975]: time="2025-11-04T04:21:23.651053405Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 04:21:23.656052 containerd[1975]: time="2025-11-04T04:21:23.655861613Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"33090541\" in 3.354925793s" Nov 4 04:21:23.656052 containerd[1975]: time="2025-11-04T04:21:23.655922405Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\"" Nov 4 04:21:23.658814 containerd[1975]: time="2025-11-04T04:21:23.658085921Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Nov 4 04:21:23.693205 containerd[1975]: time="2025-11-04T04:21:23.693130769Z" level=info msg="CreateContainer within sandbox \"f193708d508789cf41574de9f82e815389a821eb3e5decc6bec424177a90939a\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Nov 4 04:21:23.711362 containerd[1975]: time="2025-11-04T04:21:23.710670089Z" level=info msg="Container 4685ae87db37c699e89ea1930bd81aaeddea6b17c68eda97be3650a3f6b10009: CDI devices from CRI Config.CDIDevices: []" Nov 4 04:21:23.715950 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2207479806.mount: Deactivated successfully. 
Nov 4 04:21:23.731060 containerd[1975]: time="2025-11-04T04:21:23.730984553Z" level=info msg="CreateContainer within sandbox \"f193708d508789cf41574de9f82e815389a821eb3e5decc6bec424177a90939a\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"4685ae87db37c699e89ea1930bd81aaeddea6b17c68eda97be3650a3f6b10009\"" Nov 4 04:21:23.733242 containerd[1975]: time="2025-11-04T04:21:23.733129169Z" level=info msg="StartContainer for \"4685ae87db37c699e89ea1930bd81aaeddea6b17c68eda97be3650a3f6b10009\"" Nov 4 04:21:23.736066 containerd[1975]: time="2025-11-04T04:21:23.735982637Z" level=info msg="connecting to shim 4685ae87db37c699e89ea1930bd81aaeddea6b17c68eda97be3650a3f6b10009" address="unix:///run/containerd/s/6c1eebf7b0df00013ab1208cb64352ad4f12cd98428bacc08894dcb56f518ff2" protocol=ttrpc version=3 Nov 4 04:21:23.781651 systemd[1]: Started cri-containerd-4685ae87db37c699e89ea1930bd81aaeddea6b17c68eda97be3650a3f6b10009.scope - libcontainer container 4685ae87db37c699e89ea1930bd81aaeddea6b17c68eda97be3650a3f6b10009. Nov 4 04:21:23.877736 containerd[1975]: time="2025-11-04T04:21:23.877676394Z" level=info msg="StartContainer for \"4685ae87db37c699e89ea1930bd81aaeddea6b17c68eda97be3650a3f6b10009\" returns successfully" Nov 4 04:21:24.322031 kubelet[3429]: E1104 04:21:24.320709 3429 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bjdkx" podUID="27fda10a-3169-4bf6-a620-503cc9dcb069" Nov 4 04:21:24.626443 kubelet[3429]: I1104 04:21:24.626216 3429 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-657b47dfbc-fqrs8" podStartSLOduration=2.941482825 podStartE2EDuration="7.626194349s" podCreationTimestamp="2025-11-04 04:21:17 +0000 UTC" firstStartedPulling="2025-11-04 04:21:18.973138825 +0000 UTC m=+42.914423362" lastFinishedPulling="2025-11-04 04:21:23.657850337 +0000 UTC m=+47.599134886" observedRunningTime="2025-11-04 04:21:24.606881189 +0000 UTC m=+48.548165738" watchObservedRunningTime="2025-11-04 04:21:24.626194349 +0000 UTC m=+48.567478886" Nov 4 04:21:26.319824 kubelet[3429]: E1104 04:21:26.318475 3429 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bjdkx" podUID="27fda10a-3169-4bf6-a620-503cc9dcb069" Nov 4 04:21:26.924011 containerd[1975]: time="2025-11-04T04:21:26.923938209Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 04:21:26.927692 containerd[1975]: time="2025-11-04T04:21:26.927603345Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=65921248" Nov 4 04:21:26.929755 containerd[1975]: time="2025-11-04T04:21:26.929695377Z" level=info msg="ImageCreate event name:\"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 04:21:26.935899 containerd[1975]: time="2025-11-04T04:21:26.935824785Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" 
value:\"managed\"}" Nov 4 04:21:26.938196 containerd[1975]: time="2025-11-04T04:21:26.938133033Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"67295507\" in 3.279987184s" Nov 4 04:21:26.938196 containerd[1975]: time="2025-11-04T04:21:26.938188629Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\"" Nov 4 04:21:26.946472 containerd[1975]: time="2025-11-04T04:21:26.946411173Z" level=info msg="CreateContainer within sandbox \"76d0e207d3726a3dbbc7da922b8887846443763d1ef4443108d2a28bdcbb7bbe\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Nov 4 04:21:26.967581 containerd[1975]: time="2025-11-04T04:21:26.967511697Z" level=info msg="Container b37515753691afc4f29c9805be75fc938f88c19bbfa19b4ef4fa03ac762bcbb3: CDI devices from CRI Config.CDIDevices: []" Nov 4 04:21:26.977063 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4017427803.mount: Deactivated successfully. Nov 4 04:21:26.992445 containerd[1975]: time="2025-11-04T04:21:26.991815969Z" level=info msg="CreateContainer within sandbox \"76d0e207d3726a3dbbc7da922b8887846443763d1ef4443108d2a28bdcbb7bbe\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"b37515753691afc4f29c9805be75fc938f88c19bbfa19b4ef4fa03ac762bcbb3\"" Nov 4 04:21:26.993008 containerd[1975]: time="2025-11-04T04:21:26.992967309Z" level=info msg="StartContainer for \"b37515753691afc4f29c9805be75fc938f88c19bbfa19b4ef4fa03ac762bcbb3\"" Nov 4 04:21:27.003761 containerd[1975]: time="2025-11-04T04:21:27.003693689Z" level=info msg="connecting to shim b37515753691afc4f29c9805be75fc938f88c19bbfa19b4ef4fa03ac762bcbb3" address="unix:///run/containerd/s/18714f3ff3babd0989b1151c89d30d991ccdb9e1c75b0e8ff084a8dbaa0d308a" protocol=ttrpc version=3 Nov 4 04:21:27.040627 systemd[1]: Started cri-containerd-b37515753691afc4f29c9805be75fc938f88c19bbfa19b4ef4fa03ac762bcbb3.scope - libcontainer container b37515753691afc4f29c9805be75fc938f88c19bbfa19b4ef4fa03ac762bcbb3. Nov 4 04:21:27.127825 containerd[1975]: time="2025-11-04T04:21:27.127767834Z" level=info msg="StartContainer for \"b37515753691afc4f29c9805be75fc938f88c19bbfa19b4ef4fa03ac762bcbb3\" returns successfully" Nov 4 04:21:27.976601 systemd[1]: cri-containerd-b37515753691afc4f29c9805be75fc938f88c19bbfa19b4ef4fa03ac762bcbb3.scope: Deactivated successfully. Nov 4 04:21:27.977513 systemd[1]: cri-containerd-b37515753691afc4f29c9805be75fc938f88c19bbfa19b4ef4fa03ac762bcbb3.scope: Consumed 901ms CPU time, 187.1M memory peak, 165.9M written to disk. Nov 4 04:21:27.979272 containerd[1975]: time="2025-11-04T04:21:27.979141246Z" level=info msg="received exit event container_id:\"b37515753691afc4f29c9805be75fc938f88c19bbfa19b4ef4fa03ac762bcbb3\" id:\"b37515753691afc4f29c9805be75fc938f88c19bbfa19b4ef4fa03ac762bcbb3\" pid:4130 exited_at:{seconds:1762230087 nanos:977660062}" Nov 4 04:21:28.022720 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b37515753691afc4f29c9805be75fc938f88c19bbfa19b4ef4fa03ac762bcbb3-rootfs.mount: Deactivated successfully. 
Nov 4 04:21:28.043108 kubelet[3429]: I1104 04:21:28.043066 3429 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Nov 4 04:21:28.150254 systemd[1]: Created slice kubepods-burstable-podc8441b17_4a0d_4406_88cf_62a8cb581f09.slice - libcontainer container kubepods-burstable-podc8441b17_4a0d_4406_88cf_62a8cb581f09.slice. Nov 4 04:21:28.185251 systemd[1]: Created slice kubepods-burstable-poda6a13d29_203e_4ccf_93b9_8514188fd7d2.slice - libcontainer container kubepods-burstable-poda6a13d29_203e_4ccf_93b9_8514188fd7d2.slice. Nov 4 04:21:28.205930 kubelet[3429]: I1104 04:21:28.205850 3429 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c8441b17-4a0d-4406-88cf-62a8cb581f09-config-volume\") pod \"coredns-674b8bbfcf-bfs5q\" (UID: \"c8441b17-4a0d-4406-88cf-62a8cb581f09\") " pod="kube-system/coredns-674b8bbfcf-bfs5q" Nov 4 04:21:28.205930 kubelet[3429]: I1104 04:21:28.205924 3429 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qtq2q\" (UniqueName: \"kubernetes.io/projected/a6a13d29-203e-4ccf-93b9-8514188fd7d2-kube-api-access-qtq2q\") pod \"coredns-674b8bbfcf-78l67\" (UID: \"a6a13d29-203e-4ccf-93b9-8514188fd7d2\") " pod="kube-system/coredns-674b8bbfcf-78l67" Nov 4 04:21:28.206192 kubelet[3429]: I1104 04:21:28.205973 3429 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a6a13d29-203e-4ccf-93b9-8514188fd7d2-config-volume\") pod \"coredns-674b8bbfcf-78l67\" (UID: \"a6a13d29-203e-4ccf-93b9-8514188fd7d2\") " pod="kube-system/coredns-674b8bbfcf-78l67" Nov 4 04:21:28.206192 kubelet[3429]: I1104 04:21:28.206014 3429 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gq9kl\" (UniqueName: \"kubernetes.io/projected/c8441b17-4a0d-4406-88cf-62a8cb581f09-kube-api-access-gq9kl\") pod \"coredns-674b8bbfcf-bfs5q\" (UID: \"c8441b17-4a0d-4406-88cf-62a8cb581f09\") " pod="kube-system/coredns-674b8bbfcf-bfs5q" Nov 4 04:21:28.297184 systemd[1]: Created slice kubepods-besteffort-pod073104c0_4d4a_4e6b_bb61_421cfcd8940e.slice - libcontainer container kubepods-besteffort-pod073104c0_4d4a_4e6b_bb61_421cfcd8940e.slice. Nov 4 04:21:28.307909 kubelet[3429]: I1104 04:21:28.307810 3429 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2pdv9\" (UniqueName: \"kubernetes.io/projected/073104c0-4d4a-4e6b-bb61-421cfcd8940e-kube-api-access-2pdv9\") pod \"calico-apiserver-67f8f67444-smqxz\" (UID: \"073104c0-4d4a-4e6b-bb61-421cfcd8940e\") " pod="calico-apiserver/calico-apiserver-67f8f67444-smqxz" Nov 4 04:21:28.307909 kubelet[3429]: I1104 04:21:28.307901 3429 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/073104c0-4d4a-4e6b-bb61-421cfcd8940e-calico-apiserver-certs\") pod \"calico-apiserver-67f8f67444-smqxz\" (UID: \"073104c0-4d4a-4e6b-bb61-421cfcd8940e\") " pod="calico-apiserver/calico-apiserver-67f8f67444-smqxz" Nov 4 04:21:28.379667 systemd[1]: Created slice kubepods-besteffort-pod1e2e1aa1_fbd0_4783_998f_e142a3f6eab3.slice - libcontainer container kubepods-besteffort-pod1e2e1aa1_fbd0_4783_998f_e142a3f6eab3.slice. 
Nov 4 04:21:28.399249 systemd[1]: Created slice kubepods-besteffort-pod3bfc783e_7624_4984_a658_a4dceb99c885.slice - libcontainer container kubepods-besteffort-pod3bfc783e_7624_4984_a658_a4dceb99c885.slice. Nov 4 04:21:28.409383 kubelet[3429]: I1104 04:21:28.408396 3429 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cr6tl\" (UniqueName: \"kubernetes.io/projected/1e2e1aa1-fbd0-4783-998f-e142a3f6eab3-kube-api-access-cr6tl\") pod \"calico-kube-controllers-6fc896cb84-m6mvd\" (UID: \"1e2e1aa1-fbd0-4783-998f-e142a3f6eab3\") " pod="calico-system/calico-kube-controllers-6fc896cb84-m6mvd" Nov 4 04:21:28.409383 kubelet[3429]: I1104 04:21:28.408498 3429 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3bfc783e-7624-4984-a658-a4dceb99c885-config\") pod \"goldmane-666569f655-x7r2n\" (UID: \"3bfc783e-7624-4984-a658-a4dceb99c885\") " pod="calico-system/goldmane-666569f655-x7r2n" Nov 4 04:21:28.409383 kubelet[3429]: I1104 04:21:28.408568 3429 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1e2e1aa1-fbd0-4783-998f-e142a3f6eab3-tigera-ca-bundle\") pod \"calico-kube-controllers-6fc896cb84-m6mvd\" (UID: \"1e2e1aa1-fbd0-4783-998f-e142a3f6eab3\") " pod="calico-system/calico-kube-controllers-6fc896cb84-m6mvd" Nov 4 04:21:28.409383 kubelet[3429]: I1104 04:21:28.408727 3429 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3bfc783e-7624-4984-a658-a4dceb99c885-goldmane-ca-bundle\") pod \"goldmane-666569f655-x7r2n\" (UID: \"3bfc783e-7624-4984-a658-a4dceb99c885\") " pod="calico-system/goldmane-666569f655-x7r2n" Nov 4 04:21:28.409383 kubelet[3429]: I1104 04:21:28.408815 3429 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/3bfc783e-7624-4984-a658-a4dceb99c885-goldmane-key-pair\") pod \"goldmane-666569f655-x7r2n\" (UID: \"3bfc783e-7624-4984-a658-a4dceb99c885\") " pod="calico-system/goldmane-666569f655-x7r2n" Nov 4 04:21:28.409760 kubelet[3429]: I1104 04:21:28.408855 3429 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qbhv6\" (UniqueName: \"kubernetes.io/projected/3bfc783e-7624-4984-a658-a4dceb99c885-kube-api-access-qbhv6\") pod \"goldmane-666569f655-x7r2n\" (UID: \"3bfc783e-7624-4984-a658-a4dceb99c885\") " pod="calico-system/goldmane-666569f655-x7r2n" Nov 4 04:21:28.473204 containerd[1975]: time="2025-11-04T04:21:28.472865504Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-bfs5q,Uid:c8441b17-4a0d-4406-88cf-62a8cb581f09,Namespace:kube-system,Attempt:0,}" Nov 4 04:21:28.491405 systemd[1]: Created slice kubepods-besteffort-pod27fda10a_3169_4bf6_a620_503cc9dcb069.slice - libcontainer container kubepods-besteffort-pod27fda10a_3169_4bf6_a620_503cc9dcb069.slice. 
Nov 4 04:21:28.500429 containerd[1975]: time="2025-11-04T04:21:28.500142921Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-78l67,Uid:a6a13d29-203e-4ccf-93b9-8514188fd7d2,Namespace:kube-system,Attempt:0,}" Nov 4 04:21:28.510508 systemd[1]: Created slice kubepods-besteffort-pod2a071bae_9a2c_47d2_99bf_2eaeb17bd59f.slice - libcontainer container kubepods-besteffort-pod2a071bae_9a2c_47d2_99bf_2eaeb17bd59f.slice. Nov 4 04:21:28.522911 containerd[1975]: time="2025-11-04T04:21:28.522782505Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bjdkx,Uid:27fda10a-3169-4bf6-a620-503cc9dcb069,Namespace:calico-system,Attempt:0,}" Nov 4 04:21:28.565239 systemd[1]: Created slice kubepods-besteffort-poda361dba4_7339_43be_b37d_2bd7902bcd31.slice - libcontainer container kubepods-besteffort-poda361dba4_7339_43be_b37d_2bd7902bcd31.slice. Nov 4 04:21:28.609719 kubelet[3429]: I1104 04:21:28.609664 3429 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/a361dba4-7339-43be-b37d-2bd7902bcd31-calico-apiserver-certs\") pod \"calico-apiserver-67f8f67444-mj5dt\" (UID: \"a361dba4-7339-43be-b37d-2bd7902bcd31\") " pod="calico-apiserver/calico-apiserver-67f8f67444-mj5dt" Nov 4 04:21:28.609887 kubelet[3429]: I1104 04:21:28.609740 3429 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fvnjd\" (UniqueName: \"kubernetes.io/projected/a361dba4-7339-43be-b37d-2bd7902bcd31-kube-api-access-fvnjd\") pod \"calico-apiserver-67f8f67444-mj5dt\" (UID: \"a361dba4-7339-43be-b37d-2bd7902bcd31\") " pod="calico-apiserver/calico-apiserver-67f8f67444-mj5dt" Nov 4 04:21:28.609887 kubelet[3429]: I1104 04:21:28.609786 3429 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/2a071bae-9a2c-47d2-99bf-2eaeb17bd59f-whisker-backend-key-pair\") pod \"whisker-55565f7d46-26qrw\" (UID: \"2a071bae-9a2c-47d2-99bf-2eaeb17bd59f\") " pod="calico-system/whisker-55565f7d46-26qrw" Nov 4 04:21:28.609887 kubelet[3429]: I1104 04:21:28.609825 3429 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2a071bae-9a2c-47d2-99bf-2eaeb17bd59f-whisker-ca-bundle\") pod \"whisker-55565f7d46-26qrw\" (UID: \"2a071bae-9a2c-47d2-99bf-2eaeb17bd59f\") " pod="calico-system/whisker-55565f7d46-26qrw" Nov 4 04:21:28.609887 kubelet[3429]: I1104 04:21:28.609865 3429 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w4rc7\" (UniqueName: \"kubernetes.io/projected/2a071bae-9a2c-47d2-99bf-2eaeb17bd59f-kube-api-access-w4rc7\") pod \"whisker-55565f7d46-26qrw\" (UID: \"2a071bae-9a2c-47d2-99bf-2eaeb17bd59f\") " pod="calico-system/whisker-55565f7d46-26qrw" Nov 4 04:21:28.625213 containerd[1975]: time="2025-11-04T04:21:28.625164741Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67f8f67444-smqxz,Uid:073104c0-4d4a-4e6b-bb61-421cfcd8940e,Namespace:calico-apiserver,Attempt:0,}" Nov 4 04:21:28.694502 containerd[1975]: time="2025-11-04T04:21:28.694423258Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6fc896cb84-m6mvd,Uid:1e2e1aa1-fbd0-4783-998f-e142a3f6eab3,Namespace:calico-system,Attempt:0,}" Nov 4 04:21:28.742349 containerd[1975]: 
time="2025-11-04T04:21:28.742226086Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-x7r2n,Uid:3bfc783e-7624-4984-a658-a4dceb99c885,Namespace:calico-system,Attempt:0,}" Nov 4 04:21:28.822893 containerd[1975]: time="2025-11-04T04:21:28.822561862Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-55565f7d46-26qrw,Uid:2a071bae-9a2c-47d2-99bf-2eaeb17bd59f,Namespace:calico-system,Attempt:0,}" Nov 4 04:21:28.878932 containerd[1975]: time="2025-11-04T04:21:28.878842031Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67f8f67444-mj5dt,Uid:a361dba4-7339-43be-b37d-2bd7902bcd31,Namespace:calico-apiserver,Attempt:0,}" Nov 4 04:21:28.990157 containerd[1975]: time="2025-11-04T04:21:28.990086975Z" level=error msg="Failed to destroy network for sandbox \"e3680471c8ea72f2a302415d7da732fb0deea96abfc6b37132185353ad43f2ac\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 04:21:29.265004 containerd[1975]: time="2025-11-04T04:21:29.264914576Z" level=error msg="Failed to destroy network for sandbox \"f0cedd64b98e0c1212d5f17460c1e12966920174e64f92d19076670cc678952e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 04:21:29.270877 systemd[1]: run-netns-cni\x2d75945d02\x2df489\x2dfc25\x2d419b\x2d87f67b4212b8.mount: Deactivated successfully. Nov 4 04:21:29.302106 containerd[1975]: time="2025-11-04T04:21:29.302014509Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-bfs5q,Uid:c8441b17-4a0d-4406-88cf-62a8cb581f09,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e3680471c8ea72f2a302415d7da732fb0deea96abfc6b37132185353ad43f2ac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 04:21:29.302666 kubelet[3429]: E1104 04:21:29.302613 3429 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e3680471c8ea72f2a302415d7da732fb0deea96abfc6b37132185353ad43f2ac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 04:21:29.303836 kubelet[3429]: E1104 04:21:29.303279 3429 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e3680471c8ea72f2a302415d7da732fb0deea96abfc6b37132185353ad43f2ac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-bfs5q" Nov 4 04:21:29.304420 kubelet[3429]: E1104 04:21:29.303978 3429 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e3680471c8ea72f2a302415d7da732fb0deea96abfc6b37132185353ad43f2ac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-bfs5q" Nov 4 04:21:29.304420 kubelet[3429]: E1104 04:21:29.304117 3429 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-bfs5q_kube-system(c8441b17-4a0d-4406-88cf-62a8cb581f09)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-bfs5q_kube-system(c8441b17-4a0d-4406-88cf-62a8cb581f09)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e3680471c8ea72f2a302415d7da732fb0deea96abfc6b37132185353ad43f2ac\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-bfs5q" podUID="c8441b17-4a0d-4406-88cf-62a8cb581f09" Nov 4 04:21:29.336641 containerd[1975]: time="2025-11-04T04:21:29.336475929Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-78l67,Uid:a6a13d29-203e-4ccf-93b9-8514188fd7d2,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f0cedd64b98e0c1212d5f17460c1e12966920174e64f92d19076670cc678952e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 04:21:29.337481 kubelet[3429]: E1104 04:21:29.337020 3429 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f0cedd64b98e0c1212d5f17460c1e12966920174e64f92d19076670cc678952e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 04:21:29.337481 kubelet[3429]: E1104 04:21:29.337116 3429 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f0cedd64b98e0c1212d5f17460c1e12966920174e64f92d19076670cc678952e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-78l67" Nov 4 04:21:29.337481 kubelet[3429]: E1104 04:21:29.337153 3429 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f0cedd64b98e0c1212d5f17460c1e12966920174e64f92d19076670cc678952e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-78l67" Nov 4 04:21:29.337736 kubelet[3429]: E1104 04:21:29.337225 3429 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-78l67_kube-system(a6a13d29-203e-4ccf-93b9-8514188fd7d2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-78l67_kube-system(a6a13d29-203e-4ccf-93b9-8514188fd7d2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f0cedd64b98e0c1212d5f17460c1e12966920174e64f92d19076670cc678952e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-78l67" podUID="a6a13d29-203e-4ccf-93b9-8514188fd7d2" Nov 4 04:21:29.585755 containerd[1975]: time="2025-11-04T04:21:29.584509390Z" level=error msg="Failed to destroy network for sandbox \"120007ca91957f786a20cf84ba0804b29b3ac4db8ee12bb537b9002b707b2df5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 04:21:29.599025 containerd[1975]: time="2025-11-04T04:21:29.598893478Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bjdkx,Uid:27fda10a-3169-4bf6-a620-503cc9dcb069,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"120007ca91957f786a20cf84ba0804b29b3ac4db8ee12bb537b9002b707b2df5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 04:21:29.600021 kubelet[3429]: E1104 04:21:29.599973 3429 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"120007ca91957f786a20cf84ba0804b29b3ac4db8ee12bb537b9002b707b2df5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 04:21:29.600340 kubelet[3429]: E1104 04:21:29.600249 3429 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"120007ca91957f786a20cf84ba0804b29b3ac4db8ee12bb537b9002b707b2df5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-bjdkx" Nov 4 04:21:29.600534 kubelet[3429]: E1104 04:21:29.600462 3429 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"120007ca91957f786a20cf84ba0804b29b3ac4db8ee12bb537b9002b707b2df5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-bjdkx" Nov 4 04:21:29.603195 kubelet[3429]: E1104 04:21:29.600667 3429 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-bjdkx_calico-system(27fda10a-3169-4bf6-a620-503cc9dcb069)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-bjdkx_calico-system(27fda10a-3169-4bf6-a620-503cc9dcb069)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"120007ca91957f786a20cf84ba0804b29b3ac4db8ee12bb537b9002b707b2df5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-bjdkx" podUID="27fda10a-3169-4bf6-a620-503cc9dcb069" Nov 4 04:21:29.650297 containerd[1975]: time="2025-11-04T04:21:29.648431638Z" level=error msg="Failed to destroy network for sandbox \"aaea33bf43a31f09c558c2a4c21768b2f187d73c457d542baf0d976e218b2887\"" error="plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 04:21:29.650297 containerd[1975]: time="2025-11-04T04:21:29.648705754Z" level=error msg="Failed to destroy network for sandbox \"d82cc2c76b7b5dc2583c848f374eb803cd8f7e5809feb976a6351d2b75cb8fa0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 04:21:29.650297 containerd[1975]: time="2025-11-04T04:21:29.650227966Z" level=error msg="Failed to destroy network for sandbox \"df4e1229b6f6a3f640f5a3f40c5e3ddbe3f7274ab0307408fd91dd9ba9bf6792\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 04:21:29.654512 containerd[1975]: time="2025-11-04T04:21:29.654303874Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Nov 4 04:21:29.656849 containerd[1975]: time="2025-11-04T04:21:29.655905502Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-55565f7d46-26qrw,Uid:2a071bae-9a2c-47d2-99bf-2eaeb17bd59f,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d82cc2c76b7b5dc2583c848f374eb803cd8f7e5809feb976a6351d2b75cb8fa0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 04:21:29.660866 kubelet[3429]: E1104 04:21:29.660707 3429 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d82cc2c76b7b5dc2583c848f374eb803cd8f7e5809feb976a6351d2b75cb8fa0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 04:21:29.662617 containerd[1975]: time="2025-11-04T04:21:29.661680022Z" level=error msg="Failed to destroy network for sandbox \"e075425e0d61d7799d86c66d03c0b03d3c2e1a02e26ea8986d0f87a937a45aa5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 04:21:29.663187 kubelet[3429]: E1104 04:21:29.662995 3429 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d82cc2c76b7b5dc2583c848f374eb803cd8f7e5809feb976a6351d2b75cb8fa0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-55565f7d46-26qrw" Nov 4 04:21:29.664383 kubelet[3429]: E1104 04:21:29.663426 3429 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d82cc2c76b7b5dc2583c848f374eb803cd8f7e5809feb976a6351d2b75cb8fa0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-55565f7d46-26qrw" Nov 4 04:21:29.665535 kubelet[3429]: E1104 04:21:29.664176 3429 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-55565f7d46-26qrw_calico-system(2a071bae-9a2c-47d2-99bf-2eaeb17bd59f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-55565f7d46-26qrw_calico-system(2a071bae-9a2c-47d2-99bf-2eaeb17bd59f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d82cc2c76b7b5dc2583c848f374eb803cd8f7e5809feb976a6351d2b75cb8fa0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-55565f7d46-26qrw" podUID="2a071bae-9a2c-47d2-99bf-2eaeb17bd59f" Nov 4 04:21:29.666701 containerd[1975]: time="2025-11-04T04:21:29.666446902Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67f8f67444-mj5dt,Uid:a361dba4-7339-43be-b37d-2bd7902bcd31,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"aaea33bf43a31f09c558c2a4c21768b2f187d73c457d542baf0d976e218b2887\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 04:21:29.668562 containerd[1975]: time="2025-11-04T04:21:29.668493082Z" level=error msg="Failed to destroy network for sandbox \"db512110a2b455e295bac1a6c8f20ded4e0fa42a3b655cccfed235f748892c8a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 04:21:29.669754 kubelet[3429]: E1104 04:21:29.669680 3429 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aaea33bf43a31f09c558c2a4c21768b2f187d73c457d542baf0d976e218b2887\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 04:21:29.669925 kubelet[3429]: E1104 04:21:29.669761 3429 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aaea33bf43a31f09c558c2a4c21768b2f187d73c457d542baf0d976e218b2887\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-67f8f67444-mj5dt" Nov 4 04:21:29.669925 kubelet[3429]: E1104 04:21:29.669796 3429 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aaea33bf43a31f09c558c2a4c21768b2f187d73c457d542baf0d976e218b2887\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-67f8f67444-mj5dt" Nov 4 04:21:29.669925 kubelet[3429]: E1104 04:21:29.669878 3429 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-67f8f67444-mj5dt_calico-apiserver(a361dba4-7339-43be-b37d-2bd7902bcd31)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-67f8f67444-mj5dt_calico-apiserver(a361dba4-7339-43be-b37d-2bd7902bcd31)\\\": rpc error: 
code = Unknown desc = failed to setup network for sandbox \\\"aaea33bf43a31f09c558c2a4c21768b2f187d73c457d542baf0d976e218b2887\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-67f8f67444-mj5dt" podUID="a361dba4-7339-43be-b37d-2bd7902bcd31" Nov 4 04:21:29.673366 containerd[1975]: time="2025-11-04T04:21:29.672907498Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6fc896cb84-m6mvd,Uid:1e2e1aa1-fbd0-4783-998f-e142a3f6eab3,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"df4e1229b6f6a3f640f5a3f40c5e3ddbe3f7274ab0307408fd91dd9ba9bf6792\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 04:21:29.674446 kubelet[3429]: E1104 04:21:29.674343 3429 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"df4e1229b6f6a3f640f5a3f40c5e3ddbe3f7274ab0307408fd91dd9ba9bf6792\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 04:21:29.674737 kubelet[3429]: E1104 04:21:29.674427 3429 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"df4e1229b6f6a3f640f5a3f40c5e3ddbe3f7274ab0307408fd91dd9ba9bf6792\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6fc896cb84-m6mvd" Nov 4 04:21:29.674737 kubelet[3429]: E1104 04:21:29.674506 3429 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"df4e1229b6f6a3f640f5a3f40c5e3ddbe3f7274ab0307408fd91dd9ba9bf6792\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6fc896cb84-m6mvd" Nov 4 04:21:29.674737 kubelet[3429]: E1104 04:21:29.674596 3429 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6fc896cb84-m6mvd_calico-system(1e2e1aa1-fbd0-4783-998f-e142a3f6eab3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6fc896cb84-m6mvd_calico-system(1e2e1aa1-fbd0-4783-998f-e142a3f6eab3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"df4e1229b6f6a3f640f5a3f40c5e3ddbe3f7274ab0307408fd91dd9ba9bf6792\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6fc896cb84-m6mvd" podUID="1e2e1aa1-fbd0-4783-998f-e142a3f6eab3" Nov 4 04:21:29.682461 containerd[1975]: time="2025-11-04T04:21:29.682081535Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-67f8f67444-smqxz,Uid:073104c0-4d4a-4e6b-bb61-421cfcd8940e,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e075425e0d61d7799d86c66d03c0b03d3c2e1a02e26ea8986d0f87a937a45aa5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 04:21:29.683231 kubelet[3429]: E1104 04:21:29.682702 3429 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e075425e0d61d7799d86c66d03c0b03d3c2e1a02e26ea8986d0f87a937a45aa5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 04:21:29.683231 kubelet[3429]: E1104 04:21:29.682787 3429 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e075425e0d61d7799d86c66d03c0b03d3c2e1a02e26ea8986d0f87a937a45aa5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-67f8f67444-smqxz" Nov 4 04:21:29.683231 kubelet[3429]: E1104 04:21:29.682842 3429 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e075425e0d61d7799d86c66d03c0b03d3c2e1a02e26ea8986d0f87a937a45aa5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-67f8f67444-smqxz" Nov 4 04:21:29.683650 kubelet[3429]: E1104 04:21:29.682913 3429 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-67f8f67444-smqxz_calico-apiserver(073104c0-4d4a-4e6b-bb61-421cfcd8940e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-67f8f67444-smqxz_calico-apiserver(073104c0-4d4a-4e6b-bb61-421cfcd8940e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e075425e0d61d7799d86c66d03c0b03d3c2e1a02e26ea8986d0f87a937a45aa5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-67f8f67444-smqxz" podUID="073104c0-4d4a-4e6b-bb61-421cfcd8940e" Nov 4 04:21:29.690221 containerd[1975]: time="2025-11-04T04:21:29.690126551Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-x7r2n,Uid:3bfc783e-7624-4984-a658-a4dceb99c885,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"db512110a2b455e295bac1a6c8f20ded4e0fa42a3b655cccfed235f748892c8a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 04:21:29.691497 kubelet[3429]: E1104 04:21:29.691431 3429 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"db512110a2b455e295bac1a6c8f20ded4e0fa42a3b655cccfed235f748892c8a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 04:21:29.691619 kubelet[3429]: E1104 04:21:29.691513 3429 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"db512110a2b455e295bac1a6c8f20ded4e0fa42a3b655cccfed235f748892c8a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-x7r2n" Nov 4 04:21:29.691619 kubelet[3429]: E1104 04:21:29.691570 3429 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"db512110a2b455e295bac1a6c8f20ded4e0fa42a3b655cccfed235f748892c8a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-x7r2n" Nov 4 04:21:29.691761 kubelet[3429]: E1104 04:21:29.691647 3429 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-x7r2n_calico-system(3bfc783e-7624-4984-a658-a4dceb99c885)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-x7r2n_calico-system(3bfc783e-7624-4984-a658-a4dceb99c885)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"db512110a2b455e295bac1a6c8f20ded4e0fa42a3b655cccfed235f748892c8a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-x7r2n" podUID="3bfc783e-7624-4984-a658-a4dceb99c885" Nov 4 04:21:30.022150 systemd[1]: run-netns-cni\x2d96bb604b\x2ddded\x2da9a9\x2d323d\x2d80b6951986ed.mount: Deactivated successfully. Nov 4 04:21:30.022347 systemd[1]: run-netns-cni\x2dfcc17a57\x2dd757\x2d94eb\x2d1100\x2ddd0f764a4f59.mount: Deactivated successfully. Nov 4 04:21:30.022478 systemd[1]: run-netns-cni\x2def963c47\x2da764\x2d1899\x2de5f9\x2deeab2f1c25ba.mount: Deactivated successfully. Nov 4 04:21:30.022636 systemd[1]: run-netns-cni\x2d98e403ae\x2d1d6e\x2d61e5\x2d3552\x2df17fe67de5d5.mount: Deactivated successfully. Nov 4 04:21:36.339619 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1357319764.mount: Deactivated successfully. 
Nov 4 04:21:36.407549 containerd[1975]: time="2025-11-04T04:21:36.407461888Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 04:21:36.409556 containerd[1975]: time="2025-11-04T04:21:36.409454344Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=150930912" Nov 4 04:21:36.412069 containerd[1975]: time="2025-11-04T04:21:36.411979012Z" level=info msg="ImageCreate event name:\"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 04:21:36.416860 containerd[1975]: time="2025-11-04T04:21:36.416747680Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 04:21:36.418053 containerd[1975]: time="2025-11-04T04:21:36.417827332Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"150934424\" in 6.761782822s" Nov 4 04:21:36.418053 containerd[1975]: time="2025-11-04T04:21:36.417882040Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\"" Nov 4 04:21:36.459560 containerd[1975]: time="2025-11-04T04:21:36.459462376Z" level=info msg="CreateContainer within sandbox \"76d0e207d3726a3dbbc7da922b8887846443763d1ef4443108d2a28bdcbb7bbe\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Nov 4 04:21:36.490009 containerd[1975]: time="2025-11-04T04:21:36.488663416Z" level=info msg="Container f7856338b72b60cd15a9824a788e721fe77f8a5192016759a0b9a293987408ba: CDI devices from CRI Config.CDIDevices: []" Nov 4 04:21:36.539277 containerd[1975]: time="2025-11-04T04:21:36.539182277Z" level=info msg="CreateContainer within sandbox \"76d0e207d3726a3dbbc7da922b8887846443763d1ef4443108d2a28bdcbb7bbe\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"f7856338b72b60cd15a9824a788e721fe77f8a5192016759a0b9a293987408ba\"" Nov 4 04:21:36.541218 containerd[1975]: time="2025-11-04T04:21:36.541167845Z" level=info msg="StartContainer for \"f7856338b72b60cd15a9824a788e721fe77f8a5192016759a0b9a293987408ba\"" Nov 4 04:21:36.546193 containerd[1975]: time="2025-11-04T04:21:36.546011105Z" level=info msg="connecting to shim f7856338b72b60cd15a9824a788e721fe77f8a5192016759a0b9a293987408ba" address="unix:///run/containerd/s/18714f3ff3babd0989b1151c89d30d991ccdb9e1c75b0e8ff084a8dbaa0d308a" protocol=ttrpc version=3 Nov 4 04:21:36.620687 systemd[1]: Started cri-containerd-f7856338b72b60cd15a9824a788e721fe77f8a5192016759a0b9a293987408ba.scope - libcontainer container f7856338b72b60cd15a9824a788e721fe77f8a5192016759a0b9a293987408ba. Nov 4 04:21:36.748436 containerd[1975]: time="2025-11-04T04:21:36.748281642Z" level=info msg="StartContainer for \"f7856338b72b60cd15a9824a788e721fe77f8a5192016759a0b9a293987408ba\" returns successfully" Nov 4 04:21:37.160910 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Nov 4 04:21:37.161107 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . 
All Rights Reserved. Nov 4 04:21:37.498119 kubelet[3429]: I1104 04:21:37.496984 3429 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2a071bae-9a2c-47d2-99bf-2eaeb17bd59f-whisker-ca-bundle\") pod \"2a071bae-9a2c-47d2-99bf-2eaeb17bd59f\" (UID: \"2a071bae-9a2c-47d2-99bf-2eaeb17bd59f\") " Nov 4 04:21:37.498119 kubelet[3429]: I1104 04:21:37.497108 3429 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/2a071bae-9a2c-47d2-99bf-2eaeb17bd59f-whisker-backend-key-pair\") pod \"2a071bae-9a2c-47d2-99bf-2eaeb17bd59f\" (UID: \"2a071bae-9a2c-47d2-99bf-2eaeb17bd59f\") " Nov 4 04:21:37.498119 kubelet[3429]: I1104 04:21:37.497156 3429 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4rc7\" (UniqueName: \"kubernetes.io/projected/2a071bae-9a2c-47d2-99bf-2eaeb17bd59f-kube-api-access-w4rc7\") pod \"2a071bae-9a2c-47d2-99bf-2eaeb17bd59f\" (UID: \"2a071bae-9a2c-47d2-99bf-2eaeb17bd59f\") " Nov 4 04:21:37.501331 kubelet[3429]: I1104 04:21:37.500849 3429 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2a071bae-9a2c-47d2-99bf-2eaeb17bd59f-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "2a071bae-9a2c-47d2-99bf-2eaeb17bd59f" (UID: "2a071bae-9a2c-47d2-99bf-2eaeb17bd59f"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Nov 4 04:21:37.515611 kubelet[3429]: I1104 04:21:37.515550 3429 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2a071bae-9a2c-47d2-99bf-2eaeb17bd59f-kube-api-access-w4rc7" (OuterVolumeSpecName: "kube-api-access-w4rc7") pod "2a071bae-9a2c-47d2-99bf-2eaeb17bd59f" (UID: "2a071bae-9a2c-47d2-99bf-2eaeb17bd59f"). InnerVolumeSpecName "kube-api-access-w4rc7". PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 4 04:21:37.520957 systemd[1]: var-lib-kubelet-pods-2a071bae\x2d9a2c\x2d47d2\x2d99bf\x2d2eaeb17bd59f-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dw4rc7.mount: Deactivated successfully. Nov 4 04:21:37.529027 kubelet[3429]: I1104 04:21:37.521681 3429 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a071bae-9a2c-47d2-99bf-2eaeb17bd59f-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "2a071bae-9a2c-47d2-99bf-2eaeb17bd59f" (UID: "2a071bae-9a2c-47d2-99bf-2eaeb17bd59f"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Nov 4 04:21:37.533130 systemd[1]: var-lib-kubelet-pods-2a071bae\x2d9a2c\x2d47d2\x2d99bf\x2d2eaeb17bd59f-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Nov 4 04:21:37.597892 kubelet[3429]: I1104 04:21:37.597822 3429 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2a071bae-9a2c-47d2-99bf-2eaeb17bd59f-whisker-ca-bundle\") on node \"ip-172-31-28-40\" DevicePath \"\"" Nov 4 04:21:37.598291 kubelet[3429]: I1104 04:21:37.598124 3429 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/2a071bae-9a2c-47d2-99bf-2eaeb17bd59f-whisker-backend-key-pair\") on node \"ip-172-31-28-40\" DevicePath \"\"" Nov 4 04:21:37.598291 kubelet[3429]: I1104 04:21:37.598153 3429 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-w4rc7\" (UniqueName: \"kubernetes.io/projected/2a071bae-9a2c-47d2-99bf-2eaeb17bd59f-kube-api-access-w4rc7\") on node \"ip-172-31-28-40\" DevicePath \"\"" Nov 4 04:21:37.766394 systemd[1]: Removed slice kubepods-besteffort-pod2a071bae_9a2c_47d2_99bf_2eaeb17bd59f.slice - libcontainer container kubepods-besteffort-pod2a071bae_9a2c_47d2_99bf_2eaeb17bd59f.slice. Nov 4 04:21:37.797830 kubelet[3429]: I1104 04:21:37.797739 3429 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-hbvcf" podStartSLOduration=2.30169326 podStartE2EDuration="19.797710327s" podCreationTimestamp="2025-11-04 04:21:18 +0000 UTC" firstStartedPulling="2025-11-04 04:21:18.923790097 +0000 UTC m=+42.865074634" lastFinishedPulling="2025-11-04 04:21:36.419807176 +0000 UTC m=+60.361091701" observedRunningTime="2025-11-04 04:21:37.794685043 +0000 UTC m=+61.735969604" watchObservedRunningTime="2025-11-04 04:21:37.797710327 +0000 UTC m=+61.738994900" Nov 4 04:21:37.958992 systemd[1]: Created slice kubepods-besteffort-pod857124b9_a647_4dd7_9ce8_99328261c03d.slice - libcontainer container kubepods-besteffort-pod857124b9_a647_4dd7_9ce8_99328261c03d.slice. 
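The pod_startup_latency_tracker entry above for calico-node-hbvcf reports two durations. Working from the timestamps in the same entry, podStartE2EDuration is the gap between pod creation and the observed running time, and podStartSLOduration is, to within clock rounding, that same gap minus the time spent pulling images (firstStartedPulling to lastFinishedPulling). The sketch below only re-derives the logged numbers; treating the SLO duration as "end-to-end minus pull time" is an interpretation of the fields, not something read from kubelet source.

    // Re-derives the startup durations logged for calico-node-hbvcf from the
    // timestamps in the same log entry.
    package main

    import (
        "fmt"
        "time"
    )

    func ts(s string) time.Time {
        t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
        if err != nil {
            panic(err)
        }
        return t
    }

    func main() {
        created := ts("2025-11-04 04:21:18 +0000 UTC")
        firstPull := ts("2025-11-04 04:21:18.923790097 +0000 UTC")
        lastPull := ts("2025-11-04 04:21:36.419807176 +0000 UTC")
        running := ts("2025-11-04 04:21:37.797710327 +0000 UTC")

        e2e := running.Sub(created)        // ~19.7977s, the logged podStartE2EDuration
        pulling := lastPull.Sub(firstPull) // ~17.496s spent pulling calico/node
        fmt.Println(e2e, pulling, e2e-pulling) // e2e-pulling ~2.3017s, the logged podStartSLOduration
    }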
Nov 4 04:21:38.002268 kubelet[3429]: I1104 04:21:38.002197 3429 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/857124b9-a647-4dd7-9ce8-99328261c03d-whisker-backend-key-pair\") pod \"whisker-6597d6c5c6-2r6d9\" (UID: \"857124b9-a647-4dd7-9ce8-99328261c03d\") " pod="calico-system/whisker-6597d6c5c6-2r6d9" Nov 4 04:21:38.002475 kubelet[3429]: I1104 04:21:38.002278 3429 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m5jvd\" (UniqueName: \"kubernetes.io/projected/857124b9-a647-4dd7-9ce8-99328261c03d-kube-api-access-m5jvd\") pod \"whisker-6597d6c5c6-2r6d9\" (UID: \"857124b9-a647-4dd7-9ce8-99328261c03d\") " pod="calico-system/whisker-6597d6c5c6-2r6d9" Nov 4 04:21:38.004568 kubelet[3429]: I1104 04:21:38.004453 3429 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/857124b9-a647-4dd7-9ce8-99328261c03d-whisker-ca-bundle\") pod \"whisker-6597d6c5c6-2r6d9\" (UID: \"857124b9-a647-4dd7-9ce8-99328261c03d\") " pod="calico-system/whisker-6597d6c5c6-2r6d9" Nov 4 04:21:38.268552 containerd[1975]: time="2025-11-04T04:21:38.268478489Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6597d6c5c6-2r6d9,Uid:857124b9-a647-4dd7-9ce8-99328261c03d,Namespace:calico-system,Attempt:0,}" Nov 4 04:21:38.323040 kubelet[3429]: I1104 04:21:38.322970 3429 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2a071bae-9a2c-47d2-99bf-2eaeb17bd59f" path="/var/lib/kubelet/pods/2a071bae-9a2c-47d2-99bf-2eaeb17bd59f/volumes" Nov 4 04:21:39.611687 systemd-networkd[1736]: calica81685d0e9: Link UP Nov 4 04:21:39.615463 (udev-worker)[4432]: Network interface NamePolicy= disabled on kernel command line. 
Nov 4 04:21:39.617025 systemd-networkd[1736]: calica81685d0e9: Gained carrier Nov 4 04:21:39.766944 containerd[1975]: 2025-11-04 04:21:38.439 [INFO][4483] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 4 04:21:39.766944 containerd[1975]: 2025-11-04 04:21:39.261 [INFO][4483] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--28--40-k8s-whisker--6597d6c5c6--2r6d9-eth0 whisker-6597d6c5c6- calico-system 857124b9-a647-4dd7-9ce8-99328261c03d 933 0 2025-11-04 04:21:37 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:6597d6c5c6 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ip-172-31-28-40 whisker-6597d6c5c6-2r6d9 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calica81685d0e9 [] [] }} ContainerID="c4af0f3522342189271c101dbdf31ade1883d79a1999b3cb6c3e6441d1d3c4e5" Namespace="calico-system" Pod="whisker-6597d6c5c6-2r6d9" WorkloadEndpoint="ip--172--31--28--40-k8s-whisker--6597d6c5c6--2r6d9-" Nov 4 04:21:39.766944 containerd[1975]: 2025-11-04 04:21:39.261 [INFO][4483] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c4af0f3522342189271c101dbdf31ade1883d79a1999b3cb6c3e6441d1d3c4e5" Namespace="calico-system" Pod="whisker-6597d6c5c6-2r6d9" WorkloadEndpoint="ip--172--31--28--40-k8s-whisker--6597d6c5c6--2r6d9-eth0" Nov 4 04:21:39.766944 containerd[1975]: 2025-11-04 04:21:39.445 [INFO][4604] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c4af0f3522342189271c101dbdf31ade1883d79a1999b3cb6c3e6441d1d3c4e5" HandleID="k8s-pod-network.c4af0f3522342189271c101dbdf31ade1883d79a1999b3cb6c3e6441d1d3c4e5" Workload="ip--172--31--28--40-k8s-whisker--6597d6c5c6--2r6d9-eth0" Nov 4 04:21:39.768496 containerd[1975]: 2025-11-04 04:21:39.445 [INFO][4604] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="c4af0f3522342189271c101dbdf31ade1883d79a1999b3cb6c3e6441d1d3c4e5" HandleID="k8s-pod-network.c4af0f3522342189271c101dbdf31ade1883d79a1999b3cb6c3e6441d1d3c4e5" Workload="ip--172--31--28--40-k8s-whisker--6597d6c5c6--2r6d9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400033a4e0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-28-40", "pod":"whisker-6597d6c5c6-2r6d9", "timestamp":"2025-11-04 04:21:39.445404619 +0000 UTC"}, Hostname:"ip-172-31-28-40", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 4 04:21:39.768496 containerd[1975]: 2025-11-04 04:21:39.445 [INFO][4604] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 4 04:21:39.768496 containerd[1975]: 2025-11-04 04:21:39.445 [INFO][4604] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 4 04:21:39.768496 containerd[1975]: 2025-11-04 04:21:39.446 [INFO][4604] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-28-40' Nov 4 04:21:39.768496 containerd[1975]: 2025-11-04 04:21:39.471 [INFO][4604] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c4af0f3522342189271c101dbdf31ade1883d79a1999b3cb6c3e6441d1d3c4e5" host="ip-172-31-28-40" Nov 4 04:21:39.768496 containerd[1975]: 2025-11-04 04:21:39.483 [INFO][4604] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-28-40" Nov 4 04:21:39.768496 containerd[1975]: 2025-11-04 04:21:39.494 [INFO][4604] ipam/ipam.go 511: Trying affinity for 192.168.20.192/26 host="ip-172-31-28-40" Nov 4 04:21:39.768496 containerd[1975]: 2025-11-04 04:21:39.500 [INFO][4604] ipam/ipam.go 158: Attempting to load block cidr=192.168.20.192/26 host="ip-172-31-28-40" Nov 4 04:21:39.768496 containerd[1975]: 2025-11-04 04:21:39.508 [INFO][4604] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.20.192/26 host="ip-172-31-28-40" Nov 4 04:21:39.768936 containerd[1975]: 2025-11-04 04:21:39.508 [INFO][4604] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.20.192/26 handle="k8s-pod-network.c4af0f3522342189271c101dbdf31ade1883d79a1999b3cb6c3e6441d1d3c4e5" host="ip-172-31-28-40" Nov 4 04:21:39.768936 containerd[1975]: 2025-11-04 04:21:39.512 [INFO][4604] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.c4af0f3522342189271c101dbdf31ade1883d79a1999b3cb6c3e6441d1d3c4e5 Nov 4 04:21:39.768936 containerd[1975]: 2025-11-04 04:21:39.520 [INFO][4604] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.20.192/26 handle="k8s-pod-network.c4af0f3522342189271c101dbdf31ade1883d79a1999b3cb6c3e6441d1d3c4e5" host="ip-172-31-28-40" Nov 4 04:21:39.768936 containerd[1975]: 2025-11-04 04:21:39.533 [INFO][4604] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.20.193/26] block=192.168.20.192/26 handle="k8s-pod-network.c4af0f3522342189271c101dbdf31ade1883d79a1999b3cb6c3e6441d1d3c4e5" host="ip-172-31-28-40" Nov 4 04:21:39.768936 containerd[1975]: 2025-11-04 04:21:39.533 [INFO][4604] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.20.193/26] handle="k8s-pod-network.c4af0f3522342189271c101dbdf31ade1883d79a1999b3cb6c3e6441d1d3c4e5" host="ip-172-31-28-40" Nov 4 04:21:39.768936 containerd[1975]: 2025-11-04 04:21:39.533 [INFO][4604] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 4 04:21:39.768936 containerd[1975]: 2025-11-04 04:21:39.533 [INFO][4604] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.20.193/26] IPv6=[] ContainerID="c4af0f3522342189271c101dbdf31ade1883d79a1999b3cb6c3e6441d1d3c4e5" HandleID="k8s-pod-network.c4af0f3522342189271c101dbdf31ade1883d79a1999b3cb6c3e6441d1d3c4e5" Workload="ip--172--31--28--40-k8s-whisker--6597d6c5c6--2r6d9-eth0" Nov 4 04:21:39.771839 containerd[1975]: 2025-11-04 04:21:39.548 [INFO][4483] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c4af0f3522342189271c101dbdf31ade1883d79a1999b3cb6c3e6441d1d3c4e5" Namespace="calico-system" Pod="whisker-6597d6c5c6-2r6d9" WorkloadEndpoint="ip--172--31--28--40-k8s-whisker--6597d6c5c6--2r6d9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--40-k8s-whisker--6597d6c5c6--2r6d9-eth0", GenerateName:"whisker-6597d6c5c6-", Namespace:"calico-system", SelfLink:"", UID:"857124b9-a647-4dd7-9ce8-99328261c03d", ResourceVersion:"933", Generation:0, CreationTimestamp:time.Date(2025, time.November, 4, 4, 21, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6597d6c5c6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-40", ContainerID:"", Pod:"whisker-6597d6c5c6-2r6d9", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.20.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calica81685d0e9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 4 04:21:39.771839 containerd[1975]: 2025-11-04 04:21:39.548 [INFO][4483] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.20.193/32] ContainerID="c4af0f3522342189271c101dbdf31ade1883d79a1999b3cb6c3e6441d1d3c4e5" Namespace="calico-system" Pod="whisker-6597d6c5c6-2r6d9" WorkloadEndpoint="ip--172--31--28--40-k8s-whisker--6597d6c5c6--2r6d9-eth0" Nov 4 04:21:39.772063 containerd[1975]: 2025-11-04 04:21:39.548 [INFO][4483] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calica81685d0e9 ContainerID="c4af0f3522342189271c101dbdf31ade1883d79a1999b3cb6c3e6441d1d3c4e5" Namespace="calico-system" Pod="whisker-6597d6c5c6-2r6d9" WorkloadEndpoint="ip--172--31--28--40-k8s-whisker--6597d6c5c6--2r6d9-eth0" Nov 4 04:21:39.772063 containerd[1975]: 2025-11-04 04:21:39.638 [INFO][4483] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c4af0f3522342189271c101dbdf31ade1883d79a1999b3cb6c3e6441d1d3c4e5" Namespace="calico-system" Pod="whisker-6597d6c5c6-2r6d9" WorkloadEndpoint="ip--172--31--28--40-k8s-whisker--6597d6c5c6--2r6d9-eth0" Nov 4 04:21:39.772176 containerd[1975]: 2025-11-04 04:21:39.649 [INFO][4483] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c4af0f3522342189271c101dbdf31ade1883d79a1999b3cb6c3e6441d1d3c4e5" Namespace="calico-system" Pod="whisker-6597d6c5c6-2r6d9" 
WorkloadEndpoint="ip--172--31--28--40-k8s-whisker--6597d6c5c6--2r6d9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--40-k8s-whisker--6597d6c5c6--2r6d9-eth0", GenerateName:"whisker-6597d6c5c6-", Namespace:"calico-system", SelfLink:"", UID:"857124b9-a647-4dd7-9ce8-99328261c03d", ResourceVersion:"933", Generation:0, CreationTimestamp:time.Date(2025, time.November, 4, 4, 21, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6597d6c5c6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-40", ContainerID:"c4af0f3522342189271c101dbdf31ade1883d79a1999b3cb6c3e6441d1d3c4e5", Pod:"whisker-6597d6c5c6-2r6d9", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.20.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calica81685d0e9", MAC:"22:31:46:29:77:e9", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 4 04:21:39.772293 containerd[1975]: 2025-11-04 04:21:39.759 [INFO][4483] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c4af0f3522342189271c101dbdf31ade1883d79a1999b3cb6c3e6441d1d3c4e5" Namespace="calico-system" Pod="whisker-6597d6c5c6-2r6d9" WorkloadEndpoint="ip--172--31--28--40-k8s-whisker--6597d6c5c6--2r6d9-eth0" Nov 4 04:21:39.921863 containerd[1975]: time="2025-11-04T04:21:39.921695265Z" level=info msg="connecting to shim c4af0f3522342189271c101dbdf31ade1883d79a1999b3cb6c3e6441d1d3c4e5" address="unix:///run/containerd/s/11d6b84c48a1d75fb0b75ee12f591fd44e80b070f87b9cb75edc27dacafbc2d9" namespace=k8s.io protocol=ttrpc version=3 Nov 4 04:21:40.028167 systemd[1]: Started cri-containerd-c4af0f3522342189271c101dbdf31ade1883d79a1999b3cb6c3e6441d1d3c4e5.scope - libcontainer container c4af0f3522342189271c101dbdf31ade1883d79a1999b3cb6c3e6441d1d3c4e5. 
Nov 4 04:21:40.203137 containerd[1975]: time="2025-11-04T04:21:40.201710563Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6597d6c5c6-2r6d9,Uid:857124b9-a647-4dd7-9ce8-99328261c03d,Namespace:calico-system,Attempt:0,} returns sandbox id \"c4af0f3522342189271c101dbdf31ade1883d79a1999b3cb6c3e6441d1d3c4e5\"" Nov 4 04:21:40.209699 containerd[1975]: time="2025-11-04T04:21:40.209604691Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 4 04:21:40.320373 containerd[1975]: time="2025-11-04T04:21:40.320016607Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-bfs5q,Uid:c8441b17-4a0d-4406-88cf-62a8cb581f09,Namespace:kube-system,Attempt:0,}" Nov 4 04:21:40.550720 containerd[1975]: time="2025-11-04T04:21:40.550256456Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 04:21:40.552885 containerd[1975]: time="2025-11-04T04:21:40.552787869Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 4 04:21:40.553040 containerd[1975]: time="2025-11-04T04:21:40.552841317Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Nov 4 04:21:40.553555 kubelet[3429]: E1104 04:21:40.553217 3429 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 4 04:21:40.553555 kubelet[3429]: E1104 04:21:40.553287 3429 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 4 04:21:40.562368 kubelet[3429]: E1104 04:21:40.559259 3429 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:c43005d46d6641cd888d007749657aec,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-m5jvd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6597d6c5c6-2r6d9_calico-system(857124b9-a647-4dd7-9ce8-99328261c03d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 4 04:21:40.565372 containerd[1975]: time="2025-11-04T04:21:40.564675021Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 4 04:21:40.589826 (udev-worker)[4431]: Network interface NamePolicy= disabled on kernel command line. 
Nov 4 04:21:40.590898 systemd-networkd[1736]: cali7ab04475040: Link UP Nov 4 04:21:40.593547 systemd-networkd[1736]: cali7ab04475040: Gained carrier Nov 4 04:21:40.630297 containerd[1975]: 2025-11-04 04:21:40.432 [INFO][4707] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--28--40-k8s-coredns--674b8bbfcf--bfs5q-eth0 coredns-674b8bbfcf- kube-system c8441b17-4a0d-4406-88cf-62a8cb581f09 856 0 2025-11-04 04:20:42 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-28-40 coredns-674b8bbfcf-bfs5q eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali7ab04475040 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="7cc75a1be74d803ea99388a5e705ec9df15b4e5b17b11193ee10a226461caa33" Namespace="kube-system" Pod="coredns-674b8bbfcf-bfs5q" WorkloadEndpoint="ip--172--31--28--40-k8s-coredns--674b8bbfcf--bfs5q-" Nov 4 04:21:40.630297 containerd[1975]: 2025-11-04 04:21:40.433 [INFO][4707] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7cc75a1be74d803ea99388a5e705ec9df15b4e5b17b11193ee10a226461caa33" Namespace="kube-system" Pod="coredns-674b8bbfcf-bfs5q" WorkloadEndpoint="ip--172--31--28--40-k8s-coredns--674b8bbfcf--bfs5q-eth0" Nov 4 04:21:40.630297 containerd[1975]: 2025-11-04 04:21:40.482 [INFO][4719] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7cc75a1be74d803ea99388a5e705ec9df15b4e5b17b11193ee10a226461caa33" HandleID="k8s-pod-network.7cc75a1be74d803ea99388a5e705ec9df15b4e5b17b11193ee10a226461caa33" Workload="ip--172--31--28--40-k8s-coredns--674b8bbfcf--bfs5q-eth0" Nov 4 04:21:40.630653 containerd[1975]: 2025-11-04 04:21:40.482 [INFO][4719] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="7cc75a1be74d803ea99388a5e705ec9df15b4e5b17b11193ee10a226461caa33" HandleID="k8s-pod-network.7cc75a1be74d803ea99388a5e705ec9df15b4e5b17b11193ee10a226461caa33" Workload="ip--172--31--28--40-k8s-coredns--674b8bbfcf--bfs5q-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d2fe0), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-28-40", "pod":"coredns-674b8bbfcf-bfs5q", "timestamp":"2025-11-04 04:21:40.48203258 +0000 UTC"}, Hostname:"ip-172-31-28-40", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 4 04:21:40.630653 containerd[1975]: 2025-11-04 04:21:40.482 [INFO][4719] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 4 04:21:40.630653 containerd[1975]: 2025-11-04 04:21:40.482 [INFO][4719] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 4 04:21:40.630653 containerd[1975]: 2025-11-04 04:21:40.482 [INFO][4719] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-28-40' Nov 4 04:21:40.630653 containerd[1975]: 2025-11-04 04:21:40.499 [INFO][4719] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7cc75a1be74d803ea99388a5e705ec9df15b4e5b17b11193ee10a226461caa33" host="ip-172-31-28-40" Nov 4 04:21:40.630653 containerd[1975]: 2025-11-04 04:21:40.507 [INFO][4719] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-28-40" Nov 4 04:21:40.630653 containerd[1975]: 2025-11-04 04:21:40.520 [INFO][4719] ipam/ipam.go 511: Trying affinity for 192.168.20.192/26 host="ip-172-31-28-40" Nov 4 04:21:40.630653 containerd[1975]: 2025-11-04 04:21:40.525 [INFO][4719] ipam/ipam.go 158: Attempting to load block cidr=192.168.20.192/26 host="ip-172-31-28-40" Nov 4 04:21:40.630653 containerd[1975]: 2025-11-04 04:21:40.529 [INFO][4719] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.20.192/26 host="ip-172-31-28-40" Nov 4 04:21:40.630653 containerd[1975]: 2025-11-04 04:21:40.529 [INFO][4719] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.20.192/26 handle="k8s-pod-network.7cc75a1be74d803ea99388a5e705ec9df15b4e5b17b11193ee10a226461caa33" host="ip-172-31-28-40" Nov 4 04:21:40.631132 containerd[1975]: 2025-11-04 04:21:40.532 [INFO][4719] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.7cc75a1be74d803ea99388a5e705ec9df15b4e5b17b11193ee10a226461caa33 Nov 4 04:21:40.631132 containerd[1975]: 2025-11-04 04:21:40.541 [INFO][4719] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.20.192/26 handle="k8s-pod-network.7cc75a1be74d803ea99388a5e705ec9df15b4e5b17b11193ee10a226461caa33" host="ip-172-31-28-40" Nov 4 04:21:40.631132 containerd[1975]: 2025-11-04 04:21:40.569 [INFO][4719] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.20.194/26] block=192.168.20.192/26 handle="k8s-pod-network.7cc75a1be74d803ea99388a5e705ec9df15b4e5b17b11193ee10a226461caa33" host="ip-172-31-28-40" Nov 4 04:21:40.631132 containerd[1975]: 2025-11-04 04:21:40.569 [INFO][4719] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.20.194/26] handle="k8s-pod-network.7cc75a1be74d803ea99388a5e705ec9df15b4e5b17b11193ee10a226461caa33" host="ip-172-31-28-40" Nov 4 04:21:40.631132 containerd[1975]: 2025-11-04 04:21:40.570 [INFO][4719] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 4 04:21:40.631132 containerd[1975]: 2025-11-04 04:21:40.570 [INFO][4719] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.20.194/26] IPv6=[] ContainerID="7cc75a1be74d803ea99388a5e705ec9df15b4e5b17b11193ee10a226461caa33" HandleID="k8s-pod-network.7cc75a1be74d803ea99388a5e705ec9df15b4e5b17b11193ee10a226461caa33" Workload="ip--172--31--28--40-k8s-coredns--674b8bbfcf--bfs5q-eth0" Nov 4 04:21:40.631528 containerd[1975]: 2025-11-04 04:21:40.580 [INFO][4707] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7cc75a1be74d803ea99388a5e705ec9df15b4e5b17b11193ee10a226461caa33" Namespace="kube-system" Pod="coredns-674b8bbfcf-bfs5q" WorkloadEndpoint="ip--172--31--28--40-k8s-coredns--674b8bbfcf--bfs5q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--40-k8s-coredns--674b8bbfcf--bfs5q-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"c8441b17-4a0d-4406-88cf-62a8cb581f09", ResourceVersion:"856", Generation:0, CreationTimestamp:time.Date(2025, time.November, 4, 4, 20, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-40", ContainerID:"", Pod:"coredns-674b8bbfcf-bfs5q", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.20.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7ab04475040", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 4 04:21:40.631528 containerd[1975]: 2025-11-04 04:21:40.581 [INFO][4707] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.20.194/32] ContainerID="7cc75a1be74d803ea99388a5e705ec9df15b4e5b17b11193ee10a226461caa33" Namespace="kube-system" Pod="coredns-674b8bbfcf-bfs5q" WorkloadEndpoint="ip--172--31--28--40-k8s-coredns--674b8bbfcf--bfs5q-eth0" Nov 4 04:21:40.631528 containerd[1975]: 2025-11-04 04:21:40.582 [INFO][4707] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7ab04475040 ContainerID="7cc75a1be74d803ea99388a5e705ec9df15b4e5b17b11193ee10a226461caa33" Namespace="kube-system" Pod="coredns-674b8bbfcf-bfs5q" WorkloadEndpoint="ip--172--31--28--40-k8s-coredns--674b8bbfcf--bfs5q-eth0" Nov 4 04:21:40.631528 containerd[1975]: 2025-11-04 04:21:40.588 [INFO][4707] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7cc75a1be74d803ea99388a5e705ec9df15b4e5b17b11193ee10a226461caa33" Namespace="kube-system" Pod="coredns-674b8bbfcf-bfs5q" 
WorkloadEndpoint="ip--172--31--28--40-k8s-coredns--674b8bbfcf--bfs5q-eth0" Nov 4 04:21:40.631528 containerd[1975]: 2025-11-04 04:21:40.589 [INFO][4707] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7cc75a1be74d803ea99388a5e705ec9df15b4e5b17b11193ee10a226461caa33" Namespace="kube-system" Pod="coredns-674b8bbfcf-bfs5q" WorkloadEndpoint="ip--172--31--28--40-k8s-coredns--674b8bbfcf--bfs5q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--40-k8s-coredns--674b8bbfcf--bfs5q-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"c8441b17-4a0d-4406-88cf-62a8cb581f09", ResourceVersion:"856", Generation:0, CreationTimestamp:time.Date(2025, time.November, 4, 4, 20, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-40", ContainerID:"7cc75a1be74d803ea99388a5e705ec9df15b4e5b17b11193ee10a226461caa33", Pod:"coredns-674b8bbfcf-bfs5q", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.20.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7ab04475040", MAC:"fe:83:85:1d:06:e7", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 4 04:21:40.631528 containerd[1975]: 2025-11-04 04:21:40.620 [INFO][4707] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7cc75a1be74d803ea99388a5e705ec9df15b4e5b17b11193ee10a226461caa33" Namespace="kube-system" Pod="coredns-674b8bbfcf-bfs5q" WorkloadEndpoint="ip--172--31--28--40-k8s-coredns--674b8bbfcf--bfs5q-eth0" Nov 4 04:21:40.686706 containerd[1975]: time="2025-11-04T04:21:40.686634549Z" level=info msg="connecting to shim 7cc75a1be74d803ea99388a5e705ec9df15b4e5b17b11193ee10a226461caa33" address="unix:///run/containerd/s/72ff21f3d781fec1144ee275a5d83f88be19f8209bdeb0555484d218474ee588" namespace=k8s.io protocol=ttrpc version=3 Nov 4 04:21:40.790895 systemd[1]: Started cri-containerd-7cc75a1be74d803ea99388a5e705ec9df15b4e5b17b11193ee10a226461caa33.scope - libcontainer container 7cc75a1be74d803ea99388a5e705ec9df15b4e5b17b11193ee10a226461caa33. 
Nov 4 04:21:40.826638 systemd-networkd[1736]: vxlan.calico: Link UP Nov 4 04:21:40.827871 systemd-networkd[1736]: vxlan.calico: Gained carrier Nov 4 04:21:40.834835 containerd[1975]: time="2025-11-04T04:21:40.834769822Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 04:21:40.837081 containerd[1975]: time="2025-11-04T04:21:40.836982382Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 4 04:21:40.837554 containerd[1975]: time="2025-11-04T04:21:40.837121726Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Nov 4 04:21:40.837987 kubelet[3429]: E1104 04:21:40.837495 3429 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 4 04:21:40.837987 kubelet[3429]: E1104 04:21:40.837602 3429 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 4 04:21:40.839892 kubelet[3429]: E1104 04:21:40.839765 3429 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m5jvd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in 
pod whisker-6597d6c5c6-2r6d9_calico-system(857124b9-a647-4dd7-9ce8-99328261c03d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 4 04:21:40.841882 kubelet[3429]: E1104 04:21:40.841796 3429 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6597d6c5c6-2r6d9" podUID="857124b9-a647-4dd7-9ce8-99328261c03d" Nov 4 04:21:40.958203 containerd[1975]: time="2025-11-04T04:21:40.958106039Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-bfs5q,Uid:c8441b17-4a0d-4406-88cf-62a8cb581f09,Namespace:kube-system,Attempt:0,} returns sandbox id \"7cc75a1be74d803ea99388a5e705ec9df15b4e5b17b11193ee10a226461caa33\"" Nov 4 04:21:40.973716 containerd[1975]: time="2025-11-04T04:21:40.973655891Z" level=info msg="CreateContainer within sandbox \"7cc75a1be74d803ea99388a5e705ec9df15b4e5b17b11193ee10a226461caa33\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 4 04:21:41.021096 containerd[1975]: time="2025-11-04T04:21:41.021004723Z" level=info msg="Container 5b2e27a410da28c14325031088e423aee1f11e80d37e69277e3d4fc58842f24f: CDI devices from CRI Config.CDIDevices: []" Nov 4 04:21:41.022754 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3011698889.mount: Deactivated successfully. Nov 4 04:21:41.044243 containerd[1975]: time="2025-11-04T04:21:41.044179651Z" level=info msg="CreateContainer within sandbox \"7cc75a1be74d803ea99388a5e705ec9df15b4e5b17b11193ee10a226461caa33\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"5b2e27a410da28c14325031088e423aee1f11e80d37e69277e3d4fc58842f24f\"" Nov 4 04:21:41.045220 containerd[1975]: time="2025-11-04T04:21:41.045164707Z" level=info msg="StartContainer for \"5b2e27a410da28c14325031088e423aee1f11e80d37e69277e3d4fc58842f24f\"" Nov 4 04:21:41.049210 containerd[1975]: time="2025-11-04T04:21:41.049134763Z" level=info msg="connecting to shim 5b2e27a410da28c14325031088e423aee1f11e80d37e69277e3d4fc58842f24f" address="unix:///run/containerd/s/72ff21f3d781fec1144ee275a5d83f88be19f8209bdeb0555484d218474ee588" protocol=ttrpc version=3 Nov 4 04:21:41.098641 systemd[1]: Started cri-containerd-5b2e27a410da28c14325031088e423aee1f11e80d37e69277e3d4fc58842f24f.scope - libcontainer container 5b2e27a410da28c14325031088e423aee1f11e80d37e69277e3d4fc58842f24f. 
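With both whisker containers failing to pull (the ErrImagePull entries above), the kubelet keeps the pod and retries the pulls on an exponential back-off; the ImagePullBackOff entry a little further down is that retry state. The schedule below uses the commonly documented kubelet defaults of roughly a 10-second initial delay doubling up to a 5-minute cap, which are an assumption here rather than values read from this log.

    // Sketch of the retry cadence behind ImagePullBackOff, using assumed
    // defaults (10s initial back-off, doubling, capped at 5 minutes).
    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        delay := 10 * time.Second
        const maxDelay = 5 * time.Minute
        for attempt := 1; attempt <= 7; attempt++ {
            fmt.Printf("pull attempt %d, next retry in %v\n", attempt, delay)
            delay *= 2
            if delay > maxDelay {
                delay = maxDelay
            }
        }
        // retries settle at the cap for as long as the whisker images stay missing
    }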
Nov 4 04:21:41.184291 containerd[1975]: time="2025-11-04T04:21:41.184101128Z" level=info msg="StartContainer for \"5b2e27a410da28c14325031088e423aee1f11e80d37e69277e3d4fc58842f24f\" returns successfully" Nov 4 04:21:41.320727 containerd[1975]: time="2025-11-04T04:21:41.320607776Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bjdkx,Uid:27fda10a-3169-4bf6-a620-503cc9dcb069,Namespace:calico-system,Attempt:0,}" Nov 4 04:21:41.321123 containerd[1975]: time="2025-11-04T04:21:41.321069692Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6fc896cb84-m6mvd,Uid:1e2e1aa1-fbd0-4783-998f-e142a3f6eab3,Namespace:calico-system,Attempt:0,}" Nov 4 04:21:41.321473 containerd[1975]: time="2025-11-04T04:21:41.320608256Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67f8f67444-mj5dt,Uid:a361dba4-7339-43be-b37d-2bd7902bcd31,Namespace:calico-apiserver,Attempt:0,}" Nov 4 04:21:41.438404 systemd-networkd[1736]: calica81685d0e9: Gained IPv6LL Nov 4 04:21:41.757758 systemd-networkd[1736]: cali7ab04475040: Gained IPv6LL Nov 4 04:21:41.766476 kubelet[3429]: E1104 04:21:41.766299 3429 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6597d6c5c6-2r6d9" podUID="857124b9-a647-4dd7-9ce8-99328261c03d" Nov 4 04:21:41.856588 kubelet[3429]: I1104 04:21:41.856501 3429 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-bfs5q" podStartSLOduration=59.856476803 podStartE2EDuration="59.856476803s" podCreationTimestamp="2025-11-04 04:20:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-04 04:21:41.853153475 +0000 UTC m=+65.794438024" watchObservedRunningTime="2025-11-04 04:21:41.856476803 +0000 UTC m=+65.797761352" Nov 4 04:21:41.885030 systemd-networkd[1736]: vxlan.calico: Gained IPv6LL Nov 4 04:21:41.904831 systemd-networkd[1736]: calibef89e115ec: Link UP Nov 4 04:21:41.906779 systemd-networkd[1736]: calibef89e115ec: Gained carrier Nov 4 04:21:41.944527 containerd[1975]: 2025-11-04 04:21:41.635 [INFO][4868] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--28--40-k8s-calico--apiserver--67f8f67444--mj5dt-eth0 calico-apiserver-67f8f67444- calico-apiserver a361dba4-7339-43be-b37d-2bd7902bcd31 864 0 2025-11-04 04:20:59 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:67f8f67444 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-28-40 calico-apiserver-67f8f67444-mj5dt eth0 calico-apiserver [] [] 
[kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calibef89e115ec [] [] }} ContainerID="12b0ecb62d625b1c1736b20cbae0d320f5fd4774f9c716798d51da3746118b9a" Namespace="calico-apiserver" Pod="calico-apiserver-67f8f67444-mj5dt" WorkloadEndpoint="ip--172--31--28--40-k8s-calico--apiserver--67f8f67444--mj5dt-" Nov 4 04:21:41.944527 containerd[1975]: 2025-11-04 04:21:41.636 [INFO][4868] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="12b0ecb62d625b1c1736b20cbae0d320f5fd4774f9c716798d51da3746118b9a" Namespace="calico-apiserver" Pod="calico-apiserver-67f8f67444-mj5dt" WorkloadEndpoint="ip--172--31--28--40-k8s-calico--apiserver--67f8f67444--mj5dt-eth0" Nov 4 04:21:41.944527 containerd[1975]: 2025-11-04 04:21:41.718 [INFO][4927] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="12b0ecb62d625b1c1736b20cbae0d320f5fd4774f9c716798d51da3746118b9a" HandleID="k8s-pod-network.12b0ecb62d625b1c1736b20cbae0d320f5fd4774f9c716798d51da3746118b9a" Workload="ip--172--31--28--40-k8s-calico--apiserver--67f8f67444--mj5dt-eth0" Nov 4 04:21:41.944527 containerd[1975]: 2025-11-04 04:21:41.718 [INFO][4927] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="12b0ecb62d625b1c1736b20cbae0d320f5fd4774f9c716798d51da3746118b9a" HandleID="k8s-pod-network.12b0ecb62d625b1c1736b20cbae0d320f5fd4774f9c716798d51da3746118b9a" Workload="ip--172--31--28--40-k8s-calico--apiserver--67f8f67444--mj5dt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d2fe0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-28-40", "pod":"calico-apiserver-67f8f67444-mj5dt", "timestamp":"2025-11-04 04:21:41.718254706 +0000 UTC"}, Hostname:"ip-172-31-28-40", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 4 04:21:41.944527 containerd[1975]: 2025-11-04 04:21:41.718 [INFO][4927] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 4 04:21:41.944527 containerd[1975]: 2025-11-04 04:21:41.718 [INFO][4927] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 4 04:21:41.944527 containerd[1975]: 2025-11-04 04:21:41.719 [INFO][4927] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-28-40' Nov 4 04:21:41.944527 containerd[1975]: 2025-11-04 04:21:41.763 [INFO][4927] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.12b0ecb62d625b1c1736b20cbae0d320f5fd4774f9c716798d51da3746118b9a" host="ip-172-31-28-40" Nov 4 04:21:41.944527 containerd[1975]: 2025-11-04 04:21:41.785 [INFO][4927] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-28-40" Nov 4 04:21:41.944527 containerd[1975]: 2025-11-04 04:21:41.803 [INFO][4927] ipam/ipam.go 511: Trying affinity for 192.168.20.192/26 host="ip-172-31-28-40" Nov 4 04:21:41.944527 containerd[1975]: 2025-11-04 04:21:41.810 [INFO][4927] ipam/ipam.go 158: Attempting to load block cidr=192.168.20.192/26 host="ip-172-31-28-40" Nov 4 04:21:41.944527 containerd[1975]: 2025-11-04 04:21:41.828 [INFO][4927] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.20.192/26 host="ip-172-31-28-40" Nov 4 04:21:41.944527 containerd[1975]: 2025-11-04 04:21:41.828 [INFO][4927] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.20.192/26 handle="k8s-pod-network.12b0ecb62d625b1c1736b20cbae0d320f5fd4774f9c716798d51da3746118b9a" host="ip-172-31-28-40" Nov 4 04:21:41.944527 containerd[1975]: 2025-11-04 04:21:41.853 [INFO][4927] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.12b0ecb62d625b1c1736b20cbae0d320f5fd4774f9c716798d51da3746118b9a Nov 4 04:21:41.944527 containerd[1975]: 2025-11-04 04:21:41.867 [INFO][4927] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.20.192/26 handle="k8s-pod-network.12b0ecb62d625b1c1736b20cbae0d320f5fd4774f9c716798d51da3746118b9a" host="ip-172-31-28-40" Nov 4 04:21:41.944527 containerd[1975]: 2025-11-04 04:21:41.882 [INFO][4927] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.20.195/26] block=192.168.20.192/26 handle="k8s-pod-network.12b0ecb62d625b1c1736b20cbae0d320f5fd4774f9c716798d51da3746118b9a" host="ip-172-31-28-40" Nov 4 04:21:41.944527 containerd[1975]: 2025-11-04 04:21:41.883 [INFO][4927] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.20.195/26] handle="k8s-pod-network.12b0ecb62d625b1c1736b20cbae0d320f5fd4774f9c716798d51da3746118b9a" host="ip-172-31-28-40" Nov 4 04:21:41.944527 containerd[1975]: 2025-11-04 04:21:41.883 [INFO][4927] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 4 04:21:41.944527 containerd[1975]: 2025-11-04 04:21:41.883 [INFO][4927] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.20.195/26] IPv6=[] ContainerID="12b0ecb62d625b1c1736b20cbae0d320f5fd4774f9c716798d51da3746118b9a" HandleID="k8s-pod-network.12b0ecb62d625b1c1736b20cbae0d320f5fd4774f9c716798d51da3746118b9a" Workload="ip--172--31--28--40-k8s-calico--apiserver--67f8f67444--mj5dt-eth0" Nov 4 04:21:41.948600 containerd[1975]: 2025-11-04 04:21:41.891 [INFO][4868] cni-plugin/k8s.go 418: Populated endpoint ContainerID="12b0ecb62d625b1c1736b20cbae0d320f5fd4774f9c716798d51da3746118b9a" Namespace="calico-apiserver" Pod="calico-apiserver-67f8f67444-mj5dt" WorkloadEndpoint="ip--172--31--28--40-k8s-calico--apiserver--67f8f67444--mj5dt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--40-k8s-calico--apiserver--67f8f67444--mj5dt-eth0", GenerateName:"calico-apiserver-67f8f67444-", Namespace:"calico-apiserver", SelfLink:"", UID:"a361dba4-7339-43be-b37d-2bd7902bcd31", ResourceVersion:"864", Generation:0, CreationTimestamp:time.Date(2025, time.November, 4, 4, 20, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"67f8f67444", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-40", ContainerID:"", Pod:"calico-apiserver-67f8f67444-mj5dt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.20.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calibef89e115ec", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 4 04:21:41.948600 containerd[1975]: 2025-11-04 04:21:41.891 [INFO][4868] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.20.195/32] ContainerID="12b0ecb62d625b1c1736b20cbae0d320f5fd4774f9c716798d51da3746118b9a" Namespace="calico-apiserver" Pod="calico-apiserver-67f8f67444-mj5dt" WorkloadEndpoint="ip--172--31--28--40-k8s-calico--apiserver--67f8f67444--mj5dt-eth0" Nov 4 04:21:41.948600 containerd[1975]: 2025-11-04 04:21:41.891 [INFO][4868] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibef89e115ec ContainerID="12b0ecb62d625b1c1736b20cbae0d320f5fd4774f9c716798d51da3746118b9a" Namespace="calico-apiserver" Pod="calico-apiserver-67f8f67444-mj5dt" WorkloadEndpoint="ip--172--31--28--40-k8s-calico--apiserver--67f8f67444--mj5dt-eth0" Nov 4 04:21:41.948600 containerd[1975]: 2025-11-04 04:21:41.905 [INFO][4868] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="12b0ecb62d625b1c1736b20cbae0d320f5fd4774f9c716798d51da3746118b9a" Namespace="calico-apiserver" Pod="calico-apiserver-67f8f67444-mj5dt" WorkloadEndpoint="ip--172--31--28--40-k8s-calico--apiserver--67f8f67444--mj5dt-eth0" Nov 4 04:21:41.948600 containerd[1975]: 2025-11-04 04:21:41.912 [INFO][4868] cni-plugin/k8s.go 446: Added Mac, interface name, and 
active container ID to endpoint ContainerID="12b0ecb62d625b1c1736b20cbae0d320f5fd4774f9c716798d51da3746118b9a" Namespace="calico-apiserver" Pod="calico-apiserver-67f8f67444-mj5dt" WorkloadEndpoint="ip--172--31--28--40-k8s-calico--apiserver--67f8f67444--mj5dt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--40-k8s-calico--apiserver--67f8f67444--mj5dt-eth0", GenerateName:"calico-apiserver-67f8f67444-", Namespace:"calico-apiserver", SelfLink:"", UID:"a361dba4-7339-43be-b37d-2bd7902bcd31", ResourceVersion:"864", Generation:0, CreationTimestamp:time.Date(2025, time.November, 4, 4, 20, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"67f8f67444", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-40", ContainerID:"12b0ecb62d625b1c1736b20cbae0d320f5fd4774f9c716798d51da3746118b9a", Pod:"calico-apiserver-67f8f67444-mj5dt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.20.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calibef89e115ec", MAC:"1e:da:86:bf:7c:a8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 4 04:21:41.948600 containerd[1975]: 2025-11-04 04:21:41.939 [INFO][4868] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="12b0ecb62d625b1c1736b20cbae0d320f5fd4774f9c716798d51da3746118b9a" Namespace="calico-apiserver" Pod="calico-apiserver-67f8f67444-mj5dt" WorkloadEndpoint="ip--172--31--28--40-k8s-calico--apiserver--67f8f67444--mj5dt-eth0" Nov 4 04:21:42.020803 containerd[1975]: time="2025-11-04T04:21:42.020644952Z" level=info msg="connecting to shim 12b0ecb62d625b1c1736b20cbae0d320f5fd4774f9c716798d51da3746118b9a" address="unix:///run/containerd/s/66938b5afc6b806972dc3f7345b73ba441af4891b64049f0d7ec2cc543e34d84" namespace=k8s.io protocol=ttrpc version=3 Nov 4 04:21:42.045122 systemd-networkd[1736]: cali3432dee96e3: Link UP Nov 4 04:21:42.047686 systemd-networkd[1736]: cali3432dee96e3: Gained carrier Nov 4 04:21:42.133162 containerd[1975]: 2025-11-04 04:21:41.525 [INFO][4851] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--28--40-k8s-csi--node--driver--bjdkx-eth0 csi-node-driver- calico-system 27fda10a-3169-4bf6-a620-503cc9dcb069 758 0 2025-11-04 04:21:18 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ip-172-31-28-40 csi-node-driver-bjdkx eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali3432dee96e3 [] [] }} ContainerID="48f22c44182d5d2dadecc38b37dcd0db77bb29f778427748bc0f86d61025a2d2" 
Namespace="calico-system" Pod="csi-node-driver-bjdkx" WorkloadEndpoint="ip--172--31--28--40-k8s-csi--node--driver--bjdkx-" Nov 4 04:21:42.133162 containerd[1975]: 2025-11-04 04:21:41.527 [INFO][4851] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="48f22c44182d5d2dadecc38b37dcd0db77bb29f778427748bc0f86d61025a2d2" Namespace="calico-system" Pod="csi-node-driver-bjdkx" WorkloadEndpoint="ip--172--31--28--40-k8s-csi--node--driver--bjdkx-eth0" Nov 4 04:21:42.133162 containerd[1975]: 2025-11-04 04:21:41.742 [INFO][4902] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="48f22c44182d5d2dadecc38b37dcd0db77bb29f778427748bc0f86d61025a2d2" HandleID="k8s-pod-network.48f22c44182d5d2dadecc38b37dcd0db77bb29f778427748bc0f86d61025a2d2" Workload="ip--172--31--28--40-k8s-csi--node--driver--bjdkx-eth0" Nov 4 04:21:42.133162 containerd[1975]: 2025-11-04 04:21:41.743 [INFO][4902] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="48f22c44182d5d2dadecc38b37dcd0db77bb29f778427748bc0f86d61025a2d2" HandleID="k8s-pod-network.48f22c44182d5d2dadecc38b37dcd0db77bb29f778427748bc0f86d61025a2d2" Workload="ip--172--31--28--40-k8s-csi--node--driver--bjdkx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003787d0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-28-40", "pod":"csi-node-driver-bjdkx", "timestamp":"2025-11-04 04:21:41.74274199 +0000 UTC"}, Hostname:"ip-172-31-28-40", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 4 04:21:42.133162 containerd[1975]: 2025-11-04 04:21:41.743 [INFO][4902] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 4 04:21:42.133162 containerd[1975]: 2025-11-04 04:21:41.883 [INFO][4902] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 4 04:21:42.133162 containerd[1975]: 2025-11-04 04:21:41.884 [INFO][4902] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-28-40' Nov 4 04:21:42.133162 containerd[1975]: 2025-11-04 04:21:41.925 [INFO][4902] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.48f22c44182d5d2dadecc38b37dcd0db77bb29f778427748bc0f86d61025a2d2" host="ip-172-31-28-40" Nov 4 04:21:42.133162 containerd[1975]: 2025-11-04 04:21:41.950 [INFO][4902] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-28-40" Nov 4 04:21:42.133162 containerd[1975]: 2025-11-04 04:21:41.966 [INFO][4902] ipam/ipam.go 511: Trying affinity for 192.168.20.192/26 host="ip-172-31-28-40" Nov 4 04:21:42.133162 containerd[1975]: 2025-11-04 04:21:41.971 [INFO][4902] ipam/ipam.go 158: Attempting to load block cidr=192.168.20.192/26 host="ip-172-31-28-40" Nov 4 04:21:42.133162 containerd[1975]: 2025-11-04 04:21:41.978 [INFO][4902] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.20.192/26 host="ip-172-31-28-40" Nov 4 04:21:42.133162 containerd[1975]: 2025-11-04 04:21:41.978 [INFO][4902] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.20.192/26 handle="k8s-pod-network.48f22c44182d5d2dadecc38b37dcd0db77bb29f778427748bc0f86d61025a2d2" host="ip-172-31-28-40" Nov 4 04:21:42.133162 containerd[1975]: 2025-11-04 04:21:41.982 [INFO][4902] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.48f22c44182d5d2dadecc38b37dcd0db77bb29f778427748bc0f86d61025a2d2 Nov 4 04:21:42.133162 containerd[1975]: 2025-11-04 04:21:41.996 [INFO][4902] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.20.192/26 handle="k8s-pod-network.48f22c44182d5d2dadecc38b37dcd0db77bb29f778427748bc0f86d61025a2d2" host="ip-172-31-28-40" Nov 4 04:21:42.133162 containerd[1975]: 2025-11-04 04:21:42.021 [INFO][4902] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.20.196/26] block=192.168.20.192/26 handle="k8s-pod-network.48f22c44182d5d2dadecc38b37dcd0db77bb29f778427748bc0f86d61025a2d2" host="ip-172-31-28-40" Nov 4 04:21:42.133162 containerd[1975]: 2025-11-04 04:21:42.021 [INFO][4902] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.20.196/26] handle="k8s-pod-network.48f22c44182d5d2dadecc38b37dcd0db77bb29f778427748bc0f86d61025a2d2" host="ip-172-31-28-40" Nov 4 04:21:42.133162 containerd[1975]: 2025-11-04 04:21:42.025 [INFO][4902] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 4 04:21:42.133162 containerd[1975]: 2025-11-04 04:21:42.025 [INFO][4902] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.20.196/26] IPv6=[] ContainerID="48f22c44182d5d2dadecc38b37dcd0db77bb29f778427748bc0f86d61025a2d2" HandleID="k8s-pod-network.48f22c44182d5d2dadecc38b37dcd0db77bb29f778427748bc0f86d61025a2d2" Workload="ip--172--31--28--40-k8s-csi--node--driver--bjdkx-eth0" Nov 4 04:21:42.135958 containerd[1975]: 2025-11-04 04:21:42.037 [INFO][4851] cni-plugin/k8s.go 418: Populated endpoint ContainerID="48f22c44182d5d2dadecc38b37dcd0db77bb29f778427748bc0f86d61025a2d2" Namespace="calico-system" Pod="csi-node-driver-bjdkx" WorkloadEndpoint="ip--172--31--28--40-k8s-csi--node--driver--bjdkx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--40-k8s-csi--node--driver--bjdkx-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"27fda10a-3169-4bf6-a620-503cc9dcb069", ResourceVersion:"758", Generation:0, CreationTimestamp:time.Date(2025, time.November, 4, 4, 21, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-40", ContainerID:"", Pod:"csi-node-driver-bjdkx", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.20.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali3432dee96e3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 4 04:21:42.135958 containerd[1975]: 2025-11-04 04:21:42.037 [INFO][4851] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.20.196/32] ContainerID="48f22c44182d5d2dadecc38b37dcd0db77bb29f778427748bc0f86d61025a2d2" Namespace="calico-system" Pod="csi-node-driver-bjdkx" WorkloadEndpoint="ip--172--31--28--40-k8s-csi--node--driver--bjdkx-eth0" Nov 4 04:21:42.135958 containerd[1975]: 2025-11-04 04:21:42.038 [INFO][4851] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3432dee96e3 ContainerID="48f22c44182d5d2dadecc38b37dcd0db77bb29f778427748bc0f86d61025a2d2" Namespace="calico-system" Pod="csi-node-driver-bjdkx" WorkloadEndpoint="ip--172--31--28--40-k8s-csi--node--driver--bjdkx-eth0" Nov 4 04:21:42.135958 containerd[1975]: 2025-11-04 04:21:42.048 [INFO][4851] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="48f22c44182d5d2dadecc38b37dcd0db77bb29f778427748bc0f86d61025a2d2" Namespace="calico-system" Pod="csi-node-driver-bjdkx" WorkloadEndpoint="ip--172--31--28--40-k8s-csi--node--driver--bjdkx-eth0" Nov 4 04:21:42.135958 containerd[1975]: 2025-11-04 04:21:42.057 [INFO][4851] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="48f22c44182d5d2dadecc38b37dcd0db77bb29f778427748bc0f86d61025a2d2" 
Namespace="calico-system" Pod="csi-node-driver-bjdkx" WorkloadEndpoint="ip--172--31--28--40-k8s-csi--node--driver--bjdkx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--40-k8s-csi--node--driver--bjdkx-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"27fda10a-3169-4bf6-a620-503cc9dcb069", ResourceVersion:"758", Generation:0, CreationTimestamp:time.Date(2025, time.November, 4, 4, 21, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-40", ContainerID:"48f22c44182d5d2dadecc38b37dcd0db77bb29f778427748bc0f86d61025a2d2", Pod:"csi-node-driver-bjdkx", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.20.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali3432dee96e3", MAC:"86:9e:99:2f:cf:8c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 4 04:21:42.135958 containerd[1975]: 2025-11-04 04:21:42.113 [INFO][4851] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="48f22c44182d5d2dadecc38b37dcd0db77bb29f778427748bc0f86d61025a2d2" Namespace="calico-system" Pod="csi-node-driver-bjdkx" WorkloadEndpoint="ip--172--31--28--40-k8s-csi--node--driver--bjdkx-eth0" Nov 4 04:21:42.149616 systemd[1]: Started cri-containerd-12b0ecb62d625b1c1736b20cbae0d320f5fd4774f9c716798d51da3746118b9a.scope - libcontainer container 12b0ecb62d625b1c1736b20cbae0d320f5fd4774f9c716798d51da3746118b9a. 
Nov 4 04:21:42.212508 systemd-networkd[1736]: cali5217a0d92e1: Link UP Nov 4 04:21:42.217106 systemd-networkd[1736]: cali5217a0d92e1: Gained carrier Nov 4 04:21:42.251194 containerd[1975]: time="2025-11-04T04:21:42.250847145Z" level=info msg="connecting to shim 48f22c44182d5d2dadecc38b37dcd0db77bb29f778427748bc0f86d61025a2d2" address="unix:///run/containerd/s/2cc46189e393304608ad3a1edde4073ed4c19ea41810fbfbadf73cc2e575835e" namespace=k8s.io protocol=ttrpc version=3 Nov 4 04:21:42.273906 containerd[1975]: 2025-11-04 04:21:41.536 [INFO][4852] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--28--40-k8s-calico--kube--controllers--6fc896cb84--m6mvd-eth0 calico-kube-controllers-6fc896cb84- calico-system 1e2e1aa1-fbd0-4783-998f-e142a3f6eab3 861 0 2025-11-04 04:21:18 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6fc896cb84 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ip-172-31-28-40 calico-kube-controllers-6fc896cb84-m6mvd eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali5217a0d92e1 [] [] }} ContainerID="e655826af8bffb79a297627f1e584ad2578fda84c35d7b88eccb3839af3c0ecf" Namespace="calico-system" Pod="calico-kube-controllers-6fc896cb84-m6mvd" WorkloadEndpoint="ip--172--31--28--40-k8s-calico--kube--controllers--6fc896cb84--m6mvd-" Nov 4 04:21:42.273906 containerd[1975]: 2025-11-04 04:21:41.537 [INFO][4852] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e655826af8bffb79a297627f1e584ad2578fda84c35d7b88eccb3839af3c0ecf" Namespace="calico-system" Pod="calico-kube-controllers-6fc896cb84-m6mvd" WorkloadEndpoint="ip--172--31--28--40-k8s-calico--kube--controllers--6fc896cb84--m6mvd-eth0" Nov 4 04:21:42.273906 containerd[1975]: 2025-11-04 04:21:41.770 [INFO][4907] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e655826af8bffb79a297627f1e584ad2578fda84c35d7b88eccb3839af3c0ecf" HandleID="k8s-pod-network.e655826af8bffb79a297627f1e584ad2578fda84c35d7b88eccb3839af3c0ecf" Workload="ip--172--31--28--40-k8s-calico--kube--controllers--6fc896cb84--m6mvd-eth0" Nov 4 04:21:42.273906 containerd[1975]: 2025-11-04 04:21:41.771 [INFO][4907] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="e655826af8bffb79a297627f1e584ad2578fda84c35d7b88eccb3839af3c0ecf" HandleID="k8s-pod-network.e655826af8bffb79a297627f1e584ad2578fda84c35d7b88eccb3839af3c0ecf" Workload="ip--172--31--28--40-k8s-calico--kube--controllers--6fc896cb84--m6mvd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004de60), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-28-40", "pod":"calico-kube-controllers-6fc896cb84-m6mvd", "timestamp":"2025-11-04 04:21:41.770693543 +0000 UTC"}, Hostname:"ip-172-31-28-40", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 4 04:21:42.273906 containerd[1975]: 2025-11-04 04:21:41.771 [INFO][4907] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 4 04:21:42.273906 containerd[1975]: 2025-11-04 04:21:42.022 [INFO][4907] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 4 04:21:42.273906 containerd[1975]: 2025-11-04 04:21:42.024 [INFO][4907] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-28-40' Nov 4 04:21:42.273906 containerd[1975]: 2025-11-04 04:21:42.083 [INFO][4907] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e655826af8bffb79a297627f1e584ad2578fda84c35d7b88eccb3839af3c0ecf" host="ip-172-31-28-40" Nov 4 04:21:42.273906 containerd[1975]: 2025-11-04 04:21:42.103 [INFO][4907] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-28-40" Nov 4 04:21:42.273906 containerd[1975]: 2025-11-04 04:21:42.125 [INFO][4907] ipam/ipam.go 511: Trying affinity for 192.168.20.192/26 host="ip-172-31-28-40" Nov 4 04:21:42.273906 containerd[1975]: 2025-11-04 04:21:42.133 [INFO][4907] ipam/ipam.go 158: Attempting to load block cidr=192.168.20.192/26 host="ip-172-31-28-40" Nov 4 04:21:42.273906 containerd[1975]: 2025-11-04 04:21:42.145 [INFO][4907] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.20.192/26 host="ip-172-31-28-40" Nov 4 04:21:42.273906 containerd[1975]: 2025-11-04 04:21:42.146 [INFO][4907] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.20.192/26 handle="k8s-pod-network.e655826af8bffb79a297627f1e584ad2578fda84c35d7b88eccb3839af3c0ecf" host="ip-172-31-28-40" Nov 4 04:21:42.273906 containerd[1975]: 2025-11-04 04:21:42.159 [INFO][4907] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.e655826af8bffb79a297627f1e584ad2578fda84c35d7b88eccb3839af3c0ecf Nov 4 04:21:42.273906 containerd[1975]: 2025-11-04 04:21:42.170 [INFO][4907] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.20.192/26 handle="k8s-pod-network.e655826af8bffb79a297627f1e584ad2578fda84c35d7b88eccb3839af3c0ecf" host="ip-172-31-28-40" Nov 4 04:21:42.273906 containerd[1975]: 2025-11-04 04:21:42.192 [INFO][4907] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.20.197/26] block=192.168.20.192/26 handle="k8s-pod-network.e655826af8bffb79a297627f1e584ad2578fda84c35d7b88eccb3839af3c0ecf" host="ip-172-31-28-40" Nov 4 04:21:42.273906 containerd[1975]: 2025-11-04 04:21:42.192 [INFO][4907] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.20.197/26] handle="k8s-pod-network.e655826af8bffb79a297627f1e584ad2578fda84c35d7b88eccb3839af3c0ecf" host="ip-172-31-28-40" Nov 4 04:21:42.273906 containerd[1975]: 2025-11-04 04:21:42.193 [INFO][4907] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
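The same lock/affinity/claim sequence repeats for calico-kube-controllers-6fc896cb84-m6mvd and yields the next ordinal in the block, 192.168.20.197. To pull these assignments out of a log like this one, a small, hypothetical Go helper keyed on the "Successfully claimed IPs" entries is enough (the sample lines are abbreviated copies of the entries above):

    package main

    import (
        "fmt"
        "regexp"
    )

    // claimedRE matches the "Successfully claimed IPs" IPAM entries seen above.
    var claimedRE = regexp.MustCompile(`Successfully claimed IPs: \[([0-9./]+)\] block=([0-9./]+)`)

    func main() {
        // Abbreviated copies of the two entries above.
        lines := []string{
            "ipam/ipam.go 1262: Successfully claimed IPs: [192.168.20.196/26] block=192.168.20.192/26",
            "ipam/ipam.go 1262: Successfully claimed IPs: [192.168.20.197/26] block=192.168.20.192/26",
        }
        for _, l := range lines {
            if m := claimedRE.FindStringSubmatch(l); m != nil {
                fmt.Printf("claimed %s from block %s\n", m[1], m[2])
            }
        }
    }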
Nov 4 04:21:42.273906 containerd[1975]: 2025-11-04 04:21:42.193 [INFO][4907] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.20.197/26] IPv6=[] ContainerID="e655826af8bffb79a297627f1e584ad2578fda84c35d7b88eccb3839af3c0ecf" HandleID="k8s-pod-network.e655826af8bffb79a297627f1e584ad2578fda84c35d7b88eccb3839af3c0ecf" Workload="ip--172--31--28--40-k8s-calico--kube--controllers--6fc896cb84--m6mvd-eth0" Nov 4 04:21:42.274993 containerd[1975]: 2025-11-04 04:21:42.203 [INFO][4852] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e655826af8bffb79a297627f1e584ad2578fda84c35d7b88eccb3839af3c0ecf" Namespace="calico-system" Pod="calico-kube-controllers-6fc896cb84-m6mvd" WorkloadEndpoint="ip--172--31--28--40-k8s-calico--kube--controllers--6fc896cb84--m6mvd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--40-k8s-calico--kube--controllers--6fc896cb84--m6mvd-eth0", GenerateName:"calico-kube-controllers-6fc896cb84-", Namespace:"calico-system", SelfLink:"", UID:"1e2e1aa1-fbd0-4783-998f-e142a3f6eab3", ResourceVersion:"861", Generation:0, CreationTimestamp:time.Date(2025, time.November, 4, 4, 21, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6fc896cb84", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-40", ContainerID:"", Pod:"calico-kube-controllers-6fc896cb84-m6mvd", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.20.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali5217a0d92e1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 4 04:21:42.274993 containerd[1975]: 2025-11-04 04:21:42.204 [INFO][4852] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.20.197/32] ContainerID="e655826af8bffb79a297627f1e584ad2578fda84c35d7b88eccb3839af3c0ecf" Namespace="calico-system" Pod="calico-kube-controllers-6fc896cb84-m6mvd" WorkloadEndpoint="ip--172--31--28--40-k8s-calico--kube--controllers--6fc896cb84--m6mvd-eth0" Nov 4 04:21:42.274993 containerd[1975]: 2025-11-04 04:21:42.205 [INFO][4852] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5217a0d92e1 ContainerID="e655826af8bffb79a297627f1e584ad2578fda84c35d7b88eccb3839af3c0ecf" Namespace="calico-system" Pod="calico-kube-controllers-6fc896cb84-m6mvd" WorkloadEndpoint="ip--172--31--28--40-k8s-calico--kube--controllers--6fc896cb84--m6mvd-eth0" Nov 4 04:21:42.274993 containerd[1975]: 2025-11-04 04:21:42.217 [INFO][4852] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e655826af8bffb79a297627f1e584ad2578fda84c35d7b88eccb3839af3c0ecf" Namespace="calico-system" Pod="calico-kube-controllers-6fc896cb84-m6mvd" WorkloadEndpoint="ip--172--31--28--40-k8s-calico--kube--controllers--6fc896cb84--m6mvd-eth0" Nov 4 04:21:42.274993 containerd[1975]: 2025-11-04 
04:21:42.221 [INFO][4852] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e655826af8bffb79a297627f1e584ad2578fda84c35d7b88eccb3839af3c0ecf" Namespace="calico-system" Pod="calico-kube-controllers-6fc896cb84-m6mvd" WorkloadEndpoint="ip--172--31--28--40-k8s-calico--kube--controllers--6fc896cb84--m6mvd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--40-k8s-calico--kube--controllers--6fc896cb84--m6mvd-eth0", GenerateName:"calico-kube-controllers-6fc896cb84-", Namespace:"calico-system", SelfLink:"", UID:"1e2e1aa1-fbd0-4783-998f-e142a3f6eab3", ResourceVersion:"861", Generation:0, CreationTimestamp:time.Date(2025, time.November, 4, 4, 21, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6fc896cb84", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-40", ContainerID:"e655826af8bffb79a297627f1e584ad2578fda84c35d7b88eccb3839af3c0ecf", Pod:"calico-kube-controllers-6fc896cb84-m6mvd", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.20.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali5217a0d92e1", MAC:"82:d7:1f:67:2e:43", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 4 04:21:42.274993 containerd[1975]: 2025-11-04 04:21:42.259 [INFO][4852] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e655826af8bffb79a297627f1e584ad2578fda84c35d7b88eccb3839af3c0ecf" Namespace="calico-system" Pod="calico-kube-controllers-6fc896cb84-m6mvd" WorkloadEndpoint="ip--172--31--28--40-k8s-calico--kube--controllers--6fc896cb84--m6mvd-eth0" Nov 4 04:21:42.369657 systemd[1]: Started cri-containerd-48f22c44182d5d2dadecc38b37dcd0db77bb29f778427748bc0f86d61025a2d2.scope - libcontainer container 48f22c44182d5d2dadecc38b37dcd0db77bb29f778427748bc0f86d61025a2d2. Nov 4 04:21:42.384800 containerd[1975]: time="2025-11-04T04:21:42.384297250Z" level=info msg="connecting to shim e655826af8bffb79a297627f1e584ad2578fda84c35d7b88eccb3839af3c0ecf" address="unix:///run/containerd/s/cbdc2293597d9a9f24f65f6c4f64c5d87d70fa334f007ee25f6279446224f9a7" namespace=k8s.io protocol=ttrpc version=3 Nov 4 04:21:42.486756 systemd[1]: Started cri-containerd-e655826af8bffb79a297627f1e584ad2578fda84c35d7b88eccb3839af3c0ecf.scope - libcontainer container e655826af8bffb79a297627f1e584ad2578fda84c35d7b88eccb3839af3c0ecf. 
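Once the endpoint is written, containerd reports each sandbox's shim address as a unix socket under /run/containerd/s/ and systemd starts the matching .scope unit. A minimal reachability check for the socket path copied from the log; it only dials the socket and does not speak the ttrpc protocol, and it assumes it runs on the node itself with enough privileges to reach /run/containerd:

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // Shim endpoint copied from the "connecting to shim" entry above.
        const sock = "/run/containerd/s/cbdc2293597d9a9f24f65f6c4f64c5d87d70fa334f007ee25f6279446224f9a7"

        conn, err := net.DialTimeout("unix", sock, 2*time.Second)
        if err != nil {
            fmt.Println("shim socket not reachable:", err)
            return
        }
        conn.Close()
        fmt.Println("shim socket reachable:", sock)
    }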
Nov 4 04:21:42.539889 containerd[1975]: time="2025-11-04T04:21:42.539351362Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67f8f67444-mj5dt,Uid:a361dba4-7339-43be-b37d-2bd7902bcd31,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"12b0ecb62d625b1c1736b20cbae0d320f5fd4774f9c716798d51da3746118b9a\"" Nov 4 04:21:42.543740 containerd[1975]: time="2025-11-04T04:21:42.543677842Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 4 04:21:42.561386 containerd[1975]: time="2025-11-04T04:21:42.561150226Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bjdkx,Uid:27fda10a-3169-4bf6-a620-503cc9dcb069,Namespace:calico-system,Attempt:0,} returns sandbox id \"48f22c44182d5d2dadecc38b37dcd0db77bb29f778427748bc0f86d61025a2d2\"" Nov 4 04:21:42.606283 containerd[1975]: time="2025-11-04T04:21:42.606201767Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6fc896cb84-m6mvd,Uid:1e2e1aa1-fbd0-4783-998f-e142a3f6eab3,Namespace:calico-system,Attempt:0,} returns sandbox id \"e655826af8bffb79a297627f1e584ad2578fda84c35d7b88eccb3839af3c0ecf\"" Nov 4 04:21:42.800372 containerd[1975]: time="2025-11-04T04:21:42.799908336Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 04:21:42.802899 containerd[1975]: time="2025-11-04T04:21:42.802770600Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 4 04:21:42.802899 containerd[1975]: time="2025-11-04T04:21:42.802841040Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Nov 4 04:21:42.803152 kubelet[3429]: E1104 04:21:42.803063 3429 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 4 04:21:42.803152 kubelet[3429]: E1104 04:21:42.803118 3429 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 4 04:21:42.803714 kubelet[3429]: E1104 04:21:42.803436 3429 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fvnjd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-67f8f67444-mj5dt_calico-apiserver(a361dba4-7339-43be-b37d-2bd7902bcd31): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 4 04:21:42.804948 kubelet[3429]: E1104 04:21:42.804571 3429 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-67f8f67444-mj5dt" podUID="a361dba4-7339-43be-b37d-2bd7902bcd31" Nov 4 04:21:42.805307 containerd[1975]: time="2025-11-04T04:21:42.804675360Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 4 04:21:43.070561 containerd[1975]: time="2025-11-04T04:21:43.069782397Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 04:21:43.073008 containerd[1975]: time="2025-11-04T04:21:43.072949413Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 4 04:21:43.073306 containerd[1975]: time="2025-11-04T04:21:43.073138989Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Nov 4 04:21:43.073686 kubelet[3429]: E1104 04:21:43.073604 3429 
log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 4 04:21:43.073686 kubelet[3429]: E1104 04:21:43.073670 3429 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 4 04:21:43.074095 containerd[1975]: time="2025-11-04T04:21:43.074042097Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 4 04:21:43.074574 kubelet[3429]: E1104 04:21:43.074488 3429 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-n7p6p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-bjdkx_calico-system(27fda10a-3169-4bf6-a620-503cc9dcb069): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 4 04:21:43.322108 containerd[1975]: time="2025-11-04T04:21:43.320845222Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67f8f67444-smqxz,Uid:073104c0-4d4a-4e6b-bb61-421cfcd8940e,Namespace:calico-apiserver,Attempt:0,}" Nov 4 04:21:43.322948 containerd[1975]: time="2025-11-04T04:21:43.322905730Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-x7r2n,Uid:3bfc783e-7624-4984-a658-a4dceb99c885,Namespace:calico-system,Attempt:0,}" Nov 4 04:21:43.323195 containerd[1975]: 
time="2025-11-04T04:21:43.322916602Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-78l67,Uid:a6a13d29-203e-4ccf-93b9-8514188fd7d2,Namespace:kube-system,Attempt:0,}" Nov 4 04:21:43.369870 containerd[1975]: time="2025-11-04T04:21:43.369457018Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 04:21:43.382937 containerd[1975]: time="2025-11-04T04:21:43.382133039Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 4 04:21:43.386273 kubelet[3429]: E1104 04:21:43.385007 3429 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 4 04:21:43.386273 kubelet[3429]: E1104 04:21:43.385467 3429 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 4 04:21:43.386273 kubelet[3429]: E1104 04:21:43.385822 3429 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cr6tl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-6fc896cb84-m6mvd_calico-system(1e2e1aa1-fbd0-4783-998f-e142a3f6eab3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 4 04:21:43.388137 kubelet[3429]: E1104 04:21:43.387669 3429 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6fc896cb84-m6mvd" podUID="1e2e1aa1-fbd0-4783-998f-e142a3f6eab3" Nov 4 04:21:43.390333 containerd[1975]: time="2025-11-04T04:21:43.384460091Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Nov 4 04:21:43.390643 containerd[1975]: time="2025-11-04T04:21:43.390590711Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 4 04:21:43.484570 systemd-networkd[1736]: cali5217a0d92e1: Gained IPv6LL Nov 4 04:21:43.613863 systemd-networkd[1736]: cali3432dee96e3: Gained IPv6LL Nov 4 04:21:43.685212 containerd[1975]: time="2025-11-04T04:21:43.685160280Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 04:21:43.691361 containerd[1975]: time="2025-11-04T04:21:43.690912060Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 4 04:21:43.691712 containerd[1975]: time="2025-11-04T04:21:43.691648068Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Nov 4 04:21:43.693137 kubelet[3429]: E1104 04:21:43.693006 3429 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 4 04:21:43.693705 kubelet[3429]: E1104 04:21:43.693467 3429 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 4 04:21:43.695295 kubelet[3429]: E1104 04:21:43.695183 3429 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-n7p6p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-bjdkx_calico-system(27fda10a-3169-4bf6-a620-503cc9dcb069): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 4 04:21:43.696629 kubelet[3429]: E1104 04:21:43.696476 3429 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-bjdkx" podUID="27fda10a-3169-4bf6-a620-503cc9dcb069" Nov 4 04:21:43.783085 kubelet[3429]: E1104 04:21:43.782748 3429 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6fc896cb84-m6mvd" podUID="1e2e1aa1-fbd0-4783-998f-e142a3f6eab3" Nov 4 04:21:43.783085 kubelet[3429]: E1104 04:21:43.783034 3429 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-67f8f67444-mj5dt" podUID="a361dba4-7339-43be-b37d-2bd7902bcd31" Nov 4 04:21:43.789356 kubelet[3429]: E1104 04:21:43.788691 3429 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-bjdkx" podUID="27fda10a-3169-4bf6-a620-503cc9dcb069" Nov 4 04:21:43.828032 systemd[1]: Started sshd@7-172.31.28.40:22-147.75.109.163:38156.service - OpenSSH per-connection server daemon (147.75.109.163:38156). 
Nov 4 04:21:43.868904 systemd-networkd[1736]: calibef89e115ec: Gained IPv6LL Nov 4 04:21:44.025651 systemd-networkd[1736]: cali17faebc2fbe: Link UP Nov 4 04:21:44.032154 systemd-networkd[1736]: cali17faebc2fbe: Gained carrier Nov 4 04:21:44.098094 containerd[1975]: 2025-11-04 04:21:43.513 [INFO][5104] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--28--40-k8s-goldmane--666569f655--x7r2n-eth0 goldmane-666569f655- calico-system 3bfc783e-7624-4984-a658-a4dceb99c885 862 0 2025-11-04 04:21:14 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ip-172-31-28-40 goldmane-666569f655-x7r2n eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali17faebc2fbe [] [] }} ContainerID="b4a8e0f09bd93fcad786c737373b606fbeeea41a73ddbeba1e64c82292cde324" Namespace="calico-system" Pod="goldmane-666569f655-x7r2n" WorkloadEndpoint="ip--172--31--28--40-k8s-goldmane--666569f655--x7r2n-" Nov 4 04:21:44.098094 containerd[1975]: 2025-11-04 04:21:43.513 [INFO][5104] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b4a8e0f09bd93fcad786c737373b606fbeeea41a73ddbeba1e64c82292cde324" Namespace="calico-system" Pod="goldmane-666569f655-x7r2n" WorkloadEndpoint="ip--172--31--28--40-k8s-goldmane--666569f655--x7r2n-eth0" Nov 4 04:21:44.098094 containerd[1975]: 2025-11-04 04:21:43.687 [INFO][5142] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b4a8e0f09bd93fcad786c737373b606fbeeea41a73ddbeba1e64c82292cde324" HandleID="k8s-pod-network.b4a8e0f09bd93fcad786c737373b606fbeeea41a73ddbeba1e64c82292cde324" Workload="ip--172--31--28--40-k8s-goldmane--666569f655--x7r2n-eth0" Nov 4 04:21:44.098094 containerd[1975]: 2025-11-04 04:21:43.689 [INFO][5142] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="b4a8e0f09bd93fcad786c737373b606fbeeea41a73ddbeba1e64c82292cde324" HandleID="k8s-pod-network.b4a8e0f09bd93fcad786c737373b606fbeeea41a73ddbeba1e64c82292cde324" Workload="ip--172--31--28--40-k8s-goldmane--666569f655--x7r2n-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000330400), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-28-40", "pod":"goldmane-666569f655-x7r2n", "timestamp":"2025-11-04 04:21:43.687958596 +0000 UTC"}, Hostname:"ip-172-31-28-40", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 4 04:21:44.098094 containerd[1975]: 2025-11-04 04:21:43.689 [INFO][5142] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 4 04:21:44.098094 containerd[1975]: 2025-11-04 04:21:43.689 [INFO][5142] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 4 04:21:44.098094 containerd[1975]: 2025-11-04 04:21:43.689 [INFO][5142] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-28-40' Nov 4 04:21:44.098094 containerd[1975]: 2025-11-04 04:21:43.754 [INFO][5142] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b4a8e0f09bd93fcad786c737373b606fbeeea41a73ddbeba1e64c82292cde324" host="ip-172-31-28-40" Nov 4 04:21:44.098094 containerd[1975]: 2025-11-04 04:21:43.787 [INFO][5142] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-28-40" Nov 4 04:21:44.098094 containerd[1975]: 2025-11-04 04:21:43.803 [INFO][5142] ipam/ipam.go 511: Trying affinity for 192.168.20.192/26 host="ip-172-31-28-40" Nov 4 04:21:44.098094 containerd[1975]: 2025-11-04 04:21:43.833 [INFO][5142] ipam/ipam.go 158: Attempting to load block cidr=192.168.20.192/26 host="ip-172-31-28-40" Nov 4 04:21:44.098094 containerd[1975]: 2025-11-04 04:21:43.894 [INFO][5142] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.20.192/26 host="ip-172-31-28-40" Nov 4 04:21:44.098094 containerd[1975]: 2025-11-04 04:21:43.898 [INFO][5142] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.20.192/26 handle="k8s-pod-network.b4a8e0f09bd93fcad786c737373b606fbeeea41a73ddbeba1e64c82292cde324" host="ip-172-31-28-40" Nov 4 04:21:44.098094 containerd[1975]: 2025-11-04 04:21:43.910 [INFO][5142] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.b4a8e0f09bd93fcad786c737373b606fbeeea41a73ddbeba1e64c82292cde324 Nov 4 04:21:44.098094 containerd[1975]: 2025-11-04 04:21:43.933 [INFO][5142] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.20.192/26 handle="k8s-pod-network.b4a8e0f09bd93fcad786c737373b606fbeeea41a73ddbeba1e64c82292cde324" host="ip-172-31-28-40" Nov 4 04:21:44.098094 containerd[1975]: 2025-11-04 04:21:43.974 [INFO][5142] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.20.198/26] block=192.168.20.192/26 handle="k8s-pod-network.b4a8e0f09bd93fcad786c737373b606fbeeea41a73ddbeba1e64c82292cde324" host="ip-172-31-28-40" Nov 4 04:21:44.098094 containerd[1975]: 2025-11-04 04:21:43.974 [INFO][5142] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.20.198/26] handle="k8s-pod-network.b4a8e0f09bd93fcad786c737373b606fbeeea41a73ddbeba1e64c82292cde324" host="ip-172-31-28-40" Nov 4 04:21:44.098094 containerd[1975]: 2025-11-04 04:21:43.975 [INFO][5142] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
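goldmane-666569f655-x7r2n goes through the identical IPAM sequence and receives 192.168.20.198, the next ordinal after the two assignments above. A simplified free-address walk over the affine block shows why the results come out sequentially; the lower ordinals are assumed to have been taken by earlier workloads on this node, and this is not Calico's actual allocator:

    package main

    import (
        "fmt"
        "net/netip"
    )

    func main() {
        block := netip.MustParsePrefix("192.168.20.192/26")
        used := map[netip.Addr]bool{
            netip.MustParseAddr("192.168.20.192"): true, // assumed earlier allocations on this node
            netip.MustParseAddr("192.168.20.193"): true,
            netip.MustParseAddr("192.168.20.194"): true,
            netip.MustParseAddr("192.168.20.195"): true,
            netip.MustParseAddr("192.168.20.196"): true, // csi-node-driver-bjdkx (above)
            netip.MustParseAddr("192.168.20.197"): true, // calico-kube-controllers (above)
        }
        // Walk the block and report the first unused address.
        for a := block.Addr(); block.Contains(a); a = a.Next() {
            if !used[a] {
                fmt.Println("next free address:", a) // 192.168.20.198, matching goldmane above
                break
            }
        }
    }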
Nov 4 04:21:44.098094 containerd[1975]: 2025-11-04 04:21:43.975 [INFO][5142] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.20.198/26] IPv6=[] ContainerID="b4a8e0f09bd93fcad786c737373b606fbeeea41a73ddbeba1e64c82292cde324" HandleID="k8s-pod-network.b4a8e0f09bd93fcad786c737373b606fbeeea41a73ddbeba1e64c82292cde324" Workload="ip--172--31--28--40-k8s-goldmane--666569f655--x7r2n-eth0" Nov 4 04:21:44.100868 containerd[1975]: 2025-11-04 04:21:43.987 [INFO][5104] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b4a8e0f09bd93fcad786c737373b606fbeeea41a73ddbeba1e64c82292cde324" Namespace="calico-system" Pod="goldmane-666569f655-x7r2n" WorkloadEndpoint="ip--172--31--28--40-k8s-goldmane--666569f655--x7r2n-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--40-k8s-goldmane--666569f655--x7r2n-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"3bfc783e-7624-4984-a658-a4dceb99c885", ResourceVersion:"862", Generation:0, CreationTimestamp:time.Date(2025, time.November, 4, 4, 21, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-40", ContainerID:"", Pod:"goldmane-666569f655-x7r2n", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.20.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali17faebc2fbe", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 4 04:21:44.100868 containerd[1975]: 2025-11-04 04:21:43.988 [INFO][5104] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.20.198/32] ContainerID="b4a8e0f09bd93fcad786c737373b606fbeeea41a73ddbeba1e64c82292cde324" Namespace="calico-system" Pod="goldmane-666569f655-x7r2n" WorkloadEndpoint="ip--172--31--28--40-k8s-goldmane--666569f655--x7r2n-eth0" Nov 4 04:21:44.100868 containerd[1975]: 2025-11-04 04:21:43.988 [INFO][5104] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali17faebc2fbe ContainerID="b4a8e0f09bd93fcad786c737373b606fbeeea41a73ddbeba1e64c82292cde324" Namespace="calico-system" Pod="goldmane-666569f655-x7r2n" WorkloadEndpoint="ip--172--31--28--40-k8s-goldmane--666569f655--x7r2n-eth0" Nov 4 04:21:44.100868 containerd[1975]: 2025-11-04 04:21:44.044 [INFO][5104] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b4a8e0f09bd93fcad786c737373b606fbeeea41a73ddbeba1e64c82292cde324" Namespace="calico-system" Pod="goldmane-666569f655-x7r2n" WorkloadEndpoint="ip--172--31--28--40-k8s-goldmane--666569f655--x7r2n-eth0" Nov 4 04:21:44.100868 containerd[1975]: 2025-11-04 04:21:44.045 [INFO][5104] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b4a8e0f09bd93fcad786c737373b606fbeeea41a73ddbeba1e64c82292cde324" Namespace="calico-system" Pod="goldmane-666569f655-x7r2n" 
WorkloadEndpoint="ip--172--31--28--40-k8s-goldmane--666569f655--x7r2n-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--40-k8s-goldmane--666569f655--x7r2n-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"3bfc783e-7624-4984-a658-a4dceb99c885", ResourceVersion:"862", Generation:0, CreationTimestamp:time.Date(2025, time.November, 4, 4, 21, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-40", ContainerID:"b4a8e0f09bd93fcad786c737373b606fbeeea41a73ddbeba1e64c82292cde324", Pod:"goldmane-666569f655-x7r2n", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.20.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali17faebc2fbe", MAC:"36:7f:98:0b:9b:2f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 4 04:21:44.100868 containerd[1975]: 2025-11-04 04:21:44.088 [INFO][5104] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b4a8e0f09bd93fcad786c737373b606fbeeea41a73ddbeba1e64c82292cde324" Namespace="calico-system" Pod="goldmane-666569f655-x7r2n" WorkloadEndpoint="ip--172--31--28--40-k8s-goldmane--666569f655--x7r2n-eth0" Nov 4 04:21:44.115447 sshd[5168]: Accepted publickey for core from 147.75.109.163 port 38156 ssh2: RSA SHA256:qS1XafufLHAd70xqW4Vvg5dQ+JBmoPq0koj1S4P4qJk Nov 4 04:21:44.120196 sshd-session[5168]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 04:21:44.139418 systemd-logind[1946]: New session 8 of user core. Nov 4 04:21:44.149662 systemd[1]: Started session-8.scope - Session 8 of User core. 
Nov 4 04:21:44.208174 systemd-networkd[1736]: calid7c840f7eca: Link UP Nov 4 04:21:44.217688 systemd-networkd[1736]: calid7c840f7eca: Gained carrier Nov 4 04:21:44.249112 containerd[1975]: time="2025-11-04T04:21:44.248611127Z" level=info msg="connecting to shim b4a8e0f09bd93fcad786c737373b606fbeeea41a73ddbeba1e64c82292cde324" address="unix:///run/containerd/s/775bfbcb247de0f014b403f4a6fcd9e51f410ac14aa3d258dc574685095be956" namespace=k8s.io protocol=ttrpc version=3 Nov 4 04:21:44.277733 containerd[1975]: 2025-11-04 04:21:43.546 [INFO][5118] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--28--40-k8s-coredns--674b8bbfcf--78l67-eth0 coredns-674b8bbfcf- kube-system a6a13d29-203e-4ccf-93b9-8514188fd7d2 859 0 2025-11-04 04:20:42 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-28-40 coredns-674b8bbfcf-78l67 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calid7c840f7eca [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="40c02244ee598e7136ce9d2363be8567da36616862fb998feda4b494e21ec033" Namespace="kube-system" Pod="coredns-674b8bbfcf-78l67" WorkloadEndpoint="ip--172--31--28--40-k8s-coredns--674b8bbfcf--78l67-" Nov 4 04:21:44.277733 containerd[1975]: 2025-11-04 04:21:43.546 [INFO][5118] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="40c02244ee598e7136ce9d2363be8567da36616862fb998feda4b494e21ec033" Namespace="kube-system" Pod="coredns-674b8bbfcf-78l67" WorkloadEndpoint="ip--172--31--28--40-k8s-coredns--674b8bbfcf--78l67-eth0" Nov 4 04:21:44.277733 containerd[1975]: 2025-11-04 04:21:43.734 [INFO][5154] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="40c02244ee598e7136ce9d2363be8567da36616862fb998feda4b494e21ec033" HandleID="k8s-pod-network.40c02244ee598e7136ce9d2363be8567da36616862fb998feda4b494e21ec033" Workload="ip--172--31--28--40-k8s-coredns--674b8bbfcf--78l67-eth0" Nov 4 04:21:44.277733 containerd[1975]: 2025-11-04 04:21:43.735 [INFO][5154] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="40c02244ee598e7136ce9d2363be8567da36616862fb998feda4b494e21ec033" HandleID="k8s-pod-network.40c02244ee598e7136ce9d2363be8567da36616862fb998feda4b494e21ec033" Workload="ip--172--31--28--40-k8s-coredns--674b8bbfcf--78l67-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004d610), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-28-40", "pod":"coredns-674b8bbfcf-78l67", "timestamp":"2025-11-04 04:21:43.734130912 +0000 UTC"}, Hostname:"ip-172-31-28-40", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 4 04:21:44.277733 containerd[1975]: 2025-11-04 04:21:43.736 [INFO][5154] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 4 04:21:44.277733 containerd[1975]: 2025-11-04 04:21:43.975 [INFO][5154] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 4 04:21:44.277733 containerd[1975]: 2025-11-04 04:21:43.979 [INFO][5154] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-28-40' Nov 4 04:21:44.277733 containerd[1975]: 2025-11-04 04:21:44.049 [INFO][5154] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.40c02244ee598e7136ce9d2363be8567da36616862fb998feda4b494e21ec033" host="ip-172-31-28-40" Nov 4 04:21:44.277733 containerd[1975]: 2025-11-04 04:21:44.076 [INFO][5154] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-28-40" Nov 4 04:21:44.277733 containerd[1975]: 2025-11-04 04:21:44.098 [INFO][5154] ipam/ipam.go 511: Trying affinity for 192.168.20.192/26 host="ip-172-31-28-40" Nov 4 04:21:44.277733 containerd[1975]: 2025-11-04 04:21:44.106 [INFO][5154] ipam/ipam.go 158: Attempting to load block cidr=192.168.20.192/26 host="ip-172-31-28-40" Nov 4 04:21:44.277733 containerd[1975]: 2025-11-04 04:21:44.131 [INFO][5154] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.20.192/26 host="ip-172-31-28-40" Nov 4 04:21:44.277733 containerd[1975]: 2025-11-04 04:21:44.133 [INFO][5154] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.20.192/26 handle="k8s-pod-network.40c02244ee598e7136ce9d2363be8567da36616862fb998feda4b494e21ec033" host="ip-172-31-28-40" Nov 4 04:21:44.277733 containerd[1975]: 2025-11-04 04:21:44.136 [INFO][5154] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.40c02244ee598e7136ce9d2363be8567da36616862fb998feda4b494e21ec033 Nov 4 04:21:44.277733 containerd[1975]: 2025-11-04 04:21:44.156 [INFO][5154] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.20.192/26 handle="k8s-pod-network.40c02244ee598e7136ce9d2363be8567da36616862fb998feda4b494e21ec033" host="ip-172-31-28-40" Nov 4 04:21:44.277733 containerd[1975]: 2025-11-04 04:21:44.183 [INFO][5154] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.20.199/26] block=192.168.20.192/26 handle="k8s-pod-network.40c02244ee598e7136ce9d2363be8567da36616862fb998feda4b494e21ec033" host="ip-172-31-28-40" Nov 4 04:21:44.277733 containerd[1975]: 2025-11-04 04:21:44.183 [INFO][5154] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.20.199/26] handle="k8s-pod-network.40c02244ee598e7136ce9d2363be8567da36616862fb998feda4b494e21ec033" host="ip-172-31-28-40" Nov 4 04:21:44.277733 containerd[1975]: 2025-11-04 04:21:44.183 [INFO][5154] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 4 04:21:44.277733 containerd[1975]: 2025-11-04 04:21:44.183 [INFO][5154] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.20.199/26] IPv6=[] ContainerID="40c02244ee598e7136ce9d2363be8567da36616862fb998feda4b494e21ec033" HandleID="k8s-pod-network.40c02244ee598e7136ce9d2363be8567da36616862fb998feda4b494e21ec033" Workload="ip--172--31--28--40-k8s-coredns--674b8bbfcf--78l67-eth0" Nov 4 04:21:44.282677 containerd[1975]: 2025-11-04 04:21:44.197 [INFO][5118] cni-plugin/k8s.go 418: Populated endpoint ContainerID="40c02244ee598e7136ce9d2363be8567da36616862fb998feda4b494e21ec033" Namespace="kube-system" Pod="coredns-674b8bbfcf-78l67" WorkloadEndpoint="ip--172--31--28--40-k8s-coredns--674b8bbfcf--78l67-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--40-k8s-coredns--674b8bbfcf--78l67-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"a6a13d29-203e-4ccf-93b9-8514188fd7d2", ResourceVersion:"859", Generation:0, CreationTimestamp:time.Date(2025, time.November, 4, 4, 20, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-40", ContainerID:"", Pod:"coredns-674b8bbfcf-78l67", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.20.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid7c840f7eca", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 4 04:21:44.282677 containerd[1975]: 2025-11-04 04:21:44.198 [INFO][5118] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.20.199/32] ContainerID="40c02244ee598e7136ce9d2363be8567da36616862fb998feda4b494e21ec033" Namespace="kube-system" Pod="coredns-674b8bbfcf-78l67" WorkloadEndpoint="ip--172--31--28--40-k8s-coredns--674b8bbfcf--78l67-eth0" Nov 4 04:21:44.282677 containerd[1975]: 2025-11-04 04:21:44.198 [INFO][5118] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid7c840f7eca ContainerID="40c02244ee598e7136ce9d2363be8567da36616862fb998feda4b494e21ec033" Namespace="kube-system" Pod="coredns-674b8bbfcf-78l67" WorkloadEndpoint="ip--172--31--28--40-k8s-coredns--674b8bbfcf--78l67-eth0" Nov 4 04:21:44.282677 containerd[1975]: 2025-11-04 04:21:44.222 [INFO][5118] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="40c02244ee598e7136ce9d2363be8567da36616862fb998feda4b494e21ec033" Namespace="kube-system" Pod="coredns-674b8bbfcf-78l67" 
WorkloadEndpoint="ip--172--31--28--40-k8s-coredns--674b8bbfcf--78l67-eth0" Nov 4 04:21:44.282677 containerd[1975]: 2025-11-04 04:21:44.226 [INFO][5118] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="40c02244ee598e7136ce9d2363be8567da36616862fb998feda4b494e21ec033" Namespace="kube-system" Pod="coredns-674b8bbfcf-78l67" WorkloadEndpoint="ip--172--31--28--40-k8s-coredns--674b8bbfcf--78l67-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--40-k8s-coredns--674b8bbfcf--78l67-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"a6a13d29-203e-4ccf-93b9-8514188fd7d2", ResourceVersion:"859", Generation:0, CreationTimestamp:time.Date(2025, time.November, 4, 4, 20, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-40", ContainerID:"40c02244ee598e7136ce9d2363be8567da36616862fb998feda4b494e21ec033", Pod:"coredns-674b8bbfcf-78l67", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.20.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid7c840f7eca", MAC:"1e:b5:56:d3:6b:a5", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 4 04:21:44.282677 containerd[1975]: 2025-11-04 04:21:44.250 [INFO][5118] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="40c02244ee598e7136ce9d2363be8567da36616862fb998feda4b494e21ec033" Namespace="kube-system" Pod="coredns-674b8bbfcf-78l67" WorkloadEndpoint="ip--172--31--28--40-k8s-coredns--674b8bbfcf--78l67-eth0" Nov 4 04:21:44.410935 systemd[1]: Started cri-containerd-b4a8e0f09bd93fcad786c737373b606fbeeea41a73ddbeba1e64c82292cde324.scope - libcontainer container b4a8e0f09bd93fcad786c737373b606fbeeea41a73ddbeba1e64c82292cde324. 
Nov 4 04:21:44.478775 systemd-networkd[1736]: cali0183566c024: Link UP Nov 4 04:21:44.483507 systemd-networkd[1736]: cali0183566c024: Gained carrier Nov 4 04:21:44.549348 containerd[1975]: time="2025-11-04T04:21:44.547687872Z" level=info msg="connecting to shim 40c02244ee598e7136ce9d2363be8567da36616862fb998feda4b494e21ec033" address="unix:///run/containerd/s/4ed25ca64f8422fce72f07752e686c945893553a0d03829dec503a99a6661f95" namespace=k8s.io protocol=ttrpc version=3 Nov 4 04:21:44.552227 containerd[1975]: 2025-11-04 04:21:43.740 [INFO][5108] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--28--40-k8s-calico--apiserver--67f8f67444--smqxz-eth0 calico-apiserver-67f8f67444- calico-apiserver 073104c0-4d4a-4e6b-bb61-421cfcd8940e 860 0 2025-11-04 04:20:59 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:67f8f67444 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-28-40 calico-apiserver-67f8f67444-smqxz eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali0183566c024 [] [] }} ContainerID="aeb87d2bf00b115143802b4f481caeecb040256226a37e2794a0f0b56ec6d03d" Namespace="calico-apiserver" Pod="calico-apiserver-67f8f67444-smqxz" WorkloadEndpoint="ip--172--31--28--40-k8s-calico--apiserver--67f8f67444--smqxz-" Nov 4 04:21:44.552227 containerd[1975]: 2025-11-04 04:21:43.740 [INFO][5108] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="aeb87d2bf00b115143802b4f481caeecb040256226a37e2794a0f0b56ec6d03d" Namespace="calico-apiserver" Pod="calico-apiserver-67f8f67444-smqxz" WorkloadEndpoint="ip--172--31--28--40-k8s-calico--apiserver--67f8f67444--smqxz-eth0" Nov 4 04:21:44.552227 containerd[1975]: 2025-11-04 04:21:44.047 [INFO][5163] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="aeb87d2bf00b115143802b4f481caeecb040256226a37e2794a0f0b56ec6d03d" HandleID="k8s-pod-network.aeb87d2bf00b115143802b4f481caeecb040256226a37e2794a0f0b56ec6d03d" Workload="ip--172--31--28--40-k8s-calico--apiserver--67f8f67444--smqxz-eth0" Nov 4 04:21:44.552227 containerd[1975]: 2025-11-04 04:21:44.047 [INFO][5163] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="aeb87d2bf00b115143802b4f481caeecb040256226a37e2794a0f0b56ec6d03d" HandleID="k8s-pod-network.aeb87d2bf00b115143802b4f481caeecb040256226a37e2794a0f0b56ec6d03d" Workload="ip--172--31--28--40-k8s-calico--apiserver--67f8f67444--smqxz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004c6c0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-28-40", "pod":"calico-apiserver-67f8f67444-smqxz", "timestamp":"2025-11-04 04:21:44.047028202 +0000 UTC"}, Hostname:"ip-172-31-28-40", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 4 04:21:44.552227 containerd[1975]: 2025-11-04 04:21:44.047 [INFO][5163] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 4 04:21:44.552227 containerd[1975]: 2025-11-04 04:21:44.183 [INFO][5163] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 4 04:21:44.552227 containerd[1975]: 2025-11-04 04:21:44.183 [INFO][5163] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-28-40' Nov 4 04:21:44.552227 containerd[1975]: 2025-11-04 04:21:44.254 [INFO][5163] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.aeb87d2bf00b115143802b4f481caeecb040256226a37e2794a0f0b56ec6d03d" host="ip-172-31-28-40" Nov 4 04:21:44.552227 containerd[1975]: 2025-11-04 04:21:44.294 [INFO][5163] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-28-40" Nov 4 04:21:44.552227 containerd[1975]: 2025-11-04 04:21:44.304 [INFO][5163] ipam/ipam.go 511: Trying affinity for 192.168.20.192/26 host="ip-172-31-28-40" Nov 4 04:21:44.552227 containerd[1975]: 2025-11-04 04:21:44.314 [INFO][5163] ipam/ipam.go 158: Attempting to load block cidr=192.168.20.192/26 host="ip-172-31-28-40" Nov 4 04:21:44.552227 containerd[1975]: 2025-11-04 04:21:44.341 [INFO][5163] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.20.192/26 host="ip-172-31-28-40" Nov 4 04:21:44.552227 containerd[1975]: 2025-11-04 04:21:44.342 [INFO][5163] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.20.192/26 handle="k8s-pod-network.aeb87d2bf00b115143802b4f481caeecb040256226a37e2794a0f0b56ec6d03d" host="ip-172-31-28-40" Nov 4 04:21:44.552227 containerd[1975]: 2025-11-04 04:21:44.355 [INFO][5163] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.aeb87d2bf00b115143802b4f481caeecb040256226a37e2794a0f0b56ec6d03d Nov 4 04:21:44.552227 containerd[1975]: 2025-11-04 04:21:44.380 [INFO][5163] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.20.192/26 handle="k8s-pod-network.aeb87d2bf00b115143802b4f481caeecb040256226a37e2794a0f0b56ec6d03d" host="ip-172-31-28-40" Nov 4 04:21:44.552227 containerd[1975]: 2025-11-04 04:21:44.433 [INFO][5163] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.20.200/26] block=192.168.20.192/26 handle="k8s-pod-network.aeb87d2bf00b115143802b4f481caeecb040256226a37e2794a0f0b56ec6d03d" host="ip-172-31-28-40" Nov 4 04:21:44.552227 containerd[1975]: 2025-11-04 04:21:44.437 [INFO][5163] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.20.200/26] handle="k8s-pod-network.aeb87d2bf00b115143802b4f481caeecb040256226a37e2794a0f0b56ec6d03d" host="ip-172-31-28-40" Nov 4 04:21:44.552227 containerd[1975]: 2025-11-04 04:21:44.440 [INFO][5163] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 4 04:21:44.552227 containerd[1975]: 2025-11-04 04:21:44.440 [INFO][5163] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.20.200/26] IPv6=[] ContainerID="aeb87d2bf00b115143802b4f481caeecb040256226a37e2794a0f0b56ec6d03d" HandleID="k8s-pod-network.aeb87d2bf00b115143802b4f481caeecb040256226a37e2794a0f0b56ec6d03d" Workload="ip--172--31--28--40-k8s-calico--apiserver--67f8f67444--smqxz-eth0" Nov 4 04:21:44.555503 containerd[1975]: 2025-11-04 04:21:44.456 [INFO][5108] cni-plugin/k8s.go 418: Populated endpoint ContainerID="aeb87d2bf00b115143802b4f481caeecb040256226a37e2794a0f0b56ec6d03d" Namespace="calico-apiserver" Pod="calico-apiserver-67f8f67444-smqxz" WorkloadEndpoint="ip--172--31--28--40-k8s-calico--apiserver--67f8f67444--smqxz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--40-k8s-calico--apiserver--67f8f67444--smqxz-eth0", GenerateName:"calico-apiserver-67f8f67444-", Namespace:"calico-apiserver", SelfLink:"", UID:"073104c0-4d4a-4e6b-bb61-421cfcd8940e", ResourceVersion:"860", Generation:0, CreationTimestamp:time.Date(2025, time.November, 4, 4, 20, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"67f8f67444", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-40", ContainerID:"", Pod:"calico-apiserver-67f8f67444-smqxz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.20.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0183566c024", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 4 04:21:44.555503 containerd[1975]: 2025-11-04 04:21:44.457 [INFO][5108] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.20.200/32] ContainerID="aeb87d2bf00b115143802b4f481caeecb040256226a37e2794a0f0b56ec6d03d" Namespace="calico-apiserver" Pod="calico-apiserver-67f8f67444-smqxz" WorkloadEndpoint="ip--172--31--28--40-k8s-calico--apiserver--67f8f67444--smqxz-eth0" Nov 4 04:21:44.555503 containerd[1975]: 2025-11-04 04:21:44.457 [INFO][5108] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0183566c024 ContainerID="aeb87d2bf00b115143802b4f481caeecb040256226a37e2794a0f0b56ec6d03d" Namespace="calico-apiserver" Pod="calico-apiserver-67f8f67444-smqxz" WorkloadEndpoint="ip--172--31--28--40-k8s-calico--apiserver--67f8f67444--smqxz-eth0" Nov 4 04:21:44.555503 containerd[1975]: 2025-11-04 04:21:44.487 [INFO][5108] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="aeb87d2bf00b115143802b4f481caeecb040256226a37e2794a0f0b56ec6d03d" Namespace="calico-apiserver" Pod="calico-apiserver-67f8f67444-smqxz" WorkloadEndpoint="ip--172--31--28--40-k8s-calico--apiserver--67f8f67444--smqxz-eth0" Nov 4 04:21:44.555503 containerd[1975]: 2025-11-04 04:21:44.491 [INFO][5108] cni-plugin/k8s.go 446: Added Mac, interface name, and 
active container ID to endpoint ContainerID="aeb87d2bf00b115143802b4f481caeecb040256226a37e2794a0f0b56ec6d03d" Namespace="calico-apiserver" Pod="calico-apiserver-67f8f67444-smqxz" WorkloadEndpoint="ip--172--31--28--40-k8s-calico--apiserver--67f8f67444--smqxz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--40-k8s-calico--apiserver--67f8f67444--smqxz-eth0", GenerateName:"calico-apiserver-67f8f67444-", Namespace:"calico-apiserver", SelfLink:"", UID:"073104c0-4d4a-4e6b-bb61-421cfcd8940e", ResourceVersion:"860", Generation:0, CreationTimestamp:time.Date(2025, time.November, 4, 4, 20, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"67f8f67444", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-40", ContainerID:"aeb87d2bf00b115143802b4f481caeecb040256226a37e2794a0f0b56ec6d03d", Pod:"calico-apiserver-67f8f67444-smqxz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.20.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0183566c024", MAC:"9a:0b:bd:3f:ef:77", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 4 04:21:44.555503 containerd[1975]: 2025-11-04 04:21:44.534 [INFO][5108] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="aeb87d2bf00b115143802b4f481caeecb040256226a37e2794a0f0b56ec6d03d" Namespace="calico-apiserver" Pod="calico-apiserver-67f8f67444-smqxz" WorkloadEndpoint="ip--172--31--28--40-k8s-calico--apiserver--67f8f67444--smqxz-eth0" Nov 4 04:21:44.699806 systemd[1]: Started cri-containerd-40c02244ee598e7136ce9d2363be8567da36616862fb998feda4b494e21ec033.scope - libcontainer container 40c02244ee598e7136ce9d2363be8567da36616862fb998feda4b494e21ec033. Nov 4 04:21:44.701507 sshd-session[5168]: pam_unix(sshd:session): session closed for user core Nov 4 04:21:44.702899 sshd[5186]: Connection closed by 147.75.109.163 port 38156 Nov 4 04:21:44.712366 containerd[1975]: time="2025-11-04T04:21:44.712198237Z" level=info msg="connecting to shim aeb87d2bf00b115143802b4f481caeecb040256226a37e2794a0f0b56ec6d03d" address="unix:///run/containerd/s/24c1b7aeb48a8ff47c5e02ab29c9b0b38fd32bdd7fa726047fbf919446f63ea3" namespace=k8s.io protocol=ttrpc version=3 Nov 4 04:21:44.720201 systemd[1]: sshd@7-172.31.28.40:22-147.75.109.163:38156.service: Deactivated successfully. Nov 4 04:21:44.731136 systemd[1]: session-8.scope: Deactivated successfully. Nov 4 04:21:44.739096 systemd-logind[1946]: Session 8 logged out. Waiting for processes to exit. Nov 4 04:21:44.745106 systemd-logind[1946]: Removed session 8. Nov 4 04:21:44.814623 systemd[1]: Started cri-containerd-aeb87d2bf00b115143802b4f481caeecb040256226a37e2794a0f0b56ec6d03d.scope - libcontainer container aeb87d2bf00b115143802b4f481caeecb040256226a37e2794a0f0b56ec6d03d. 
Nov 4 04:21:44.915408 containerd[1975]: time="2025-11-04T04:21:44.915106634Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-78l67,Uid:a6a13d29-203e-4ccf-93b9-8514188fd7d2,Namespace:kube-system,Attempt:0,} returns sandbox id \"40c02244ee598e7136ce9d2363be8567da36616862fb998feda4b494e21ec033\"" Nov 4 04:21:44.935794 containerd[1975]: time="2025-11-04T04:21:44.935710274Z" level=info msg="CreateContainer within sandbox \"40c02244ee598e7136ce9d2363be8567da36616862fb998feda4b494e21ec033\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 4 04:21:44.958343 containerd[1975]: time="2025-11-04T04:21:44.957312890Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-x7r2n,Uid:3bfc783e-7624-4984-a658-a4dceb99c885,Namespace:calico-system,Attempt:0,} returns sandbox id \"b4a8e0f09bd93fcad786c737373b606fbeeea41a73ddbeba1e64c82292cde324\"" Nov 4 04:21:44.966167 containerd[1975]: time="2025-11-04T04:21:44.965737382Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 4 04:21:44.979615 containerd[1975]: time="2025-11-04T04:21:44.979305290Z" level=info msg="Container 57af8fb078874cac9102e04ca721100a65fc70c0886bd8abfb8c605e514ccd12: CDI devices from CRI Config.CDIDevices: []" Nov 4 04:21:45.008083 containerd[1975]: time="2025-11-04T04:21:45.007800887Z" level=info msg="CreateContainer within sandbox \"40c02244ee598e7136ce9d2363be8567da36616862fb998feda4b494e21ec033\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"57af8fb078874cac9102e04ca721100a65fc70c0886bd8abfb8c605e514ccd12\"" Nov 4 04:21:45.012404 containerd[1975]: time="2025-11-04T04:21:45.011053199Z" level=info msg="StartContainer for \"57af8fb078874cac9102e04ca721100a65fc70c0886bd8abfb8c605e514ccd12\"" Nov 4 04:21:45.020793 containerd[1975]: time="2025-11-04T04:21:45.020571263Z" level=info msg="connecting to shim 57af8fb078874cac9102e04ca721100a65fc70c0886bd8abfb8c605e514ccd12" address="unix:///run/containerd/s/4ed25ca64f8422fce72f07752e686c945893553a0d03829dec503a99a6661f95" protocol=ttrpc version=3 Nov 4 04:21:45.079937 systemd[1]: Started cri-containerd-57af8fb078874cac9102e04ca721100a65fc70c0886bd8abfb8c605e514ccd12.scope - libcontainer container 57af8fb078874cac9102e04ca721100a65fc70c0886bd8abfb8c605e514ccd12. 
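[Editor's note, not part of the log] The RunPodSandbox / CreateContainer / StartContainer messages above are containerd's side of the CRI calls kubelet issues for the coredns pod. The sketch below replays that three-step sequence against the CRI v1 gRPC API; the socket path is containerd's stock default and the image reference is a placeholder, so treat it as an illustration of the call order, not of the exact requests kubelet sent here.

```go
package main

import (
	"context"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtime "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// kubelet drives containerd over CRI on its unix socket (default path assumed here).
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	rt := runtime.NewRuntimeServiceClient(conn)
	ctx := context.Background()

	// 1. RunPodSandbox — compare "RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-78l67,...}".
	sandboxCfg := &runtime.PodSandboxConfig{
		Metadata: &runtime.PodSandboxMetadata{
			Name:      "coredns-674b8bbfcf-78l67",
			Uid:       "a6a13d29-203e-4ccf-93b9-8514188fd7d2",
			Namespace: "kube-system",
		},
	}
	sb, err := rt.RunPodSandbox(ctx, &runtime.RunPodSandboxRequest{Config: sandboxCfg})
	if err != nil {
		log.Fatal(err)
	}

	// 2. CreateContainer within the returned sandbox id — compare "CreateContainer within sandbox ...".
	cc, err := rt.CreateContainer(ctx, &runtime.CreateContainerRequest{
		PodSandboxId: sb.PodSandboxId,
		Config: &runtime.ContainerConfig{
			Metadata: &runtime.ContainerMetadata{Name: "coredns"},
			// Placeholder ref; the real request carries the pod's resolved image.
			Image: &runtime.ImageSpec{Image: "example.invalid/coredns:placeholder"},
		},
		SandboxConfig: sandboxCfg,
	})
	if err != nil {
		log.Fatal(err)
	}

	// 3. StartContainer — compare "StartContainer for ... returns successfully".
	if _, err := rt.StartContainer(ctx, &runtime.StartContainerRequest{ContainerId: cc.ContainerId}); err != nil {
		log.Fatal(err)
	}
}
```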
Nov 4 04:21:45.187846 containerd[1975]: time="2025-11-04T04:21:45.187784412Z" level=info msg="StartContainer for \"57af8fb078874cac9102e04ca721100a65fc70c0886bd8abfb8c605e514ccd12\" returns successfully" Nov 4 04:21:45.234464 containerd[1975]: time="2025-11-04T04:21:45.234229500Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67f8f67444-smqxz,Uid:073104c0-4d4a-4e6b-bb61-421cfcd8940e,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"aeb87d2bf00b115143802b4f481caeecb040256226a37e2794a0f0b56ec6d03d\"" Nov 4 04:21:45.279618 containerd[1975]: time="2025-11-04T04:21:45.279547776Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 04:21:45.282112 containerd[1975]: time="2025-11-04T04:21:45.281985984Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 4 04:21:45.282253 containerd[1975]: time="2025-11-04T04:21:45.282049272Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Nov 4 04:21:45.282644 kubelet[3429]: E1104 04:21:45.282567 3429 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 4 04:21:45.283229 kubelet[3429]: E1104 04:21:45.282639 3429 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 4 04:21:45.283229 kubelet[3429]: E1104 04:21:45.282912 3429 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qbhv6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-x7r2n_calico-system(3bfc783e-7624-4984-a658-a4dceb99c885): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 4 04:21:45.284459 kubelet[3429]: E1104 04:21:45.284404 3429 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-x7r2n" podUID="3bfc783e-7624-4984-a658-a4dceb99c885" Nov 4 04:21:45.284904 containerd[1975]: time="2025-11-04T04:21:45.284855760Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 4 
04:21:45.468652 systemd-networkd[1736]: calid7c840f7eca: Gained IPv6LL Nov 4 04:21:45.593456 containerd[1975]: time="2025-11-04T04:21:45.593223734Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 04:21:45.595593 containerd[1975]: time="2025-11-04T04:21:45.595522202Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 4 04:21:45.595733 containerd[1975]: time="2025-11-04T04:21:45.595645166Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Nov 4 04:21:45.595956 kubelet[3429]: E1104 04:21:45.595901 3429 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 4 04:21:45.596044 kubelet[3429]: E1104 04:21:45.595970 3429 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 4 04:21:45.596260 kubelet[3429]: E1104 04:21:45.596166 3429 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2pdv9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-67f8f67444-smqxz_calico-apiserver(073104c0-4d4a-4e6b-bb61-421cfcd8940e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 4 04:21:45.596568 systemd-networkd[1736]: cali0183566c024: Gained IPv6LL Nov 4 04:21:45.598413 kubelet[3429]: E1104 04:21:45.597701 3429 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-67f8f67444-smqxz" podUID="073104c0-4d4a-4e6b-bb61-421cfcd8940e" Nov 4 04:21:45.787528 kubelet[3429]: E1104 04:21:45.787177 3429 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-x7r2n" podUID="3bfc783e-7624-4984-a658-a4dceb99c885" Nov 4 04:21:45.804282 kubelet[3429]: E1104 04:21:45.804155 3429 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-67f8f67444-smqxz" podUID="073104c0-4d4a-4e6b-bb61-421cfcd8940e" Nov 4 04:21:45.871896 kubelet[3429]: I1104 04:21:45.871137 3429 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-78l67" podStartSLOduration=63.871111635 podStartE2EDuration="1m3.871111635s" podCreationTimestamp="2025-11-04 04:20:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-04 04:21:45.868434723 +0000 UTC m=+69.809719260" watchObservedRunningTime="2025-11-04 04:21:45.871111635 +0000 UTC m=+69.812396184" Nov 4 04:21:45.980558 systemd-networkd[1736]: cali17faebc2fbe: Gained IPv6LL Nov 4 
04:21:46.811106 kubelet[3429]: E1104 04:21:46.810959 3429 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-x7r2n" podUID="3bfc783e-7624-4984-a658-a4dceb99c885" Nov 4 04:21:46.813343 kubelet[3429]: E1104 04:21:46.813253 3429 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-67f8f67444-smqxz" podUID="073104c0-4d4a-4e6b-bb61-421cfcd8940e" Nov 4 04:21:48.856877 ntpd[1936]: Listen normally on 6 vxlan.calico 192.168.20.192:123 Nov 4 04:21:48.858294 ntpd[1936]: 4 Nov 04:21:48 ntpd[1936]: Listen normally on 6 vxlan.calico 192.168.20.192:123 Nov 4 04:21:48.858294 ntpd[1936]: 4 Nov 04:21:48 ntpd[1936]: Listen normally on 7 calica81685d0e9 [fe80::ecee:eeff:feee:eeee%4]:123 Nov 4 04:21:48.858294 ntpd[1936]: 4 Nov 04:21:48 ntpd[1936]: Listen normally on 8 cali7ab04475040 [fe80::ecee:eeff:feee:eeee%5]:123 Nov 4 04:21:48.858294 ntpd[1936]: 4 Nov 04:21:48 ntpd[1936]: Listen normally on 9 vxlan.calico [fe80::6469:cfff:fea4:7b89%6]:123 Nov 4 04:21:48.858294 ntpd[1936]: 4 Nov 04:21:48 ntpd[1936]: Listen normally on 10 calibef89e115ec [fe80::ecee:eeff:feee:eeee%9]:123 Nov 4 04:21:48.858294 ntpd[1936]: 4 Nov 04:21:48 ntpd[1936]: Listen normally on 11 cali3432dee96e3 [fe80::ecee:eeff:feee:eeee%10]:123 Nov 4 04:21:48.858294 ntpd[1936]: 4 Nov 04:21:48 ntpd[1936]: Listen normally on 12 cali5217a0d92e1 [fe80::ecee:eeff:feee:eeee%11]:123 Nov 4 04:21:48.858294 ntpd[1936]: 4 Nov 04:21:48 ntpd[1936]: Listen normally on 13 cali17faebc2fbe [fe80::ecee:eeff:feee:eeee%12]:123 Nov 4 04:21:48.858294 ntpd[1936]: 4 Nov 04:21:48 ntpd[1936]: Listen normally on 14 calid7c840f7eca [fe80::ecee:eeff:feee:eeee%13]:123 Nov 4 04:21:48.856961 ntpd[1936]: Listen normally on 7 calica81685d0e9 [fe80::ecee:eeff:feee:eeee%4]:123 Nov 4 04:21:48.859650 ntpd[1936]: 4 Nov 04:21:48 ntpd[1936]: Listen normally on 15 cali0183566c024 [fe80::ecee:eeff:feee:eeee%14]:123 Nov 4 04:21:48.857007 ntpd[1936]: Listen normally on 8 cali7ab04475040 [fe80::ecee:eeff:feee:eeee%5]:123 Nov 4 04:21:48.857051 ntpd[1936]: Listen normally on 9 vxlan.calico [fe80::6469:cfff:fea4:7b89%6]:123 Nov 4 04:21:48.857094 ntpd[1936]: Listen normally on 10 calibef89e115ec [fe80::ecee:eeff:feee:eeee%9]:123 Nov 4 04:21:48.857145 ntpd[1936]: Listen normally on 11 cali3432dee96e3 [fe80::ecee:eeff:feee:eeee%10]:123 Nov 4 04:21:48.857193 ntpd[1936]: Listen normally on 12 cali5217a0d92e1 [fe80::ecee:eeff:feee:eeee%11]:123 Nov 4 04:21:48.857240 ntpd[1936]: Listen normally on 13 cali17faebc2fbe [fe80::ecee:eeff:feee:eeee%12]:123 Nov 4 04:21:48.857283 ntpd[1936]: Listen normally on 14 calid7c840f7eca [fe80::ecee:eeff:feee:eeee%13]:123 Nov 4 04:21:48.858480 ntpd[1936]: Listen normally on 15 cali0183566c024 [fe80::ecee:eeff:feee:eeee%14]:123 Nov 4 04:21:49.738534 systemd[1]: Started 
sshd@8-172.31.28.40:22-147.75.109.163:38158.service - OpenSSH per-connection server daemon (147.75.109.163:38158). Nov 4 04:21:49.927516 sshd[5404]: Accepted publickey for core from 147.75.109.163 port 38158 ssh2: RSA SHA256:qS1XafufLHAd70xqW4Vvg5dQ+JBmoPq0koj1S4P4qJk Nov 4 04:21:49.930174 sshd-session[5404]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 04:21:49.941053 systemd-logind[1946]: New session 9 of user core. Nov 4 04:21:49.946615 systemd[1]: Started session-9.scope - Session 9 of User core. Nov 4 04:21:50.142944 sshd[5407]: Connection closed by 147.75.109.163 port 38158 Nov 4 04:21:50.143385 sshd-session[5404]: pam_unix(sshd:session): session closed for user core Nov 4 04:21:50.158374 systemd[1]: sshd@8-172.31.28.40:22-147.75.109.163:38158.service: Deactivated successfully. Nov 4 04:21:50.162647 systemd[1]: session-9.scope: Deactivated successfully. Nov 4 04:21:50.165011 systemd-logind[1946]: Session 9 logged out. Waiting for processes to exit. Nov 4 04:21:50.169459 systemd-logind[1946]: Removed session 9. Nov 4 04:21:54.321716 containerd[1975]: time="2025-11-04T04:21:54.320804829Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 4 04:21:54.577125 containerd[1975]: time="2025-11-04T04:21:54.576830914Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 04:21:54.579543 containerd[1975]: time="2025-11-04T04:21:54.579409642Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 4 04:21:54.579543 containerd[1975]: time="2025-11-04T04:21:54.579483838Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Nov 4 04:21:54.579905 kubelet[3429]: E1104 04:21:54.579857 3429 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 4 04:21:54.581174 kubelet[3429]: E1104 04:21:54.580426 3429 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 4 04:21:54.581174 kubelet[3429]: E1104 04:21:54.580593 3429 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:c43005d46d6641cd888d007749657aec,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-m5jvd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6597d6c5c6-2r6d9_calico-system(857124b9-a647-4dd7-9ce8-99328261c03d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 4 04:21:54.585912 containerd[1975]: time="2025-11-04T04:21:54.585661018Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 4 04:21:54.811175 containerd[1975]: time="2025-11-04T04:21:54.811092227Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 04:21:54.813548 containerd[1975]: time="2025-11-04T04:21:54.813480803Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 4 04:21:54.813648 containerd[1975]: time="2025-11-04T04:21:54.813598067Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Nov 4 04:21:54.813857 kubelet[3429]: E1104 04:21:54.813803 3429 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 4 04:21:54.814033 kubelet[3429]: E1104 04:21:54.813871 3429 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 4 04:21:54.814226 kubelet[3429]: E1104 04:21:54.814049 3429 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m5jvd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6597d6c5c6-2r6d9_calico-system(857124b9-a647-4dd7-9ce8-99328261c03d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 4 04:21:54.815674 kubelet[3429]: E1104 04:21:54.815598 3429 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6597d6c5c6-2r6d9" podUID="857124b9-a647-4dd7-9ce8-99328261c03d" Nov 4 04:21:55.186807 systemd[1]: Started sshd@9-172.31.28.40:22-147.75.109.163:40004.service - OpenSSH per-connection server daemon (147.75.109.163:40004). Nov 4 04:21:55.323138 containerd[1975]: time="2025-11-04T04:21:55.322637098Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 4 04:21:55.383303 sshd[5430]: Accepted publickey for core from 147.75.109.163 port 40004 ssh2: RSA SHA256:qS1XafufLHAd70xqW4Vvg5dQ+JBmoPq0koj1S4P4qJk Nov 4 04:21:55.385794 sshd-session[5430]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 04:21:55.393893 systemd-logind[1946]: New session 10 of user core. 
Nov 4 04:21:55.401579 systemd[1]: Started session-10.scope - Session 10 of User core. Nov 4 04:21:55.592091 sshd[5433]: Connection closed by 147.75.109.163 port 40004 Nov 4 04:21:55.593272 sshd-session[5430]: pam_unix(sshd:session): session closed for user core Nov 4 04:21:55.599674 containerd[1975]: time="2025-11-04T04:21:55.599465399Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 04:21:55.601782 containerd[1975]: time="2025-11-04T04:21:55.601718399Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 4 04:21:55.602065 containerd[1975]: time="2025-11-04T04:21:55.601732811Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Nov 4 04:21:55.602603 kubelet[3429]: E1104 04:21:55.602210 3429 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 4 04:21:55.602603 kubelet[3429]: E1104 04:21:55.602278 3429 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 4 04:21:55.604853 kubelet[3429]: E1104 04:21:55.603283 3429 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fvnjd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-67f8f67444-mj5dt_calico-apiserver(a361dba4-7339-43be-b37d-2bd7902bcd31): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 4 04:21:55.605637 kubelet[3429]: E1104 04:21:55.605071 3429 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-67f8f67444-mj5dt" podUID="a361dba4-7339-43be-b37d-2bd7902bcd31" Nov 4 04:21:55.607575 systemd[1]: sshd@9-172.31.28.40:22-147.75.109.163:40004.service: Deactivated successfully. Nov 4 04:21:55.614769 systemd[1]: session-10.scope: Deactivated successfully. Nov 4 04:21:55.622685 systemd-logind[1946]: Session 10 logged out. Waiting for processes to exit. Nov 4 04:21:55.643505 systemd[1]: Started sshd@10-172.31.28.40:22-147.75.109.163:40016.service - OpenSSH per-connection server daemon (147.75.109.163:40016). Nov 4 04:21:55.648515 systemd-logind[1946]: Removed session 10. Nov 4 04:21:55.841509 sshd[5446]: Accepted publickey for core from 147.75.109.163 port 40016 ssh2: RSA SHA256:qS1XafufLHAd70xqW4Vvg5dQ+JBmoPq0koj1S4P4qJk Nov 4 04:21:55.843954 sshd-session[5446]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 04:21:55.854722 systemd-logind[1946]: New session 11 of user core. Nov 4 04:21:55.862594 systemd[1]: Started session-11.scope - Session 11 of User core. Nov 4 04:21:56.140383 sshd[5449]: Connection closed by 147.75.109.163 port 40016 Nov 4 04:21:56.143155 sshd-session[5446]: pam_unix(sshd:session): session closed for user core Nov 4 04:21:56.156204 systemd[1]: sshd@10-172.31.28.40:22-147.75.109.163:40016.service: Deactivated successfully. Nov 4 04:21:56.163441 systemd[1]: session-11.scope: Deactivated successfully. Nov 4 04:21:56.169514 systemd-logind[1946]: Session 11 logged out. Waiting for processes to exit. Nov 4 04:21:56.192793 systemd[1]: Started sshd@11-172.31.28.40:22-147.75.109.163:40030.service - OpenSSH per-connection server daemon (147.75.109.163:40030). Nov 4 04:21:56.199012 systemd-logind[1946]: Removed session 11. 
Nov 4 04:21:56.325001 containerd[1975]: time="2025-11-04T04:21:56.324957347Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 4 04:21:56.387761 sshd[5460]: Accepted publickey for core from 147.75.109.163 port 40030 ssh2: RSA SHA256:qS1XafufLHAd70xqW4Vvg5dQ+JBmoPq0koj1S4P4qJk Nov 4 04:21:56.391228 sshd-session[5460]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 04:21:56.402360 systemd-logind[1946]: New session 12 of user core. Nov 4 04:21:56.407638 systemd[1]: Started session-12.scope - Session 12 of User core. Nov 4 04:21:56.593279 containerd[1975]: time="2025-11-04T04:21:56.593191956Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 04:21:56.595732 containerd[1975]: time="2025-11-04T04:21:56.595627560Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 4 04:21:56.595866 containerd[1975]: time="2025-11-04T04:21:56.595691256Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Nov 4 04:21:56.596147 kubelet[3429]: E1104 04:21:56.596080 3429 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 4 04:21:56.596147 kubelet[3429]: E1104 04:21:56.596138 3429 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 4 04:21:56.597134 kubelet[3429]: E1104 04:21:56.596999 3429 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-n7p6p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-bjdkx_calico-system(27fda10a-3169-4bf6-a620-503cc9dcb069): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 4 04:21:56.600048 containerd[1975]: time="2025-11-04T04:21:56.599989692Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 4 04:21:56.686879 sshd[5463]: Connection closed by 147.75.109.163 port 40030 Nov 4 04:21:56.687813 sshd-session[5460]: pam_unix(sshd:session): session closed for user core Nov 4 04:21:56.696115 systemd[1]: sshd@11-172.31.28.40:22-147.75.109.163:40030.service: Deactivated successfully. Nov 4 04:21:56.700303 systemd[1]: session-12.scope: Deactivated successfully. Nov 4 04:21:56.704604 systemd-logind[1946]: Session 12 logged out. Waiting for processes to exit. Nov 4 04:21:56.707909 systemd-logind[1946]: Removed session 12. 
Nov 4 04:21:56.884603 containerd[1975]: time="2025-11-04T04:21:56.884378786Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 04:21:56.887859 containerd[1975]: time="2025-11-04T04:21:56.887705318Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 4 04:21:56.888010 containerd[1975]: time="2025-11-04T04:21:56.887740214Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Nov 4 04:21:56.888142 kubelet[3429]: E1104 04:21:56.888045 3429 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 4 04:21:56.889236 kubelet[3429]: E1104 04:21:56.888133 3429 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 4 04:21:56.889236 kubelet[3429]: E1104 04:21:56.889055 3429 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-n7p6p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
csi-node-driver-bjdkx_calico-system(27fda10a-3169-4bf6-a620-503cc9dcb069): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 4 04:21:56.890686 kubelet[3429]: E1104 04:21:56.890577 3429 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-bjdkx" podUID="27fda10a-3169-4bf6-a620-503cc9dcb069" Nov 4 04:21:57.322063 containerd[1975]: time="2025-11-04T04:21:57.320954520Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 4 04:21:57.599357 containerd[1975]: time="2025-11-04T04:21:57.599040697Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 04:21:57.601348 containerd[1975]: time="2025-11-04T04:21:57.601217521Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 4 04:21:57.601466 containerd[1975]: time="2025-11-04T04:21:57.601400101Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Nov 4 04:21:57.601776 kubelet[3429]: E1104 04:21:57.601725 3429 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 4 04:21:57.601929 kubelet[3429]: E1104 04:21:57.601900 3429 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 4 04:21:57.602271 kubelet[3429]: E1104 04:21:57.602178 3429 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cr6tl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-6fc896cb84-m6mvd_calico-system(1e2e1aa1-fbd0-4783-998f-e142a3f6eab3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 4 04:21:57.604500 kubelet[3429]: E1104 04:21:57.604420 3429 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6fc896cb84-m6mvd" podUID="1e2e1aa1-fbd0-4783-998f-e142a3f6eab3" Nov 4 04:22:01.321352 containerd[1975]: time="2025-11-04T04:22:01.321159556Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 4 04:22:01.613858 containerd[1975]: time="2025-11-04T04:22:01.613494401Z" 
level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 04:22:01.615881 containerd[1975]: time="2025-11-04T04:22:01.615721265Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 4 04:22:01.615881 containerd[1975]: time="2025-11-04T04:22:01.615743045Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Nov 4 04:22:01.616252 kubelet[3429]: E1104 04:22:01.616018 3429 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 4 04:22:01.616252 kubelet[3429]: E1104 04:22:01.616075 3429 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 4 04:22:01.618000 kubelet[3429]: E1104 04:22:01.616382 3429 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qbhv6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-x7r2n_calico-system(3bfc783e-7624-4984-a658-a4dceb99c885): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 4 04:22:01.618740 kubelet[3429]: E1104 04:22:01.618126 3429 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-x7r2n" podUID="3bfc783e-7624-4984-a658-a4dceb99c885" Nov 4 04:22:01.619597 containerd[1975]: time="2025-11-04T04:22:01.619494041Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 4 04:22:01.729632 systemd[1]: Started sshd@12-172.31.28.40:22-147.75.109.163:58546.service - OpenSSH per-connection server daemon (147.75.109.163:58546). 
Nov 4 04:22:01.894820 containerd[1975]: time="2025-11-04T04:22:01.894549319Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 04:22:01.897431 containerd[1975]: time="2025-11-04T04:22:01.897298975Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 4 04:22:01.897606 containerd[1975]: time="2025-11-04T04:22:01.897354055Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Nov 4 04:22:01.897914 kubelet[3429]: E1104 04:22:01.897854 3429 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 4 04:22:01.898032 kubelet[3429]: E1104 04:22:01.897924 3429 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 4 04:22:01.899068 kubelet[3429]: E1104 04:22:01.898929 3429 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2pdv9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed 
in pod calico-apiserver-67f8f67444-smqxz_calico-apiserver(073104c0-4d4a-4e6b-bb61-421cfcd8940e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 4 04:22:01.900661 kubelet[3429]: E1104 04:22:01.900436 3429 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-67f8f67444-smqxz" podUID="073104c0-4d4a-4e6b-bb61-421cfcd8940e" Nov 4 04:22:01.946166 sshd[5483]: Accepted publickey for core from 147.75.109.163 port 58546 ssh2: RSA SHA256:qS1XafufLHAd70xqW4Vvg5dQ+JBmoPq0koj1S4P4qJk Nov 4 04:22:01.950224 sshd-session[5483]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 04:22:01.965424 systemd-logind[1946]: New session 13 of user core. Nov 4 04:22:01.973655 systemd[1]: Started session-13.scope - Session 13 of User core. Nov 4 04:22:02.267882 sshd[5489]: Connection closed by 147.75.109.163 port 58546 Nov 4 04:22:02.271421 sshd-session[5483]: pam_unix(sshd:session): session closed for user core Nov 4 04:22:02.278488 systemd-logind[1946]: Session 13 logged out. Waiting for processes to exit. Nov 4 04:22:02.279296 systemd[1]: sshd@12-172.31.28.40:22-147.75.109.163:58546.service: Deactivated successfully. Nov 4 04:22:02.286475 systemd[1]: session-13.scope: Deactivated successfully. Nov 4 04:22:02.292457 systemd-logind[1946]: Removed session 13. Nov 4 04:22:06.323571 kubelet[3429]: E1104 04:22:06.323294 3429 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6597d6c5c6-2r6d9" podUID="857124b9-a647-4dd7-9ce8-99328261c03d" Nov 4 04:22:07.313191 systemd[1]: Started sshd@13-172.31.28.40:22-147.75.109.163:58560.service - OpenSSH per-connection server daemon (147.75.109.163:58560). 
Nov 4 04:22:07.321387 kubelet[3429]: E1104 04:22:07.320993 3429 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-67f8f67444-mj5dt" podUID="a361dba4-7339-43be-b37d-2bd7902bcd31" Nov 4 04:22:07.517645 sshd[5507]: Accepted publickey for core from 147.75.109.163 port 58560 ssh2: RSA SHA256:qS1XafufLHAd70xqW4Vvg5dQ+JBmoPq0koj1S4P4qJk Nov 4 04:22:07.520087 sshd-session[5507]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 04:22:07.528547 systemd-logind[1946]: New session 14 of user core. Nov 4 04:22:07.540573 systemd[1]: Started session-14.scope - Session 14 of User core. Nov 4 04:22:07.745881 sshd[5510]: Connection closed by 147.75.109.163 port 58560 Nov 4 04:22:07.746772 sshd-session[5507]: pam_unix(sshd:session): session closed for user core Nov 4 04:22:07.754311 systemd[1]: sshd@13-172.31.28.40:22-147.75.109.163:58560.service: Deactivated successfully. Nov 4 04:22:07.759781 systemd[1]: session-14.scope: Deactivated successfully. Nov 4 04:22:07.762193 systemd-logind[1946]: Session 14 logged out. Waiting for processes to exit. Nov 4 04:22:07.765416 systemd-logind[1946]: Removed session 14. Nov 4 04:22:10.324266 kubelet[3429]: E1104 04:22:10.324185 3429 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-bjdkx" podUID="27fda10a-3169-4bf6-a620-503cc9dcb069" Nov 4 04:22:12.323687 kubelet[3429]: E1104 04:22:12.322352 3429 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6fc896cb84-m6mvd" podUID="1e2e1aa1-fbd0-4783-998f-e142a3f6eab3" Nov 4 04:22:12.787821 systemd[1]: Started sshd@14-172.31.28.40:22-147.75.109.163:46954.service - OpenSSH per-connection server daemon (147.75.109.163:46954). 
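By this point every Calico component in the excerpt is cycling through the same ImagePullBackOff, and the useful signal is simply which pod is blocked on which image. A small parsing sketch over a plain-text copy of these journal lines (the input file name is a placeholder; the patterns rely only on the kubelet message format visible above):

```python
# Sketch: reduce the repetitive "Error syncing pod" messages above to a
# table of pod -> image references it cannot pull. Input is a plain-text
# copy of this journal excerpt; the path is a placeholder.
import re
import sys
from collections import defaultdict

POD_RE = re.compile(r'pod="([^"]+)"')
IMAGE_RE = re.compile(r'ghcr\.io/[A-Za-z0-9._/-]+:[A-Za-z0-9._-]+')

def stuck_pods(path):
    """Map pod -> set of image references mentioned in its 'Error syncing pod' lines."""
    table = defaultdict(set)
    with open(path, encoding="utf-8") as fh:
        for line in fh:
            if "Error syncing pod" not in line:
                continue
            pod = POD_RE.search(line)
            if pod:
                table[pod.group(1)].update(IMAGE_RE.findall(line))
    return table

if __name__ == "__main__":
    for pod, images in sorted(stuck_pods(sys.argv[1]).items()):
        print(pod)
        for image in sorted(images):
            print("   ", image)
```

Run over this excerpt it would list csi-node-driver-bjdkx, calico-kube-controllers-6fc896cb84-m6mvd, goldmane-666569f655-x7r2n, whisker-6597d6c5c6-2r6d9 and the two calico-apiserver pods, each pinned to one or two of the missing v3.30.4 tags.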
Nov 4 04:22:12.981537 sshd[5551]: Accepted publickey for core from 147.75.109.163 port 46954 ssh2: RSA SHA256:qS1XafufLHAd70xqW4Vvg5dQ+JBmoPq0koj1S4P4qJk Nov 4 04:22:12.984613 sshd-session[5551]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 04:22:12.993459 systemd-logind[1946]: New session 15 of user core. Nov 4 04:22:13.001610 systemd[1]: Started session-15.scope - Session 15 of User core. Nov 4 04:22:13.197422 sshd[5554]: Connection closed by 147.75.109.163 port 46954 Nov 4 04:22:13.198508 sshd-session[5551]: pam_unix(sshd:session): session closed for user core Nov 4 04:22:13.205763 systemd[1]: sshd@14-172.31.28.40:22-147.75.109.163:46954.service: Deactivated successfully. Nov 4 04:22:13.211120 systemd[1]: session-15.scope: Deactivated successfully. Nov 4 04:22:13.212823 systemd-logind[1946]: Session 15 logged out. Waiting for processes to exit. Nov 4 04:22:13.217427 systemd-logind[1946]: Removed session 15. Nov 4 04:22:14.321236 kubelet[3429]: E1104 04:22:14.321086 3429 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-67f8f67444-smqxz" podUID="073104c0-4d4a-4e6b-bb61-421cfcd8940e" Nov 4 04:22:14.321236 kubelet[3429]: E1104 04:22:14.321156 3429 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-x7r2n" podUID="3bfc783e-7624-4984-a658-a4dceb99c885" Nov 4 04:22:17.323672 containerd[1975]: time="2025-11-04T04:22:17.322434163Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 4 04:22:17.583167 containerd[1975]: time="2025-11-04T04:22:17.582992828Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 04:22:17.586152 containerd[1975]: time="2025-11-04T04:22:17.586093280Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 4 04:22:17.586615 containerd[1975]: time="2025-11-04T04:22:17.586176968Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Nov 4 04:22:17.586781 kubelet[3429]: E1104 04:22:17.586593 3429 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 4 04:22:17.586781 kubelet[3429]: E1104 04:22:17.586651 3429 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 4 04:22:17.587753 kubelet[3429]: E1104 04:22:17.586816 3429 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:c43005d46d6641cd888d007749657aec,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-m5jvd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6597d6c5c6-2r6d9_calico-system(857124b9-a647-4dd7-9ce8-99328261c03d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 4 04:22:17.590582 containerd[1975]: time="2025-11-04T04:22:17.590184608Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 4 04:22:17.866442 containerd[1975]: time="2025-11-04T04:22:17.865774066Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 04:22:17.868223 containerd[1975]: time="2025-11-04T04:22:17.868083838Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 4 04:22:17.868223 containerd[1975]: time="2025-11-04T04:22:17.868150690Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Nov 4 04:22:17.868509 kubelet[3429]: E1104 04:22:17.868448 3429 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 4 04:22:17.868621 kubelet[3429]: E1104 04:22:17.868541 3429 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: 
not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 4 04:22:17.868959 kubelet[3429]: E1104 04:22:17.868818 3429 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m5jvd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6597d6c5c6-2r6d9_calico-system(857124b9-a647-4dd7-9ce8-99328261c03d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 4 04:22:17.870617 kubelet[3429]: E1104 04:22:17.870544 3429 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6597d6c5c6-2r6d9" podUID="857124b9-a647-4dd7-9ce8-99328261c03d" Nov 4 04:22:18.239393 systemd[1]: Started sshd@15-172.31.28.40:22-147.75.109.163:46966.service - OpenSSH per-connection server daemon (147.75.109.163:46966). 
Nov 4 04:22:18.324801 containerd[1975]: time="2025-11-04T04:22:18.324547232Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 4 04:22:18.439311 sshd[5569]: Accepted publickey for core from 147.75.109.163 port 46966 ssh2: RSA SHA256:qS1XafufLHAd70xqW4Vvg5dQ+JBmoPq0koj1S4P4qJk Nov 4 04:22:18.442877 sshd-session[5569]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 04:22:18.453517 systemd-logind[1946]: New session 16 of user core. Nov 4 04:22:18.459601 systemd[1]: Started session-16.scope - Session 16 of User core. Nov 4 04:22:18.637475 containerd[1975]: time="2025-11-04T04:22:18.637297018Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 04:22:18.639895 containerd[1975]: time="2025-11-04T04:22:18.639828418Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 4 04:22:18.642477 kubelet[3429]: E1104 04:22:18.640474 3429 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 4 04:22:18.643010 containerd[1975]: time="2025-11-04T04:22:18.640057126Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Nov 4 04:22:18.645143 kubelet[3429]: E1104 04:22:18.644418 3429 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 4 04:22:18.645534 kubelet[3429]: E1104 04:22:18.645449 3429 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fvnjd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-67f8f67444-mj5dt_calico-apiserver(a361dba4-7339-43be-b37d-2bd7902bcd31): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 4 04:22:18.647657 kubelet[3429]: E1104 04:22:18.647576 3429 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-67f8f67444-mj5dt" podUID="a361dba4-7339-43be-b37d-2bd7902bcd31" Nov 4 04:22:18.672270 sshd[5572]: Connection closed by 147.75.109.163 port 46966 Nov 4 04:22:18.674159 sshd-session[5569]: pam_unix(sshd:session): session closed for user core Nov 4 04:22:18.685760 systemd[1]: sshd@15-172.31.28.40:22-147.75.109.163:46966.service: Deactivated successfully. Nov 4 04:22:18.695936 systemd[1]: session-16.scope: Deactivated successfully. Nov 4 04:22:18.698726 systemd-logind[1946]: Session 16 logged out. Waiting for processes to exit. Nov 4 04:22:18.722249 systemd[1]: Started sshd@16-172.31.28.40:22-147.75.109.163:46978.service - OpenSSH per-connection server daemon (147.75.109.163:46978). Nov 4 04:22:18.725723 systemd-logind[1946]: Removed session 16. Nov 4 04:22:18.928560 sshd[5585]: Accepted publickey for core from 147.75.109.163 port 46978 ssh2: RSA SHA256:qS1XafufLHAd70xqW4Vvg5dQ+JBmoPq0koj1S4P4qJk Nov 4 04:22:18.931900 sshd-session[5585]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 04:22:18.942922 systemd-logind[1946]: New session 17 of user core. Nov 4 04:22:18.951606 systemd[1]: Started session-17.scope - Session 17 of User core. Nov 4 04:22:19.494921 sshd[5588]: Connection closed by 147.75.109.163 port 46978 Nov 4 04:22:19.495773 sshd-session[5585]: pam_unix(sshd:session): session closed for user core Nov 4 04:22:19.506742 systemd[1]: sshd@16-172.31.28.40:22-147.75.109.163:46978.service: Deactivated successfully. Nov 4 04:22:19.514479 systemd[1]: session-17.scope: Deactivated successfully. Nov 4 04:22:19.520706 systemd-logind[1946]: Session 17 logged out. Waiting for processes to exit. Nov 4 04:22:19.544698 systemd[1]: Started sshd@17-172.31.28.40:22-147.75.109.163:46988.service - OpenSSH per-connection server daemon (147.75.109.163:46988). Nov 4 04:22:19.549357 systemd-logind[1946]: Removed session 17. 
Nov 4 04:22:19.755611 sshd[5598]: Accepted publickey for core from 147.75.109.163 port 46988 ssh2: RSA SHA256:qS1XafufLHAd70xqW4Vvg5dQ+JBmoPq0koj1S4P4qJk Nov 4 04:22:19.759121 sshd-session[5598]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 04:22:19.770694 systemd-logind[1946]: New session 18 of user core. Nov 4 04:22:19.777726 systemd[1]: Started session-18.scope - Session 18 of User core. Nov 4 04:22:21.179543 sshd[5601]: Connection closed by 147.75.109.163 port 46988 Nov 4 04:22:21.180543 sshd-session[5598]: pam_unix(sshd:session): session closed for user core Nov 4 04:22:21.191664 systemd[1]: sshd@17-172.31.28.40:22-147.75.109.163:46988.service: Deactivated successfully. Nov 4 04:22:21.201043 systemd[1]: session-18.scope: Deactivated successfully. Nov 4 04:22:21.204076 systemd-logind[1946]: Session 18 logged out. Waiting for processes to exit. Nov 4 04:22:21.234824 systemd[1]: Started sshd@18-172.31.28.40:22-147.75.109.163:50452.service - OpenSSH per-connection server daemon (147.75.109.163:50452). Nov 4 04:22:21.238156 systemd-logind[1946]: Removed session 18. Nov 4 04:22:21.322948 containerd[1975]: time="2025-11-04T04:22:21.322890383Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 4 04:22:21.439161 sshd[5633]: Accepted publickey for core from 147.75.109.163 port 50452 ssh2: RSA SHA256:qS1XafufLHAd70xqW4Vvg5dQ+JBmoPq0koj1S4P4qJk Nov 4 04:22:21.441937 sshd-session[5633]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 04:22:21.452603 systemd-logind[1946]: New session 19 of user core. Nov 4 04:22:21.459699 systemd[1]: Started session-19.scope - Session 19 of User core. Nov 4 04:22:21.586348 containerd[1975]: time="2025-11-04T04:22:21.585668832Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 04:22:21.587959 containerd[1975]: time="2025-11-04T04:22:21.587895804Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 4 04:22:21.588258 containerd[1975]: time="2025-11-04T04:22:21.587915892Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Nov 4 04:22:21.590397 kubelet[3429]: E1104 04:22:21.588496 3429 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 4 04:22:21.590960 kubelet[3429]: E1104 04:22:21.590431 3429 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 4 04:22:21.590960 kubelet[3429]: E1104 04:22:21.590630 3429 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-n7p6p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-bjdkx_calico-system(27fda10a-3169-4bf6-a620-503cc9dcb069): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 4 04:22:21.595678 containerd[1975]: time="2025-11-04T04:22:21.595604580Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 4 04:22:21.841490 containerd[1975]: time="2025-11-04T04:22:21.841288586Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 04:22:21.844647 containerd[1975]: time="2025-11-04T04:22:21.844558802Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 4 04:22:21.845196 containerd[1975]: time="2025-11-04T04:22:21.844610750Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Nov 4 04:22:21.846204 kubelet[3429]: E1104 04:22:21.844917 3429 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 4 04:22:21.846204 kubelet[3429]: E1104 04:22:21.844980 3429 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to 
resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 4 04:22:21.846204 kubelet[3429]: E1104 04:22:21.845142 3429 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-n7p6p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-bjdkx_calico-system(27fda10a-3169-4bf6-a620-503cc9dcb069): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 4 04:22:21.847346 kubelet[3429]: E1104 04:22:21.847262 3429 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-bjdkx" podUID="27fda10a-3169-4bf6-a620-503cc9dcb069" Nov 4 04:22:22.079374 sshd[5639]: Connection closed by 147.75.109.163 port 50452 Nov 4 04:22:22.080654 sshd-session[5633]: pam_unix(sshd:session): session closed for user core Nov 4 04:22:22.096003 systemd[1]: sshd@18-172.31.28.40:22-147.75.109.163:50452.service: Deactivated successfully. 
Nov 4 04:22:22.101493 systemd[1]: session-19.scope: Deactivated successfully. Nov 4 04:22:22.106210 systemd-logind[1946]: Session 19 logged out. Waiting for processes to exit. Nov 4 04:22:22.127506 systemd[1]: Started sshd@19-172.31.28.40:22-147.75.109.163:50466.service - OpenSSH per-connection server daemon (147.75.109.163:50466). Nov 4 04:22:22.128711 systemd-logind[1946]: Removed session 19. Nov 4 04:22:22.335483 sshd[5656]: Accepted publickey for core from 147.75.109.163 port 50466 ssh2: RSA SHA256:qS1XafufLHAd70xqW4Vvg5dQ+JBmoPq0koj1S4P4qJk Nov 4 04:22:22.340771 sshd-session[5656]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 04:22:22.356268 systemd-logind[1946]: New session 20 of user core. Nov 4 04:22:22.362660 systemd[1]: Started session-20.scope - Session 20 of User core. Nov 4 04:22:22.637301 sshd[5659]: Connection closed by 147.75.109.163 port 50466 Nov 4 04:22:22.641141 sshd-session[5656]: pam_unix(sshd:session): session closed for user core Nov 4 04:22:22.650968 systemd[1]: sshd@19-172.31.28.40:22-147.75.109.163:50466.service: Deactivated successfully. Nov 4 04:22:22.654975 systemd[1]: session-20.scope: Deactivated successfully. Nov 4 04:22:22.657586 systemd-logind[1946]: Session 20 logged out. Waiting for processes to exit. Nov 4 04:22:22.663116 systemd-logind[1946]: Removed session 20. Nov 4 04:22:23.322354 containerd[1975]: time="2025-11-04T04:22:23.321594637Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 4 04:22:23.608508 containerd[1975]: time="2025-11-04T04:22:23.608239382Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 04:22:23.610556 containerd[1975]: time="2025-11-04T04:22:23.610473398Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 4 04:22:23.610888 containerd[1975]: time="2025-11-04T04:22:23.610606790Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Nov 4 04:22:23.610977 kubelet[3429]: E1104 04:22:23.610876 3429 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 4 04:22:23.610977 kubelet[3429]: E1104 04:22:23.610936 3429 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 4 04:22:23.611872 kubelet[3429]: E1104 04:22:23.611749 3429 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cr6tl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-6fc896cb84-m6mvd_calico-system(1e2e1aa1-fbd0-4783-998f-e142a3f6eab3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 4 04:22:23.613394 kubelet[3429]: E1104 04:22:23.613300 3429 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6fc896cb84-m6mvd" podUID="1e2e1aa1-fbd0-4783-998f-e142a3f6eab3" Nov 4 04:22:27.323742 containerd[1975]: time="2025-11-04T04:22:27.323672309Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 4 04:22:27.599342 containerd[1975]: time="2025-11-04T04:22:27.598164258Z" 
level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 04:22:27.601260 containerd[1975]: time="2025-11-04T04:22:27.601123422Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 4 04:22:27.601482 containerd[1975]: time="2025-11-04T04:22:27.601163718Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Nov 4 04:22:27.603352 kubelet[3429]: E1104 04:22:27.602487 3429 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 4 04:22:27.603352 kubelet[3429]: E1104 04:22:27.602558 3429 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 4 04:22:27.603352 kubelet[3429]: E1104 04:22:27.602754 3429 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qbhv6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-x7r2n_calico-system(3bfc783e-7624-4984-a658-a4dceb99c885): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 4 04:22:27.606496 kubelet[3429]: E1104 04:22:27.606436 3429 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-x7r2n" podUID="3bfc783e-7624-4984-a658-a4dceb99c885" Nov 4 04:22:27.678503 systemd[1]: Started sshd@20-172.31.28.40:22-147.75.109.163:50472.service - OpenSSH per-connection server daemon (147.75.109.163:50472). Nov 4 04:22:27.878091 sshd[5673]: Accepted publickey for core from 147.75.109.163 port 50472 ssh2: RSA SHA256:qS1XafufLHAd70xqW4Vvg5dQ+JBmoPq0koj1S4P4qJk Nov 4 04:22:27.881265 sshd-session[5673]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 04:22:27.895682 systemd-logind[1946]: New session 21 of user core. Nov 4 04:22:27.901983 systemd[1]: Started session-21.scope - Session 21 of User core. Nov 4 04:22:28.138042 sshd[5678]: Connection closed by 147.75.109.163 port 50472 Nov 4 04:22:28.138472 sshd-session[5673]: pam_unix(sshd:session): session closed for user core Nov 4 04:22:28.150167 systemd[1]: sshd@20-172.31.28.40:22-147.75.109.163:50472.service: Deactivated successfully. Nov 4 04:22:28.155013 systemd[1]: session-21.scope: Deactivated successfully. Nov 4 04:22:28.159678 systemd-logind[1946]: Session 21 logged out. Waiting for processes to exit. Nov 4 04:22:28.163486 systemd-logind[1946]: Removed session 21. 
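Annotation: the containerd entries above show ghcr.io answering "404 Not Found" for ghcr.io/flatcar/calico/goldmane:v3.30.4 (and earlier for kube-controllers:v3.30.4), so the tag simply does not resolve in that registry namespace. A minimal sketch for reproducing the check from any workstation with Go installed is below; it is not part of the logged system and assumes ghcr.io's standard anonymous token-then-manifest flow for public images.

// checktag.go - hypothetical helper (not on this host): asks ghcr.io whether a tag exists.
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
	"os"
)

func main() {
	repo := "flatcar/calico/goldmane" // repository from the failing image reference above
	tag := "v3.30.4"                  // tag containerd could not resolve

	// Anonymous pulls from ghcr.io use a bearer token scoped to the repository.
	tokenResp, err := http.Get("https://ghcr.io/token?service=ghcr.io&scope=repository:" + repo + ":pull")
	if err != nil {
		fmt.Fprintln(os.Stderr, "token request failed:", err)
		os.Exit(1)
	}
	defer tokenResp.Body.Close()
	var tok struct {
		Token string `json:"token"`
	}
	if err := json.NewDecoder(tokenResp.Body).Decode(&tok); err != nil {
		fmt.Fprintln(os.Stderr, "token decode failed:", err)
		os.Exit(1)
	}

	// HEAD the manifest: 200 means the tag exists, 404 matches the "fetch failed" entries above.
	req, err := http.NewRequest(http.MethodHead, "https://ghcr.io/v2/"+repo+"/manifests/"+tag, nil)
	if err != nil {
		fmt.Fprintln(os.Stderr, "building request failed:", err)
		os.Exit(1)
	}
	req.Header.Set("Authorization", "Bearer "+tok.Token)
	req.Header.Set("Accept", "application/vnd.oci.image.index.v1+json, application/vnd.docker.distribution.manifest.list.v2+json, application/vnd.docker.distribution.manifest.v2+json")
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		fmt.Fprintln(os.Stderr, "manifest request failed:", err)
		os.Exit(1)
	}
	resp.Body.Close()
	fmt.Printf("ghcr.io/%s:%s -> HTTP %d\n", repo, tag, resp.StatusCode)
}

If every v3.30.x tag probed this way returns 404, the usual remedy is pointing the workloads at a registry that actually publishes the tag; which knob to turn (manifest edit, operator registry setting, or a mirror) depends on how Calico was installed here, which the log does not show.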
Nov 4 04:22:29.324412 containerd[1975]: time="2025-11-04T04:22:29.322599199Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 4 04:22:29.574761 containerd[1975]: time="2025-11-04T04:22:29.574436540Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 04:22:29.576827 containerd[1975]: time="2025-11-04T04:22:29.576744824Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 4 04:22:29.576957 containerd[1975]: time="2025-11-04T04:22:29.576876092Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Nov 4 04:22:29.577267 kubelet[3429]: E1104 04:22:29.577205 3429 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 4 04:22:29.578507 kubelet[3429]: E1104 04:22:29.577280 3429 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 4 04:22:29.578507 kubelet[3429]: E1104 04:22:29.578045 3429 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2pdv9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-67f8f67444-smqxz_calico-apiserver(073104c0-4d4a-4e6b-bb61-421cfcd8940e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 4 04:22:29.579582 kubelet[3429]: E1104 04:22:29.579426 3429 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-67f8f67444-smqxz" podUID="073104c0-4d4a-4e6b-bb61-421cfcd8940e" Nov 4 04:22:32.329065 kubelet[3429]: E1104 04:22:32.328978 3429 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6597d6c5c6-2r6d9" podUID="857124b9-a647-4dd7-9ce8-99328261c03d" Nov 4 04:22:33.184401 systemd[1]: Started sshd@21-172.31.28.40:22-147.75.109.163:57322.service - OpenSSH per-connection server daemon (147.75.109.163:57322). Nov 4 04:22:33.399492 sshd[5692]: Accepted publickey for core from 147.75.109.163 port 57322 ssh2: RSA SHA256:qS1XafufLHAd70xqW4Vvg5dQ+JBmoPq0koj1S4P4qJk Nov 4 04:22:33.404138 sshd-session[5692]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 04:22:33.415666 systemd-logind[1946]: New session 22 of user core. Nov 4 04:22:33.424916 systemd[1]: Started session-22.scope - Session 22 of User core. Nov 4 04:22:33.633961 sshd[5696]: Connection closed by 147.75.109.163 port 57322 Nov 4 04:22:33.635081 sshd-session[5692]: pam_unix(sshd:session): session closed for user core Nov 4 04:22:33.644675 systemd[1]: sshd@21-172.31.28.40:22-147.75.109.163:57322.service: Deactivated successfully. 
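Annotation: the &Container{...} text in the "Unhandled Error" entries above is kubelet printing the Kubernetes v1.Container it failed to start. Purely as a reading aid, the calico-apiserver spec from that dump re-expressed with the client-go types is sketched below; the field values are copied from the log entry (defaults such as TerminationMessagePath are omitted), and nothing is taken from any manifest on this host.

// apiserver_container.go - compile-only sketch of the dumped calico-apiserver container spec.
package main

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
	"k8s.io/utils/ptr"
)

var calicoAPIServer = corev1.Container{
	Name:  "calico-apiserver",
	Image: "ghcr.io/flatcar/calico/apiserver:v3.30.4",
	Args: []string{
		"--secure-port=5443",
		"--tls-private-key-file=/calico-apiserver-certs/tls.key",
		"--tls-cert-file=/calico-apiserver-certs/tls.crt",
	},
	Env: []corev1.EnvVar{
		{Name: "DATASTORE_TYPE", Value: "kubernetes"},
		{Name: "KUBERNETES_SERVICE_HOST", Value: "10.96.0.1"},
		{Name: "KUBERNETES_SERVICE_PORT", Value: "443"},
		{Name: "LOG_LEVEL", Value: "info"},
		{Name: "MULTI_INTERFACE_MODE", Value: "none"},
	},
	VolumeMounts: []corev1.VolumeMount{
		{Name: "calico-apiserver-certs", ReadOnly: true, MountPath: "/calico-apiserver-certs"},
		{Name: "kube-api-access-2pdv9", ReadOnly: true, MountPath: "/var/run/secrets/kubernetes.io/serviceaccount"},
	},
	ReadinessProbe: &corev1.Probe{
		ProbeHandler: corev1.ProbeHandler{
			HTTPGet: &corev1.HTTPGetAction{
				Path:   "/readyz",
				Port:   intstr.FromInt(5443),
				Scheme: corev1.URISchemeHTTPS,
			},
		},
		TimeoutSeconds:   5,
		PeriodSeconds:    60,
		SuccessThreshold: 1,
		FailureThreshold: 3,
	},
	ImagePullPolicy: corev1.PullIfNotPresent,
	SecurityContext: &corev1.SecurityContext{
		Capabilities:             &corev1.Capabilities{Drop: []corev1.Capability{"ALL"}},
		Privileged:               ptr.To(false),
		RunAsUser:                ptr.To[int64](10001),
		RunAsGroup:               ptr.To[int64](10001),
		RunAsNonRoot:             ptr.To(true),
		AllowPrivilegeEscalation: ptr.To(false),
		SeccompProfile:           &corev1.SeccompProfile{Type: corev1.SeccompProfileTypeRuntimeDefault},
	},
}

func main() {} // nothing to run; the literal above only mirrors the logged dump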
Nov 4 04:22:33.650000 systemd[1]: session-22.scope: Deactivated successfully. Nov 4 04:22:33.653252 systemd-logind[1946]: Session 22 logged out. Waiting for processes to exit. Nov 4 04:22:33.658297 systemd-logind[1946]: Removed session 22. Nov 4 04:22:34.325582 kubelet[3429]: E1104 04:22:34.325509 3429 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-67f8f67444-mj5dt" podUID="a361dba4-7339-43be-b37d-2bd7902bcd31" Nov 4 04:22:34.328013 kubelet[3429]: E1104 04:22:34.327933 3429 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-bjdkx" podUID="27fda10a-3169-4bf6-a620-503cc9dcb069" Nov 4 04:22:38.323493 kubelet[3429]: E1104 04:22:38.323218 3429 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6fc896cb84-m6mvd" podUID="1e2e1aa1-fbd0-4783-998f-e142a3f6eab3" Nov 4 04:22:38.674831 systemd[1]: Started sshd@22-172.31.28.40:22-147.75.109.163:57334.service - OpenSSH per-connection server daemon (147.75.109.163:57334). Nov 4 04:22:38.918364 sshd[5710]: Accepted publickey for core from 147.75.109.163 port 57334 ssh2: RSA SHA256:qS1XafufLHAd70xqW4Vvg5dQ+JBmoPq0koj1S4P4qJk Nov 4 04:22:38.922360 sshd-session[5710]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 04:22:38.936869 systemd-logind[1946]: New session 23 of user core. Nov 4 04:22:38.947759 systemd[1]: Started session-23.scope - Session 23 of User core. Nov 4 04:22:39.194432 sshd[5736]: Connection closed by 147.75.109.163 port 57334 Nov 4 04:22:39.196096 sshd-session[5710]: pam_unix(sshd:session): session closed for user core Nov 4 04:22:39.207190 systemd[1]: sshd@22-172.31.28.40:22-147.75.109.163:57334.service: Deactivated successfully. Nov 4 04:22:39.216641 systemd[1]: session-23.scope: Deactivated successfully. Nov 4 04:22:39.224804 systemd-logind[1946]: Session 23 logged out. Waiting for processes to exit. Nov 4 04:22:39.229818 systemd-logind[1946]: Removed session 23. 
Nov 4 04:22:40.324188 kubelet[3429]: E1104 04:22:40.324040 3429 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-x7r2n" podUID="3bfc783e-7624-4984-a658-a4dceb99c885" Nov 4 04:22:44.236818 systemd[1]: Started sshd@23-172.31.28.40:22-147.75.109.163:55236.service - OpenSSH per-connection server daemon (147.75.109.163:55236). Nov 4 04:22:44.321712 kubelet[3429]: E1104 04:22:44.321646 3429 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-67f8f67444-smqxz" podUID="073104c0-4d4a-4e6b-bb61-421cfcd8940e" Nov 4 04:22:44.451024 sshd[5754]: Accepted publickey for core from 147.75.109.163 port 55236 ssh2: RSA SHA256:qS1XafufLHAd70xqW4Vvg5dQ+JBmoPq0koj1S4P4qJk Nov 4 04:22:44.454535 sshd-session[5754]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 04:22:44.464203 systemd-logind[1946]: New session 24 of user core. Nov 4 04:22:44.476653 systemd[1]: Started session-24.scope - Session 24 of User core. Nov 4 04:22:44.691371 sshd[5757]: Connection closed by 147.75.109.163 port 55236 Nov 4 04:22:44.692186 sshd-session[5754]: pam_unix(sshd:session): session closed for user core Nov 4 04:22:44.701130 systemd[1]: sshd@23-172.31.28.40:22-147.75.109.163:55236.service: Deactivated successfully. Nov 4 04:22:44.706958 systemd[1]: session-24.scope: Deactivated successfully. Nov 4 04:22:44.711643 systemd-logind[1946]: Session 24 logged out. Waiting for processes to exit. Nov 4 04:22:44.715811 systemd-logind[1946]: Removed session 24. 
Nov 4 04:22:45.321228 kubelet[3429]: E1104 04:22:45.320385 3429 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-67f8f67444-mj5dt" podUID="a361dba4-7339-43be-b37d-2bd7902bcd31" Nov 4 04:22:47.335451 kubelet[3429]: E1104 04:22:47.334760 3429 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6597d6c5c6-2r6d9" podUID="857124b9-a647-4dd7-9ce8-99328261c03d" Nov 4 04:22:48.321981 kubelet[3429]: E1104 04:22:48.321797 3429 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-bjdkx" podUID="27fda10a-3169-4bf6-a620-503cc9dcb069" Nov 4 04:22:49.730206 systemd[1]: Started sshd@24-172.31.28.40:22-147.75.109.163:55250.service - OpenSSH per-connection server daemon (147.75.109.163:55250). Nov 4 04:22:49.924983 sshd[5770]: Accepted publickey for core from 147.75.109.163 port 55250 ssh2: RSA SHA256:qS1XafufLHAd70xqW4Vvg5dQ+JBmoPq0koj1S4P4qJk Nov 4 04:22:49.927425 sshd-session[5770]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 04:22:49.936783 systemd-logind[1946]: New session 25 of user core. Nov 4 04:22:49.945225 systemd[1]: Started session-25.scope - Session 25 of User core. Nov 4 04:22:50.200570 sshd[5773]: Connection closed by 147.75.109.163 port 55250 Nov 4 04:22:50.202675 sshd-session[5770]: pam_unix(sshd:session): session closed for user core Nov 4 04:22:50.215074 systemd[1]: sshd@24-172.31.28.40:22-147.75.109.163:55250.service: Deactivated successfully. Nov 4 04:22:50.223869 systemd[1]: session-25.scope: Deactivated successfully. Nov 4 04:22:50.226794 systemd-logind[1946]: Session 25 logged out. Waiting for processes to exit. 
Nov 4 04:22:50.231206 systemd-logind[1946]: Removed session 25. Nov 4 04:22:50.323400 kubelet[3429]: E1104 04:22:50.323114 3429 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6fc896cb84-m6mvd" podUID="1e2e1aa1-fbd0-4783-998f-e142a3f6eab3" Nov 4 04:22:55.241811 systemd[1]: Started sshd@25-172.31.28.40:22-147.75.109.163:34054.service - OpenSSH per-connection server daemon (147.75.109.163:34054). Nov 4 04:22:55.323039 kubelet[3429]: E1104 04:22:55.322960 3429 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-x7r2n" podUID="3bfc783e-7624-4984-a658-a4dceb99c885" Nov 4 04:22:55.444002 sshd[5786]: Accepted publickey for core from 147.75.109.163 port 34054 ssh2: RSA SHA256:qS1XafufLHAd70xqW4Vvg5dQ+JBmoPq0koj1S4P4qJk Nov 4 04:22:55.446728 sshd-session[5786]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 04:22:55.457969 systemd-logind[1946]: New session 26 of user core. Nov 4 04:22:55.463612 systemd[1]: Started session-26.scope - Session 26 of User core. Nov 4 04:22:55.678410 sshd[5789]: Connection closed by 147.75.109.163 port 34054 Nov 4 04:22:55.679560 sshd-session[5786]: pam_unix(sshd:session): session closed for user core Nov 4 04:22:55.690697 systemd[1]: sshd@25-172.31.28.40:22-147.75.109.163:34054.service: Deactivated successfully. Nov 4 04:22:55.696397 systemd[1]: session-26.scope: Deactivated successfully. Nov 4 04:22:55.701440 systemd-logind[1946]: Session 26 logged out. Waiting for processes to exit. Nov 4 04:22:55.704950 systemd-logind[1946]: Removed session 26. Nov 4 04:22:56.324235 kubelet[3429]: E1104 04:22:56.324152 3429 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-67f8f67444-mj5dt" podUID="a361dba4-7339-43be-b37d-2bd7902bcd31" Nov 4 04:22:57.320355 kubelet[3429]: E1104 04:22:57.320218 3429 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-67f8f67444-smqxz" podUID="073104c0-4d4a-4e6b-bb61-421cfcd8940e"
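Annotation: by this point kubelet is cycling the same set of pods through ErrImagePull and ImagePullBackOff (calico-kube-controllers, goldmane, whisker, csi-node-driver, both calico-apiserver replicas). A small client-go sketch that lists which containers are currently stuck this way cluster-wide is given below; it is a hypothetical helper run from an admin workstation, not anything present on this host, and it assumes a kubeconfig at the default $HOME/.kube/config path.

// stuckpods.go - hypothetical helper: list containers waiting in ImagePullBackOff/ErrImagePull.
package main

import (
	"context"
	"fmt"
	"os"
	"path/filepath"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	kubeconfig := filepath.Join(os.Getenv("HOME"), ".kube", "config") // assumed admin kubeconfig
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// List pods in all namespaces and report any container whose waiting reason is a pull failure.
	pods, err := client.CoreV1().Pods("").List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, pod := range pods.Items {
		for _, cs := range pod.Status.ContainerStatuses {
			w := cs.State.Waiting
			if w != nil && (w.Reason == "ImagePullBackOff" || w.Reason == "ErrImagePull") {
				fmt.Printf("%s/%s container=%s image=%s reason=%s\n",
					pod.Namespace, pod.Name, cs.Name, cs.Image, w.Reason)
			}
		}
	}
}

Against the state logged above, the output would be expected to name the calico-system and calico-apiserver pods with their ghcr.io/flatcar/calico/*:v3.30.4 images, matching the podUIDs kubelet keeps reporting.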