Nov 23 22:56:58.125942 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083] Nov 23 22:56:58.125986 kernel: Linux version 6.12.58-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT Sun Nov 23 20:49:09 -00 2025 Nov 23 22:56:58.128048 kernel: KASLR disabled due to lack of seed Nov 23 22:56:58.128090 kernel: efi: EFI v2.7 by EDK II Nov 23 22:56:58.128107 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7a731a98 MEMRESERVE=0x78551598 Nov 23 22:56:58.128122 kernel: secureboot: Secure boot disabled Nov 23 22:56:58.128139 kernel: ACPI: Early table checksum verification disabled Nov 23 22:56:58.128154 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON) Nov 23 22:56:58.128169 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013) Nov 23 22:56:58.128184 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001) Nov 23 22:56:58.128199 kernel: ACPI: DSDT 0x0000000078640000 0013D2 (v02 AMAZON AMZNDSDT 00000001 AMZN 00000001) Nov 23 22:56:58.128223 kernel: ACPI: FACS 0x0000000078630000 000040 Nov 23 22:56:58.128238 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001) Nov 23 22:56:58.128253 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001) Nov 23 22:56:58.128270 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001) Nov 23 22:56:58.128286 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001) Nov 23 22:56:58.128306 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001) Nov 23 22:56:58.128322 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001) Nov 23 22:56:58.128337 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001) Nov 23 22:56:58.128353 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200 Nov 23 22:56:58.128369 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200') Nov 23 22:56:58.128384 kernel: printk: legacy bootconsole [uart0] enabled Nov 23 22:56:58.128401 kernel: ACPI: Use ACPI SPCR as default console: No Nov 23 22:56:58.128417 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff] Nov 23 22:56:58.128433 kernel: NODE_DATA(0) allocated [mem 0x4b584da00-0x4b5854fff] Nov 23 22:56:58.128449 kernel: Zone ranges: Nov 23 22:56:58.128466 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff] Nov 23 22:56:58.128486 kernel: DMA32 empty Nov 23 22:56:58.128505 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff] Nov 23 22:56:58.128521 kernel: Device empty Nov 23 22:56:58.128537 kernel: Movable zone start for each node Nov 23 22:56:58.128552 kernel: Early memory node ranges Nov 23 22:56:58.128568 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff] Nov 23 22:56:58.128584 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff] Nov 23 22:56:58.128600 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff] Nov 23 22:56:58.128615 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff] Nov 23 22:56:58.128631 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff] Nov 23 22:56:58.128647 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff] Nov 23 22:56:58.128663 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff] Nov 23 22:56:58.128683 kernel: node 0: [mem 
0x0000000400000000-0x00000004b5ffffff] Nov 23 22:56:58.128705 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff] Nov 23 22:56:58.128722 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges Nov 23 22:56:58.128739 kernel: cma: Reserved 16 MiB at 0x000000007f000000 on node -1 Nov 23 22:56:58.128755 kernel: psci: probing for conduit method from ACPI. Nov 23 22:56:58.128775 kernel: psci: PSCIv1.0 detected in firmware. Nov 23 22:56:58.128791 kernel: psci: Using standard PSCI v0.2 function IDs Nov 23 22:56:58.128808 kernel: psci: Trusted OS migration not required Nov 23 22:56:58.128824 kernel: psci: SMC Calling Convention v1.1 Nov 23 22:56:58.128841 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000001) Nov 23 22:56:58.128857 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168 Nov 23 22:56:58.128873 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096 Nov 23 22:56:58.128890 kernel: pcpu-alloc: [0] 0 [0] 1 Nov 23 22:56:58.128907 kernel: Detected PIPT I-cache on CPU0 Nov 23 22:56:58.128923 kernel: CPU features: detected: GIC system register CPU interface Nov 23 22:56:58.128940 kernel: CPU features: detected: Spectre-v2 Nov 23 22:56:58.128959 kernel: CPU features: detected: Spectre-v3a Nov 23 22:56:58.128976 kernel: CPU features: detected: Spectre-BHB Nov 23 22:56:58.128992 kernel: CPU features: detected: ARM erratum 1742098 Nov 23 22:56:58.129028 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923 Nov 23 22:56:58.129052 kernel: alternatives: applying boot alternatives Nov 23 22:56:58.129071 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=c01798725f53da1d62d166036caa3c72754cb158fe469d9d9e3df0d6cadc7a34 Nov 23 22:56:58.129088 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Nov 23 22:56:58.129105 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Nov 23 22:56:58.129122 kernel: Fallback order for Node 0: 0 Nov 23 22:56:58.129138 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1007616 Nov 23 22:56:58.129155 kernel: Policy zone: Normal Nov 23 22:56:58.129178 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Nov 23 22:56:58.129195 kernel: software IO TLB: area num 2. Nov 23 22:56:58.129213 kernel: software IO TLB: mapped [mem 0x0000000074551000-0x0000000078551000] (64MB) Nov 23 22:56:58.129229 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Nov 23 22:56:58.129246 kernel: rcu: Preemptible hierarchical RCU implementation. Nov 23 22:56:58.129263 kernel: rcu: RCU event tracing is enabled. Nov 23 22:56:58.129280 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Nov 23 22:56:58.129296 kernel: Trampoline variant of Tasks RCU enabled. Nov 23 22:56:58.129313 kernel: Tracing variant of Tasks RCU enabled. Nov 23 22:56:58.129329 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Nov 23 22:56:58.129346 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Nov 23 22:56:58.129366 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. 
Nov 23 22:56:58.129383 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Nov 23 22:56:58.129399 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Nov 23 22:56:58.129415 kernel: GICv3: 96 SPIs implemented Nov 23 22:56:58.129431 kernel: GICv3: 0 Extended SPIs implemented Nov 23 22:56:58.129448 kernel: Root IRQ handler: gic_handle_irq Nov 23 22:56:58.129464 kernel: GICv3: GICv3 features: 16 PPIs Nov 23 22:56:58.129480 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0 Nov 23 22:56:58.129496 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000 Nov 23 22:56:58.129513 kernel: ITS [mem 0x10080000-0x1009ffff] Nov 23 22:56:58.129529 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000f0000 (indirect, esz 8, psz 64K, shr 1) Nov 23 22:56:58.129547 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @400100000 (flat, esz 8, psz 64K, shr 1) Nov 23 22:56:58.129568 kernel: GICv3: using LPI property table @0x0000000400110000 Nov 23 22:56:58.129584 kernel: ITS: Using hypervisor restricted LPI range [128] Nov 23 22:56:58.129601 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000400120000 Nov 23 22:56:58.129617 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Nov 23 22:56:58.129633 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt). Nov 23 22:56:58.129650 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns Nov 23 22:56:58.129667 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns Nov 23 22:56:58.129683 kernel: Console: colour dummy device 80x25 Nov 23 22:56:58.129700 kernel: printk: legacy console [tty1] enabled Nov 23 22:56:58.129717 kernel: ACPI: Core revision 20240827 Nov 23 22:56:58.129734 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333) Nov 23 22:56:58.129755 kernel: pid_max: default: 32768 minimum: 301 Nov 23 22:56:58.129771 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Nov 23 22:56:58.129788 kernel: landlock: Up and running. Nov 23 22:56:58.129804 kernel: SELinux: Initializing. Nov 23 22:56:58.129821 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Nov 23 22:56:58.129838 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Nov 23 22:56:58.129854 kernel: rcu: Hierarchical SRCU implementation. Nov 23 22:56:58.129871 kernel: rcu: Max phase no-delay instances is 400. Nov 23 22:56:58.129892 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Nov 23 22:56:58.129909 kernel: Remapping and enabling EFI services. Nov 23 22:56:58.129926 kernel: smp: Bringing up secondary CPUs ... Nov 23 22:56:58.129942 kernel: Detected PIPT I-cache on CPU1 Nov 23 22:56:58.129959 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000 Nov 23 22:56:58.129977 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000400130000 Nov 23 22:56:58.130007 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083] Nov 23 22:56:58.130053 kernel: smp: Brought up 1 node, 2 CPUs Nov 23 22:56:58.130071 kernel: SMP: Total of 2 processors activated. 
Nov 23 22:56:58.130095 kernel: CPU: All CPU(s) started at EL1 Nov 23 22:56:58.130144 kernel: CPU features: detected: 32-bit EL0 Support Nov 23 22:56:58.130184 kernel: CPU features: detected: 32-bit EL1 Support Nov 23 22:56:58.130210 kernel: CPU features: detected: CRC32 instructions Nov 23 22:56:58.130228 kernel: alternatives: applying system-wide alternatives Nov 23 22:56:58.130248 kernel: Memory: 3796332K/4030464K available (11200K kernel code, 2456K rwdata, 9084K rodata, 39552K init, 1038K bss, 212788K reserved, 16384K cma-reserved) Nov 23 22:56:58.130266 kernel: devtmpfs: initialized Nov 23 22:56:58.130284 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Nov 23 22:56:58.130307 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Nov 23 22:56:58.130325 kernel: 16880 pages in range for non-PLT usage Nov 23 22:56:58.130342 kernel: 508400 pages in range for PLT usage Nov 23 22:56:58.130360 kernel: pinctrl core: initialized pinctrl subsystem Nov 23 22:56:58.130377 kernel: SMBIOS 3.0.0 present. Nov 23 22:56:58.130394 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018 Nov 23 22:56:58.130412 kernel: DMI: Memory slots populated: 0/0 Nov 23 22:56:58.130430 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Nov 23 22:56:58.130447 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Nov 23 22:56:58.130469 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Nov 23 22:56:58.130486 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Nov 23 22:56:58.130504 kernel: audit: initializing netlink subsys (disabled) Nov 23 22:56:58.130522 kernel: audit: type=2000 audit(0.225:1): state=initialized audit_enabled=0 res=1 Nov 23 22:56:58.130539 kernel: thermal_sys: Registered thermal governor 'step_wise' Nov 23 22:56:58.130557 kernel: cpuidle: using governor menu Nov 23 22:56:58.130574 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. 
Nov 23 22:56:58.130592 kernel: ASID allocator initialised with 65536 entries Nov 23 22:56:58.130610 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Nov 23 22:56:58.130631 kernel: Serial: AMBA PL011 UART driver Nov 23 22:56:58.130649 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Nov 23 22:56:58.130666 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Nov 23 22:56:58.130683 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Nov 23 22:56:58.130701 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Nov 23 22:56:58.130718 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Nov 23 22:56:58.130736 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Nov 23 22:56:58.130753 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Nov 23 22:56:58.130771 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Nov 23 22:56:58.130792 kernel: ACPI: Added _OSI(Module Device) Nov 23 22:56:58.130809 kernel: ACPI: Added _OSI(Processor Device) Nov 23 22:56:58.130827 kernel: ACPI: Added _OSI(Processor Aggregator Device) Nov 23 22:56:58.130844 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Nov 23 22:56:58.130862 kernel: ACPI: Interpreter enabled Nov 23 22:56:58.130879 kernel: ACPI: Using GIC for interrupt routing Nov 23 22:56:58.130896 kernel: ACPI: MCFG table detected, 1 entries Nov 23 22:56:58.130914 kernel: ACPI: CPU0 has been hot-added Nov 23 22:56:58.130931 kernel: ACPI: CPU1 has been hot-added Nov 23 22:56:58.130952 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00]) Nov 23 22:56:58.131285 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Nov 23 22:56:58.131515 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Nov 23 22:56:58.133326 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Nov 23 22:56:58.133525 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x200fffff] reserved by PNP0C02:00 Nov 23 22:56:58.133705 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x200fffff] for [bus 00] Nov 23 22:56:58.133730 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window] Nov 23 22:56:58.133756 kernel: acpiphp: Slot [1] registered Nov 23 22:56:58.133775 kernel: acpiphp: Slot [2] registered Nov 23 22:56:58.133792 kernel: acpiphp: Slot [3] registered Nov 23 22:56:58.133809 kernel: acpiphp: Slot [4] registered Nov 23 22:56:58.133827 kernel: acpiphp: Slot [5] registered Nov 23 22:56:58.133844 kernel: acpiphp: Slot [6] registered Nov 23 22:56:58.133861 kernel: acpiphp: Slot [7] registered Nov 23 22:56:58.133879 kernel: acpiphp: Slot [8] registered Nov 23 22:56:58.133896 kernel: acpiphp: Slot [9] registered Nov 23 22:56:58.133913 kernel: acpiphp: Slot [10] registered Nov 23 22:56:58.133934 kernel: acpiphp: Slot [11] registered Nov 23 22:56:58.133952 kernel: acpiphp: Slot [12] registered Nov 23 22:56:58.133969 kernel: acpiphp: Slot [13] registered Nov 23 22:56:58.133986 kernel: acpiphp: Slot [14] registered Nov 23 22:56:58.134004 kernel: acpiphp: Slot [15] registered Nov 23 22:56:58.134059 kernel: acpiphp: Slot [16] registered Nov 23 22:56:58.134079 kernel: acpiphp: Slot [17] registered Nov 23 22:56:58.134097 kernel: acpiphp: Slot [18] registered Nov 23 22:56:58.134114 kernel: acpiphp: Slot [19] registered Nov 23 22:56:58.134138 kernel: acpiphp: Slot [20] registered Nov 23 22:56:58.134155 kernel: acpiphp: Slot [21] registered Nov 23 22:56:58.134173 
kernel: acpiphp: Slot [22] registered Nov 23 22:56:58.134191 kernel: acpiphp: Slot [23] registered Nov 23 22:56:58.134208 kernel: acpiphp: Slot [24] registered Nov 23 22:56:58.134227 kernel: acpiphp: Slot [25] registered Nov 23 22:56:58.134244 kernel: acpiphp: Slot [26] registered Nov 23 22:56:58.134261 kernel: acpiphp: Slot [27] registered Nov 23 22:56:58.134279 kernel: acpiphp: Slot [28] registered Nov 23 22:56:58.134296 kernel: acpiphp: Slot [29] registered Nov 23 22:56:58.134318 kernel: acpiphp: Slot [30] registered Nov 23 22:56:58.134335 kernel: acpiphp: Slot [31] registered Nov 23 22:56:58.134353 kernel: PCI host bridge to bus 0000:00 Nov 23 22:56:58.134555 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window] Nov 23 22:56:58.134728 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Nov 23 22:56:58.134897 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window] Nov 23 22:56:58.136206 kernel: pci_bus 0000:00: root bus resource [bus 00] Nov 23 22:56:58.137167 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000 conventional PCI endpoint Nov 23 22:56:58.137435 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003 conventional PCI endpoint Nov 23 22:56:58.137627 kernel: pci 0000:00:01.0: BAR 0 [mem 0x80118000-0x80118fff] Nov 23 22:56:58.137834 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 PCIe Root Complex Integrated Endpoint Nov 23 22:56:58.138049 kernel: pci 0000:00:04.0: BAR 0 [mem 0x80114000-0x80117fff] Nov 23 22:56:58.138353 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold Nov 23 22:56:58.141641 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 PCIe Root Complex Integrated Endpoint Nov 23 22:56:58.141852 kernel: pci 0000:00:05.0: BAR 0 [mem 0x80110000-0x80113fff] Nov 23 22:56:58.142080 kernel: pci 0000:00:05.0: BAR 2 [mem 0x80000000-0x800fffff pref] Nov 23 22:56:58.142277 kernel: pci 0000:00:05.0: BAR 4 [mem 0x80100000-0x8010ffff] Nov 23 22:56:58.142465 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold Nov 23 22:56:58.142642 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window] Nov 23 22:56:58.142809 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Nov 23 22:56:58.142985 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window] Nov 23 22:56:58.144402 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Nov 23 22:56:58.144441 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Nov 23 22:56:58.144460 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Nov 23 22:56:58.144479 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Nov 23 22:56:58.144498 kernel: iommu: Default domain type: Translated Nov 23 22:56:58.144516 kernel: iommu: DMA domain TLB invalidation policy: strict mode Nov 23 22:56:58.144535 kernel: efivars: Registered efivars operations Nov 23 22:56:58.144552 kernel: vgaarb: loaded Nov 23 22:56:58.144579 kernel: clocksource: Switched to clocksource arch_sys_counter Nov 23 22:56:58.144597 kernel: VFS: Disk quotas dquot_6.6.0 Nov 23 22:56:58.144615 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Nov 23 22:56:58.144632 kernel: pnp: PnP ACPI init Nov 23 22:56:58.144894 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved Nov 23 22:56:58.144922 kernel: pnp: PnP ACPI: found 1 devices Nov 23 22:56:58.144941 kernel: NET: Registered PF_INET protocol family Nov 23 22:56:58.144960 kernel: IP idents hash table 
entries: 65536 (order: 7, 524288 bytes, linear) Nov 23 22:56:58.144984 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Nov 23 22:56:58.145004 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Nov 23 22:56:58.145071 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Nov 23 22:56:58.145091 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Nov 23 22:56:58.145108 kernel: TCP: Hash tables configured (established 32768 bind 32768) Nov 23 22:56:58.145126 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Nov 23 22:56:58.145144 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Nov 23 22:56:58.145162 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Nov 23 22:56:58.145179 kernel: PCI: CLS 0 bytes, default 64 Nov 23 22:56:58.145203 kernel: kvm [1]: HYP mode not available Nov 23 22:56:58.145221 kernel: Initialise system trusted keyrings Nov 23 22:56:58.145238 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Nov 23 22:56:58.145255 kernel: Key type asymmetric registered Nov 23 22:56:58.145273 kernel: Asymmetric key parser 'x509' registered Nov 23 22:56:58.145290 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Nov 23 22:56:58.145308 kernel: io scheduler mq-deadline registered Nov 23 22:56:58.145325 kernel: io scheduler kyber registered Nov 23 22:56:58.145343 kernel: io scheduler bfq registered Nov 23 22:56:58.145560 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered Nov 23 22:56:58.145587 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Nov 23 22:56:58.145604 kernel: ACPI: button: Power Button [PWRB] Nov 23 22:56:58.145622 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1 Nov 23 22:56:58.145640 kernel: ACPI: button: Sleep Button [SLPB] Nov 23 22:56:58.145657 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Nov 23 22:56:58.145676 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37 Nov 23 22:56:58.145867 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012) Nov 23 22:56:58.145896 kernel: printk: legacy console [ttyS0] disabled Nov 23 22:56:58.145914 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A Nov 23 22:56:58.145932 kernel: printk: legacy console [ttyS0] enabled Nov 23 22:56:58.145949 kernel: printk: legacy bootconsole [uart0] disabled Nov 23 22:56:58.145966 kernel: thunder_xcv, ver 1.0 Nov 23 22:56:58.145984 kernel: thunder_bgx, ver 1.0 Nov 23 22:56:58.146001 kernel: nicpf, ver 1.0 Nov 23 22:56:58.146040 kernel: nicvf, ver 1.0 Nov 23 22:56:58.146249 kernel: rtc-efi rtc-efi.0: registered as rtc0 Nov 23 22:56:58.146429 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-11-23T22:56:57 UTC (1763938617) Nov 23 22:56:58.146454 kernel: hid: raw HID events driver (C) Jiri Kosina Nov 23 22:56:58.146472 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 (0,80000003) counters available Nov 23 22:56:58.146490 kernel: NET: Registered PF_INET6 protocol family Nov 23 22:56:58.146507 kernel: watchdog: NMI not fully supported Nov 23 22:56:58.146525 kernel: watchdog: Hard watchdog permanently disabled Nov 23 22:56:58.146542 kernel: Segment Routing with IPv6 Nov 23 22:56:58.146560 kernel: In-situ OAM (IOAM) with IPv6 Nov 23 22:56:58.146577 kernel: NET: Registered PF_PACKET protocol family Nov 23 22:56:58.146599 kernel: Key type 
dns_resolver registered Nov 23 22:56:58.146616 kernel: registered taskstats version 1 Nov 23 22:56:58.146634 kernel: Loading compiled-in X.509 certificates Nov 23 22:56:58.146651 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.58-flatcar: 98b0841f2908e51633cd38699ad12796cadb7bd1' Nov 23 22:56:58.146668 kernel: Demotion targets for Node 0: null Nov 23 22:56:58.146686 kernel: Key type .fscrypt registered Nov 23 22:56:58.146703 kernel: Key type fscrypt-provisioning registered Nov 23 22:56:58.146720 kernel: ima: No TPM chip found, activating TPM-bypass! Nov 23 22:56:58.146738 kernel: ima: Allocated hash algorithm: sha1 Nov 23 22:56:58.146760 kernel: ima: No architecture policies found Nov 23 22:56:58.146777 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Nov 23 22:56:58.146795 kernel: clk: Disabling unused clocks Nov 23 22:56:58.146812 kernel: PM: genpd: Disabling unused power domains Nov 23 22:56:58.146829 kernel: Warning: unable to open an initial console. Nov 23 22:56:58.146848 kernel: Freeing unused kernel memory: 39552K Nov 23 22:56:58.146865 kernel: Run /init as init process Nov 23 22:56:58.146882 kernel: with arguments: Nov 23 22:56:58.146900 kernel: /init Nov 23 22:56:58.146920 kernel: with environment: Nov 23 22:56:58.146938 kernel: HOME=/ Nov 23 22:56:58.146955 kernel: TERM=linux Nov 23 22:56:58.146974 systemd[1]: Successfully made /usr/ read-only. Nov 23 22:56:58.146997 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Nov 23 22:56:58.148094 systemd[1]: Detected virtualization amazon. Nov 23 22:56:58.148124 systemd[1]: Detected architecture arm64. Nov 23 22:56:58.148153 systemd[1]: Running in initrd. Nov 23 22:56:58.148173 systemd[1]: No hostname configured, using default hostname. Nov 23 22:56:58.148193 systemd[1]: Hostname set to . Nov 23 22:56:58.148214 systemd[1]: Initializing machine ID from VM UUID. Nov 23 22:56:58.148236 systemd[1]: Queued start job for default target initrd.target. Nov 23 22:56:58.148256 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 23 22:56:58.148275 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 23 22:56:58.148296 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Nov 23 22:56:58.148319 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Nov 23 22:56:58.148339 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Nov 23 22:56:58.148359 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Nov 23 22:56:58.148382 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Nov 23 22:56:58.148402 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Nov 23 22:56:58.148422 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 23 22:56:58.148441 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. 
Nov 23 22:56:58.148465 systemd[1]: Reached target paths.target - Path Units. Nov 23 22:56:58.148484 systemd[1]: Reached target slices.target - Slice Units. Nov 23 22:56:58.148503 systemd[1]: Reached target swap.target - Swaps. Nov 23 22:56:58.148522 systemd[1]: Reached target timers.target - Timer Units. Nov 23 22:56:58.148542 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Nov 23 22:56:58.148561 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 23 22:56:58.148581 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Nov 23 22:56:58.148600 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Nov 23 22:56:58.148619 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Nov 23 22:56:58.148642 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 23 22:56:58.148662 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Nov 23 22:56:58.148681 systemd[1]: Reached target sockets.target - Socket Units. Nov 23 22:56:58.148700 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Nov 23 22:56:58.148719 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Nov 23 22:56:58.148739 systemd[1]: Finished network-cleanup.service - Network Cleanup. Nov 23 22:56:58.148758 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Nov 23 22:56:58.148778 systemd[1]: Starting systemd-fsck-usr.service... Nov 23 22:56:58.148800 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 23 22:56:58.148821 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Nov 23 22:56:58.148841 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 23 22:56:58.148860 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Nov 23 22:56:58.148880 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Nov 23 22:56:58.148950 systemd-journald[256]: Collecting audit messages is disabled. Nov 23 22:56:58.148994 systemd[1]: Finished systemd-fsck-usr.service. Nov 23 22:56:58.150104 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Nov 23 22:56:58.150140 systemd-journald[256]: Journal started Nov 23 22:56:58.150188 systemd-journald[256]: Runtime Journal (/run/log/journal/ec231d93c8dfff87584d9c8718e46076) is 8M, max 75.3M, 67.3M free. Nov 23 22:56:58.128465 systemd-modules-load[258]: Inserted module 'overlay' Nov 23 22:56:58.163083 systemd[1]: Started systemd-journald.service - Journal Service. Nov 23 22:56:58.169904 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 23 22:56:58.183261 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Nov 23 22:56:58.185086 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 23 22:56:58.195347 kernel: Bridge firewalling registered Nov 23 22:56:58.188317 systemd-modules-load[258]: Inserted module 'br_netfilter' Nov 23 22:56:58.200171 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 23 22:56:58.215479 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. 
Nov 23 22:56:58.226772 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Nov 23 22:56:58.236048 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 23 22:56:58.249576 systemd-tmpfiles[275]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Nov 23 22:56:58.257273 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 23 22:56:58.272421 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 23 22:56:58.296960 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 23 22:56:58.308278 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Nov 23 22:56:58.315661 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 23 22:56:58.325966 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Nov 23 22:56:58.337623 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 23 22:56:58.381562 dracut-cmdline[299]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=c01798725f53da1d62d166036caa3c72754cb158fe469d9d9e3df0d6cadc7a34 Nov 23 22:56:58.441539 systemd-resolved[298]: Positive Trust Anchors: Nov 23 22:56:58.442096 systemd-resolved[298]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 23 22:56:58.442161 systemd-resolved[298]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 23 22:56:58.579049 kernel: SCSI subsystem initialized Nov 23 22:56:58.585052 kernel: Loading iSCSI transport class v2.0-870. Nov 23 22:56:58.598051 kernel: iscsi: registered transport (tcp) Nov 23 22:56:58.620656 kernel: iscsi: registered transport (qla4xxx) Nov 23 22:56:58.620728 kernel: QLogic iSCSI HBA Driver Nov 23 22:56:58.656220 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Nov 23 22:56:58.699969 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Nov 23 22:56:58.713883 systemd[1]: Reached target network-pre.target - Preparation for Network. Nov 23 22:56:58.730131 kernel: random: crng init done Nov 23 22:56:58.730443 systemd-resolved[298]: Defaulting to hostname 'linux'. Nov 23 22:56:58.734043 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 23 22:56:58.742371 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Nov 23 22:56:58.819074 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. 
Nov 23 22:56:58.820883 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Nov 23 22:56:58.907070 kernel: raid6: neonx8 gen() 6440 MB/s Nov 23 22:56:58.923069 kernel: raid6: neonx4 gen() 6492 MB/s Nov 23 22:56:58.940064 kernel: raid6: neonx2 gen() 5396 MB/s Nov 23 22:56:58.957061 kernel: raid6: neonx1 gen() 3915 MB/s Nov 23 22:56:58.974057 kernel: raid6: int64x8 gen() 3631 MB/s Nov 23 22:56:58.991065 kernel: raid6: int64x4 gen() 3673 MB/s Nov 23 22:56:59.008058 kernel: raid6: int64x2 gen() 3575 MB/s Nov 23 22:56:59.026120 kernel: raid6: int64x1 gen() 2742 MB/s Nov 23 22:56:59.026185 kernel: raid6: using algorithm neonx4 gen() 6492 MB/s Nov 23 22:56:59.045117 kernel: raid6: .... xor() 4869 MB/s, rmw enabled Nov 23 22:56:59.045184 kernel: raid6: using neon recovery algorithm Nov 23 22:56:59.053063 kernel: xor: measuring software checksum speed Nov 23 22:56:59.053141 kernel: 8regs : 11612 MB/sec Nov 23 22:56:59.056555 kernel: 32regs : 11777 MB/sec Nov 23 22:56:59.056593 kernel: arm64_neon : 9205 MB/sec Nov 23 22:56:59.056618 kernel: xor: using function: 32regs (11777 MB/sec) Nov 23 22:56:59.149062 kernel: Btrfs loaded, zoned=no, fsverity=no Nov 23 22:56:59.162088 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Nov 23 22:56:59.173244 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 23 22:56:59.227211 systemd-udevd[508]: Using default interface naming scheme 'v255'. Nov 23 22:56:59.237478 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 23 22:56:59.254657 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Nov 23 22:56:59.284979 dracut-pre-trigger[519]: rd.md=0: removing MD RAID activation Nov 23 22:56:59.331244 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Nov 23 22:56:59.350145 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Nov 23 22:56:59.477691 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 23 22:56:59.495163 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Nov 23 22:56:59.656783 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Nov 23 22:56:59.656848 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Nov 23 22:56:59.656875 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012) Nov 23 22:56:59.659086 kernel: nvme nvme0: pci function 0000:00:04.0 Nov 23 22:56:59.671575 kernel: ena 0000:00:05.0: ENA device version: 0.10 Nov 23 22:56:59.671919 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1 Nov 23 22:56:59.672205 kernel: nvme nvme0: 2/0/0 default/read/poll queues Nov 23 22:56:59.677597 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 23 22:56:59.677872 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 23 22:56:59.687131 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Nov 23 22:56:59.692684 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80110000, mac addr 06:ba:aa:2a:54:db Nov 23 22:56:59.697215 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 23 22:56:59.701804 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Nov 23 22:56:59.715693 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. 
Nov 23 22:56:59.715735 kernel: GPT:9289727 != 33554431 Nov 23 22:56:59.715761 kernel: GPT:Alternate GPT header not at the end of the disk. Nov 23 22:56:59.715785 kernel: GPT:9289727 != 33554431 Nov 23 22:56:59.715808 kernel: GPT: Use GNU Parted to correct GPT errors. Nov 23 22:56:59.715844 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Nov 23 22:56:59.722527 (udev-worker)[553]: Network interface NamePolicy= disabled on kernel command line. Nov 23 22:56:59.753571 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 23 22:56:59.772058 kernel: nvme nvme0: using unchecked data buffer Nov 23 22:56:59.900392 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM. Nov 23 22:56:59.970229 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT. Nov 23 22:56:59.976691 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Nov 23 22:57:00.018243 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Nov 23 22:57:00.044509 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A. Nov 23 22:57:00.047580 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A. Nov 23 22:57:00.058711 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Nov 23 22:57:00.061704 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 23 22:57:00.070734 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 23 22:57:00.076325 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Nov 23 22:57:00.081317 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Nov 23 22:57:00.116004 disk-uuid[690]: Primary Header is updated. Nov 23 22:57:00.116004 disk-uuid[690]: Secondary Entries is updated. Nov 23 22:57:00.116004 disk-uuid[690]: Secondary Header is updated. Nov 23 22:57:00.128079 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Nov 23 22:57:00.153083 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Nov 23 22:57:00.167064 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Nov 23 22:57:01.178062 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Nov 23 22:57:01.178634 disk-uuid[692]: The operation has completed successfully. Nov 23 22:57:01.374374 systemd[1]: disk-uuid.service: Deactivated successfully. Nov 23 22:57:01.375107 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Nov 23 22:57:01.468007 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Nov 23 22:57:01.514364 sh[957]: Success Nov 23 22:57:01.542185 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Nov 23 22:57:01.542265 kernel: device-mapper: uevent: version 1.0.3 Nov 23 22:57:01.544410 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Nov 23 22:57:01.559063 kernel: device-mapper: verity: sha256 using shash "sha256-ce" Nov 23 22:57:01.650876 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Nov 23 22:57:01.652719 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Nov 23 22:57:01.672925 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Nov 23 22:57:01.701218 kernel: BTRFS: device fsid 9fed50bd-c943-4402-9e9a-f39625143eb9 devid 1 transid 38 /dev/mapper/usr (254:0) scanned by mount (980) Nov 23 22:57:01.705320 kernel: BTRFS info (device dm-0): first mount of filesystem 9fed50bd-c943-4402-9e9a-f39625143eb9 Nov 23 22:57:01.705379 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Nov 23 22:57:01.782983 kernel: BTRFS info (device dm-0): enabling ssd optimizations Nov 23 22:57:01.783093 kernel: BTRFS info (device dm-0): disabling log replay at mount time Nov 23 22:57:01.783122 kernel: BTRFS info (device dm-0): enabling free space tree Nov 23 22:57:01.801917 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Nov 23 22:57:01.802799 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Nov 23 22:57:01.813437 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Nov 23 22:57:01.819262 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Nov 23 22:57:01.824054 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Nov 23 22:57:01.880091 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1015) Nov 23 22:57:01.884487 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem b13f7cbd-5564-4927-b75d-d55dbc1bbfa7 Nov 23 22:57:01.884563 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Nov 23 22:57:01.903926 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Nov 23 22:57:01.904061 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Nov 23 22:57:01.913084 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem b13f7cbd-5564-4927-b75d-d55dbc1bbfa7 Nov 23 22:57:01.915078 systemd[1]: Finished ignition-setup.service - Ignition (setup). Nov 23 22:57:01.923257 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Nov 23 22:57:02.043904 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 23 22:57:02.056486 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 23 22:57:02.136137 systemd-networkd[1150]: lo: Link UP Nov 23 22:57:02.136152 systemd-networkd[1150]: lo: Gained carrier Nov 23 22:57:02.143356 systemd-networkd[1150]: Enumeration completed Nov 23 22:57:02.144111 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 23 22:57:02.152355 systemd[1]: Reached target network.target - Network. Nov 23 22:57:02.152645 systemd-networkd[1150]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 23 22:57:02.152653 systemd-networkd[1150]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 23 22:57:02.170397 systemd-networkd[1150]: eth0: Link UP Nov 23 22:57:02.170413 systemd-networkd[1150]: eth0: Gained carrier Nov 23 22:57:02.170438 systemd-networkd[1150]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Nov 23 22:57:02.192179 systemd-networkd[1150]: eth0: DHCPv4 address 172.31.24.18/20, gateway 172.31.16.1 acquired from 172.31.16.1 Nov 23 22:57:02.495192 ignition[1076]: Ignition 2.22.0 Nov 23 22:57:02.495220 ignition[1076]: Stage: fetch-offline Nov 23 22:57:02.496745 ignition[1076]: no configs at "/usr/lib/ignition/base.d" Nov 23 22:57:02.496771 ignition[1076]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Nov 23 22:57:02.498529 ignition[1076]: Ignition finished successfully Nov 23 22:57:02.513189 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Nov 23 22:57:02.522778 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Nov 23 22:57:02.596122 ignition[1159]: Ignition 2.22.0 Nov 23 22:57:02.596748 ignition[1159]: Stage: fetch Nov 23 22:57:02.597834 ignition[1159]: no configs at "/usr/lib/ignition/base.d" Nov 23 22:57:02.597863 ignition[1159]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Nov 23 22:57:02.598059 ignition[1159]: PUT http://169.254.169.254/latest/api/token: attempt #1 Nov 23 22:57:02.622465 ignition[1159]: PUT result: OK Nov 23 22:57:02.628959 ignition[1159]: parsed url from cmdline: "" Nov 23 22:57:02.629136 ignition[1159]: no config URL provided Nov 23 22:57:02.629157 ignition[1159]: reading system config file "/usr/lib/ignition/user.ign" Nov 23 22:57:02.629185 ignition[1159]: no config at "/usr/lib/ignition/user.ign" Nov 23 22:57:02.629246 ignition[1159]: PUT http://169.254.169.254/latest/api/token: attempt #1 Nov 23 22:57:02.646700 ignition[1159]: PUT result: OK Nov 23 22:57:02.647128 ignition[1159]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1 Nov 23 22:57:02.652921 ignition[1159]: GET result: OK Nov 23 22:57:02.653589 ignition[1159]: parsing config with SHA512: d552ccabbc5c4dc5a2d776798808ff59e7eb0d6e69ddc670ca0888784a077388af53bdf4e2de63465b0fa274cd2c62875311ff92973f177c2383116bac0b0f83 Nov 23 22:57:02.669987 unknown[1159]: fetched base config from "system" Nov 23 22:57:02.670047 unknown[1159]: fetched base config from "system" Nov 23 22:57:02.670640 ignition[1159]: fetch: fetch complete Nov 23 22:57:02.670062 unknown[1159]: fetched user config from "aws" Nov 23 22:57:02.670652 ignition[1159]: fetch: fetch passed Nov 23 22:57:02.670749 ignition[1159]: Ignition finished successfully Nov 23 22:57:02.688273 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Nov 23 22:57:02.698883 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Nov 23 22:57:02.779345 ignition[1166]: Ignition 2.22.0 Nov 23 22:57:02.779896 ignition[1166]: Stage: kargs Nov 23 22:57:02.780559 ignition[1166]: no configs at "/usr/lib/ignition/base.d" Nov 23 22:57:02.780584 ignition[1166]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Nov 23 22:57:02.780724 ignition[1166]: PUT http://169.254.169.254/latest/api/token: attempt #1 Nov 23 22:57:02.790687 ignition[1166]: PUT result: OK Nov 23 22:57:02.800180 ignition[1166]: kargs: kargs passed Nov 23 22:57:02.800324 ignition[1166]: Ignition finished successfully Nov 23 22:57:02.805106 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Nov 23 22:57:02.817646 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Nov 23 22:57:02.887365 ignition[1173]: Ignition 2.22.0 Nov 23 22:57:02.887439 ignition[1173]: Stage: disks Nov 23 22:57:02.888067 ignition[1173]: no configs at "/usr/lib/ignition/base.d" Nov 23 22:57:02.888095 ignition[1173]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Nov 23 22:57:02.888262 ignition[1173]: PUT http://169.254.169.254/latest/api/token: attempt #1 Nov 23 22:57:02.894992 ignition[1173]: PUT result: OK Nov 23 22:57:02.904914 ignition[1173]: disks: disks passed Nov 23 22:57:02.905055 ignition[1173]: Ignition finished successfully Nov 23 22:57:02.913203 systemd[1]: Finished ignition-disks.service - Ignition (disks). Nov 23 22:57:02.921888 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Nov 23 22:57:02.930576 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Nov 23 22:57:02.934328 systemd[1]: Reached target local-fs.target - Local File Systems. Nov 23 22:57:02.939106 systemd[1]: Reached target sysinit.target - System Initialization. Nov 23 22:57:02.949057 systemd[1]: Reached target basic.target - Basic System. Nov 23 22:57:02.957558 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Nov 23 22:57:03.015853 systemd-fsck[1181]: ROOT: clean, 15/553520 files, 52789/553472 blocks Nov 23 22:57:03.021374 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Nov 23 22:57:03.029737 systemd[1]: Mounting sysroot.mount - /sysroot... Nov 23 22:57:03.161068 kernel: EXT4-fs (nvme0n1p9): mounted filesystem c70a3a7b-80c4-4387-ab29-1bf940859b86 r/w with ordered data mode. Quota mode: none. Nov 23 22:57:03.163658 systemd[1]: Mounted sysroot.mount - /sysroot. Nov 23 22:57:03.168791 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Nov 23 22:57:03.175957 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Nov 23 22:57:03.182777 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Nov 23 22:57:03.193730 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Nov 23 22:57:03.195768 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Nov 23 22:57:03.196484 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Nov 23 22:57:03.230199 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Nov 23 22:57:03.237704 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Nov 23 22:57:03.258043 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1200) Nov 23 22:57:03.264082 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem b13f7cbd-5564-4927-b75d-d55dbc1bbfa7 Nov 23 22:57:03.264165 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Nov 23 22:57:03.271891 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Nov 23 22:57:03.272007 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Nov 23 22:57:03.275145 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Nov 23 22:57:03.591427 initrd-setup-root[1224]: cut: /sysroot/etc/passwd: No such file or directory Nov 23 22:57:03.611229 initrd-setup-root[1231]: cut: /sysroot/etc/group: No such file or directory Nov 23 22:57:03.620074 initrd-setup-root[1238]: cut: /sysroot/etc/shadow: No such file or directory Nov 23 22:57:03.628779 initrd-setup-root[1245]: cut: /sysroot/etc/gshadow: No such file or directory Nov 23 22:57:03.649243 systemd-networkd[1150]: eth0: Gained IPv6LL Nov 23 22:57:03.917259 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Nov 23 22:57:03.923493 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Nov 23 22:57:03.931488 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Nov 23 22:57:03.967496 systemd[1]: sysroot-oem.mount: Deactivated successfully. Nov 23 22:57:03.972735 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem b13f7cbd-5564-4927-b75d-d55dbc1bbfa7 Nov 23 22:57:04.006958 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Nov 23 22:57:04.029548 ignition[1314]: INFO : Ignition 2.22.0 Nov 23 22:57:04.032143 ignition[1314]: INFO : Stage: mount Nov 23 22:57:04.032143 ignition[1314]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 23 22:57:04.032143 ignition[1314]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Nov 23 22:57:04.032143 ignition[1314]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Nov 23 22:57:04.044671 ignition[1314]: INFO : PUT result: OK Nov 23 22:57:04.055570 ignition[1314]: INFO : mount: mount passed Nov 23 22:57:04.059656 ignition[1314]: INFO : Ignition finished successfully Nov 23 22:57:04.064419 systemd[1]: Finished ignition-mount.service - Ignition (mount). Nov 23 22:57:04.069430 systemd[1]: Starting ignition-files.service - Ignition (files)... Nov 23 22:57:04.166714 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Nov 23 22:57:04.209085 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1325) Nov 23 22:57:04.215074 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem b13f7cbd-5564-4927-b75d-d55dbc1bbfa7 Nov 23 22:57:04.215150 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Nov 23 22:57:04.222943 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Nov 23 22:57:04.223045 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Nov 23 22:57:04.227608 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Nov 23 22:57:04.290602 ignition[1341]: INFO : Ignition 2.22.0 Nov 23 22:57:04.290602 ignition[1341]: INFO : Stage: files Nov 23 22:57:04.294998 ignition[1341]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 23 22:57:04.294998 ignition[1341]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Nov 23 22:57:04.300850 ignition[1341]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Nov 23 22:57:04.305064 ignition[1341]: INFO : PUT result: OK Nov 23 22:57:04.310396 ignition[1341]: DEBUG : files: compiled without relabeling support, skipping Nov 23 22:57:04.324162 ignition[1341]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Nov 23 22:57:04.324162 ignition[1341]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Nov 23 22:57:04.346891 ignition[1341]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Nov 23 22:57:04.352396 ignition[1341]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Nov 23 22:57:04.356743 unknown[1341]: wrote ssh authorized keys file for user: core Nov 23 22:57:04.359756 ignition[1341]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Nov 23 22:57:04.365119 ignition[1341]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Nov 23 22:57:04.371515 ignition[1341]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1 Nov 23 22:57:04.443226 ignition[1341]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Nov 23 22:57:04.567505 ignition[1341]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Nov 23 22:57:04.567505 ignition[1341]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Nov 23 22:57:04.576996 ignition[1341]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Nov 23 22:57:04.576996 ignition[1341]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Nov 23 22:57:04.576996 ignition[1341]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Nov 23 22:57:04.576996 ignition[1341]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 23 22:57:04.576996 ignition[1341]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 23 22:57:04.576996 ignition[1341]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 23 22:57:04.576996 ignition[1341]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 23 22:57:04.611212 ignition[1341]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Nov 23 22:57:04.611212 ignition[1341]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Nov 23 22:57:04.611212 ignition[1341]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link 
"/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw" Nov 23 22:57:04.611212 ignition[1341]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw" Nov 23 22:57:04.611212 ignition[1341]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw" Nov 23 22:57:04.611212 ignition[1341]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.1-arm64.raw: attempt #1 Nov 23 22:57:05.046111 ignition[1341]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Nov 23 22:57:05.407634 ignition[1341]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw" Nov 23 22:57:05.414309 ignition[1341]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Nov 23 22:57:05.419068 ignition[1341]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 23 22:57:05.427212 ignition[1341]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 23 22:57:05.427212 ignition[1341]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Nov 23 22:57:05.427212 ignition[1341]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Nov 23 22:57:05.427212 ignition[1341]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Nov 23 22:57:05.427212 ignition[1341]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Nov 23 22:57:05.427212 ignition[1341]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Nov 23 22:57:05.427212 ignition[1341]: INFO : files: files passed Nov 23 22:57:05.427212 ignition[1341]: INFO : Ignition finished successfully Nov 23 22:57:05.429685 systemd[1]: Finished ignition-files.service - Ignition (files). Nov 23 22:57:05.467694 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Nov 23 22:57:05.473686 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Nov 23 22:57:05.501535 systemd[1]: ignition-quench.service: Deactivated successfully. Nov 23 22:57:05.506304 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Nov 23 22:57:05.524095 initrd-setup-root-after-ignition[1372]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 23 22:57:05.530273 initrd-setup-root-after-ignition[1372]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Nov 23 22:57:05.535202 initrd-setup-root-after-ignition[1376]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 23 22:57:05.542451 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 23 22:57:05.549385 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Nov 23 22:57:05.556799 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... 
Nov 23 22:57:05.660166 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Nov 23 22:57:05.660374 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Nov 23 22:57:05.664540 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Nov 23 22:57:05.668228 systemd[1]: Reached target initrd.target - Initrd Default Target. Nov 23 22:57:05.683416 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Nov 23 22:57:05.685674 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Nov 23 22:57:05.742222 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 23 22:57:05.750179 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Nov 23 22:57:05.792529 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Nov 23 22:57:05.798469 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 23 22:57:05.804738 systemd[1]: Stopped target timers.target - Timer Units. Nov 23 22:57:05.805129 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Nov 23 22:57:05.805371 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 23 22:57:05.818995 systemd[1]: Stopped target initrd.target - Initrd Default Target. Nov 23 22:57:05.821908 systemd[1]: Stopped target basic.target - Basic System. Nov 23 22:57:05.824682 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Nov 23 22:57:05.832375 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Nov 23 22:57:05.842306 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Nov 23 22:57:05.846240 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Nov 23 22:57:05.856503 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Nov 23 22:57:05.862552 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Nov 23 22:57:05.866865 systemd[1]: Stopped target sysinit.target - System Initialization. Nov 23 22:57:05.876448 systemd[1]: Stopped target local-fs.target - Local File Systems. Nov 23 22:57:05.882414 systemd[1]: Stopped target swap.target - Swaps. Nov 23 22:57:05.886855 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Nov 23 22:57:05.887191 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Nov 23 22:57:05.896636 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Nov 23 22:57:05.903488 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 23 22:57:05.908281 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Nov 23 22:57:05.910998 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 23 22:57:05.914658 systemd[1]: dracut-initqueue.service: Deactivated successfully. Nov 23 22:57:05.914920 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Nov 23 22:57:05.925217 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Nov 23 22:57:05.925649 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 23 22:57:05.935424 systemd[1]: ignition-files.service: Deactivated successfully. Nov 23 22:57:05.935736 systemd[1]: Stopped ignition-files.service - Ignition (files). 
Nov 23 22:57:05.945508 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Nov 23 22:57:05.958386 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Nov 23 22:57:05.968094 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Nov 23 22:57:05.971589 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Nov 23 22:57:05.980333 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Nov 23 22:57:05.983588 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Nov 23 22:57:06.005663 systemd[1]: initrd-cleanup.service: Deactivated successfully. Nov 23 22:57:06.005908 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Nov 23 22:57:06.029252 systemd[1]: sysroot-boot.mount: Deactivated successfully. Nov 23 22:57:06.050229 ignition[1396]: INFO : Ignition 2.22.0 Nov 23 22:57:06.053450 ignition[1396]: INFO : Stage: umount Nov 23 22:57:06.053450 ignition[1396]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 23 22:57:06.053450 ignition[1396]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Nov 23 22:57:06.053450 ignition[1396]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Nov 23 22:57:06.066056 ignition[1396]: INFO : PUT result: OK Nov 23 22:57:06.074046 ignition[1396]: INFO : umount: umount passed Nov 23 22:57:06.076186 ignition[1396]: INFO : Ignition finished successfully Nov 23 22:57:06.083577 systemd[1]: ignition-mount.service: Deactivated successfully. Nov 23 22:57:06.083986 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Nov 23 22:57:06.093497 systemd[1]: ignition-disks.service: Deactivated successfully. Nov 23 22:57:06.093750 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Nov 23 22:57:06.103456 systemd[1]: ignition-kargs.service: Deactivated successfully. Nov 23 22:57:06.104079 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Nov 23 22:57:06.112227 systemd[1]: ignition-fetch.service: Deactivated successfully. Nov 23 22:57:06.112336 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Nov 23 22:57:06.115493 systemd[1]: Stopped target network.target - Network. Nov 23 22:57:06.123626 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Nov 23 22:57:06.126247 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Nov 23 22:57:06.129513 systemd[1]: Stopped target paths.target - Path Units. Nov 23 22:57:06.132087 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Nov 23 22:57:06.137103 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 23 22:57:06.140007 systemd[1]: Stopped target slices.target - Slice Units. Nov 23 22:57:06.140845 systemd[1]: Stopped target sockets.target - Socket Units. Nov 23 22:57:06.141757 systemd[1]: iscsid.socket: Deactivated successfully. Nov 23 22:57:06.141861 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Nov 23 22:57:06.142461 systemd[1]: iscsiuio.socket: Deactivated successfully. Nov 23 22:57:06.142527 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 23 22:57:06.142833 systemd[1]: ignition-setup.service: Deactivated successfully. Nov 23 22:57:06.142926 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Nov 23 22:57:06.143640 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Nov 23 22:57:06.143713 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. 
Nov 23 22:57:06.145333 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Nov 23 22:57:06.147225 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Nov 23 22:57:06.217890 systemd[1]: systemd-resolved.service: Deactivated successfully. Nov 23 22:57:06.218194 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Nov 23 22:57:06.237935 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Nov 23 22:57:06.240549 systemd[1]: systemd-networkd.service: Deactivated successfully. Nov 23 22:57:06.240783 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Nov 23 22:57:06.255638 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Nov 23 22:57:06.256355 systemd[1]: sysroot-boot.service: Deactivated successfully. Nov 23 22:57:06.256541 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Nov 23 22:57:06.265644 systemd[1]: Stopped target network-pre.target - Preparation for Network. Nov 23 22:57:06.269525 systemd[1]: systemd-networkd.socket: Deactivated successfully. Nov 23 22:57:06.269617 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Nov 23 22:57:06.272785 systemd[1]: initrd-setup-root.service: Deactivated successfully. Nov 23 22:57:06.272911 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Nov 23 22:57:06.277124 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Nov 23 22:57:06.279288 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Nov 23 22:57:06.279402 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 23 22:57:06.284283 systemd[1]: systemd-sysctl.service: Deactivated successfully. Nov 23 22:57:06.286413 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Nov 23 22:57:06.295360 systemd[1]: systemd-modules-load.service: Deactivated successfully. Nov 23 22:57:06.295459 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Nov 23 22:57:06.298445 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Nov 23 22:57:06.298543 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 23 22:57:06.310476 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 23 22:57:06.338144 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Nov 23 22:57:06.338275 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Nov 23 22:57:06.373828 systemd[1]: systemd-udevd.service: Deactivated successfully. Nov 23 22:57:06.377925 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 23 22:57:06.382910 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Nov 23 22:57:06.383094 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Nov 23 22:57:06.387622 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Nov 23 22:57:06.387702 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Nov 23 22:57:06.397733 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Nov 23 22:57:06.397837 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Nov 23 22:57:06.407240 systemd[1]: dracut-cmdline.service: Deactivated successfully. Nov 23 22:57:06.407350 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. 
Nov 23 22:57:06.416206 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Nov 23 22:57:06.416337 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 23 22:57:06.430327 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Nov 23 22:57:06.442996 systemd[1]: systemd-network-generator.service: Deactivated successfully. Nov 23 22:57:06.443156 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Nov 23 22:57:06.456641 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Nov 23 22:57:06.456751 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 23 22:57:06.466436 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Nov 23 22:57:06.466551 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 23 22:57:06.477353 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Nov 23 22:57:06.480073 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Nov 23 22:57:06.484547 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 23 22:57:06.484650 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 23 22:57:06.501970 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. Nov 23 22:57:06.503347 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully. Nov 23 22:57:06.503436 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Nov 23 22:57:06.503521 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Nov 23 22:57:06.504802 systemd[1]: network-cleanup.service: Deactivated successfully. Nov 23 22:57:06.506430 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Nov 23 22:57:06.514806 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Nov 23 22:57:06.514989 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Nov 23 22:57:06.520758 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Nov 23 22:57:06.532873 systemd[1]: Starting initrd-switch-root.service - Switch Root... Nov 23 22:57:06.590120 systemd[1]: Switching root. Nov 23 22:57:06.646215 systemd-journald[256]: Journal stopped Nov 23 22:57:09.344863 systemd-journald[256]: Received SIGTERM from PID 1 (systemd). 
Nov 23 22:57:09.344989 kernel: SELinux: policy capability network_peer_controls=1 Nov 23 22:57:09.352269 kernel: SELinux: policy capability open_perms=1 Nov 23 22:57:09.352321 kernel: SELinux: policy capability extended_socket_class=1 Nov 23 22:57:09.352362 kernel: SELinux: policy capability always_check_network=0 Nov 23 22:57:09.352401 kernel: SELinux: policy capability cgroup_seclabel=1 Nov 23 22:57:09.352430 kernel: SELinux: policy capability nnp_nosuid_transition=1 Nov 23 22:57:09.352460 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Nov 23 22:57:09.352489 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Nov 23 22:57:09.352519 kernel: SELinux: policy capability userspace_initial_context=0 Nov 23 22:57:09.352549 kernel: audit: type=1403 audit(1763938626.985:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Nov 23 22:57:09.352591 systemd[1]: Successfully loaded SELinux policy in 112.342ms. Nov 23 22:57:09.352639 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 16.369ms. Nov 23 22:57:09.352678 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Nov 23 22:57:09.352711 systemd[1]: Detected virtualization amazon. Nov 23 22:57:09.352744 systemd[1]: Detected architecture arm64. Nov 23 22:57:09.352775 systemd[1]: Detected first boot. Nov 23 22:57:09.352808 systemd[1]: Initializing machine ID from VM UUID. Nov 23 22:57:09.352839 zram_generator::config[1440]: No configuration found. Nov 23 22:57:09.352871 kernel: NET: Registered PF_VSOCK protocol family Nov 23 22:57:09.352899 systemd[1]: Populated /etc with preset unit settings. Nov 23 22:57:09.352933 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Nov 23 22:57:09.352969 systemd[1]: initrd-switch-root.service: Deactivated successfully. Nov 23 22:57:09.353002 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Nov 23 22:57:09.353087 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Nov 23 22:57:09.353126 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Nov 23 22:57:09.353157 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Nov 23 22:57:09.353192 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Nov 23 22:57:09.353225 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Nov 23 22:57:09.353259 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Nov 23 22:57:09.353299 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Nov 23 22:57:09.353333 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Nov 23 22:57:09.353366 systemd[1]: Created slice user.slice - User and Session Slice. Nov 23 22:57:09.353399 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 23 22:57:09.353433 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 23 22:57:09.353464 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. 
Nov 23 22:57:09.353498 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Nov 23 22:57:09.353530 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Nov 23 22:57:09.353562 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Nov 23 22:57:09.353600 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Nov 23 22:57:09.353632 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 23 22:57:09.353664 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 23 22:57:09.353694 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Nov 23 22:57:09.353727 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Nov 23 22:57:09.353759 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Nov 23 22:57:09.353791 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Nov 23 22:57:09.353833 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 23 22:57:09.353868 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 23 22:57:09.353896 systemd[1]: Reached target slices.target - Slice Units. Nov 23 22:57:09.353927 systemd[1]: Reached target swap.target - Swaps. Nov 23 22:57:09.353955 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Nov 23 22:57:09.353984 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Nov 23 22:57:09.364068 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Nov 23 22:57:09.364141 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Nov 23 22:57:09.364191 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 23 22:57:09.364221 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Nov 23 22:57:09.364259 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Nov 23 22:57:09.364292 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Nov 23 22:57:09.364323 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Nov 23 22:57:09.364354 systemd[1]: Mounting media.mount - External Media Directory... Nov 23 22:57:09.364385 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Nov 23 22:57:09.364414 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Nov 23 22:57:09.364443 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Nov 23 22:57:09.364475 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Nov 23 22:57:09.364507 systemd[1]: Reached target machines.target - Containers. Nov 23 22:57:09.364541 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Nov 23 22:57:09.364571 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 23 22:57:09.364604 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Nov 23 22:57:09.364633 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Nov 23 22:57:09.364661 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... 
Nov 23 22:57:09.364693 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 23 22:57:09.364726 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 23 22:57:09.364756 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Nov 23 22:57:09.364793 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 23 22:57:09.364823 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Nov 23 22:57:09.364852 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Nov 23 22:57:09.364884 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Nov 23 22:57:09.364913 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Nov 23 22:57:09.364942 systemd[1]: Stopped systemd-fsck-usr.service. Nov 23 22:57:09.364972 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 23 22:57:09.365002 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 23 22:57:09.365066 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Nov 23 22:57:09.365105 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Nov 23 22:57:09.365137 kernel: loop: module loaded Nov 23 22:57:09.365168 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Nov 23 22:57:09.365198 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Nov 23 22:57:09.365230 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Nov 23 22:57:09.365268 kernel: fuse: init (API version 7.41) Nov 23 22:57:09.365297 systemd[1]: verity-setup.service: Deactivated successfully. Nov 23 22:57:09.365326 systemd[1]: Stopped verity-setup.service. Nov 23 22:57:09.365355 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Nov 23 22:57:09.365384 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Nov 23 22:57:09.365416 systemd[1]: Mounted media.mount - External Media Directory. Nov 23 22:57:09.365444 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Nov 23 22:57:09.365474 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Nov 23 22:57:09.365503 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Nov 23 22:57:09.365532 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Nov 23 22:57:09.365561 systemd[1]: modprobe@configfs.service: Deactivated successfully. Nov 23 22:57:09.365592 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Nov 23 22:57:09.365625 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 23 22:57:09.365656 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 23 22:57:09.365691 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 23 22:57:09.365722 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 23 22:57:09.365750 systemd[1]: modprobe@fuse.service: Deactivated successfully. Nov 23 22:57:09.365779 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. 
Nov 23 22:57:09.365807 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 23 22:57:09.365841 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 23 22:57:09.365873 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 23 22:57:09.365906 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Nov 23 22:57:09.365937 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Nov 23 22:57:09.365970 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Nov 23 22:57:09.366000 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Nov 23 22:57:09.373162 systemd[1]: Reached target local-fs.target - Local File Systems. Nov 23 22:57:09.373207 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Nov 23 22:57:09.373300 systemd-journald[1519]: Collecting audit messages is disabled. Nov 23 22:57:09.373356 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Nov 23 22:57:09.373390 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 23 22:57:09.373429 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Nov 23 22:57:09.373460 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 23 22:57:09.377116 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Nov 23 22:57:09.377165 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 23 22:57:09.377209 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 23 22:57:09.377243 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Nov 23 22:57:09.377275 systemd-journald[1519]: Journal started Nov 23 22:57:09.377325 systemd-journald[1519]: Runtime Journal (/run/log/journal/ec231d93c8dfff87584d9c8718e46076) is 8M, max 75.3M, 67.3M free. Nov 23 22:57:08.542495 systemd[1]: Queued start job for default target multi-user.target. Nov 23 22:57:08.558414 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Nov 23 22:57:08.559650 systemd[1]: systemd-journald.service: Deactivated successfully. Nov 23 22:57:09.392660 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Nov 23 22:57:09.402081 systemd[1]: Started systemd-journald.service - Journal Service. Nov 23 22:57:09.404124 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Nov 23 22:57:09.409637 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Nov 23 22:57:09.413731 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Nov 23 22:57:09.420423 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Nov 23 22:57:09.432669 kernel: ACPI: bus type drm_connector registered Nov 23 22:57:09.439493 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 23 22:57:09.441218 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 23 22:57:09.445381 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. 
Nov 23 22:57:09.490636 systemd[1]: Reached target network-pre.target - Preparation for Network. Nov 23 22:57:09.500745 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Nov 23 22:57:09.527165 kernel: loop0: detected capacity change from 0 to 200800 Nov 23 22:57:09.530150 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Nov 23 22:57:09.533999 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Nov 23 22:57:09.543460 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Nov 23 22:57:09.569248 systemd-journald[1519]: Time spent on flushing to /var/log/journal/ec231d93c8dfff87584d9c8718e46076 is 100.633ms for 932 entries. Nov 23 22:57:09.569248 systemd-journald[1519]: System Journal (/var/log/journal/ec231d93c8dfff87584d9c8718e46076) is 8M, max 195.6M, 187.6M free. Nov 23 22:57:09.691283 systemd-journald[1519]: Received client request to flush runtime journal. Nov 23 22:57:09.623778 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Nov 23 22:57:09.629457 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Nov 23 22:57:09.649571 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 23 22:57:09.691485 systemd-tmpfiles[1556]: ACLs are not supported, ignoring. Nov 23 22:57:09.691509 systemd-tmpfiles[1556]: ACLs are not supported, ignoring. Nov 23 22:57:09.701733 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Nov 23 22:57:09.710721 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 23 22:57:09.722969 systemd[1]: Starting systemd-sysusers.service - Create System Users... Nov 23 22:57:09.763658 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Nov 23 22:57:09.799047 kernel: loop1: detected capacity change from 0 to 61264 Nov 23 22:57:09.822853 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 23 22:57:09.868932 systemd[1]: Finished systemd-sysusers.service - Create System Users. Nov 23 22:57:09.880588 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 23 22:57:09.937067 kernel: loop2: detected capacity change from 0 to 100632 Nov 23 22:57:09.961690 systemd-tmpfiles[1594]: ACLs are not supported, ignoring. Nov 23 22:57:09.961761 systemd-tmpfiles[1594]: ACLs are not supported, ignoring. Nov 23 22:57:10.008863 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 23 22:57:10.090094 kernel: loop3: detected capacity change from 0 to 119840 Nov 23 22:57:10.219073 kernel: loop4: detected capacity change from 0 to 200800 Nov 23 22:57:10.265059 kernel: loop5: detected capacity change from 0 to 61264 Nov 23 22:57:10.294059 kernel: loop6: detected capacity change from 0 to 100632 Nov 23 22:57:10.330081 kernel: loop7: detected capacity change from 0 to 119840 Nov 23 22:57:10.354985 (sd-merge)[1601]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'. Nov 23 22:57:10.360586 (sd-merge)[1601]: Merged extensions into '/usr'. Nov 23 22:57:10.376501 systemd[1]: Reload requested from client PID 1555 ('systemd-sysext') (unit systemd-sysext.service)... Nov 23 22:57:10.376561 systemd[1]: Reloading... Nov 23 22:57:10.726056 zram_generator::config[1630]: No configuration found. Nov 23 22:57:11.203459 systemd[1]: Reloading finished in 823 ms. 
Nov 23 22:57:11.242103 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Nov 23 22:57:11.246377 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Nov 23 22:57:11.263963 systemd[1]: Starting ensure-sysext.service... Nov 23 22:57:11.270308 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 23 22:57:11.280578 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 23 22:57:11.335456 systemd[1]: Reload requested from client PID 1679 ('systemctl') (unit ensure-sysext.service)... Nov 23 22:57:11.335492 systemd[1]: Reloading... Nov 23 22:57:11.340139 systemd-tmpfiles[1680]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Nov 23 22:57:11.340235 systemd-tmpfiles[1680]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Nov 23 22:57:11.340870 systemd-tmpfiles[1680]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Nov 23 22:57:11.341450 systemd-tmpfiles[1680]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Nov 23 22:57:11.341696 ldconfig[1548]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Nov 23 22:57:11.346365 systemd-tmpfiles[1680]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Nov 23 22:57:11.347430 systemd-tmpfiles[1680]: ACLs are not supported, ignoring. Nov 23 22:57:11.348390 systemd-tmpfiles[1680]: ACLs are not supported, ignoring. Nov 23 22:57:11.360748 systemd-tmpfiles[1680]: Detected autofs mount point /boot during canonicalization of boot. Nov 23 22:57:11.361576 systemd-tmpfiles[1680]: Skipping /boot Nov 23 22:57:11.384151 systemd-tmpfiles[1680]: Detected autofs mount point /boot during canonicalization of boot. Nov 23 22:57:11.384328 systemd-tmpfiles[1680]: Skipping /boot Nov 23 22:57:11.457049 systemd-udevd[1681]: Using default interface naming scheme 'v255'. Nov 23 22:57:11.589306 zram_generator::config[1709]: No configuration found. Nov 23 22:57:11.996861 (udev-worker)[1768]: Network interface NamePolicy= disabled on kernel command line. Nov 23 22:57:12.249662 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Nov 23 22:57:12.249895 systemd[1]: Reloading finished in 913 ms. Nov 23 22:57:12.277662 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 23 22:57:12.283456 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Nov 23 22:57:12.308575 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 23 22:57:12.337659 systemd[1]: Starting audit-rules.service - Load Audit Rules... Nov 23 22:57:12.345490 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Nov 23 22:57:12.351582 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Nov 23 22:57:12.360339 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 23 22:57:12.368484 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Nov 23 22:57:12.377148 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... 
Nov 23 22:57:12.391765 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 23 22:57:12.396125 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 23 22:57:12.405616 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 23 22:57:12.434600 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 23 22:57:12.438086 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 23 22:57:12.438349 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 23 22:57:12.447495 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 23 22:57:12.447945 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 23 22:57:12.448244 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 23 22:57:12.459935 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 23 22:57:12.463520 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 23 22:57:12.467163 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 23 22:57:12.467430 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 23 22:57:12.467765 systemd[1]: Reached target time-set.target - System Time Set. Nov 23 22:57:12.492544 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Nov 23 22:57:12.498766 systemd[1]: Finished ensure-sysext.service. Nov 23 22:57:12.544726 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Nov 23 22:57:12.634817 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Nov 23 22:57:12.648282 systemd[1]: Starting systemd-update-done.service - Update is Completed... Nov 23 22:57:12.669922 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 23 22:57:12.670393 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 23 22:57:12.675098 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 23 22:57:12.679577 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Nov 23 22:57:12.685058 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Nov 23 22:57:12.732628 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 23 22:57:12.733212 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. 
Nov 23 22:57:12.744752 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 23 22:57:12.747151 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 23 22:57:12.760584 systemd[1]: Finished systemd-update-done.service - Update is Completed. Nov 23 22:57:12.803869 augenrules[1915]: No rules Nov 23 22:57:12.908835 systemd[1]: audit-rules.service: Deactivated successfully. Nov 23 22:57:12.909534 systemd[1]: Finished audit-rules.service - Load Audit Rules. Nov 23 22:57:12.913849 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 23 22:57:12.914454 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 23 22:57:12.931750 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 23 22:57:12.942259 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 23 22:57:13.163985 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Nov 23 22:57:13.174472 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Nov 23 22:57:13.186143 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 23 22:57:13.245953 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Nov 23 22:57:13.256927 systemd[1]: Started systemd-userdbd.service - User Database Manager. Nov 23 22:57:13.438559 systemd-resolved[1853]: Positive Trust Anchors: Nov 23 22:57:13.438600 systemd-resolved[1853]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 23 22:57:13.438664 systemd-resolved[1853]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 23 22:57:13.450916 systemd-networkd[1849]: lo: Link UP Nov 23 22:57:13.451558 systemd-networkd[1849]: lo: Gained carrier Nov 23 22:57:13.452425 systemd-resolved[1853]: Defaulting to hostname 'linux'. Nov 23 22:57:13.455881 systemd-networkd[1849]: Enumeration completed Nov 23 22:57:13.456694 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 23 22:57:13.460229 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 23 22:57:13.463352 systemd[1]: Reached target network.target - Network. Nov 23 22:57:13.465847 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Nov 23 22:57:13.469307 systemd[1]: Reached target sysinit.target - System Initialization. Nov 23 22:57:13.472617 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Nov 23 22:57:13.476275 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Nov 23 22:57:13.479959 systemd[1]: Started logrotate.timer - Daily rotation of log files. Nov 23 22:57:13.483235 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. 
Nov 23 22:57:13.486790 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Nov 23 22:57:13.490875 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Nov 23 22:57:13.490967 systemd[1]: Reached target paths.target - Path Units. Nov 23 22:57:13.493511 systemd[1]: Reached target timers.target - Timer Units. Nov 23 22:57:13.496901 systemd-networkd[1849]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 23 22:57:13.496926 systemd-networkd[1849]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 23 22:57:13.499581 systemd-networkd[1849]: eth0: Link UP Nov 23 22:57:13.499997 systemd-networkd[1849]: eth0: Gained carrier Nov 23 22:57:13.500112 systemd-networkd[1849]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 23 22:57:13.501547 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Nov 23 22:57:13.508237 systemd[1]: Starting docker.socket - Docker Socket for the API... Nov 23 22:57:13.518408 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Nov 23 22:57:13.523083 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Nov 23 22:57:13.527114 systemd[1]: Reached target ssh-access.target - SSH Access Available. Nov 23 22:57:13.535210 systemd-networkd[1849]: eth0: DHCPv4 address 172.31.24.18/20, gateway 172.31.16.1 acquired from 172.31.16.1 Nov 23 22:57:13.544745 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Nov 23 22:57:13.548791 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Nov 23 22:57:13.555397 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Nov 23 22:57:13.561550 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Nov 23 22:57:13.566490 systemd[1]: Listening on docker.socket - Docker Socket for the API. Nov 23 22:57:13.571387 systemd[1]: Reached target sockets.target - Socket Units. Nov 23 22:57:13.574474 systemd[1]: Reached target basic.target - Basic System. Nov 23 22:57:13.577583 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Nov 23 22:57:13.577648 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Nov 23 22:57:13.583322 systemd[1]: Starting containerd.service - containerd container runtime... Nov 23 22:57:13.594403 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Nov 23 22:57:13.605659 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Nov 23 22:57:13.613486 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Nov 23 22:57:13.621953 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Nov 23 22:57:13.633832 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Nov 23 22:57:13.638237 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Nov 23 22:57:13.644486 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... 
Nov 23 22:57:13.655548 systemd[1]: Started ntpd.service - Network Time Service. Nov 23 22:57:13.667455 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Nov 23 22:57:13.684516 systemd[1]: Starting setup-oem.service - Setup OEM... Nov 23 22:57:13.700114 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Nov 23 22:57:13.718844 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Nov 23 22:57:13.741264 jq[1967]: false Nov 23 22:57:13.754075 systemd[1]: Starting systemd-logind.service - User Login Management... Nov 23 22:57:13.762231 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Nov 23 22:57:13.763283 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Nov 23 22:57:13.772329 systemd[1]: Starting update-engine.service - Update Engine... Nov 23 22:57:13.786364 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Nov 23 22:57:13.809175 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Nov 23 22:57:13.826781 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Nov 23 22:57:13.830146 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Nov 23 22:57:13.856854 jq[1979]: true Nov 23 22:57:13.860838 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Nov 23 22:57:13.865194 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Nov 23 22:57:13.901968 systemd[1]: motdgen.service: Deactivated successfully. Nov 23 22:57:13.904566 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Nov 23 22:57:13.917763 (ntainerd)[1993]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Nov 23 22:57:13.935181 extend-filesystems[1968]: Found /dev/nvme0n1p6 Nov 23 22:57:13.954074 extend-filesystems[1968]: Found /dev/nvme0n1p9 Nov 23 22:57:13.965062 jq[1994]: true Nov 23 22:57:13.996614 extend-filesystems[1968]: Checking size of /dev/nvme0n1p9 Nov 23 22:57:14.031228 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Nov 23 22:57:14.038574 ntpd[1970]: ntpd 4.2.8p18@1.4062-o Sun Nov 23 20:14:25 UTC 2025 (1): Starting Nov 23 22:57:14.047283 ntpd[1970]: 23 Nov 22:57:14 ntpd[1970]: ntpd 4.2.8p18@1.4062-o Sun Nov 23 20:14:25 UTC 2025 (1): Starting Nov 23 22:57:14.047283 ntpd[1970]: 23 Nov 22:57:14 ntpd[1970]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Nov 23 22:57:14.047283 ntpd[1970]: 23 Nov 22:57:14 ntpd[1970]: ---------------------------------------------------- Nov 23 22:57:14.047283 ntpd[1970]: 23 Nov 22:57:14 ntpd[1970]: ntp-4 is maintained by Network Time Foundation, Nov 23 22:57:14.047283 ntpd[1970]: 23 Nov 22:57:14 ntpd[1970]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Nov 23 22:57:14.047283 ntpd[1970]: 23 Nov 22:57:14 ntpd[1970]: corporation. 
Support and training for ntp-4 are Nov 23 22:57:14.047283 ntpd[1970]: 23 Nov 22:57:14 ntpd[1970]: available at https://www.nwtime.org/support Nov 23 22:57:14.047283 ntpd[1970]: 23 Nov 22:57:14 ntpd[1970]: ---------------------------------------------------- Nov 23 22:57:14.038707 ntpd[1970]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Nov 23 22:57:14.072130 coreos-metadata[1964]: Nov 23 22:57:14.050 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Nov 23 22:57:14.072130 coreos-metadata[1964]: Nov 23 22:57:14.069 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Nov 23 22:57:14.038726 ntpd[1970]: ---------------------------------------------------- Nov 23 22:57:14.072701 ntpd[1970]: 23 Nov 22:57:14 ntpd[1970]: proto: precision = 0.096 usec (-23) Nov 23 22:57:14.072701 ntpd[1970]: 23 Nov 22:57:14 ntpd[1970]: basedate set to 2025-11-11 Nov 23 22:57:14.072701 ntpd[1970]: 23 Nov 22:57:14 ntpd[1970]: gps base set to 2025-11-16 (week 2393) Nov 23 22:57:14.072701 ntpd[1970]: 23 Nov 22:57:14 ntpd[1970]: Listen and drop on 0 v6wildcard [::]:123 Nov 23 22:57:14.072701 ntpd[1970]: 23 Nov 22:57:14 ntpd[1970]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Nov 23 22:57:14.072701 ntpd[1970]: 23 Nov 22:57:14 ntpd[1970]: Listen normally on 2 lo 127.0.0.1:123 Nov 23 22:57:14.072701 ntpd[1970]: 23 Nov 22:57:14 ntpd[1970]: Listen normally on 3 eth0 172.31.24.18:123 Nov 23 22:57:14.072701 ntpd[1970]: 23 Nov 22:57:14 ntpd[1970]: Listen normally on 4 lo [::1]:123 Nov 23 22:57:14.072701 ntpd[1970]: 23 Nov 22:57:14 ntpd[1970]: bind(21) AF_INET6 [fe80::4ba:aaff:fe2a:54db%2]:123 flags 0x811 failed: Cannot assign requested address Nov 23 22:57:14.072701 ntpd[1970]: 23 Nov 22:57:14 ntpd[1970]: unable to create socket on eth0 (5) for [fe80::4ba:aaff:fe2a:54db%2]:123 Nov 23 22:57:14.089278 tar[1987]: linux-arm64/LICENSE Nov 23 22:57:14.089278 tar[1987]: linux-arm64/helm Nov 23 22:57:14.038745 ntpd[1970]: ntp-4 is maintained by Network Time Foundation, Nov 23 22:57:14.073553 systemd-coredump[2020]: Process 1970 (ntpd) of user 0 terminated abnormally with signal 11/SEGV, processing... Nov 23 22:57:14.130424 coreos-metadata[1964]: Nov 23 22:57:14.079 INFO Fetch successful Nov 23 22:57:14.130424 coreos-metadata[1964]: Nov 23 22:57:14.092 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Nov 23 22:57:14.130424 coreos-metadata[1964]: Nov 23 22:57:14.096 INFO Fetch successful Nov 23 22:57:14.130424 coreos-metadata[1964]: Nov 23 22:57:14.119 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Nov 23 22:57:14.130424 coreos-metadata[1964]: Nov 23 22:57:14.130 INFO Fetch successful Nov 23 22:57:14.130424 coreos-metadata[1964]: Nov 23 22:57:14.130 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Nov 23 22:57:14.038763 ntpd[1970]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Nov 23 22:57:14.080988 systemd[1]: Created slice system-systemd\x2dcoredump.slice - Slice /system/systemd-coredump. Nov 23 22:57:14.038780 ntpd[1970]: corporation. Support and training for ntp-4 are Nov 23 22:57:14.097249 systemd[1]: Started systemd-coredump@0-2020-0.service - Process Core Dump (PID 2020/UID 0). Nov 23 22:57:14.038797 ntpd[1970]: available at https://www.nwtime.org/support Nov 23 22:57:14.103927 systemd[1]: Started dbus.service - D-Bus System Message Bus. 
Nov 23 22:57:14.038813 ntpd[1970]: ---------------------------------------------------- Nov 23 22:57:14.114508 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Nov 23 22:57:14.051499 ntpd[1970]: proto: precision = 0.096 usec (-23) Nov 23 22:57:14.114574 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Nov 23 22:57:14.056423 ntpd[1970]: basedate set to 2025-11-11 Nov 23 22:57:14.118541 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Nov 23 22:57:14.056458 ntpd[1970]: gps base set to 2025-11-16 (week 2393) Nov 23 22:57:14.118579 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Nov 23 22:57:14.056677 ntpd[1970]: Listen and drop on 0 v6wildcard [::]:123 Nov 23 22:57:14.056734 ntpd[1970]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Nov 23 22:57:14.057151 ntpd[1970]: Listen normally on 2 lo 127.0.0.1:123 Nov 23 22:57:14.139306 update_engine[1978]: I20251123 22:57:14.136750 1978 main.cc:92] Flatcar Update Engine starting Nov 23 22:57:14.139777 coreos-metadata[1964]: Nov 23 22:57:14.133 INFO Fetch successful Nov 23 22:57:14.139777 coreos-metadata[1964]: Nov 23 22:57:14.133 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Nov 23 22:57:14.139777 coreos-metadata[1964]: Nov 23 22:57:14.135 INFO Fetch failed with 404: resource not found Nov 23 22:57:14.139777 coreos-metadata[1964]: Nov 23 22:57:14.137 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Nov 23 22:57:14.057205 ntpd[1970]: Listen normally on 3 eth0 172.31.24.18:123 Nov 23 22:57:14.057257 ntpd[1970]: Listen normally on 4 lo [::1]:123 Nov 23 22:57:14.057308 ntpd[1970]: bind(21) AF_INET6 [fe80::4ba:aaff:fe2a:54db%2]:123 flags 0x811 failed: Cannot assign requested address Nov 23 22:57:14.057347 ntpd[1970]: unable to create socket on eth0 (5) for [fe80::4ba:aaff:fe2a:54db%2]:123 Nov 23 22:57:14.103555 dbus-daemon[1965]: [system] SELinux support is enabled Nov 23 22:57:14.152912 coreos-metadata[1964]: Nov 23 22:57:14.141 INFO Fetch successful Nov 23 22:57:14.152912 coreos-metadata[1964]: Nov 23 22:57:14.143 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Nov 23 22:57:14.152912 coreos-metadata[1964]: Nov 23 22:57:14.150 INFO Fetch successful Nov 23 22:57:14.152912 coreos-metadata[1964]: Nov 23 22:57:14.150 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Nov 23 22:57:14.153278 extend-filesystems[1968]: Resized partition /dev/nvme0n1p9 Nov 23 22:57:14.166378 coreos-metadata[1964]: Nov 23 22:57:14.157 INFO Fetch successful Nov 23 22:57:14.166378 coreos-metadata[1964]: Nov 23 22:57:14.162 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Nov 23 22:57:14.166378 coreos-metadata[1964]: Nov 23 22:57:14.166 INFO Fetch successful Nov 23 22:57:14.170742 extend-filesystems[2024]: resize2fs 1.47.3 (8-Jul-2025) Nov 23 22:57:14.177251 coreos-metadata[1964]: Nov 23 22:57:14.169 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Nov 23 22:57:14.177251 coreos-metadata[1964]: Nov 23 22:57:14.175 INFO Fetch successful Nov 23 
22:57:14.178107 dbus-daemon[1965]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1849 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Nov 23 22:57:14.196061 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 3587067 blocks Nov 23 22:57:14.212732 update_engine[1978]: I20251123 22:57:14.201149 1978 update_check_scheduler.cc:74] Next update check in 3m18s Nov 23 22:57:14.203358 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Nov 23 22:57:14.225422 systemd[1]: Started update-engine.service - Update Engine. Nov 23 22:57:14.241317 systemd[1]: Started locksmithd.service - Cluster reboot manager. Nov 23 22:57:14.248181 systemd[1]: Finished setup-oem.service - Setup OEM. Nov 23 22:57:14.364105 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 3587067 Nov 23 22:57:14.393382 extend-filesystems[2024]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Nov 23 22:57:14.393382 extend-filesystems[2024]: old_desc_blocks = 1, new_desc_blocks = 2 Nov 23 22:57:14.393382 extend-filesystems[2024]: The filesystem on /dev/nvme0n1p9 is now 3587067 (4k) blocks long. Nov 23 22:57:14.409966 extend-filesystems[1968]: Resized filesystem in /dev/nvme0n1p9 Nov 23 22:57:14.413407 bash[2045]: Updated "/home/core/.ssh/authorized_keys" Nov 23 22:57:14.416859 systemd[1]: extend-filesystems.service: Deactivated successfully. Nov 23 22:57:14.417451 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Nov 23 22:57:14.434443 systemd-logind[1977]: Watching system buttons on /dev/input/event0 (Power Button) Nov 23 22:57:14.435568 systemd-logind[1977]: Watching system buttons on /dev/input/event1 (Sleep Button) Nov 23 22:57:14.435949 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Nov 23 22:57:14.444360 systemd-logind[1977]: New seat seat0. Nov 23 22:57:14.456660 systemd[1]: Starting sshkeys.service... Nov 23 22:57:14.459001 systemd[1]: Started systemd-logind.service - User Login Management. Nov 23 22:57:14.463164 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Nov 23 22:57:14.466913 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Nov 23 22:57:14.614676 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Nov 23 22:57:14.620227 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Nov 23 22:57:14.627294 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Nov 23 22:57:15.044242 coreos-metadata[2060]: Nov 23 22:57:15.038 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Nov 23 22:57:15.044242 coreos-metadata[2060]: Nov 23 22:57:15.040 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Nov 23 22:57:15.044242 coreos-metadata[2060]: Nov 23 22:57:15.042 INFO Fetch successful Nov 23 22:57:15.044242 coreos-metadata[2060]: Nov 23 22:57:15.042 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Nov 23 22:57:15.044242 coreos-metadata[2060]: Nov 23 22:57:15.043 INFO Fetch successful Nov 23 22:57:15.047729 unknown[2060]: wrote ssh authorized keys file for user: core Nov 23 22:57:15.125730 systemd[1]: Started systemd-hostnamed.service - Hostname Service. 
Nov 23 22:57:15.137393 update-ssh-keys[2111]: Updated "/home/core/.ssh/authorized_keys" Nov 23 22:57:15.144170 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Nov 23 22:57:15.159141 systemd[1]: Finished sshkeys.service. Nov 23 22:57:15.180640 dbus-daemon[1965]: [system] Successfully activated service 'org.freedesktop.hostname1' Nov 23 22:57:15.199051 dbus-daemon[1965]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=2029 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Nov 23 22:57:15.208697 systemd[1]: Starting polkit.service - Authorization Manager... Nov 23 22:57:15.315399 systemd-coredump[2021]: Process 1970 (ntpd) of user 0 dumped core. Module libnss_usrfiles.so.2 without build-id. Module libgcc_s.so.1 without build-id. Module libc.so.6 without build-id. Module libcrypto.so.3 without build-id. Module libm.so.6 without build-id. Module libcap.so.2 without build-id. Module ntpd without build-id. Stack trace of thread 1970: #0 0x0000aaaac0ec0b5c n/a (ntpd + 0x60b5c) #1 0x0000aaaac0e6fe60 n/a (ntpd + 0xfe60) #2 0x0000aaaac0e70240 n/a (ntpd + 0x10240) #3 0x0000aaaac0e6be14 n/a (ntpd + 0xbe14) #4 0x0000aaaac0e6d3ec n/a (ntpd + 0xd3ec) #5 0x0000aaaac0e75a38 n/a (ntpd + 0x15a38) #6 0x0000aaaac0e6738c n/a (ntpd + 0x738c) #7 0x0000ffff970a2034 n/a (libc.so.6 + 0x22034) #8 0x0000ffff970a2118 __libc_start_main (libc.so.6 + 0x22118) #9 0x0000aaaac0e673f0 n/a (ntpd + 0x73f0) ELF object binary architecture: AARCH64 Nov 23 22:57:15.320554 systemd[1]: ntpd.service: Main process exited, code=dumped, status=11/SEGV Nov 23 22:57:15.320942 systemd[1]: ntpd.service: Failed with result 'core-dump'. Nov 23 22:57:15.333953 systemd[1]: systemd-coredump@0-2020-0.service: Deactivated successfully. Nov 23 22:57:15.396169 containerd[1993]: time="2025-11-23T22:57:15Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Nov 23 22:57:15.396169 containerd[1993]: time="2025-11-23T22:57:15.395897136Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7 Nov 23 22:57:15.425485 systemd-networkd[1849]: eth0: Gained IPv6LL Nov 23 22:57:15.439771 systemd[1]: ntpd.service: Scheduled restart job, restart counter is at 1. Nov 23 22:57:15.441996 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Nov 23 22:57:15.446871 systemd[1]: Reached target network-online.target - Network is Online. Nov 23 22:57:15.454733 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Nov 23 22:57:15.464291 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 23 22:57:15.475364 systemd[1]: Started ntpd.service - Network Time Service. Nov 23 22:57:15.483700 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... 
Nov 23 22:57:15.505068 containerd[1993]: time="2025-11-23T22:57:15.502551985Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="17.664µs" Nov 23 22:57:15.505068 containerd[1993]: time="2025-11-23T22:57:15.502626901Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Nov 23 22:57:15.505068 containerd[1993]: time="2025-11-23T22:57:15.502670749Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Nov 23 22:57:15.505068 containerd[1993]: time="2025-11-23T22:57:15.503040049Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Nov 23 22:57:15.505068 containerd[1993]: time="2025-11-23T22:57:15.503495533Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Nov 23 22:57:15.505068 containerd[1993]: time="2025-11-23T22:57:15.503577325Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Nov 23 22:57:15.505068 containerd[1993]: time="2025-11-23T22:57:15.503766913Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Nov 23 22:57:15.505068 containerd[1993]: time="2025-11-23T22:57:15.503829685Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Nov 23 22:57:15.512910 containerd[1993]: time="2025-11-23T22:57:15.507511009Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Nov 23 22:57:15.512910 containerd[1993]: time="2025-11-23T22:57:15.507579157Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Nov 23 22:57:15.512910 containerd[1993]: time="2025-11-23T22:57:15.507613753Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Nov 23 22:57:15.512910 containerd[1993]: time="2025-11-23T22:57:15.507636313Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Nov 23 22:57:15.512910 containerd[1993]: time="2025-11-23T22:57:15.507932929Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Nov 23 22:57:15.512910 containerd[1993]: time="2025-11-23T22:57:15.510602773Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Nov 23 22:57:15.512910 containerd[1993]: time="2025-11-23T22:57:15.510701101Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Nov 23 22:57:15.512910 containerd[1993]: time="2025-11-23T22:57:15.510735421Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Nov 23 22:57:15.517056 containerd[1993]: time="2025-11-23T22:57:15.516643153Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Nov 23 22:57:15.522275 containerd[1993]: 
time="2025-11-23T22:57:15.522170113Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Nov 23 22:57:15.522497 containerd[1993]: time="2025-11-23T22:57:15.522433081Z" level=info msg="metadata content store policy set" policy=shared Nov 23 22:57:15.544098 containerd[1993]: time="2025-11-23T22:57:15.541186549Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Nov 23 22:57:15.544098 containerd[1993]: time="2025-11-23T22:57:15.541348321Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Nov 23 22:57:15.544098 containerd[1993]: time="2025-11-23T22:57:15.541406437Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Nov 23 22:57:15.544098 containerd[1993]: time="2025-11-23T22:57:15.541456225Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Nov 23 22:57:15.544098 containerd[1993]: time="2025-11-23T22:57:15.541491697Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Nov 23 22:57:15.544098 containerd[1993]: time="2025-11-23T22:57:15.541604641Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Nov 23 22:57:15.544098 containerd[1993]: time="2025-11-23T22:57:15.541661209Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Nov 23 22:57:15.544098 containerd[1993]: time="2025-11-23T22:57:15.541710241Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Nov 23 22:57:15.544098 containerd[1993]: time="2025-11-23T22:57:15.541763533Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Nov 23 22:57:15.544098 containerd[1993]: time="2025-11-23T22:57:15.541814377Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Nov 23 22:57:15.544098 containerd[1993]: time="2025-11-23T22:57:15.541853317Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Nov 23 22:57:15.544098 containerd[1993]: time="2025-11-23T22:57:15.541903513Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Nov 23 22:57:15.544098 containerd[1993]: time="2025-11-23T22:57:15.542357581Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Nov 23 22:57:15.544098 containerd[1993]: time="2025-11-23T22:57:15.542418361Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Nov 23 22:57:15.544854 containerd[1993]: time="2025-11-23T22:57:15.542457241Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Nov 23 22:57:15.544854 containerd[1993]: time="2025-11-23T22:57:15.542485741Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Nov 23 22:57:15.544854 containerd[1993]: time="2025-11-23T22:57:15.542516989Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Nov 23 22:57:15.544854 containerd[1993]: time="2025-11-23T22:57:15.542545645Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Nov 23 22:57:15.544854 containerd[1993]: 
time="2025-11-23T22:57:15.542578909Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Nov 23 22:57:15.544854 containerd[1993]: time="2025-11-23T22:57:15.542606605Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Nov 23 22:57:15.544854 containerd[1993]: time="2025-11-23T22:57:15.542635993Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Nov 23 22:57:15.544854 containerd[1993]: time="2025-11-23T22:57:15.542665297Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Nov 23 22:57:15.544854 containerd[1993]: time="2025-11-23T22:57:15.542695333Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Nov 23 22:57:15.552128 containerd[1993]: time="2025-11-23T22:57:15.552046885Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Nov 23 22:57:15.552301 containerd[1993]: time="2025-11-23T22:57:15.552136441Z" level=info msg="Start snapshots syncer" Nov 23 22:57:15.552301 containerd[1993]: time="2025-11-23T22:57:15.552195745Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Nov 23 22:57:15.552800 containerd[1993]: time="2025-11-23T22:57:15.552700561Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Nov 23 22:57:15.553598 containerd[1993]: time="2025-11-23T22:57:15.552831097Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Nov 23 22:57:15.555351 containerd[1993]: time="2025-11-23T22:57:15.552987277Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 
Nov 23 22:57:15.561867 containerd[1993]: time="2025-11-23T22:57:15.558953569Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Nov 23 22:57:15.561867 containerd[1993]: time="2025-11-23T22:57:15.560188849Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Nov 23 22:57:15.562207 containerd[1993]: time="2025-11-23T22:57:15.562096069Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Nov 23 22:57:15.562207 containerd[1993]: time="2025-11-23T22:57:15.562179061Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Nov 23 22:57:15.562313 containerd[1993]: time="2025-11-23T22:57:15.562248001Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Nov 23 22:57:15.567050 containerd[1993]: time="2025-11-23T22:57:15.562279921Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Nov 23 22:57:15.567050 containerd[1993]: time="2025-11-23T22:57:15.564907237Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Nov 23 22:57:15.567050 containerd[1993]: time="2025-11-23T22:57:15.565054861Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Nov 23 22:57:15.567050 containerd[1993]: time="2025-11-23T22:57:15.565095493Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Nov 23 22:57:15.567050 containerd[1993]: time="2025-11-23T22:57:15.565158877Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Nov 23 22:57:15.567050 containerd[1993]: time="2025-11-23T22:57:15.565273909Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Nov 23 22:57:15.567050 containerd[1993]: time="2025-11-23T22:57:15.565446001Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Nov 23 22:57:15.567050 containerd[1993]: time="2025-11-23T22:57:15.565497529Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Nov 23 22:57:15.567050 containerd[1993]: time="2025-11-23T22:57:15.565532425Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Nov 23 22:57:15.567050 containerd[1993]: time="2025-11-23T22:57:15.565556965Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Nov 23 22:57:15.567050 containerd[1993]: time="2025-11-23T22:57:15.566626009Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Nov 23 22:57:15.567050 containerd[1993]: time="2025-11-23T22:57:15.566662297Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Nov 23 22:57:15.567694 containerd[1993]: time="2025-11-23T22:57:15.566994781Z" level=info msg="runtime interface created" Nov 23 22:57:15.567694 containerd[1993]: time="2025-11-23T22:57:15.567327853Z" level=info msg="created NRI interface" Nov 23 22:57:15.576175 containerd[1993]: time="2025-11-23T22:57:15.567366241Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 
Nov 23 22:57:15.576175 containerd[1993]: time="2025-11-23T22:57:15.570148165Z" level=info msg="Connect containerd service" Nov 23 22:57:15.576175 containerd[1993]: time="2025-11-23T22:57:15.572451625Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Nov 23 22:57:15.584945 containerd[1993]: time="2025-11-23T22:57:15.581781397Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 23 22:57:15.703739 ntpd[2168]: ntpd 4.2.8p18@1.4062-o Sun Nov 23 20:14:25 UTC 2025 (1): Starting Nov 23 22:57:15.703899 ntpd[2168]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Nov 23 22:57:15.704441 ntpd[2168]: 23 Nov 22:57:15 ntpd[2168]: ntpd 4.2.8p18@1.4062-o Sun Nov 23 20:14:25 UTC 2025 (1): Starting Nov 23 22:57:15.704441 ntpd[2168]: 23 Nov 22:57:15 ntpd[2168]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Nov 23 22:57:15.704441 ntpd[2168]: 23 Nov 22:57:15 ntpd[2168]: ---------------------------------------------------- Nov 23 22:57:15.704441 ntpd[2168]: 23 Nov 22:57:15 ntpd[2168]: ntp-4 is maintained by Network Time Foundation, Nov 23 22:57:15.704441 ntpd[2168]: 23 Nov 22:57:15 ntpd[2168]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Nov 23 22:57:15.704441 ntpd[2168]: 23 Nov 22:57:15 ntpd[2168]: corporation. Support and training for ntp-4 are Nov 23 22:57:15.704441 ntpd[2168]: 23 Nov 22:57:15 ntpd[2168]: available at https://www.nwtime.org/support Nov 23 22:57:15.703920 ntpd[2168]: ---------------------------------------------------- Nov 23 22:57:15.703938 ntpd[2168]: ntp-4 is maintained by Network Time Foundation, Nov 23 22:57:15.703955 ntpd[2168]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Nov 23 22:57:15.703973 ntpd[2168]: corporation. 
Support and training for ntp-4 are Nov 23 22:57:15.703989 ntpd[2168]: available at https://www.nwtime.org/support Nov 23 22:57:15.704006 ntpd[2168]: ---------------------------------------------------- Nov 23 22:57:15.720049 ntpd[2168]: 23 Nov 22:57:15 ntpd[2168]: ---------------------------------------------------- Nov 23 22:57:15.720726 ntpd[2168]: proto: precision = 0.096 usec (-23) Nov 23 22:57:15.721243 ntpd[2168]: 23 Nov 22:57:15 ntpd[2168]: proto: precision = 0.096 usec (-23) Nov 23 22:57:15.723384 locksmithd[2033]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Nov 23 22:57:15.732184 ntpd[2168]: basedate set to 2025-11-11 Nov 23 22:57:15.733084 ntpd[2168]: 23 Nov 22:57:15 ntpd[2168]: basedate set to 2025-11-11 Nov 23 22:57:15.733084 ntpd[2168]: 23 Nov 22:57:15 ntpd[2168]: gps base set to 2025-11-16 (week 2393) Nov 23 22:57:15.733084 ntpd[2168]: 23 Nov 22:57:15 ntpd[2168]: Listen and drop on 0 v6wildcard [::]:123 Nov 23 22:57:15.733084 ntpd[2168]: 23 Nov 22:57:15 ntpd[2168]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Nov 23 22:57:15.733084 ntpd[2168]: 23 Nov 22:57:15 ntpd[2168]: Listen normally on 2 lo 127.0.0.1:123 Nov 23 22:57:15.733084 ntpd[2168]: 23 Nov 22:57:15 ntpd[2168]: Listen normally on 3 eth0 172.31.24.18:123 Nov 23 22:57:15.733084 ntpd[2168]: 23 Nov 22:57:15 ntpd[2168]: Listen normally on 4 lo [::1]:123 Nov 23 22:57:15.733084 ntpd[2168]: 23 Nov 22:57:15 ntpd[2168]: Listen normally on 5 eth0 [fe80::4ba:aaff:fe2a:54db%2]:123 Nov 23 22:57:15.733084 ntpd[2168]: 23 Nov 22:57:15 ntpd[2168]: Listening on routing socket on fd #22 for interface updates Nov 23 22:57:15.732254 ntpd[2168]: gps base set to 2025-11-16 (week 2393) Nov 23 22:57:15.732427 ntpd[2168]: Listen and drop on 0 v6wildcard [::]:123 Nov 23 22:57:15.732477 ntpd[2168]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Nov 23 22:57:15.732819 ntpd[2168]: Listen normally on 2 lo 127.0.0.1:123 Nov 23 22:57:15.732877 ntpd[2168]: Listen normally on 3 eth0 172.31.24.18:123 Nov 23 22:57:15.732930 ntpd[2168]: Listen normally on 4 lo [::1]:123 Nov 23 22:57:15.732977 ntpd[2168]: Listen normally on 5 eth0 [fe80::4ba:aaff:fe2a:54db%2]:123 Nov 23 22:57:15.733062 ntpd[2168]: Listening on routing socket on fd #22 for interface updates Nov 23 22:57:15.795582 ntpd[2168]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Nov 23 22:57:15.799219 ntpd[2168]: 23 Nov 22:57:15 ntpd[2168]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Nov 23 22:57:15.799219 ntpd[2168]: 23 Nov 22:57:15 ntpd[2168]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Nov 23 22:57:15.795650 ntpd[2168]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Nov 23 22:57:15.813840 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Nov 23 22:57:15.966426 amazon-ssm-agent[2164]: Initializing new seelog logger Nov 23 22:57:15.966892 amazon-ssm-agent[2164]: New Seelog Logger Creation Complete Nov 23 22:57:15.966892 amazon-ssm-agent[2164]: 2025/11/23 22:57:15 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Nov 23 22:57:15.966892 amazon-ssm-agent[2164]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Nov 23 22:57:15.967858 amazon-ssm-agent[2164]: 2025/11/23 22:57:15 processing appconfig overrides Nov 23 22:57:15.982074 amazon-ssm-agent[2164]: 2025/11/23 22:57:15 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Nov 23 22:57:15.982074 amazon-ssm-agent[2164]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. 
Nov 23 22:57:15.982074 amazon-ssm-agent[2164]: 2025/11/23 22:57:15 processing appconfig overrides Nov 23 22:57:15.982074 amazon-ssm-agent[2164]: 2025/11/23 22:57:15 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Nov 23 22:57:15.982074 amazon-ssm-agent[2164]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Nov 23 22:57:15.982074 amazon-ssm-agent[2164]: 2025/11/23 22:57:15 processing appconfig overrides Nov 23 22:57:15.982074 amazon-ssm-agent[2164]: 2025-11-23 22:57:15.9724 INFO Proxy environment variables: Nov 23 22:57:15.989769 amazon-ssm-agent[2164]: 2025/11/23 22:57:15 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Nov 23 22:57:15.989769 amazon-ssm-agent[2164]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Nov 23 22:57:15.989769 amazon-ssm-agent[2164]: 2025/11/23 22:57:15 processing appconfig overrides Nov 23 22:57:16.079450 amazon-ssm-agent[2164]: 2025-11-23 22:57:15.9725 INFO https_proxy: Nov 23 22:57:16.083753 polkitd[2133]: Started polkitd version 126 Nov 23 22:57:16.139703 polkitd[2133]: Loading rules from directory /etc/polkit-1/rules.d Nov 23 22:57:16.140370 polkitd[2133]: Loading rules from directory /run/polkit-1/rules.d Nov 23 22:57:16.140451 polkitd[2133]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Nov 23 22:57:16.144161 polkitd[2133]: Loading rules from directory /usr/local/share/polkit-1/rules.d Nov 23 22:57:16.144270 polkitd[2133]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Nov 23 22:57:16.145046 polkitd[2133]: Loading rules from directory /usr/share/polkit-1/rules.d Nov 23 22:57:16.149578 polkitd[2133]: Finished loading, compiling and executing 2 rules Nov 23 22:57:16.151577 systemd[1]: Started polkit.service - Authorization Manager. Nov 23 22:57:16.160516 dbus-daemon[1965]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Nov 23 22:57:16.163760 polkitd[2133]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Nov 23 22:57:16.179112 amazon-ssm-agent[2164]: 2025-11-23 22:57:15.9725 INFO http_proxy: Nov 23 22:57:16.232198 systemd-hostnamed[2029]: Hostname set to (transient) Nov 23 22:57:16.233110 systemd-resolved[1853]: System hostname changed to 'ip-172-31-24-18'. Nov 23 22:57:16.256242 containerd[1993]: time="2025-11-23T22:57:16.256155565Z" level=info msg="Start subscribing containerd event" Nov 23 22:57:16.256482 containerd[1993]: time="2025-11-23T22:57:16.256454305Z" level=info msg="Start recovering state" Nov 23 22:57:16.256721 containerd[1993]: time="2025-11-23T22:57:16.256691917Z" level=info msg="Start event monitor" Nov 23 22:57:16.257197 containerd[1993]: time="2025-11-23T22:57:16.257161669Z" level=info msg="Start cni network conf syncer for default" Nov 23 22:57:16.259514 containerd[1993]: time="2025-11-23T22:57:16.257329585Z" level=info msg="Start streaming server" Nov 23 22:57:16.259514 containerd[1993]: time="2025-11-23T22:57:16.257520973Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Nov 23 22:57:16.259514 containerd[1993]: time="2025-11-23T22:57:16.257542969Z" level=info msg="runtime interface starting up..." Nov 23 22:57:16.259514 containerd[1993]: time="2025-11-23T22:57:16.257558329Z" level=info msg="starting plugins..." 
Nov 23 22:57:16.259514 containerd[1993]: time="2025-11-23T22:57:16.257607361Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Nov 23 22:57:16.260790 containerd[1993]: time="2025-11-23T22:57:16.260700049Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Nov 23 22:57:16.264238 containerd[1993]: time="2025-11-23T22:57:16.263317813Z" level=info msg=serving... address=/run/containerd/containerd.sock Nov 23 22:57:16.264238 containerd[1993]: time="2025-11-23T22:57:16.264175645Z" level=info msg="containerd successfully booted in 0.876467s" Nov 23 22:57:16.264335 systemd[1]: Started containerd.service - containerd container runtime. Nov 23 22:57:16.280562 amazon-ssm-agent[2164]: 2025-11-23 22:57:15.9725 INFO no_proxy: Nov 23 22:57:16.380295 amazon-ssm-agent[2164]: 2025-11-23 22:57:15.9754 INFO Checking if agent identity type OnPrem can be assumed Nov 23 22:57:16.420085 tar[1987]: linux-arm64/README.md Nov 23 22:57:16.473852 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Nov 23 22:57:16.480092 amazon-ssm-agent[2164]: 2025-11-23 22:57:15.9755 INFO Checking if agent identity type EC2 can be assumed Nov 23 22:57:16.579481 amazon-ssm-agent[2164]: 2025-11-23 22:57:16.1982 INFO Agent will take identity from EC2 Nov 23 22:57:16.678470 amazon-ssm-agent[2164]: 2025-11-23 22:57:16.2093 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.3.0.0 Nov 23 22:57:16.685243 sshd_keygen[2013]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Nov 23 22:57:16.696396 amazon-ssm-agent[2164]: 2025/11/23 22:57:16 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Nov 23 22:57:16.696396 amazon-ssm-agent[2164]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Nov 23 22:57:16.696396 amazon-ssm-agent[2164]: 2025/11/23 22:57:16 processing appconfig overrides Nov 23 22:57:16.735660 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Nov 23 22:57:16.746056 amazon-ssm-agent[2164]: 2025-11-23 22:57:16.2093 INFO [amazon-ssm-agent] OS: linux, Arch: arm64 Nov 23 22:57:16.746056 amazon-ssm-agent[2164]: 2025-11-23 22:57:16.2093 INFO [amazon-ssm-agent] Starting Core Agent Nov 23 22:57:16.746056 amazon-ssm-agent[2164]: 2025-11-23 22:57:16.2093 INFO [amazon-ssm-agent] Registrar detected. Attempting registration Nov 23 22:57:16.746056 amazon-ssm-agent[2164]: 2025-11-23 22:57:16.2093 INFO [Registrar] Starting registrar module Nov 23 22:57:16.746056 amazon-ssm-agent[2164]: 2025-11-23 22:57:16.2127 INFO [EC2Identity] Checking disk for registration info Nov 23 22:57:16.746056 amazon-ssm-agent[2164]: 2025-11-23 22:57:16.2130 INFO [EC2Identity] No registration info found for ec2 instance, attempting registration Nov 23 22:57:16.746056 amazon-ssm-agent[2164]: 2025-11-23 22:57:16.2131 INFO [EC2Identity] Generating registration keypair Nov 23 22:57:16.746056 amazon-ssm-agent[2164]: 2025-11-23 22:57:16.6320 INFO [EC2Identity] Checking write access before registering Nov 23 22:57:16.746056 amazon-ssm-agent[2164]: 2025-11-23 22:57:16.6344 INFO [EC2Identity] Registering EC2 instance with Systems Manager Nov 23 22:57:16.746056 amazon-ssm-agent[2164]: 2025-11-23 22:57:16.6955 INFO [EC2Identity] EC2 registration was successful. Nov 23 22:57:16.746056 amazon-ssm-agent[2164]: 2025-11-23 22:57:16.6955 INFO [amazon-ssm-agent] Registration attempted. Resuming core agent startup. 
Nov 23 22:57:16.746056 amazon-ssm-agent[2164]: 2025-11-23 22:57:16.6956 INFO [CredentialRefresher] credentialRefresher has started Nov 23 22:57:16.746056 amazon-ssm-agent[2164]: 2025-11-23 22:57:16.6956 INFO [CredentialRefresher] Starting credentials refresher loop Nov 23 22:57:16.746056 amazon-ssm-agent[2164]: 2025-11-23 22:57:16.7453 INFO EC2RoleProvider Successfully connected with instance profile role credentials Nov 23 22:57:16.746056 amazon-ssm-agent[2164]: 2025-11-23 22:57:16.7456 INFO [CredentialRefresher] Credentials ready Nov 23 22:57:16.748922 systemd[1]: Starting issuegen.service - Generate /run/issue... Nov 23 22:57:16.756530 systemd[1]: Started sshd@0-172.31.24.18:22-139.178.68.195:38202.service - OpenSSH per-connection server daemon (139.178.68.195:38202). Nov 23 22:57:16.777932 amazon-ssm-agent[2164]: 2025-11-23 22:57:16.7459 INFO [CredentialRefresher] Next credential rotation will be in 29.9999896003 minutes Nov 23 22:57:16.784906 systemd[1]: issuegen.service: Deactivated successfully. Nov 23 22:57:16.786862 systemd[1]: Finished issuegen.service - Generate /run/issue. Nov 23 22:57:16.798057 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Nov 23 22:57:16.852146 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Nov 23 22:57:16.859669 systemd[1]: Started getty@tty1.service - Getty on tty1. Nov 23 22:57:16.869308 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Nov 23 22:57:16.876257 systemd[1]: Reached target getty.target - Login Prompts. Nov 23 22:57:17.006058 sshd[2234]: Accepted publickey for core from 139.178.68.195 port 38202 ssh2: RSA SHA256:U+pqkkjujCqSWzNqlLC5FwY85x7/HjFaUhdBkqR7ZEA Nov 23 22:57:17.010111 sshd-session[2234]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 22:57:17.042100 systemd-logind[1977]: New session 1 of user core. Nov 23 22:57:17.044055 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Nov 23 22:57:17.049761 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Nov 23 22:57:17.091250 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Nov 23 22:57:17.104369 systemd[1]: Starting user@500.service - User Manager for UID 500... Nov 23 22:57:17.125057 (systemd)[2246]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Nov 23 22:57:17.131460 systemd-logind[1977]: New session c1 of user core. Nov 23 22:57:17.470001 systemd[2246]: Queued start job for default target default.target. Nov 23 22:57:17.480714 systemd[2246]: Created slice app.slice - User Application Slice. Nov 23 22:57:17.480797 systemd[2246]: Reached target paths.target - Paths. Nov 23 22:57:17.480899 systemd[2246]: Reached target timers.target - Timers. Nov 23 22:57:17.483633 systemd[2246]: Starting dbus.socket - D-Bus User Message Bus Socket... Nov 23 22:57:17.534499 systemd[2246]: Listening on dbus.socket - D-Bus User Message Bus Socket. Nov 23 22:57:17.534799 systemd[2246]: Reached target sockets.target - Sockets. Nov 23 22:57:17.534905 systemd[2246]: Reached target basic.target - Basic System. Nov 23 22:57:17.534988 systemd[2246]: Reached target default.target - Main User Target. Nov 23 22:57:17.535083 systemd[2246]: Startup finished in 388ms. Nov 23 22:57:17.535508 systemd[1]: Started user@500.service - User Manager for UID 500. Nov 23 22:57:17.549648 systemd[1]: Started session-1.scope - Session 1 of User core. 
Nov 23 22:57:17.719320 systemd[1]: Started sshd@1-172.31.24.18:22-139.178.68.195:38218.service - OpenSSH per-connection server daemon (139.178.68.195:38218). Nov 23 22:57:17.791583 amazon-ssm-agent[2164]: 2025-11-23 22:57:17.7912 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Nov 23 22:57:17.893325 amazon-ssm-agent[2164]: 2025-11-23 22:57:17.7967 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2262) started Nov 23 22:57:17.933396 sshd[2257]: Accepted publickey for core from 139.178.68.195 port 38218 ssh2: RSA SHA256:U+pqkkjujCqSWzNqlLC5FwY85x7/HjFaUhdBkqR7ZEA Nov 23 22:57:17.937868 sshd-session[2257]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 22:57:17.953139 systemd-logind[1977]: New session 2 of user core. Nov 23 22:57:17.961398 systemd[1]: Started session-2.scope - Session 2 of User core. Nov 23 22:57:17.995063 amazon-ssm-agent[2164]: 2025-11-23 22:57:17.7968 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Nov 23 22:57:18.104397 sshd[2267]: Connection closed by 139.178.68.195 port 38218 Nov 23 22:57:18.105696 sshd-session[2257]: pam_unix(sshd:session): session closed for user core Nov 23 22:57:18.114685 systemd[1]: sshd@1-172.31.24.18:22-139.178.68.195:38218.service: Deactivated successfully. Nov 23 22:57:18.118554 systemd[1]: session-2.scope: Deactivated successfully. Nov 23 22:57:18.120133 systemd-logind[1977]: Session 2 logged out. Waiting for processes to exit. Nov 23 22:57:18.123516 systemd-logind[1977]: Removed session 2. Nov 23 22:57:18.145422 systemd[1]: Started sshd@2-172.31.24.18:22-139.178.68.195:38224.service - OpenSSH per-connection server daemon (139.178.68.195:38224). Nov 23 22:57:18.350102 sshd[2279]: Accepted publickey for core from 139.178.68.195 port 38224 ssh2: RSA SHA256:U+pqkkjujCqSWzNqlLC5FwY85x7/HjFaUhdBkqR7ZEA Nov 23 22:57:18.353229 sshd-session[2279]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 22:57:18.363652 systemd-logind[1977]: New session 3 of user core. Nov 23 22:57:18.374452 systemd[1]: Started session-3.scope - Session 3 of User core. Nov 23 22:57:18.505317 sshd[2282]: Connection closed by 139.178.68.195 port 38224 Nov 23 22:57:18.506135 sshd-session[2279]: pam_unix(sshd:session): session closed for user core Nov 23 22:57:18.515180 systemd[1]: sshd@2-172.31.24.18:22-139.178.68.195:38224.service: Deactivated successfully. Nov 23 22:57:18.518550 systemd[1]: session-3.scope: Deactivated successfully. Nov 23 22:57:18.520977 systemd-logind[1977]: Session 3 logged out. Waiting for processes to exit. Nov 23 22:57:18.523493 systemd-logind[1977]: Removed session 3. Nov 23 22:57:19.620336 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 23 22:57:19.625227 systemd[1]: Reached target multi-user.target - Multi-User System. Nov 23 22:57:19.631348 systemd[1]: Startup finished in 3.739s (kernel) + 9.245s (initrd) + 12.756s (userspace) = 25.741s. 
Nov 23 22:57:19.637280 (kubelet)[2292]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 23 22:57:20.371572 kubelet[2292]: E1123 22:57:20.371485 2292 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 23 22:57:20.376528 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 23 22:57:20.376973 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 23 22:57:20.378570 systemd[1]: kubelet.service: Consumed 1.324s CPU time, 248.2M memory peak. Nov 23 22:57:23.136866 systemd-resolved[1853]: Clock change detected. Flushing caches. Nov 23 22:57:28.962854 systemd[1]: Started sshd@3-172.31.24.18:22-139.178.68.195:44838.service - OpenSSH per-connection server daemon (139.178.68.195:44838). Nov 23 22:57:29.156047 sshd[2304]: Accepted publickey for core from 139.178.68.195 port 44838 ssh2: RSA SHA256:U+pqkkjujCqSWzNqlLC5FwY85x7/HjFaUhdBkqR7ZEA Nov 23 22:57:29.158324 sshd-session[2304]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 22:57:29.165995 systemd-logind[1977]: New session 4 of user core. Nov 23 22:57:29.179821 systemd[1]: Started session-4.scope - Session 4 of User core. Nov 23 22:57:29.303995 sshd[2307]: Connection closed by 139.178.68.195 port 44838 Nov 23 22:57:29.304806 sshd-session[2304]: pam_unix(sshd:session): session closed for user core Nov 23 22:57:29.311558 systemd[1]: sshd@3-172.31.24.18:22-139.178.68.195:44838.service: Deactivated successfully. Nov 23 22:57:29.316562 systemd[1]: session-4.scope: Deactivated successfully. Nov 23 22:57:29.319396 systemd-logind[1977]: Session 4 logged out. Waiting for processes to exit. Nov 23 22:57:29.321891 systemd-logind[1977]: Removed session 4. Nov 23 22:57:29.340850 systemd[1]: Started sshd@4-172.31.24.18:22-139.178.68.195:44846.service - OpenSSH per-connection server daemon (139.178.68.195:44846). Nov 23 22:57:29.534957 sshd[2313]: Accepted publickey for core from 139.178.68.195 port 44846 ssh2: RSA SHA256:U+pqkkjujCqSWzNqlLC5FwY85x7/HjFaUhdBkqR7ZEA Nov 23 22:57:29.537203 sshd-session[2313]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 22:57:29.546571 systemd-logind[1977]: New session 5 of user core. Nov 23 22:57:29.553821 systemd[1]: Started session-5.scope - Session 5 of User core. Nov 23 22:57:29.671558 sshd[2316]: Connection closed by 139.178.68.195 port 44846 Nov 23 22:57:29.670442 sshd-session[2313]: pam_unix(sshd:session): session closed for user core Nov 23 22:57:29.677739 systemd[1]: sshd@4-172.31.24.18:22-139.178.68.195:44846.service: Deactivated successfully. Nov 23 22:57:29.682053 systemd[1]: session-5.scope: Deactivated successfully. Nov 23 22:57:29.684140 systemd-logind[1977]: Session 5 logged out. Waiting for processes to exit. Nov 23 22:57:29.687450 systemd-logind[1977]: Removed session 5. Nov 23 22:57:29.706906 systemd[1]: Started sshd@5-172.31.24.18:22-139.178.68.195:44860.service - OpenSSH per-connection server daemon (139.178.68.195:44860). 
Nov 23 22:57:29.899757 sshd[2322]: Accepted publickey for core from 139.178.68.195 port 44860 ssh2: RSA SHA256:U+pqkkjujCqSWzNqlLC5FwY85x7/HjFaUhdBkqR7ZEA Nov 23 22:57:29.901948 sshd-session[2322]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 22:57:29.910672 systemd-logind[1977]: New session 6 of user core. Nov 23 22:57:29.917824 systemd[1]: Started session-6.scope - Session 6 of User core. Nov 23 22:57:30.043717 sshd[2325]: Connection closed by 139.178.68.195 port 44860 Nov 23 22:57:30.045216 sshd-session[2322]: pam_unix(sshd:session): session closed for user core Nov 23 22:57:30.051806 systemd[1]: sshd@5-172.31.24.18:22-139.178.68.195:44860.service: Deactivated successfully. Nov 23 22:57:30.055614 systemd[1]: session-6.scope: Deactivated successfully. Nov 23 22:57:30.057553 systemd-logind[1977]: Session 6 logged out. Waiting for processes to exit. Nov 23 22:57:30.060706 systemd-logind[1977]: Removed session 6. Nov 23 22:57:30.077700 systemd[1]: Started sshd@6-172.31.24.18:22-139.178.68.195:44866.service - OpenSSH per-connection server daemon (139.178.68.195:44866). Nov 23 22:57:30.292815 sshd[2331]: Accepted publickey for core from 139.178.68.195 port 44866 ssh2: RSA SHA256:U+pqkkjujCqSWzNqlLC5FwY85x7/HjFaUhdBkqR7ZEA Nov 23 22:57:30.295387 sshd-session[2331]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 22:57:30.304445 systemd-logind[1977]: New session 7 of user core. Nov 23 22:57:30.314854 systemd[1]: Started session-7.scope - Session 7 of User core. Nov 23 22:57:30.437573 sudo[2335]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Nov 23 22:57:30.438240 sudo[2335]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 23 22:57:30.455956 sudo[2335]: pam_unix(sudo:session): session closed for user root Nov 23 22:57:30.480632 sshd[2334]: Connection closed by 139.178.68.195 port 44866 Nov 23 22:57:30.480420 sshd-session[2331]: pam_unix(sshd:session): session closed for user core Nov 23 22:57:30.487726 systemd[1]: sshd@6-172.31.24.18:22-139.178.68.195:44866.service: Deactivated successfully. Nov 23 22:57:30.491322 systemd[1]: session-7.scope: Deactivated successfully. Nov 23 22:57:30.493449 systemd-logind[1977]: Session 7 logged out. Waiting for processes to exit. Nov 23 22:57:30.497102 systemd-logind[1977]: Removed session 7. Nov 23 22:57:30.514401 systemd[1]: Started sshd@7-172.31.24.18:22-139.178.68.195:35976.service - OpenSSH per-connection server daemon (139.178.68.195:35976). Nov 23 22:57:30.723172 sshd[2341]: Accepted publickey for core from 139.178.68.195 port 35976 ssh2: RSA SHA256:U+pqkkjujCqSWzNqlLC5FwY85x7/HjFaUhdBkqR7ZEA Nov 23 22:57:30.726214 sshd-session[2341]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 22:57:30.734005 systemd-logind[1977]: New session 8 of user core. Nov 23 22:57:30.746848 systemd[1]: Started session-8.scope - Session 8 of User core. Nov 23 22:57:30.848650 sudo[2346]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Nov 23 22:57:30.849247 sudo[2346]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 23 22:57:30.850848 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Nov 23 22:57:30.856646 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Nov 23 22:57:30.861526 sudo[2346]: pam_unix(sudo:session): session closed for user root Nov 23 22:57:30.871379 sudo[2345]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Nov 23 22:57:30.872474 sudo[2345]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 23 22:57:30.895099 systemd[1]: Starting audit-rules.service - Load Audit Rules... Nov 23 22:57:30.964903 augenrules[2371]: No rules Nov 23 22:57:30.969914 systemd[1]: audit-rules.service: Deactivated successfully. Nov 23 22:57:30.971478 systemd[1]: Finished audit-rules.service - Load Audit Rules. Nov 23 22:57:30.973903 sudo[2345]: pam_unix(sudo:session): session closed for user root Nov 23 22:57:30.998413 sshd[2344]: Connection closed by 139.178.68.195 port 35976 Nov 23 22:57:30.998938 sshd-session[2341]: pam_unix(sshd:session): session closed for user core Nov 23 22:57:31.008278 systemd[1]: sshd@7-172.31.24.18:22-139.178.68.195:35976.service: Deactivated successfully. Nov 23 22:57:31.013048 systemd[1]: session-8.scope: Deactivated successfully. Nov 23 22:57:31.014907 systemd-logind[1977]: Session 8 logged out. Waiting for processes to exit. Nov 23 22:57:31.018875 systemd-logind[1977]: Removed session 8. Nov 23 22:57:31.038788 systemd[1]: Started sshd@8-172.31.24.18:22-139.178.68.195:35988.service - OpenSSH per-connection server daemon (139.178.68.195:35988). Nov 23 22:57:31.219133 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 23 22:57:31.238027 (kubelet)[2388]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 23 22:57:31.308633 sshd[2380]: Accepted publickey for core from 139.178.68.195 port 35988 ssh2: RSA SHA256:U+pqkkjujCqSWzNqlLC5FwY85x7/HjFaUhdBkqR7ZEA Nov 23 22:57:31.309641 sshd-session[2380]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 22:57:31.319727 systemd-logind[1977]: New session 9 of user core. Nov 23 22:57:31.327989 systemd[1]: Started session-9.scope - Session 9 of User core. Nov 23 22:57:31.393614 kubelet[2388]: E1123 22:57:31.393520 2388 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 23 22:57:31.400505 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 23 22:57:31.400994 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 23 22:57:31.401935 systemd[1]: kubelet.service: Consumed 391ms CPU time, 107.1M memory peak. Nov 23 22:57:31.432352 sudo[2396]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Nov 23 22:57:31.433463 sudo[2396]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 23 22:57:31.973661 systemd[1]: Starting docker.service - Docker Application Container Engine... 
Nov 23 22:57:31.997101 (dockerd)[2414]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Nov 23 22:57:32.371902 dockerd[2414]: time="2025-11-23T22:57:32.371723237Z" level=info msg="Starting up" Nov 23 22:57:32.373616 dockerd[2414]: time="2025-11-23T22:57:32.373535729Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Nov 23 22:57:32.393825 dockerd[2414]: time="2025-11-23T22:57:32.393738101Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Nov 23 22:57:32.443707 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport1922115808-merged.mount: Deactivated successfully. Nov 23 22:57:32.476496 dockerd[2414]: time="2025-11-23T22:57:32.476211426Z" level=info msg="Loading containers: start." Nov 23 22:57:32.491875 kernel: Initializing XFRM netlink socket Nov 23 22:57:32.832181 (udev-worker)[2435]: Network interface NamePolicy= disabled on kernel command line. Nov 23 22:57:32.905186 systemd-networkd[1849]: docker0: Link UP Nov 23 22:57:32.916965 dockerd[2414]: time="2025-11-23T22:57:32.916901504Z" level=info msg="Loading containers: done." Nov 23 22:57:32.965863 dockerd[2414]: time="2025-11-23T22:57:32.965790248Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Nov 23 22:57:32.966077 dockerd[2414]: time="2025-11-23T22:57:32.965919608Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Nov 23 22:57:32.966145 dockerd[2414]: time="2025-11-23T22:57:32.966072704Z" level=info msg="Initializing buildkit" Nov 23 22:57:33.019329 dockerd[2414]: time="2025-11-23T22:57:33.019272413Z" level=info msg="Completed buildkit initialization" Nov 23 22:57:33.035383 dockerd[2414]: time="2025-11-23T22:57:33.035323769Z" level=info msg="Daemon has completed initialization" Nov 23 22:57:33.035806 dockerd[2414]: time="2025-11-23T22:57:33.035609261Z" level=info msg="API listen on /run/docker.sock" Nov 23 22:57:33.036453 systemd[1]: Started docker.service - Docker Application Container Engine. Nov 23 22:57:33.978659 containerd[1993]: time="2025-11-23T22:57:33.978494913Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.2\"" Nov 23 22:57:34.601794 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1836585046.mount: Deactivated successfully. 
Nov 23 22:57:36.024739 containerd[1993]: time="2025-11-23T22:57:36.024661544Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 22:57:36.026642 containerd[1993]: time="2025-11-23T22:57:36.026555492Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.2: active requests=0, bytes read=24563044" Nov 23 22:57:36.031944 containerd[1993]: time="2025-11-23T22:57:36.031866812Z" level=info msg="ImageCreate event name:\"sha256:b178af3d91f80925cd8bec42e1813e7d46370236a811d3380c9c10a02b245ca7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 22:57:36.040752 containerd[1993]: time="2025-11-23T22:57:36.040672952Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:e009ef63deaf797763b5bd423d04a099a2fe414a081bf7d216b43bc9e76b9077\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 22:57:36.042607 containerd[1993]: time="2025-11-23T22:57:36.042493088Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.2\" with image id \"sha256:b178af3d91f80925cd8bec42e1813e7d46370236a811d3380c9c10a02b245ca7\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.2\", repo digest \"registry.k8s.io/kube-apiserver@sha256:e009ef63deaf797763b5bd423d04a099a2fe414a081bf7d216b43bc9e76b9077\", size \"24559643\" in 2.063211611s" Nov 23 22:57:36.042607 containerd[1993]: time="2025-11-23T22:57:36.042547316Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.2\" returns image reference \"sha256:b178af3d91f80925cd8bec42e1813e7d46370236a811d3380c9c10a02b245ca7\"" Nov 23 22:57:36.043724 containerd[1993]: time="2025-11-23T22:57:36.043663712Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.2\"" Nov 23 22:57:37.352059 containerd[1993]: time="2025-11-23T22:57:37.351973462Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 22:57:37.354543 containerd[1993]: time="2025-11-23T22:57:37.354489550Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.2: active requests=0, bytes read=19134212" Nov 23 22:57:37.356755 containerd[1993]: time="2025-11-23T22:57:37.356693326Z" level=info msg="ImageCreate event name:\"sha256:1b34917560f0916ad0d1e98debeaf98c640b68c5a38f6d87711f0e288e5d7be2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 22:57:37.364835 containerd[1993]: time="2025-11-23T22:57:37.363863566Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:5c3998664b77441c09a4604f1361b230e63f7a6f299fc02fc1ebd1a12c38e3eb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 22:57:37.365952 containerd[1993]: time="2025-11-23T22:57:37.365885206Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.2\" with image id \"sha256:1b34917560f0916ad0d1e98debeaf98c640b68c5a38f6d87711f0e288e5d7be2\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.2\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:5c3998664b77441c09a4604f1361b230e63f7a6f299fc02fc1ebd1a12c38e3eb\", size \"20718696\" in 1.322159718s" Nov 23 22:57:37.365952 containerd[1993]: time="2025-11-23T22:57:37.365948074Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.2\" returns image reference \"sha256:1b34917560f0916ad0d1e98debeaf98c640b68c5a38f6d87711f0e288e5d7be2\"" Nov 23 22:57:37.367670 
containerd[1993]: time="2025-11-23T22:57:37.367629154Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.2\"" Nov 23 22:57:38.413516 containerd[1993]: time="2025-11-23T22:57:38.413437187Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 22:57:38.417107 containerd[1993]: time="2025-11-23T22:57:38.417033371Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.2: active requests=0, bytes read=14191283" Nov 23 22:57:38.419941 containerd[1993]: time="2025-11-23T22:57:38.419879363Z" level=info msg="ImageCreate event name:\"sha256:4f982e73e768a6ccebb54f8905b83b78d56b3a014e709c0bfe77140db3543949\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 22:57:38.427452 containerd[1993]: time="2025-11-23T22:57:38.427342295Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:44229946c0966b07d5c0791681d803e77258949985e49b4ab0fbdff99d2a48c6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 22:57:38.429443 containerd[1993]: time="2025-11-23T22:57:38.429232463Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.2\" with image id \"sha256:4f982e73e768a6ccebb54f8905b83b78d56b3a014e709c0bfe77140db3543949\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.2\", repo digest \"registry.k8s.io/kube-scheduler@sha256:44229946c0966b07d5c0791681d803e77258949985e49b4ab0fbdff99d2a48c6\", size \"15775785\" in 1.061407541s" Nov 23 22:57:38.429443 containerd[1993]: time="2025-11-23T22:57:38.429291395Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.2\" returns image reference \"sha256:4f982e73e768a6ccebb54f8905b83b78d56b3a014e709c0bfe77140db3543949\"" Nov 23 22:57:38.430149 containerd[1993]: time="2025-11-23T22:57:38.430018607Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.2\"" Nov 23 22:57:39.703405 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4241342424.mount: Deactivated successfully. 
Nov 23 22:57:40.152503 containerd[1993]: time="2025-11-23T22:57:40.151966872Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 22:57:40.154617 containerd[1993]: time="2025-11-23T22:57:40.154539012Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.2: active requests=0, bytes read=22803241" Nov 23 22:57:40.156018 containerd[1993]: time="2025-11-23T22:57:40.155957208Z" level=info msg="ImageCreate event name:\"sha256:94bff1bec29fd04573941f362e44a6730b151d46df215613feb3f1167703f786\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 22:57:40.161809 containerd[1993]: time="2025-11-23T22:57:40.161728560Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:d8b843ac8a5e861238df24a4db8c2ddced89948633400c4660464472045276f5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 22:57:40.162620 containerd[1993]: time="2025-11-23T22:57:40.162401568Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.2\" with image id \"sha256:94bff1bec29fd04573941f362e44a6730b151d46df215613feb3f1167703f786\", repo tag \"registry.k8s.io/kube-proxy:v1.34.2\", repo digest \"registry.k8s.io/kube-proxy@sha256:d8b843ac8a5e861238df24a4db8c2ddced89948633400c4660464472045276f5\", size \"22802260\" in 1.732303917s" Nov 23 22:57:40.162620 containerd[1993]: time="2025-11-23T22:57:40.162456660Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.2\" returns image reference \"sha256:94bff1bec29fd04573941f362e44a6730b151d46df215613feb3f1167703f786\"" Nov 23 22:57:40.163472 containerd[1993]: time="2025-11-23T22:57:40.163057548Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\"" Nov 23 22:57:40.703792 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3518070255.mount: Deactivated successfully. Nov 23 22:57:41.651353 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Nov 23 22:57:41.655945 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 23 22:57:42.044846 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Nov 23 22:57:42.057654 (kubelet)[2761]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 23 22:57:42.172376 containerd[1993]: time="2025-11-23T22:57:42.171087002Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 22:57:42.176233 containerd[1993]: time="2025-11-23T22:57:42.176156030Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=20395406" Nov 23 22:57:42.182546 containerd[1993]: time="2025-11-23T22:57:42.182167778Z" level=info msg="ImageCreate event name:\"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 22:57:42.195308 containerd[1993]: time="2025-11-23T22:57:42.195235622Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 22:57:42.200151 containerd[1993]: time="2025-11-23T22:57:42.200082926Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"20392204\" in 2.03697565s" Nov 23 22:57:42.200151 containerd[1993]: time="2025-11-23T22:57:42.200148110Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc\"" Nov 23 22:57:42.201520 containerd[1993]: time="2025-11-23T22:57:42.200881646Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Nov 23 22:57:42.201606 kubelet[2761]: E1123 22:57:42.201253 2761 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 23 22:57:42.206317 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 23 22:57:42.206840 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 23 22:57:42.207906 systemd[1]: kubelet.service: Consumed 322ms CPU time, 107.4M memory peak. Nov 23 22:57:42.808336 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount286329686.mount: Deactivated successfully. 
Nov 23 22:57:42.822619 containerd[1993]: time="2025-11-23T22:57:42.821773925Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 22:57:42.824810 containerd[1993]: time="2025-11-23T22:57:42.824771273Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=268709" Nov 23 22:57:42.827360 containerd[1993]: time="2025-11-23T22:57:42.827324045Z" level=info msg="ImageCreate event name:\"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 22:57:42.832428 containerd[1993]: time="2025-11-23T22:57:42.832381097Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 22:57:42.834013 containerd[1993]: time="2025-11-23T22:57:42.833925677Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"267939\" in 632.999547ms" Nov 23 22:57:42.834106 containerd[1993]: time="2025-11-23T22:57:42.834021497Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\"" Nov 23 22:57:42.835209 containerd[1993]: time="2025-11-23T22:57:42.835116521Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\"" Nov 23 22:57:43.398276 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1390703090.mount: Deactivated successfully. Nov 23 22:57:46.188444 containerd[1993]: time="2025-11-23T22:57:46.188361870Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.4-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 22:57:46.191185 containerd[1993]: time="2025-11-23T22:57:46.191116926Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.4-0: active requests=0, bytes read=98062987" Nov 23 22:57:46.193984 containerd[1993]: time="2025-11-23T22:57:46.193915662Z" level=info msg="ImageCreate event name:\"sha256:a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 22:57:46.199394 containerd[1993]: time="2025-11-23T22:57:46.199318182Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 22:57:46.201512 containerd[1993]: time="2025-11-23T22:57:46.201336510Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.4-0\" with image id \"sha256:a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e\", repo tag \"registry.k8s.io/etcd:3.6.4-0\", repo digest \"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\", size \"98207481\" in 3.366127469s" Nov 23 22:57:46.201512 containerd[1993]: time="2025-11-23T22:57:46.201387786Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\" returns image reference \"sha256:a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e\"" Nov 23 22:57:46.686088 systemd[1]: systemd-hostnamed.service: Deactivated successfully. 
Nov 23 22:57:52.285066 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Nov 23 22:57:52.290921 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 23 22:57:52.629828 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 23 22:57:52.642478 (kubelet)[2856]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 23 22:57:52.724119 kubelet[2856]: E1123 22:57:52.724060 2856 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 23 22:57:52.728964 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 23 22:57:52.729422 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 23 22:57:52.731712 systemd[1]: kubelet.service: Consumed 297ms CPU time, 106.3M memory peak. Nov 23 22:57:53.990396 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 23 22:57:53.990782 systemd[1]: kubelet.service: Consumed 297ms CPU time, 106.3M memory peak. Nov 23 22:57:53.994461 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 23 22:57:54.053862 systemd[1]: Reload requested from client PID 2870 ('systemctl') (unit session-9.scope)... Nov 23 22:57:54.053895 systemd[1]: Reloading... Nov 23 22:57:54.300621 zram_generator::config[2917]: No configuration found. Nov 23 22:57:54.760140 systemd[1]: Reloading finished in 705 ms. Nov 23 22:57:54.879232 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Nov 23 22:57:54.887290 systemd[1]: kubelet.service: Deactivated successfully. Nov 23 22:57:54.887776 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 23 22:57:54.887845 systemd[1]: kubelet.service: Consumed 231ms CPU time, 95M memory peak. Nov 23 22:57:54.891631 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 23 22:57:55.216005 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 23 22:57:55.232126 (kubelet)[2979]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 23 22:57:55.308124 kubelet[2979]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 23 22:57:55.308871 kubelet[2979]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Nov 23 22:57:55.310662 kubelet[2979]: I1123 22:57:55.309856 2979 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 23 22:57:55.982383 kubelet[2979]: I1123 22:57:55.982335 2979 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Nov 23 22:57:55.982579 kubelet[2979]: I1123 22:57:55.982561 2979 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 23 22:57:55.985173 kubelet[2979]: I1123 22:57:55.985146 2979 watchdog_linux.go:95] "Systemd watchdog is not enabled" Nov 23 22:57:55.985314 kubelet[2979]: I1123 22:57:55.985294 2979 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Nov 23 22:57:55.985901 kubelet[2979]: I1123 22:57:55.985879 2979 server.go:956] "Client rotation is on, will bootstrap in background" Nov 23 22:57:56.000479 kubelet[2979]: E1123 22:57:56.000430 2979 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.31.24.18:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.24.18:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Nov 23 22:57:56.004387 kubelet[2979]: I1123 22:57:56.004340 2979 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 23 22:57:56.011404 kubelet[2979]: I1123 22:57:56.011355 2979 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Nov 23 22:57:56.017056 kubelet[2979]: I1123 22:57:56.017005 2979 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Nov 23 22:57:56.017433 kubelet[2979]: I1123 22:57:56.017381 2979 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 23 22:57:56.017721 kubelet[2979]: I1123 22:57:56.017435 2979 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-24-18","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 23 22:57:56.017915 kubelet[2979]: I1123 22:57:56.017721 2979 topology_manager.go:138] "Creating topology manager with none policy" Nov 23 22:57:56.017915 kubelet[2979]: I1123 22:57:56.017741 2979 container_manager_linux.go:306] "Creating device plugin manager" Nov 23 22:57:56.018094 kubelet[2979]: I1123 22:57:56.017915 2979 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Nov 23 22:57:56.025985 kubelet[2979]: I1123 22:57:56.025944 2979 state_mem.go:36] "Initialized new in-memory state store" Nov 23 22:57:56.029152 kubelet[2979]: E1123 22:57:56.029083 2979 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.24.18:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-24-18&limit=500&resourceVersion=0\": dial tcp 172.31.24.18:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Nov 23 22:57:56.029302 kubelet[2979]: I1123 22:57:56.028276 2979 kubelet.go:475] "Attempting to sync node with API server" Nov 23 22:57:56.029302 kubelet[2979]: I1123 22:57:56.029225 2979 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 23 22:57:56.029302 kubelet[2979]: I1123 22:57:56.029272 2979 kubelet.go:387] "Adding apiserver pod source" Nov 23 22:57:56.029302 kubelet[2979]: I1123 22:57:56.029293 2979 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 23 22:57:56.033696 kubelet[2979]: I1123 22:57:56.031730 2979 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Nov 23 22:57:56.033696 kubelet[2979]: I1123 22:57:56.032928 2979 
kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Nov 23 22:57:56.033696 kubelet[2979]: I1123 22:57:56.032982 2979 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Nov 23 22:57:56.033696 kubelet[2979]: W1123 22:57:56.033039 2979 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Nov 23 22:57:56.038497 kubelet[2979]: I1123 22:57:56.038453 2979 server.go:1262] "Started kubelet" Nov 23 22:57:56.038843 kubelet[2979]: E1123 22:57:56.038796 2979 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.24.18:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.24.18:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Nov 23 22:57:56.042109 kubelet[2979]: I1123 22:57:56.042059 2979 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Nov 23 22:57:56.043797 kubelet[2979]: I1123 22:57:56.043765 2979 server.go:310] "Adding debug handlers to kubelet server" Nov 23 22:57:56.044774 kubelet[2979]: I1123 22:57:56.044634 2979 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 23 22:57:56.044911 kubelet[2979]: I1123 22:57:56.044789 2979 server_v1.go:49] "podresources" method="list" useActivePods=true Nov 23 22:57:56.046643 kubelet[2979]: I1123 22:57:56.045826 2979 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 23 22:57:56.048626 kubelet[2979]: E1123 22:57:56.046075 2979 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.24.18:6443/api/v1/namespaces/default/events\": dial tcp 172.31.24.18:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-24-18.187ac4e14ed7894f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-24-18,UID:ip-172-31-24-18,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-24-18,},FirstTimestamp:2025-11-23 22:57:56.038404431 +0000 UTC m=+0.800148329,LastTimestamp:2025-11-23 22:57:56.038404431 +0000 UTC m=+0.800148329,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-24-18,}" Nov 23 22:57:56.051155 kubelet[2979]: I1123 22:57:56.051101 2979 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 23 22:57:56.053877 kubelet[2979]: I1123 22:57:56.053238 2979 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 23 22:57:56.060625 kubelet[2979]: I1123 22:57:56.060375 2979 volume_manager.go:313] "Starting Kubelet Volume Manager" Nov 23 22:57:56.061570 kubelet[2979]: E1123 22:57:56.061204 2979 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ip-172-31-24-18\" not found" Nov 23 22:57:56.061764 kubelet[2979]: I1123 22:57:56.061711 2979 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Nov 23 22:57:56.061818 kubelet[2979]: I1123 
22:57:56.061789 2979 reconciler.go:29] "Reconciler: start to sync state" Nov 23 22:57:56.062379 kubelet[2979]: E1123 22:57:56.062336 2979 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 23 22:57:56.062604 kubelet[2979]: E1123 22:57:56.062535 2979 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.24.18:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-24-18?timeout=10s\": dial tcp 172.31.24.18:6443: connect: connection refused" interval="200ms" Nov 23 22:57:56.062837 kubelet[2979]: E1123 22:57:56.062794 2979 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.24.18:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.24.18:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Nov 23 22:57:56.064088 kubelet[2979]: I1123 22:57:56.064046 2979 factory.go:223] Registration of the systemd container factory successfully Nov 23 22:57:56.064257 kubelet[2979]: I1123 22:57:56.064231 2979 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 23 22:57:56.070626 kubelet[2979]: I1123 22:57:56.070021 2979 factory.go:223] Registration of the containerd container factory successfully Nov 23 22:57:56.108203 kubelet[2979]: I1123 22:57:56.107799 2979 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 23 22:57:56.108203 kubelet[2979]: I1123 22:57:56.107829 2979 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 23 22:57:56.108203 kubelet[2979]: I1123 22:57:56.107858 2979 state_mem.go:36] "Initialized new in-memory state store" Nov 23 22:57:56.110115 kubelet[2979]: I1123 22:57:56.110038 2979 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Nov 23 22:57:56.112136 kubelet[2979]: I1123 22:57:56.112073 2979 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Nov 23 22:57:56.112136 kubelet[2979]: I1123 22:57:56.112125 2979 status_manager.go:244] "Starting to sync pod status with apiserver" Nov 23 22:57:56.112541 kubelet[2979]: I1123 22:57:56.112198 2979 kubelet.go:2427] "Starting kubelet main sync loop" Nov 23 22:57:56.112541 kubelet[2979]: E1123 22:57:56.112271 2979 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 23 22:57:56.114040 kubelet[2979]: I1123 22:57:56.113613 2979 policy_none.go:49] "None policy: Start" Nov 23 22:57:56.114040 kubelet[2979]: I1123 22:57:56.113656 2979 memory_manager.go:187] "Starting memorymanager" policy="None" Nov 23 22:57:56.114040 kubelet[2979]: I1123 22:57:56.113680 2979 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Nov 23 22:57:56.118650 kubelet[2979]: I1123 22:57:56.118614 2979 policy_none.go:47] "Start" Nov 23 22:57:56.122694 kubelet[2979]: E1123 22:57:56.122074 2979 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.24.18:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.24.18:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Nov 23 22:57:56.131534 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Nov 23 22:57:56.149717 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Nov 23 22:57:56.157071 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Nov 23 22:57:56.161523 kubelet[2979]: E1123 22:57:56.161483 2979 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ip-172-31-24-18\" not found" Nov 23 22:57:56.170630 kubelet[2979]: E1123 22:57:56.169871 2979 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Nov 23 22:57:56.170630 kubelet[2979]: I1123 22:57:56.170156 2979 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 23 22:57:56.170630 kubelet[2979]: I1123 22:57:56.170174 2979 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 23 22:57:56.172682 kubelet[2979]: I1123 22:57:56.172654 2979 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 23 22:57:56.174356 kubelet[2979]: E1123 22:57:56.174276 2979 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Nov 23 22:57:56.174579 kubelet[2979]: E1123 22:57:56.174554 2979 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-24-18\" not found" Nov 23 22:57:56.234952 systemd[1]: Created slice kubepods-burstable-poded2a312fa77b597b0b568bb88e3768d5.slice - libcontainer container kubepods-burstable-poded2a312fa77b597b0b568bb88e3768d5.slice. 
Nov 23 22:57:56.249814 kubelet[2979]: E1123 22:57:56.249734 2979 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-24-18\" not found" node="ip-172-31-24-18" Nov 23 22:57:56.257323 systemd[1]: Created slice kubepods-burstable-pod52feb0dbad94177e727c2158a466c939.slice - libcontainer container kubepods-burstable-pod52feb0dbad94177e727c2158a466c939.slice. Nov 23 22:57:56.262762 kubelet[2979]: E1123 22:57:56.262728 2979 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-24-18\" not found" node="ip-172-31-24-18" Nov 23 22:57:56.263569 kubelet[2979]: I1123 22:57:56.263501 2979 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/52feb0dbad94177e727c2158a466c939-ca-certs\") pod \"kube-controller-manager-ip-172-31-24-18\" (UID: \"52feb0dbad94177e727c2158a466c939\") " pod="kube-system/kube-controller-manager-ip-172-31-24-18" Nov 23 22:57:56.263569 kubelet[2979]: I1123 22:57:56.263563 2979 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/52feb0dbad94177e727c2158a466c939-kubeconfig\") pod \"kube-controller-manager-ip-172-31-24-18\" (UID: \"52feb0dbad94177e727c2158a466c939\") " pod="kube-system/kube-controller-manager-ip-172-31-24-18" Nov 23 22:57:56.263772 kubelet[2979]: I1123 22:57:56.263677 2979 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ed2a312fa77b597b0b568bb88e3768d5-k8s-certs\") pod \"kube-apiserver-ip-172-31-24-18\" (UID: \"ed2a312fa77b597b0b568bb88e3768d5\") " pod="kube-system/kube-apiserver-ip-172-31-24-18" Nov 23 22:57:56.263772 kubelet[2979]: I1123 22:57:56.263720 2979 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ed2a312fa77b597b0b568bb88e3768d5-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-24-18\" (UID: \"ed2a312fa77b597b0b568bb88e3768d5\") " pod="kube-system/kube-apiserver-ip-172-31-24-18" Nov 23 22:57:56.263772 kubelet[2979]: I1123 22:57:56.263761 2979 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/52feb0dbad94177e727c2158a466c939-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-24-18\" (UID: \"52feb0dbad94177e727c2158a466c939\") " pod="kube-system/kube-controller-manager-ip-172-31-24-18" Nov 23 22:57:56.263969 kubelet[2979]: I1123 22:57:56.263795 2979 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/52feb0dbad94177e727c2158a466c939-k8s-certs\") pod \"kube-controller-manager-ip-172-31-24-18\" (UID: \"52feb0dbad94177e727c2158a466c939\") " pod="kube-system/kube-controller-manager-ip-172-31-24-18" Nov 23 22:57:56.263969 kubelet[2979]: I1123 22:57:56.263831 2979 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/52feb0dbad94177e727c2158a466c939-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-24-18\" (UID: \"52feb0dbad94177e727c2158a466c939\") " 
pod="kube-system/kube-controller-manager-ip-172-31-24-18" Nov 23 22:57:56.263969 kubelet[2979]: I1123 22:57:56.263869 2979 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7b83410c5340756f0ea9514cc24a9f27-kubeconfig\") pod \"kube-scheduler-ip-172-31-24-18\" (UID: \"7b83410c5340756f0ea9514cc24a9f27\") " pod="kube-system/kube-scheduler-ip-172-31-24-18" Nov 23 22:57:56.263969 kubelet[2979]: I1123 22:57:56.263907 2979 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ed2a312fa77b597b0b568bb88e3768d5-ca-certs\") pod \"kube-apiserver-ip-172-31-24-18\" (UID: \"ed2a312fa77b597b0b568bb88e3768d5\") " pod="kube-system/kube-apiserver-ip-172-31-24-18" Nov 23 22:57:56.270777 kubelet[2979]: E1123 22:57:56.268217 2979 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.24.18:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-24-18?timeout=10s\": dial tcp 172.31.24.18:6443: connect: connection refused" interval="400ms" Nov 23 22:57:56.271468 systemd[1]: Created slice kubepods-burstable-pod7b83410c5340756f0ea9514cc24a9f27.slice - libcontainer container kubepods-burstable-pod7b83410c5340756f0ea9514cc24a9f27.slice. Nov 23 22:57:56.277073 kubelet[2979]: I1123 22:57:56.277019 2979 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-24-18" Nov 23 22:57:56.280613 kubelet[2979]: E1123 22:57:56.278220 2979 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-24-18\" not found" node="ip-172-31-24-18" Nov 23 22:57:56.280613 kubelet[2979]: E1123 22:57:56.278927 2979 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.24.18:6443/api/v1/nodes\": dial tcp 172.31.24.18:6443: connect: connection refused" node="ip-172-31-24-18" Nov 23 22:57:56.482149 kubelet[2979]: I1123 22:57:56.482067 2979 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-24-18" Nov 23 22:57:56.482999 kubelet[2979]: E1123 22:57:56.482948 2979 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.24.18:6443/api/v1/nodes\": dial tcp 172.31.24.18:6443: connect: connection refused" node="ip-172-31-24-18" Nov 23 22:57:56.618320 containerd[1993]: time="2025-11-23T22:57:56.617970198Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-24-18,Uid:ed2a312fa77b597b0b568bb88e3768d5,Namespace:kube-system,Attempt:0,}" Nov 23 22:57:56.672130 kubelet[2979]: E1123 22:57:56.672072 2979 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.24.18:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-24-18?timeout=10s\": dial tcp 172.31.24.18:6443: connect: connection refused" interval="800ms" Nov 23 22:57:56.682691 containerd[1993]: time="2025-11-23T22:57:56.682495386Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-24-18,Uid:52feb0dbad94177e727c2158a466c939,Namespace:kube-system,Attempt:0,}" Nov 23 22:57:56.730426 containerd[1993]: time="2025-11-23T22:57:56.729984186Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-24-18,Uid:7b83410c5340756f0ea9514cc24a9f27,Namespace:kube-system,Attempt:0,}" Nov 23 22:57:56.885763 kubelet[2979]: I1123 22:57:56.885623 
2979 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-24-18" Nov 23 22:57:56.886530 kubelet[2979]: E1123 22:57:56.886283 2979 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.24.18:6443/api/v1/nodes\": dial tcp 172.31.24.18:6443: connect: connection refused" node="ip-172-31-24-18" Nov 23 22:57:56.900560 kubelet[2979]: E1123 22:57:56.900403 2979 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.24.18:6443/api/v1/namespaces/default/events\": dial tcp 172.31.24.18:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-24-18.187ac4e14ed7894f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-24-18,UID:ip-172-31-24-18,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-24-18,},FirstTimestamp:2025-11-23 22:57:56.038404431 +0000 UTC m=+0.800148329,LastTimestamp:2025-11-23 22:57:56.038404431 +0000 UTC m=+0.800148329,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-24-18,}" Nov 23 22:57:57.079566 kubelet[2979]: E1123 22:57:57.079491 2979 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.24.18:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-24-18&limit=500&resourceVersion=0\": dial tcp 172.31.24.18:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Nov 23 22:57:57.187389 kubelet[2979]: E1123 22:57:57.187043 2979 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.24.18:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.24.18:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Nov 23 22:57:57.233689 kubelet[2979]: E1123 22:57:57.233564 2979 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.24.18:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.24.18:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Nov 23 22:57:57.329895 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2603515068.mount: Deactivated successfully. 
Nov 23 22:57:57.347628 containerd[1993]: time="2025-11-23T22:57:57.346974941Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 23 22:57:57.355504 containerd[1993]: time="2025-11-23T22:57:57.355434605Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703" Nov 23 22:57:57.357647 containerd[1993]: time="2025-11-23T22:57:57.357347657Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 23 22:57:57.360347 containerd[1993]: time="2025-11-23T22:57:57.360030917Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 23 22:57:57.363811 containerd[1993]: time="2025-11-23T22:57:57.363741558Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 23 22:57:57.365987 containerd[1993]: time="2025-11-23T22:57:57.365925870Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Nov 23 22:57:57.368047 containerd[1993]: time="2025-11-23T22:57:57.367673538Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Nov 23 22:57:57.370611 containerd[1993]: time="2025-11-23T22:57:57.370527294Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 23 22:57:57.372008 containerd[1993]: time="2025-11-23T22:57:57.371964774Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 690.704344ms" Nov 23 22:57:57.376032 containerd[1993]: time="2025-11-23T22:57:57.375959646Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 594.206547ms" Nov 23 22:57:57.390949 containerd[1993]: time="2025-11-23T22:57:57.390869178Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 563.822163ms" Nov 23 22:57:57.414609 containerd[1993]: time="2025-11-23T22:57:57.413763318Z" level=info msg="connecting to shim 981589442b1612317660aa8f2d0909e81e6721db26169dd0b4c4ca3498c757df" address="unix:///run/containerd/s/1b39ead562c83855fdbdea4cdb6232ed70a9ffc072fdb7397cf8c283c883a189" namespace=k8s.io protocol=ttrpc version=3 Nov 
23 22:57:57.436608 kubelet[2979]: E1123 22:57:57.435209 2979 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.24.18:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.24.18:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Nov 23 22:57:57.473199 kubelet[2979]: E1123 22:57:57.473038 2979 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.24.18:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-24-18?timeout=10s\": dial tcp 172.31.24.18:6443: connect: connection refused" interval="1.6s" Nov 23 22:57:57.475308 systemd[1]: Started cri-containerd-981589442b1612317660aa8f2d0909e81e6721db26169dd0b4c4ca3498c757df.scope - libcontainer container 981589442b1612317660aa8f2d0909e81e6721db26169dd0b4c4ca3498c757df. Nov 23 22:57:57.483775 containerd[1993]: time="2025-11-23T22:57:57.483690402Z" level=info msg="connecting to shim cff99bde8c1b6009752e3877474fdf0e2c8b9c6dd16d3e06f75bdd3b513c7907" address="unix:///run/containerd/s/93250cfb3b06685c57339fc4a7bec5c4e9c4415c04b506dd50b066d260095a96" namespace=k8s.io protocol=ttrpc version=3 Nov 23 22:57:57.488750 containerd[1993]: time="2025-11-23T22:57:57.488657106Z" level=info msg="connecting to shim dab19b24e7cf166d59914a0a2b49c862e4ddd144bc99b60c4b4404d57f603ad7" address="unix:///run/containerd/s/a505a269beec9df41ab994e475b7fcc45c6f4ea063af27fdfec761bd3959b464" namespace=k8s.io protocol=ttrpc version=3 Nov 23 22:57:57.542356 systemd[1]: Started cri-containerd-cff99bde8c1b6009752e3877474fdf0e2c8b9c6dd16d3e06f75bdd3b513c7907.scope - libcontainer container cff99bde8c1b6009752e3877474fdf0e2c8b9c6dd16d3e06f75bdd3b513c7907. Nov 23 22:57:57.564252 systemd[1]: Started cri-containerd-dab19b24e7cf166d59914a0a2b49c862e4ddd144bc99b60c4b4404d57f603ad7.scope - libcontainer container dab19b24e7cf166d59914a0a2b49c862e4ddd144bc99b60c4b4404d57f603ad7. 
Nov 23 22:57:57.635023 containerd[1993]: time="2025-11-23T22:57:57.634961431Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-24-18,Uid:ed2a312fa77b597b0b568bb88e3768d5,Namespace:kube-system,Attempt:0,} returns sandbox id \"981589442b1612317660aa8f2d0909e81e6721db26169dd0b4c4ca3498c757df\"" Nov 23 22:57:57.655614 containerd[1993]: time="2025-11-23T22:57:57.654721735Z" level=info msg="CreateContainer within sandbox \"981589442b1612317660aa8f2d0909e81e6721db26169dd0b4c4ca3498c757df\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Nov 23 22:57:57.690322 kubelet[2979]: I1123 22:57:57.690268 2979 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-24-18" Nov 23 22:57:57.691335 kubelet[2979]: E1123 22:57:57.690840 2979 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.24.18:6443/api/v1/nodes\": dial tcp 172.31.24.18:6443: connect: connection refused" node="ip-172-31-24-18" Nov 23 22:57:57.703936 containerd[1993]: time="2025-11-23T22:57:57.703792471Z" level=info msg="Container 0fe2cfaaecb091081eebad40e11e858cdd712032be948879c423b28b9acf7892: CDI devices from CRI Config.CDIDevices: []" Nov 23 22:57:57.709060 containerd[1993]: time="2025-11-23T22:57:57.708878551Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-24-18,Uid:52feb0dbad94177e727c2158a466c939,Namespace:kube-system,Attempt:0,} returns sandbox id \"cff99bde8c1b6009752e3877474fdf0e2c8b9c6dd16d3e06f75bdd3b513c7907\"" Nov 23 22:57:57.721221 containerd[1993]: time="2025-11-23T22:57:57.721119067Z" level=info msg="CreateContainer within sandbox \"cff99bde8c1b6009752e3877474fdf0e2c8b9c6dd16d3e06f75bdd3b513c7907\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Nov 23 22:57:57.727379 containerd[1993]: time="2025-11-23T22:57:57.726656011Z" level=info msg="CreateContainer within sandbox \"981589442b1612317660aa8f2d0909e81e6721db26169dd0b4c4ca3498c757df\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"0fe2cfaaecb091081eebad40e11e858cdd712032be948879c423b28b9acf7892\"" Nov 23 22:57:57.728523 containerd[1993]: time="2025-11-23T22:57:57.728452231Z" level=info msg="StartContainer for \"0fe2cfaaecb091081eebad40e11e858cdd712032be948879c423b28b9acf7892\"" Nov 23 22:57:57.731972 containerd[1993]: time="2025-11-23T22:57:57.731819899Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-24-18,Uid:7b83410c5340756f0ea9514cc24a9f27,Namespace:kube-system,Attempt:0,} returns sandbox id \"dab19b24e7cf166d59914a0a2b49c862e4ddd144bc99b60c4b4404d57f603ad7\"" Nov 23 22:57:57.733405 containerd[1993]: time="2025-11-23T22:57:57.733269823Z" level=info msg="connecting to shim 0fe2cfaaecb091081eebad40e11e858cdd712032be948879c423b28b9acf7892" address="unix:///run/containerd/s/1b39ead562c83855fdbdea4cdb6232ed70a9ffc072fdb7397cf8c283c883a189" protocol=ttrpc version=3 Nov 23 22:57:57.744453 containerd[1993]: time="2025-11-23T22:57:57.744265459Z" level=info msg="CreateContainer within sandbox \"dab19b24e7cf166d59914a0a2b49c862e4ddd144bc99b60c4b4404d57f603ad7\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Nov 23 22:57:57.757778 containerd[1993]: time="2025-11-23T22:57:57.757713871Z" level=info msg="Container 8ce8c92ae8fb277f72216ff0aeaaec8b7aed3d280003e0d334fcaf908ea92b76: CDI devices from CRI Config.CDIDevices: []" Nov 23 22:57:57.768998 containerd[1993]: time="2025-11-23T22:57:57.768926948Z" level=info 
msg="Container 6dc7aa67f09d225e29a8bab16a0a208debe171d3ca66b29b555af20ec20496ed: CDI devices from CRI Config.CDIDevices: []" Nov 23 22:57:57.781294 systemd[1]: Started cri-containerd-0fe2cfaaecb091081eebad40e11e858cdd712032be948879c423b28b9acf7892.scope - libcontainer container 0fe2cfaaecb091081eebad40e11e858cdd712032be948879c423b28b9acf7892. Nov 23 22:57:57.782695 containerd[1993]: time="2025-11-23T22:57:57.782569256Z" level=info msg="CreateContainer within sandbox \"cff99bde8c1b6009752e3877474fdf0e2c8b9c6dd16d3e06f75bdd3b513c7907\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"8ce8c92ae8fb277f72216ff0aeaaec8b7aed3d280003e0d334fcaf908ea92b76\"" Nov 23 22:57:57.785754 containerd[1993]: time="2025-11-23T22:57:57.785710052Z" level=info msg="StartContainer for \"8ce8c92ae8fb277f72216ff0aeaaec8b7aed3d280003e0d334fcaf908ea92b76\"" Nov 23 22:57:57.788827 containerd[1993]: time="2025-11-23T22:57:57.788741252Z" level=info msg="connecting to shim 8ce8c92ae8fb277f72216ff0aeaaec8b7aed3d280003e0d334fcaf908ea92b76" address="unix:///run/containerd/s/93250cfb3b06685c57339fc4a7bec5c4e9c4415c04b506dd50b066d260095a96" protocol=ttrpc version=3 Nov 23 22:57:57.791178 containerd[1993]: time="2025-11-23T22:57:57.791083712Z" level=info msg="CreateContainer within sandbox \"dab19b24e7cf166d59914a0a2b49c862e4ddd144bc99b60c4b4404d57f603ad7\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"6dc7aa67f09d225e29a8bab16a0a208debe171d3ca66b29b555af20ec20496ed\"" Nov 23 22:57:57.793910 containerd[1993]: time="2025-11-23T22:57:57.793840688Z" level=info msg="StartContainer for \"6dc7aa67f09d225e29a8bab16a0a208debe171d3ca66b29b555af20ec20496ed\"" Nov 23 22:57:57.801987 containerd[1993]: time="2025-11-23T22:57:57.801819152Z" level=info msg="connecting to shim 6dc7aa67f09d225e29a8bab16a0a208debe171d3ca66b29b555af20ec20496ed" address="unix:///run/containerd/s/a505a269beec9df41ab994e475b7fcc45c6f4ea063af27fdfec761bd3959b464" protocol=ttrpc version=3 Nov 23 22:57:57.855406 systemd[1]: Started cri-containerd-8ce8c92ae8fb277f72216ff0aeaaec8b7aed3d280003e0d334fcaf908ea92b76.scope - libcontainer container 8ce8c92ae8fb277f72216ff0aeaaec8b7aed3d280003e0d334fcaf908ea92b76. Nov 23 22:57:57.868899 systemd[1]: Started cri-containerd-6dc7aa67f09d225e29a8bab16a0a208debe171d3ca66b29b555af20ec20496ed.scope - libcontainer container 6dc7aa67f09d225e29a8bab16a0a208debe171d3ca66b29b555af20ec20496ed. 
Nov 23 22:57:57.939620 containerd[1993]: time="2025-11-23T22:57:57.939523520Z" level=info msg="StartContainer for \"0fe2cfaaecb091081eebad40e11e858cdd712032be948879c423b28b9acf7892\" returns successfully" Nov 23 22:57:57.995366 containerd[1993]: time="2025-11-23T22:57:57.994880925Z" level=info msg="StartContainer for \"8ce8c92ae8fb277f72216ff0aeaaec8b7aed3d280003e0d334fcaf908ea92b76\" returns successfully" Nov 23 22:57:58.102963 containerd[1993]: time="2025-11-23T22:57:58.102890429Z" level=info msg="StartContainer for \"6dc7aa67f09d225e29a8bab16a0a208debe171d3ca66b29b555af20ec20496ed\" returns successfully" Nov 23 22:57:58.150608 kubelet[2979]: E1123 22:57:58.150388 2979 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-24-18\" not found" node="ip-172-31-24-18" Nov 23 22:57:58.161686 kubelet[2979]: E1123 22:57:58.160419 2979 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-24-18\" not found" node="ip-172-31-24-18" Nov 23 22:57:58.175237 kubelet[2979]: E1123 22:57:58.175202 2979 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-24-18\" not found" node="ip-172-31-24-18" Nov 23 22:57:59.177419 kubelet[2979]: E1123 22:57:59.177183 2979 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-24-18\" not found" node="ip-172-31-24-18" Nov 23 22:57:59.178232 kubelet[2979]: E1123 22:57:59.176578 2979 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-24-18\" not found" node="ip-172-31-24-18" Nov 23 22:57:59.293554 kubelet[2979]: I1123 22:57:59.293299 2979 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-24-18" Nov 23 22:58:00.308715 update_engine[1978]: I20251123 22:58:00.308627 1978 update_attempter.cc:509] Updating boot flags... 
Nov 23 22:58:00.451492 kubelet[2979]: E1123 22:58:00.451210 2979 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-24-18\" not found" node="ip-172-31-24-18" Nov 23 22:58:01.012980 kubelet[2979]: E1123 22:58:01.012693 2979 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-24-18\" not found" node="ip-172-31-24-18" Nov 23 22:58:01.231615 kubelet[2979]: E1123 22:58:01.231536 2979 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-24-18\" not found" node="ip-172-31-24-18" Nov 23 22:58:03.116955 kubelet[2979]: E1123 22:58:03.116912 2979 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-24-18\" not found" node="ip-172-31-24-18" Nov 23 22:58:03.157176 kubelet[2979]: I1123 22:58:03.156803 2979 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-24-18" Nov 23 22:58:03.157176 kubelet[2979]: E1123 22:58:03.156857 2979 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"ip-172-31-24-18\": node \"ip-172-31-24-18\" not found" Nov 23 22:58:03.163607 kubelet[2979]: I1123 22:58:03.162007 2979 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-24-18" Nov 23 22:58:03.344246 kubelet[2979]: E1123 22:58:03.344201 2979 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-24-18\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ip-172-31-24-18" Nov 23 22:58:03.344497 kubelet[2979]: I1123 22:58:03.344475 2979 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-24-18" Nov 23 22:58:03.364504 kubelet[2979]: E1123 22:58:03.364435 2979 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-24-18\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ip-172-31-24-18" Nov 23 22:58:03.364504 kubelet[2979]: I1123 22:58:03.364485 2979 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-24-18" Nov 23 22:58:03.376806 kubelet[2979]: E1123 22:58:03.376311 2979 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-24-18\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ip-172-31-24-18" Nov 23 22:58:04.035608 kubelet[2979]: I1123 22:58:04.035420 2979 apiserver.go:52] "Watching apiserver" Nov 23 22:58:04.062110 kubelet[2979]: I1123 22:58:04.062023 2979 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Nov 23 22:58:05.234895 systemd[1]: Reload requested from client PID 3447 ('systemctl') (unit session-9.scope)... Nov 23 22:58:05.234926 systemd[1]: Reloading... Nov 23 22:58:05.517753 zram_generator::config[3503]: No configuration found. Nov 23 22:58:06.052579 systemd[1]: Reloading finished in 817 ms. Nov 23 22:58:06.097723 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Nov 23 22:58:06.123210 systemd[1]: kubelet.service: Deactivated successfully. Nov 23 22:58:06.124030 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
Nov 23 22:58:06.124379 systemd[1]: kubelet.service: Consumed 1.603s CPU time, 123.2M memory peak. Nov 23 22:58:06.129694 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 23 22:58:06.508856 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 23 22:58:06.524618 (kubelet)[3551]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 23 22:58:06.624617 kubelet[3551]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 23 22:58:06.624617 kubelet[3551]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 23 22:58:06.624617 kubelet[3551]: I1123 22:58:06.624310 3551 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 23 22:58:06.640627 kubelet[3551]: I1123 22:58:06.640113 3551 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Nov 23 22:58:06.640627 kubelet[3551]: I1123 22:58:06.640182 3551 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 23 22:58:06.640627 kubelet[3551]: I1123 22:58:06.640243 3551 watchdog_linux.go:95] "Systemd watchdog is not enabled" Nov 23 22:58:06.640627 kubelet[3551]: I1123 22:58:06.640257 3551 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Nov 23 22:58:06.640927 kubelet[3551]: I1123 22:58:06.640709 3551 server.go:956] "Client rotation is on, will bootstrap in background" Nov 23 22:58:06.643299 kubelet[3551]: I1123 22:58:06.643233 3551 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Nov 23 22:58:06.649819 kubelet[3551]: I1123 22:58:06.649755 3551 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 23 22:58:06.660246 kubelet[3551]: I1123 22:58:06.660188 3551 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Nov 23 22:58:06.669841 kubelet[3551]: I1123 22:58:06.669794 3551 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Nov 23 22:58:06.671131 kubelet[3551]: I1123 22:58:06.670734 3551 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 23 22:58:06.671131 kubelet[3551]: I1123 22:58:06.670789 3551 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-24-18","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 23 22:58:06.671131 kubelet[3551]: I1123 22:58:06.671066 3551 topology_manager.go:138] "Creating topology manager with none policy" Nov 23 22:58:06.671131 kubelet[3551]: I1123 22:58:06.671088 3551 container_manager_linux.go:306] "Creating device plugin manager" Nov 23 22:58:06.671514 kubelet[3551]: I1123 22:58:06.671130 3551 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Nov 23 22:58:06.673121 kubelet[3551]: I1123 22:58:06.673069 3551 state_mem.go:36] "Initialized new in-memory state store" Nov 23 22:58:06.673881 kubelet[3551]: I1123 22:58:06.673398 3551 kubelet.go:475] "Attempting to sync node with API server" Nov 23 22:58:06.673881 kubelet[3551]: I1123 22:58:06.673454 3551 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 23 22:58:06.673881 kubelet[3551]: I1123 22:58:06.673496 3551 kubelet.go:387] "Adding apiserver pod source" Nov 23 22:58:06.673881 kubelet[3551]: I1123 22:58:06.673527 3551 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 23 22:58:06.677999 kubelet[3551]: I1123 22:58:06.677521 3551 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Nov 23 22:58:06.684138 kubelet[3551]: I1123 22:58:06.682370 3551 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Nov 23 22:58:06.684754 kubelet[3551]: I1123 22:58:06.684546 3551 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Nov 23 22:58:06.714944 
kubelet[3551]: I1123 22:58:06.713961 3551 server.go:1262] "Started kubelet" Nov 23 22:58:06.720344 kubelet[3551]: I1123 22:58:06.720295 3551 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 23 22:58:06.730984 kubelet[3551]: I1123 22:58:06.730356 3551 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Nov 23 22:58:06.747483 kubelet[3551]: I1123 22:58:06.731060 3551 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 23 22:58:06.747680 kubelet[3551]: I1123 22:58:06.746418 3551 server_v1.go:49] "podresources" method="list" useActivePods=true Nov 23 22:58:06.748621 kubelet[3551]: I1123 22:58:06.741421 3551 volume_manager.go:313] "Starting Kubelet Volume Manager" Nov 23 22:58:06.751401 kubelet[3551]: I1123 22:58:06.739493 3551 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 23 22:58:06.752548 kubelet[3551]: I1123 22:58:06.741442 3551 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Nov 23 22:58:06.763068 kubelet[3551]: I1123 22:58:06.752949 3551 reconciler.go:29] "Reconciler: start to sync state" Nov 23 22:58:06.763068 kubelet[3551]: I1123 22:58:06.762885 3551 factory.go:223] Registration of the systemd container factory successfully Nov 23 22:58:06.763068 kubelet[3551]: I1123 22:58:06.763055 3551 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 23 22:58:06.770912 kubelet[3551]: I1123 22:58:06.770861 3551 factory.go:223] Registration of the containerd container factory successfully Nov 23 22:58:06.787862 kubelet[3551]: I1123 22:58:06.787802 3551 server.go:310] "Adding debug handlers to kubelet server" Nov 23 22:58:06.795050 kubelet[3551]: I1123 22:58:06.794981 3551 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 23 22:58:06.802572 kubelet[3551]: E1123 22:58:06.802489 3551 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 23 22:58:06.809045 kubelet[3551]: I1123 22:58:06.808864 3551 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Nov 23 22:58:06.855899 kubelet[3551]: I1123 22:58:06.855845 3551 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv4" Nov 23 22:58:06.855899 kubelet[3551]: I1123 22:58:06.855890 3551 status_manager.go:244] "Starting to sync pod status with apiserver" Nov 23 22:58:06.856105 kubelet[3551]: I1123 22:58:06.855926 3551 kubelet.go:2427] "Starting kubelet main sync loop" Nov 23 22:58:06.856105 kubelet[3551]: E1123 22:58:06.855994 3551 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 23 22:58:06.958234 kubelet[3551]: E1123 22:58:06.957377 3551 kubelet.go:2451] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Nov 23 22:58:06.963026 kubelet[3551]: I1123 22:58:06.962990 3551 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 23 22:58:06.963416 kubelet[3551]: I1123 22:58:06.963298 3551 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 23 22:58:06.963606 kubelet[3551]: I1123 22:58:06.963514 3551 state_mem.go:36] "Initialized new in-memory state store" Nov 23 22:58:06.964241 kubelet[3551]: I1123 22:58:06.964028 3551 state_mem.go:88] "Updated default CPUSet" cpuSet="" Nov 23 22:58:06.965529 kubelet[3551]: I1123 22:58:06.965457 3551 state_mem.go:96] "Updated CPUSet assignments" assignments={} Nov 23 22:58:06.965770 kubelet[3551]: I1123 22:58:06.965744 3551 policy_none.go:49] "None policy: Start" Nov 23 22:58:06.965966 kubelet[3551]: I1123 22:58:06.965944 3551 memory_manager.go:187] "Starting memorymanager" policy="None" Nov 23 22:58:06.966121 kubelet[3551]: I1123 22:58:06.966067 3551 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Nov 23 22:58:06.966576 kubelet[3551]: I1123 22:58:06.966462 3551 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Nov 23 22:58:06.966876 kubelet[3551]: I1123 22:58:06.966842 3551 policy_none.go:47] "Start" Nov 23 22:58:06.983299 kubelet[3551]: E1123 22:58:06.983265 3551 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Nov 23 22:58:06.984906 kubelet[3551]: I1123 22:58:06.984827 3551 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 23 22:58:06.985174 kubelet[3551]: I1123 22:58:06.985098 3551 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 23 22:58:06.986289 kubelet[3551]: I1123 22:58:06.986177 3551 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 23 22:58:06.993837 kubelet[3551]: E1123 22:58:06.993400 3551 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Nov 23 22:58:07.119048 kubelet[3551]: I1123 22:58:07.118965 3551 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-24-18" Nov 23 22:58:07.139787 kubelet[3551]: I1123 22:58:07.139732 3551 kubelet_node_status.go:124] "Node was previously registered" node="ip-172-31-24-18" Nov 23 22:58:07.139934 kubelet[3551]: I1123 22:58:07.139901 3551 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-24-18" Nov 23 22:58:07.159795 kubelet[3551]: I1123 22:58:07.158652 3551 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-24-18" Nov 23 22:58:07.159795 kubelet[3551]: I1123 22:58:07.158829 3551 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-24-18" Nov 23 22:58:07.159795 kubelet[3551]: I1123 22:58:07.159149 3551 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-24-18" Nov 23 22:58:07.165964 kubelet[3551]: I1123 22:58:07.165889 3551 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/52feb0dbad94177e727c2158a466c939-k8s-certs\") pod \"kube-controller-manager-ip-172-31-24-18\" (UID: \"52feb0dbad94177e727c2158a466c939\") " pod="kube-system/kube-controller-manager-ip-172-31-24-18" Nov 23 22:58:07.165964 kubelet[3551]: I1123 22:58:07.165960 3551 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/52feb0dbad94177e727c2158a466c939-kubeconfig\") pod \"kube-controller-manager-ip-172-31-24-18\" (UID: \"52feb0dbad94177e727c2158a466c939\") " pod="kube-system/kube-controller-manager-ip-172-31-24-18" Nov 23 22:58:07.166140 kubelet[3551]: I1123 22:58:07.166004 3551 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ed2a312fa77b597b0b568bb88e3768d5-ca-certs\") pod \"kube-apiserver-ip-172-31-24-18\" (UID: \"ed2a312fa77b597b0b568bb88e3768d5\") " pod="kube-system/kube-apiserver-ip-172-31-24-18" Nov 23 22:58:07.166140 kubelet[3551]: I1123 22:58:07.166039 3551 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ed2a312fa77b597b0b568bb88e3768d5-k8s-certs\") pod \"kube-apiserver-ip-172-31-24-18\" (UID: \"ed2a312fa77b597b0b568bb88e3768d5\") " pod="kube-system/kube-apiserver-ip-172-31-24-18" Nov 23 22:58:07.166276 kubelet[3551]: I1123 22:58:07.166133 3551 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ed2a312fa77b597b0b568bb88e3768d5-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-24-18\" (UID: \"ed2a312fa77b597b0b568bb88e3768d5\") " pod="kube-system/kube-apiserver-ip-172-31-24-18" Nov 23 22:58:07.166786 kubelet[3551]: I1123 22:58:07.166324 3551 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/52feb0dbad94177e727c2158a466c939-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-24-18\" (UID: \"52feb0dbad94177e727c2158a466c939\") " pod="kube-system/kube-controller-manager-ip-172-31-24-18" Nov 23 22:58:07.166786 kubelet[3551]: I1123 22:58:07.166504 3551 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7b83410c5340756f0ea9514cc24a9f27-kubeconfig\") pod \"kube-scheduler-ip-172-31-24-18\" (UID: \"7b83410c5340756f0ea9514cc24a9f27\") " pod="kube-system/kube-scheduler-ip-172-31-24-18" Nov 23 22:58:07.166786 kubelet[3551]: I1123 22:58:07.166619 3551 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/52feb0dbad94177e727c2158a466c939-ca-certs\") pod \"kube-controller-manager-ip-172-31-24-18\" (UID: \"52feb0dbad94177e727c2158a466c939\") " pod="kube-system/kube-controller-manager-ip-172-31-24-18" Nov 23 22:58:07.166786 kubelet[3551]: I1123 22:58:07.166720 3551 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/52feb0dbad94177e727c2158a466c939-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-24-18\" (UID: \"52feb0dbad94177e727c2158a466c939\") " pod="kube-system/kube-controller-manager-ip-172-31-24-18" Nov 23 22:58:07.689070 kubelet[3551]: I1123 22:58:07.688986 3551 apiserver.go:52] "Watching apiserver" Nov 23 22:58:07.761574 kubelet[3551]: I1123 22:58:07.761513 3551 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Nov 23 22:58:07.942710 kubelet[3551]: I1123 22:58:07.941559 3551 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-24-18" Nov 23 22:58:07.962285 kubelet[3551]: I1123 22:58:07.962161 3551 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-24-18" podStartSLOduration=0.962142354 podStartE2EDuration="962.142354ms" podCreationTimestamp="2025-11-23 22:58:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 22:58:07.924492018 +0000 UTC m=+1.393061936" watchObservedRunningTime="2025-11-23 22:58:07.962142354 +0000 UTC m=+1.430712272" Nov 23 22:58:07.972332 kubelet[3551]: E1123 22:58:07.972273 3551 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-24-18\" already exists" pod="kube-system/kube-apiserver-ip-172-31-24-18" Nov 23 22:58:07.994430 kubelet[3551]: I1123 22:58:07.993862 3551 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-24-18" podStartSLOduration=0.993837558 podStartE2EDuration="993.837558ms" podCreationTimestamp="2025-11-23 22:58:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 22:58:07.962373978 +0000 UTC m=+1.430943872" watchObservedRunningTime="2025-11-23 22:58:07.993837558 +0000 UTC m=+1.462407464" Nov 23 22:58:08.045623 kubelet[3551]: I1123 22:58:08.045466 3551 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-24-18" podStartSLOduration=1.045443751 podStartE2EDuration="1.045443751s" podCreationTimestamp="2025-11-23 22:58:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 22:58:07.996644934 +0000 UTC m=+1.465214876" watchObservedRunningTime="2025-11-23 22:58:08.045443751 +0000 UTC m=+1.514013669" Nov 23 22:58:10.438149 
kubelet[3551]: I1123 22:58:10.438100 3551 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Nov 23 22:58:10.439316 containerd[1993]: time="2025-11-23T22:58:10.438644886Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Nov 23 22:58:10.440056 kubelet[3551]: I1123 22:58:10.440018 3551 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Nov 23 22:58:11.253126 systemd[1]: Created slice kubepods-besteffort-podf93be332_9c28_48d1_95b5_5d8142eb91b8.slice - libcontainer container kubepods-besteffort-podf93be332_9c28_48d1_95b5_5d8142eb91b8.slice. Nov 23 22:58:11.294102 kubelet[3551]: I1123 22:58:11.294033 3551 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f93be332-9c28-48d1-95b5-5d8142eb91b8-xtables-lock\") pod \"kube-proxy-rqb65\" (UID: \"f93be332-9c28-48d1-95b5-5d8142eb91b8\") " pod="kube-system/kube-proxy-rqb65" Nov 23 22:58:11.294266 kubelet[3551]: I1123 22:58:11.294113 3551 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/f93be332-9c28-48d1-95b5-5d8142eb91b8-kube-proxy\") pod \"kube-proxy-rqb65\" (UID: \"f93be332-9c28-48d1-95b5-5d8142eb91b8\") " pod="kube-system/kube-proxy-rqb65" Nov 23 22:58:11.294266 kubelet[3551]: I1123 22:58:11.294151 3551 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f93be332-9c28-48d1-95b5-5d8142eb91b8-lib-modules\") pod \"kube-proxy-rqb65\" (UID: \"f93be332-9c28-48d1-95b5-5d8142eb91b8\") " pod="kube-system/kube-proxy-rqb65" Nov 23 22:58:11.294266 kubelet[3551]: I1123 22:58:11.294191 3551 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gr257\" (UniqueName: \"kubernetes.io/projected/f93be332-9c28-48d1-95b5-5d8142eb91b8-kube-api-access-gr257\") pod \"kube-proxy-rqb65\" (UID: \"f93be332-9c28-48d1-95b5-5d8142eb91b8\") " pod="kube-system/kube-proxy-rqb65" Nov 23 22:58:11.409254 kubelet[3551]: E1123 22:58:11.409208 3551 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Nov 23 22:58:11.409254 kubelet[3551]: E1123 22:58:11.409253 3551 projected.go:196] Error preparing data for projected volume kube-api-access-gr257 for pod kube-system/kube-proxy-rqb65: configmap "kube-root-ca.crt" not found Nov 23 22:58:11.409698 kubelet[3551]: E1123 22:58:11.409367 3551 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f93be332-9c28-48d1-95b5-5d8142eb91b8-kube-api-access-gr257 podName:f93be332-9c28-48d1-95b5-5d8142eb91b8 nodeName:}" failed. No retries permitted until 2025-11-23 22:58:11.909328839 +0000 UTC m=+5.377898745 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-gr257" (UniqueName: "kubernetes.io/projected/f93be332-9c28-48d1-95b5-5d8142eb91b8-kube-api-access-gr257") pod "kube-proxy-rqb65" (UID: "f93be332-9c28-48d1-95b5-5d8142eb91b8") : configmap "kube-root-ca.crt" not found Nov 23 22:58:11.654419 systemd[1]: Created slice kubepods-besteffort-poda1946201_d01e_494d_b4b6_2663716e7c01.slice - libcontainer container kubepods-besteffort-poda1946201_d01e_494d_b4b6_2663716e7c01.slice. 
Nov 23 22:58:11.695808 kubelet[3551]: I1123 22:58:11.695729 3551 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g42b9\" (UniqueName: \"kubernetes.io/projected/a1946201-d01e-494d-b4b6-2663716e7c01-kube-api-access-g42b9\") pod \"tigera-operator-65cdcdfd6d-bmnh8\" (UID: \"a1946201-d01e-494d-b4b6-2663716e7c01\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-bmnh8" Nov 23 22:58:11.695808 kubelet[3551]: I1123 22:58:11.695797 3551 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/a1946201-d01e-494d-b4b6-2663716e7c01-var-lib-calico\") pod \"tigera-operator-65cdcdfd6d-bmnh8\" (UID: \"a1946201-d01e-494d-b4b6-2663716e7c01\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-bmnh8" Nov 23 22:58:11.967932 containerd[1993]: time="2025-11-23T22:58:11.966766198Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-bmnh8,Uid:a1946201-d01e-494d-b4b6-2663716e7c01,Namespace:tigera-operator,Attempt:0,}" Nov 23 22:58:12.028145 containerd[1993]: time="2025-11-23T22:58:12.027930006Z" level=info msg="connecting to shim 5ef1efc101a0fd6c541d0feb1d9eaa220df0384e3346ac422a08521e527498fa" address="unix:///run/containerd/s/4190ac2d4d0a0111daf8baa1d37e754228311c3c93a0c4e79b25a8d20255586e" namespace=k8s.io protocol=ttrpc version=3 Nov 23 22:58:12.079919 systemd[1]: Started cri-containerd-5ef1efc101a0fd6c541d0feb1d9eaa220df0384e3346ac422a08521e527498fa.scope - libcontainer container 5ef1efc101a0fd6c541d0feb1d9eaa220df0384e3346ac422a08521e527498fa. Nov 23 22:58:12.156964 containerd[1993]: time="2025-11-23T22:58:12.156870775Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-bmnh8,Uid:a1946201-d01e-494d-b4b6-2663716e7c01,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"5ef1efc101a0fd6c541d0feb1d9eaa220df0384e3346ac422a08521e527498fa\"" Nov 23 22:58:12.161054 containerd[1993]: time="2025-11-23T22:58:12.160991635Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Nov 23 22:58:12.172556 containerd[1993]: time="2025-11-23T22:58:12.172477471Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-rqb65,Uid:f93be332-9c28-48d1-95b5-5d8142eb91b8,Namespace:kube-system,Attempt:0,}" Nov 23 22:58:12.208998 containerd[1993]: time="2025-11-23T22:58:12.208806067Z" level=info msg="connecting to shim c5d3b690adcb3330e96e0464f84d7358b11f38435d36239b762b1dcb1eb421b7" address="unix:///run/containerd/s/b16b7bd2e6adff2aa2a41ef794b75a4df152f923362fb0113b6288af590fe98e" namespace=k8s.io protocol=ttrpc version=3 Nov 23 22:58:12.247889 systemd[1]: Started cri-containerd-c5d3b690adcb3330e96e0464f84d7358b11f38435d36239b762b1dcb1eb421b7.scope - libcontainer container c5d3b690adcb3330e96e0464f84d7358b11f38435d36239b762b1dcb1eb421b7. 
Nov 23 22:58:12.298738 containerd[1993]: time="2025-11-23T22:58:12.298664552Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-rqb65,Uid:f93be332-9c28-48d1-95b5-5d8142eb91b8,Namespace:kube-system,Attempt:0,} returns sandbox id \"c5d3b690adcb3330e96e0464f84d7358b11f38435d36239b762b1dcb1eb421b7\"" Nov 23 22:58:12.314773 containerd[1993]: time="2025-11-23T22:58:12.314714324Z" level=info msg="CreateContainer within sandbox \"c5d3b690adcb3330e96e0464f84d7358b11f38435d36239b762b1dcb1eb421b7\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Nov 23 22:58:12.334201 containerd[1993]: time="2025-11-23T22:58:12.333939896Z" level=info msg="Container 9c22d60655206d8e66d31da310e4edd73b5eff8b91bf4071f92229bf78866cfc: CDI devices from CRI Config.CDIDevices: []" Nov 23 22:58:12.351324 containerd[1993]: time="2025-11-23T22:58:12.351269876Z" level=info msg="CreateContainer within sandbox \"c5d3b690adcb3330e96e0464f84d7358b11f38435d36239b762b1dcb1eb421b7\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"9c22d60655206d8e66d31da310e4edd73b5eff8b91bf4071f92229bf78866cfc\"" Nov 23 22:58:12.352670 containerd[1993]: time="2025-11-23T22:58:12.352613936Z" level=info msg="StartContainer for \"9c22d60655206d8e66d31da310e4edd73b5eff8b91bf4071f92229bf78866cfc\"" Nov 23 22:58:12.360060 containerd[1993]: time="2025-11-23T22:58:12.359897816Z" level=info msg="connecting to shim 9c22d60655206d8e66d31da310e4edd73b5eff8b91bf4071f92229bf78866cfc" address="unix:///run/containerd/s/b16b7bd2e6adff2aa2a41ef794b75a4df152f923362fb0113b6288af590fe98e" protocol=ttrpc version=3 Nov 23 22:58:12.399257 systemd[1]: Started cri-containerd-9c22d60655206d8e66d31da310e4edd73b5eff8b91bf4071f92229bf78866cfc.scope - libcontainer container 9c22d60655206d8e66d31da310e4edd73b5eff8b91bf4071f92229bf78866cfc. Nov 23 22:58:12.515788 containerd[1993]: time="2025-11-23T22:58:12.514735689Z" level=info msg="StartContainer for \"9c22d60655206d8e66d31da310e4edd73b5eff8b91bf4071f92229bf78866cfc\" returns successfully" Nov 23 22:58:13.394815 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount166081592.mount: Deactivated successfully. 
Nov 23 22:58:14.148623 containerd[1993]: time="2025-11-23T22:58:14.148310709Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 22:58:14.151758 containerd[1993]: time="2025-11-23T22:58:14.151699353Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=22152004" Nov 23 22:58:14.155606 containerd[1993]: time="2025-11-23T22:58:14.153967725Z" level=info msg="ImageCreate event name:\"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 22:58:14.158617 containerd[1993]: time="2025-11-23T22:58:14.158537241Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 22:58:14.160972 containerd[1993]: time="2025-11-23T22:58:14.160907613Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"22147999\" in 1.999855246s" Nov 23 22:58:14.160972 containerd[1993]: time="2025-11-23T22:58:14.160962117Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\"" Nov 23 22:58:14.169641 containerd[1993]: time="2025-11-23T22:58:14.168852837Z" level=info msg="CreateContainer within sandbox \"5ef1efc101a0fd6c541d0feb1d9eaa220df0384e3346ac422a08521e527498fa\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Nov 23 22:58:14.183199 containerd[1993]: time="2025-11-23T22:58:14.183148677Z" level=info msg="Container 892db5c1db1dd14050157abadf8ae310c0c8cde72d0084f87779ba918e5b7bfb: CDI devices from CRI Config.CDIDevices: []" Nov 23 22:58:14.204925 containerd[1993]: time="2025-11-23T22:58:14.204867645Z" level=info msg="CreateContainer within sandbox \"5ef1efc101a0fd6c541d0feb1d9eaa220df0384e3346ac422a08521e527498fa\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"892db5c1db1dd14050157abadf8ae310c0c8cde72d0084f87779ba918e5b7bfb\"" Nov 23 22:58:14.205903 containerd[1993]: time="2025-11-23T22:58:14.205836513Z" level=info msg="StartContainer for \"892db5c1db1dd14050157abadf8ae310c0c8cde72d0084f87779ba918e5b7bfb\"" Nov 23 22:58:14.209084 containerd[1993]: time="2025-11-23T22:58:14.208989861Z" level=info msg="connecting to shim 892db5c1db1dd14050157abadf8ae310c0c8cde72d0084f87779ba918e5b7bfb" address="unix:///run/containerd/s/4190ac2d4d0a0111daf8baa1d37e754228311c3c93a0c4e79b25a8d20255586e" protocol=ttrpc version=3 Nov 23 22:58:14.245005 systemd[1]: Started cri-containerd-892db5c1db1dd14050157abadf8ae310c0c8cde72d0084f87779ba918e5b7bfb.scope - libcontainer container 892db5c1db1dd14050157abadf8ae310c0c8cde72d0084f87779ba918e5b7bfb. 
Nov 23 22:58:14.309408 containerd[1993]: time="2025-11-23T22:58:14.309257818Z" level=info msg="StartContainer for \"892db5c1db1dd14050157abadf8ae310c0c8cde72d0084f87779ba918e5b7bfb\" returns successfully" Nov 23 22:58:14.997029 kubelet[3551]: I1123 22:58:14.996669 3551 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-rqb65" podStartSLOduration=3.996646321 podStartE2EDuration="3.996646321s" podCreationTimestamp="2025-11-23 22:58:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 22:58:12.995030219 +0000 UTC m=+6.463600149" watchObservedRunningTime="2025-11-23 22:58:14.996646321 +0000 UTC m=+8.465216227" Nov 23 22:58:17.928147 kubelet[3551]: I1123 22:58:17.927889 3551 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-65cdcdfd6d-bmnh8" podStartSLOduration=4.925926386 podStartE2EDuration="6.927864604s" podCreationTimestamp="2025-11-23 22:58:11 +0000 UTC" firstStartedPulling="2025-11-23 22:58:12.160130359 +0000 UTC m=+5.628700265" lastFinishedPulling="2025-11-23 22:58:14.162068565 +0000 UTC m=+7.630638483" observedRunningTime="2025-11-23 22:58:14.998457889 +0000 UTC m=+8.467027819" watchObservedRunningTime="2025-11-23 22:58:17.927864604 +0000 UTC m=+11.396434522" Nov 23 22:58:21.399911 sudo[2396]: pam_unix(sudo:session): session closed for user root Nov 23 22:58:21.423189 sshd[2394]: Connection closed by 139.178.68.195 port 35988 Nov 23 22:58:21.424251 sshd-session[2380]: pam_unix(sshd:session): session closed for user core Nov 23 22:58:21.434012 systemd[1]: sshd@8-172.31.24.18:22-139.178.68.195:35988.service: Deactivated successfully. Nov 23 22:58:21.442613 systemd[1]: session-9.scope: Deactivated successfully. Nov 23 22:58:21.443268 systemd[1]: session-9.scope: Consumed 11.579s CPU time, 223M memory peak. Nov 23 22:58:21.446131 systemd-logind[1977]: Session 9 logged out. Waiting for processes to exit. Nov 23 22:58:21.451031 systemd-logind[1977]: Removed session 9. Nov 23 22:58:39.142086 systemd[1]: Created slice kubepods-besteffort-pod9bef7efd_4184_4730_a039_e1b10ca79748.slice - libcontainer container kubepods-besteffort-pod9bef7efd_4184_4730_a039_e1b10ca79748.slice. 
Nov 23 22:58:39.207044 kubelet[3551]: I1123 22:58:39.206929 3551 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d9vht\" (UniqueName: \"kubernetes.io/projected/9bef7efd-4184-4730-a039-e1b10ca79748-kube-api-access-d9vht\") pod \"calico-typha-784968df64-w4pgg\" (UID: \"9bef7efd-4184-4730-a039-e1b10ca79748\") " pod="calico-system/calico-typha-784968df64-w4pgg" Nov 23 22:58:39.207044 kubelet[3551]: I1123 22:58:39.207031 3551 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9bef7efd-4184-4730-a039-e1b10ca79748-tigera-ca-bundle\") pod \"calico-typha-784968df64-w4pgg\" (UID: \"9bef7efd-4184-4730-a039-e1b10ca79748\") " pod="calico-system/calico-typha-784968df64-w4pgg" Nov 23 22:58:39.207844 kubelet[3551]: I1123 22:58:39.207082 3551 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/9bef7efd-4184-4730-a039-e1b10ca79748-typha-certs\") pod \"calico-typha-784968df64-w4pgg\" (UID: \"9bef7efd-4184-4730-a039-e1b10ca79748\") " pod="calico-system/calico-typha-784968df64-w4pgg" Nov 23 22:58:39.410539 systemd[1]: Created slice kubepods-besteffort-pode2f1ef19_961e_4d0a_9a4c_35f8cd00d32d.slice - libcontainer container kubepods-besteffort-pode2f1ef19_961e_4d0a_9a4c_35f8cd00d32d.slice. Nov 23 22:58:39.457925 containerd[1993]: time="2025-11-23T22:58:39.457858235Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-784968df64-w4pgg,Uid:9bef7efd-4184-4730-a039-e1b10ca79748,Namespace:calico-system,Attempt:0,}" Nov 23 22:58:39.509796 containerd[1993]: time="2025-11-23T22:58:39.509363687Z" level=info msg="connecting to shim 7acd54c10a43b8d6c5e7841c8a2259c77044c908b8f4faac42aac4f9ec1fa0da" address="unix:///run/containerd/s/71b2cc3f7f7ee4c6d6fdd1c81ac4e600221e142f7832c8b91602167d0d75af35" namespace=k8s.io protocol=ttrpc version=3 Nov 23 22:58:39.510994 kubelet[3551]: I1123 22:58:39.510851 3551 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/e2f1ef19-961e-4d0a-9a4c-35f8cd00d32d-cni-bin-dir\") pod \"calico-node-tbn84\" (UID: \"e2f1ef19-961e-4d0a-9a4c-35f8cd00d32d\") " pod="calico-system/calico-node-tbn84" Nov 23 22:58:39.510994 kubelet[3551]: I1123 22:58:39.510989 3551 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e2f1ef19-961e-4d0a-9a4c-35f8cd00d32d-tigera-ca-bundle\") pod \"calico-node-tbn84\" (UID: \"e2f1ef19-961e-4d0a-9a4c-35f8cd00d32d\") " pod="calico-system/calico-node-tbn84" Nov 23 22:58:39.511188 kubelet[3551]: I1123 22:58:39.511086 3551 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e2f1ef19-961e-4d0a-9a4c-35f8cd00d32d-xtables-lock\") pod \"calico-node-tbn84\" (UID: \"e2f1ef19-961e-4d0a-9a4c-35f8cd00d32d\") " pod="calico-system/calico-node-tbn84" Nov 23 22:58:39.511188 kubelet[3551]: I1123 22:58:39.511175 3551 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/e2f1ef19-961e-4d0a-9a4c-35f8cd00d32d-cni-net-dir\") pod \"calico-node-tbn84\" (UID: \"e2f1ef19-961e-4d0a-9a4c-35f8cd00d32d\") " pod="calico-system/calico-node-tbn84" 
Nov 23 22:58:39.511300 kubelet[3551]: I1123 22:58:39.511212 3551 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/e2f1ef19-961e-4d0a-9a4c-35f8cd00d32d-var-run-calico\") pod \"calico-node-tbn84\" (UID: \"e2f1ef19-961e-4d0a-9a4c-35f8cd00d32d\") " pod="calico-system/calico-node-tbn84" Nov 23 22:58:39.512265 kubelet[3551]: I1123 22:58:39.511300 3551 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z6pgd\" (UniqueName: \"kubernetes.io/projected/e2f1ef19-961e-4d0a-9a4c-35f8cd00d32d-kube-api-access-z6pgd\") pod \"calico-node-tbn84\" (UID: \"e2f1ef19-961e-4d0a-9a4c-35f8cd00d32d\") " pod="calico-system/calico-node-tbn84" Nov 23 22:58:39.512265 kubelet[3551]: I1123 22:58:39.511432 3551 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/e2f1ef19-961e-4d0a-9a4c-35f8cd00d32d-var-lib-calico\") pod \"calico-node-tbn84\" (UID: \"e2f1ef19-961e-4d0a-9a4c-35f8cd00d32d\") " pod="calico-system/calico-node-tbn84" Nov 23 22:58:39.512265 kubelet[3551]: I1123 22:58:39.511550 3551 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/e2f1ef19-961e-4d0a-9a4c-35f8cd00d32d-node-certs\") pod \"calico-node-tbn84\" (UID: \"e2f1ef19-961e-4d0a-9a4c-35f8cd00d32d\") " pod="calico-system/calico-node-tbn84" Nov 23 22:58:39.512265 kubelet[3551]: I1123 22:58:39.511672 3551 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/e2f1ef19-961e-4d0a-9a4c-35f8cd00d32d-policysync\") pod \"calico-node-tbn84\" (UID: \"e2f1ef19-961e-4d0a-9a4c-35f8cd00d32d\") " pod="calico-system/calico-node-tbn84" Nov 23 22:58:39.512265 kubelet[3551]: I1123 22:58:39.511784 3551 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/e2f1ef19-961e-4d0a-9a4c-35f8cd00d32d-cni-log-dir\") pod \"calico-node-tbn84\" (UID: \"e2f1ef19-961e-4d0a-9a4c-35f8cd00d32d\") " pod="calico-system/calico-node-tbn84" Nov 23 22:58:39.512608 kubelet[3551]: I1123 22:58:39.511928 3551 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/e2f1ef19-961e-4d0a-9a4c-35f8cd00d32d-flexvol-driver-host\") pod \"calico-node-tbn84\" (UID: \"e2f1ef19-961e-4d0a-9a4c-35f8cd00d32d\") " pod="calico-system/calico-node-tbn84" Nov 23 22:58:39.512608 kubelet[3551]: I1123 22:58:39.512022 3551 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e2f1ef19-961e-4d0a-9a4c-35f8cd00d32d-lib-modules\") pod \"calico-node-tbn84\" (UID: \"e2f1ef19-961e-4d0a-9a4c-35f8cd00d32d\") " pod="calico-system/calico-node-tbn84" Nov 23 22:58:39.580870 systemd[1]: Started cri-containerd-7acd54c10a43b8d6c5e7841c8a2259c77044c908b8f4faac42aac4f9ec1fa0da.scope - libcontainer container 7acd54c10a43b8d6c5e7841c8a2259c77044c908b8f4faac42aac4f9ec1fa0da. 
Nov 23 22:58:39.638632 kubelet[3551]: E1123 22:58:39.637705 3551 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:58:39.638632 kubelet[3551]: W1123 22:58:39.637751 3551 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:58:39.638632 kubelet[3551]: E1123 22:58:39.637789 3551 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:58:39.679172 kubelet[3551]: E1123 22:58:39.679043 3551 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:58:39.679172 kubelet[3551]: W1123 22:58:39.679079 3551 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:58:39.679172 kubelet[3551]: E1123 22:58:39.679113 3551 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:58:39.696955 kubelet[3551]: E1123 22:58:39.696891 3551 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ssk7t" podUID="caf53fdf-fed6-43b9-8878-f61f79709f6c" Nov 23 22:58:39.727013 containerd[1993]: time="2025-11-23T22:58:39.726954876Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-tbn84,Uid:e2f1ef19-961e-4d0a-9a4c-35f8cd00d32d,Namespace:calico-system,Attempt:0,}" Nov 23 22:58:39.765969 kubelet[3551]: E1123 22:58:39.765777 3551 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:58:39.765969 kubelet[3551]: W1123 22:58:39.765852 3551 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:58:39.766870 kubelet[3551]: E1123 22:58:39.765889 3551 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:58:39.768258 kubelet[3551]: E1123 22:58:39.768129 3551 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:58:39.769298 kubelet[3551]: W1123 22:58:39.768214 3551 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:58:39.769298 kubelet[3551]: E1123 22:58:39.769061 3551 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 22:58:39.771436 kubelet[3551]: E1123 22:58:39.771152 3551 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:58:39.771436 kubelet[3551]: W1123 22:58:39.771189 3551 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:58:39.771436 kubelet[3551]: E1123 22:58:39.771221 3551 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:58:39.772021 kubelet[3551]: E1123 22:58:39.771870 3551 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:58:39.772987 kubelet[3551]: W1123 22:58:39.772666 3551 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:58:39.772987 kubelet[3551]: E1123 22:58:39.772736 3551 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:58:39.775140 kubelet[3551]: E1123 22:58:39.774825 3551 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:58:39.775140 kubelet[3551]: W1123 22:58:39.774887 3551 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:58:39.775140 kubelet[3551]: E1123 22:58:39.774935 3551 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:58:39.776920 kubelet[3551]: E1123 22:58:39.775768 3551 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:58:39.776920 kubelet[3551]: W1123 22:58:39.776651 3551 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:58:39.776920 kubelet[3551]: E1123 22:58:39.776701 3551 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:58:39.777469 kubelet[3551]: E1123 22:58:39.777441 3551 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:58:39.777642 kubelet[3551]: W1123 22:58:39.777569 3551 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:58:39.778115 kubelet[3551]: E1123 22:58:39.777769 3551 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 22:58:39.778569 containerd[1993]: time="2025-11-23T22:58:39.778490568Z" level=info msg="connecting to shim 11b5e85bd27a5ab683cdbf69cbbe833b0a0a2f68758d094eb9e942e9f97e3dc6" address="unix:///run/containerd/s/7f76e112ba8a8d5691c245a32b6dfd20ed71b5d35eb808792a47b220bc8a8440" namespace=k8s.io protocol=ttrpc version=3 Nov 23 22:58:39.780331 kubelet[3551]: E1123 22:58:39.779905 3551 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:58:39.780331 kubelet[3551]: W1123 22:58:39.779934 3551 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:58:39.780331 kubelet[3551]: E1123 22:58:39.779983 3551 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:58:39.780857 kubelet[3551]: E1123 22:58:39.780734 3551 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:58:39.780857 kubelet[3551]: W1123 22:58:39.780791 3551 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:58:39.781754 kubelet[3551]: E1123 22:58:39.780822 3551 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:58:39.783377 kubelet[3551]: E1123 22:58:39.783339 3551 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:58:39.783859 kubelet[3551]: W1123 22:58:39.783522 3551 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:58:39.783859 kubelet[3551]: E1123 22:58:39.783679 3551 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:58:39.785332 kubelet[3551]: E1123 22:58:39.784863 3551 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:58:39.785332 kubelet[3551]: W1123 22:58:39.784899 3551 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:58:39.785332 kubelet[3551]: E1123 22:58:39.784931 3551 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 22:58:39.787363 kubelet[3551]: E1123 22:58:39.787050 3551 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:58:39.787363 kubelet[3551]: W1123 22:58:39.787109 3551 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:58:39.787363 kubelet[3551]: E1123 22:58:39.787143 3551 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:58:39.787785 kubelet[3551]: E1123 22:58:39.787761 3551 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:58:39.787910 kubelet[3551]: W1123 22:58:39.787886 3551 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:58:39.788032 kubelet[3551]: E1123 22:58:39.788009 3551 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:58:39.790417 kubelet[3551]: E1123 22:58:39.789932 3551 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:58:39.790417 kubelet[3551]: W1123 22:58:39.789971 3551 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:58:39.790417 kubelet[3551]: E1123 22:58:39.790003 3551 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:58:39.791846 kubelet[3551]: E1123 22:58:39.791808 3551 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:58:39.792300 kubelet[3551]: W1123 22:58:39.792032 3551 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:58:39.792300 kubelet[3551]: E1123 22:58:39.792072 3551 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:58:39.793660 kubelet[3551]: E1123 22:58:39.792776 3551 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:58:39.794112 kubelet[3551]: W1123 22:58:39.793832 3551 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:58:39.794112 kubelet[3551]: E1123 22:58:39.793884 3551 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 22:58:39.794946 kubelet[3551]: E1123 22:58:39.794707 3551 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:58:39.794946 kubelet[3551]: W1123 22:58:39.794736 3551 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:58:39.794946 kubelet[3551]: E1123 22:58:39.794765 3551 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:58:39.796491 kubelet[3551]: E1123 22:58:39.796452 3551 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:58:39.797132 kubelet[3551]: W1123 22:58:39.796686 3551 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:58:39.797132 kubelet[3551]: E1123 22:58:39.796727 3551 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:58:39.798064 kubelet[3551]: E1123 22:58:39.797982 3551 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:58:39.798817 kubelet[3551]: W1123 22:58:39.798346 3551 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:58:39.798817 kubelet[3551]: E1123 22:58:39.798389 3551 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:58:39.800952 kubelet[3551]: E1123 22:58:39.800228 3551 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:58:39.800952 kubelet[3551]: W1123 22:58:39.800761 3551 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:58:39.800952 kubelet[3551]: E1123 22:58:39.800806 3551 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:58:39.814893 kubelet[3551]: E1123 22:58:39.814842 3551 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:58:39.814893 kubelet[3551]: W1123 22:58:39.814885 3551 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:58:39.815099 kubelet[3551]: E1123 22:58:39.814924 3551 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 22:58:39.816633 kubelet[3551]: I1123 22:58:39.815344 3551 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/caf53fdf-fed6-43b9-8878-f61f79709f6c-varrun\") pod \"csi-node-driver-ssk7t\" (UID: \"caf53fdf-fed6-43b9-8878-f61f79709f6c\") " pod="calico-system/csi-node-driver-ssk7t" Nov 23 22:58:39.818634 kubelet[3551]: E1123 22:58:39.818541 3551 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:58:39.818789 kubelet[3551]: W1123 22:58:39.818735 3551 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:58:39.818789 kubelet[3551]: E1123 22:58:39.818771 3551 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:58:39.819981 kubelet[3551]: E1123 22:58:39.819909 3551 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:58:39.819981 kubelet[3551]: W1123 22:58:39.819949 3551 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:58:39.820210 kubelet[3551]: E1123 22:58:39.820007 3551 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:58:39.820922 kubelet[3551]: E1123 22:58:39.820873 3551 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:58:39.820922 kubelet[3551]: W1123 22:58:39.820910 3551 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:58:39.821110 kubelet[3551]: E1123 22:58:39.820941 3551 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:58:39.821110 kubelet[3551]: I1123 22:58:39.820998 3551 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/caf53fdf-fed6-43b9-8878-f61f79709f6c-kubelet-dir\") pod \"csi-node-driver-ssk7t\" (UID: \"caf53fdf-fed6-43b9-8878-f61f79709f6c\") " pod="calico-system/csi-node-driver-ssk7t" Nov 23 22:58:39.823335 kubelet[3551]: E1123 22:58:39.823281 3551 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:58:39.823335 kubelet[3551]: W1123 22:58:39.823324 3551 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:58:39.823547 kubelet[3551]: E1123 22:58:39.823363 3551 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 22:58:39.824346 kubelet[3551]: I1123 22:58:39.823742 3551 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/caf53fdf-fed6-43b9-8878-f61f79709f6c-registration-dir\") pod \"csi-node-driver-ssk7t\" (UID: \"caf53fdf-fed6-43b9-8878-f61f79709f6c\") " pod="calico-system/csi-node-driver-ssk7t" Nov 23 22:58:39.825929 kubelet[3551]: E1123 22:58:39.825735 3551 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:58:39.825929 kubelet[3551]: W1123 22:58:39.825916 3551 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:58:39.827323 kubelet[3551]: E1123 22:58:39.825957 3551 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:58:39.827323 kubelet[3551]: E1123 22:58:39.826812 3551 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:58:39.827323 kubelet[3551]: W1123 22:58:39.826839 3551 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:58:39.827323 kubelet[3551]: E1123 22:58:39.826870 3551 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:58:39.829462 kubelet[3551]: E1123 22:58:39.829412 3551 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:58:39.829462 kubelet[3551]: W1123 22:58:39.829451 3551 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:58:39.829697 kubelet[3551]: E1123 22:58:39.829487 3551 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:58:39.830756 kubelet[3551]: I1123 22:58:39.829564 3551 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/caf53fdf-fed6-43b9-8878-f61f79709f6c-socket-dir\") pod \"csi-node-driver-ssk7t\" (UID: \"caf53fdf-fed6-43b9-8878-f61f79709f6c\") " pod="calico-system/csi-node-driver-ssk7t" Nov 23 22:58:39.831531 kubelet[3551]: E1123 22:58:39.831480 3551 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:58:39.831531 kubelet[3551]: W1123 22:58:39.831520 3551 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:58:39.832885 kubelet[3551]: E1123 22:58:39.831553 3551 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 22:58:39.835418 kubelet[3551]: E1123 22:58:39.835357 3551 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:58:39.835418 kubelet[3551]: W1123 22:58:39.835400 3551 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:58:39.835664 kubelet[3551]: E1123 22:58:39.835435 3551 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:58:39.836957 kubelet[3551]: E1123 22:58:39.836897 3551 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:58:39.836957 kubelet[3551]: W1123 22:58:39.836946 3551 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:58:39.837149 kubelet[3551]: E1123 22:58:39.836980 3551 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:58:39.837149 kubelet[3551]: I1123 22:58:39.837024 3551 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rxrkh\" (UniqueName: \"kubernetes.io/projected/caf53fdf-fed6-43b9-8878-f61f79709f6c-kube-api-access-rxrkh\") pod \"csi-node-driver-ssk7t\" (UID: \"caf53fdf-fed6-43b9-8878-f61f79709f6c\") " pod="calico-system/csi-node-driver-ssk7t" Nov 23 22:58:39.838977 kubelet[3551]: E1123 22:58:39.838928 3551 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:58:39.838977 kubelet[3551]: W1123 22:58:39.838965 3551 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:58:39.839860 kubelet[3551]: E1123 22:58:39.838999 3551 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:58:39.841773 kubelet[3551]: E1123 22:58:39.841715 3551 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:58:39.841773 kubelet[3551]: W1123 22:58:39.841759 3551 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:58:39.841973 kubelet[3551]: E1123 22:58:39.841793 3551 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 22:58:39.844052 kubelet[3551]: E1123 22:58:39.844006 3551 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:58:39.844052 kubelet[3551]: W1123 22:58:39.844041 3551 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:58:39.844262 kubelet[3551]: E1123 22:58:39.844075 3551 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:58:39.846620 kubelet[3551]: E1123 22:58:39.845352 3551 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:58:39.846620 kubelet[3551]: W1123 22:58:39.845440 3551 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:58:39.846620 kubelet[3551]: E1123 22:58:39.845473 3551 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:58:39.890473 systemd[1]: Started cri-containerd-11b5e85bd27a5ab683cdbf69cbbe833b0a0a2f68758d094eb9e942e9f97e3dc6.scope - libcontainer container 11b5e85bd27a5ab683cdbf69cbbe833b0a0a2f68758d094eb9e942e9f97e3dc6. Nov 23 22:58:39.938805 kubelet[3551]: E1123 22:58:39.938628 3551 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:58:39.938805 kubelet[3551]: W1123 22:58:39.938670 3551 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:58:39.938805 kubelet[3551]: E1123 22:58:39.938703 3551 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:58:39.940759 kubelet[3551]: E1123 22:58:39.940707 3551 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:58:39.940759 kubelet[3551]: W1123 22:58:39.940747 3551 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:58:39.941246 kubelet[3551]: E1123 22:58:39.940793 3551 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 22:58:39.943319 kubelet[3551]: E1123 22:58:39.943189 3551 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:58:39.943319 kubelet[3551]: W1123 22:58:39.943226 3551 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:58:39.944437 kubelet[3551]: E1123 22:58:39.943953 3551 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:58:39.947257 kubelet[3551]: E1123 22:58:39.947199 3551 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:58:39.947988 kubelet[3551]: W1123 22:58:39.947230 3551 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:58:39.947988 kubelet[3551]: E1123 22:58:39.947461 3551 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:58:39.950819 kubelet[3551]: E1123 22:58:39.949865 3551 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:58:39.950819 kubelet[3551]: W1123 22:58:39.950663 3551 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:58:39.950819 kubelet[3551]: E1123 22:58:39.950704 3551 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:58:39.952758 kubelet[3551]: E1123 22:58:39.952707 3551 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:58:39.952758 kubelet[3551]: W1123 22:58:39.952746 3551 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:58:39.952945 kubelet[3551]: E1123 22:58:39.952780 3551 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:58:39.955726 kubelet[3551]: E1123 22:58:39.955673 3551 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:58:39.955726 kubelet[3551]: W1123 22:58:39.955710 3551 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:58:39.956025 kubelet[3551]: E1123 22:58:39.955744 3551 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 22:58:39.956699 kubelet[3551]: E1123 22:58:39.956639 3551 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:58:39.956699 kubelet[3551]: W1123 22:58:39.956676 3551 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:58:39.956833 kubelet[3551]: E1123 22:58:39.956707 3551 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:58:39.958741 kubelet[3551]: E1123 22:58:39.958691 3551 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:58:39.958741 kubelet[3551]: W1123 22:58:39.958729 3551 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:58:39.958921 kubelet[3551]: E1123 22:58:39.958765 3551 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:58:39.959184 kubelet[3551]: E1123 22:58:39.959144 3551 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:58:39.959184 kubelet[3551]: W1123 22:58:39.959175 3551 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:58:39.959311 kubelet[3551]: E1123 22:58:39.959199 3551 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:58:39.959663 kubelet[3551]: E1123 22:58:39.959630 3551 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:58:39.959663 kubelet[3551]: W1123 22:58:39.959657 3551 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:58:39.959793 kubelet[3551]: E1123 22:58:39.959680 3551 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:58:39.962906 kubelet[3551]: E1123 22:58:39.962853 3551 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:58:39.962906 kubelet[3551]: W1123 22:58:39.962893 3551 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:58:39.963072 kubelet[3551]: E1123 22:58:39.962928 3551 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 22:58:39.963447 kubelet[3551]: E1123 22:58:39.963406 3551 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:58:39.963447 kubelet[3551]: W1123 22:58:39.963437 3551 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:58:39.963447 kubelet[3551]: E1123 22:58:39.963463 3551 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:58:39.964076 kubelet[3551]: E1123 22:58:39.964033 3551 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:58:39.964076 kubelet[3551]: W1123 22:58:39.964068 3551 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:58:39.964238 kubelet[3551]: E1123 22:58:39.964095 3551 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:58:39.964939 kubelet[3551]: E1123 22:58:39.964869 3551 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:58:39.964939 kubelet[3551]: W1123 22:58:39.964909 3551 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:58:39.964939 kubelet[3551]: E1123 22:58:39.964941 3551 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:58:39.966119 kubelet[3551]: E1123 22:58:39.966077 3551 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:58:39.966119 kubelet[3551]: W1123 22:58:39.966121 3551 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:58:39.966314 kubelet[3551]: E1123 22:58:39.966151 3551 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:58:39.966744 kubelet[3551]: E1123 22:58:39.966702 3551 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:58:39.966744 kubelet[3551]: W1123 22:58:39.966737 3551 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:58:39.966902 kubelet[3551]: E1123 22:58:39.966766 3551 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 22:58:39.968259 kubelet[3551]: E1123 22:58:39.968208 3551 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:58:39.968259 kubelet[3551]: W1123 22:58:39.968245 3551 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:58:39.968443 kubelet[3551]: E1123 22:58:39.968280 3551 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:58:39.970040 kubelet[3551]: E1123 22:58:39.969988 3551 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:58:39.970040 kubelet[3551]: W1123 22:58:39.970028 3551 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:58:39.970385 kubelet[3551]: E1123 22:58:39.970064 3551 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:58:39.971822 kubelet[3551]: E1123 22:58:39.971769 3551 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:58:39.971822 kubelet[3551]: W1123 22:58:39.971809 3551 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:58:39.971822 kubelet[3551]: E1123 22:58:39.971842 3551 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:58:39.972475 kubelet[3551]: E1123 22:58:39.972438 3551 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:58:39.972475 kubelet[3551]: W1123 22:58:39.972469 3551 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:58:39.972657 kubelet[3551]: E1123 22:58:39.972494 3551 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:58:39.974194 kubelet[3551]: E1123 22:58:39.974082 3551 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:58:39.974194 kubelet[3551]: W1123 22:58:39.974120 3551 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:58:39.974194 kubelet[3551]: E1123 22:58:39.974150 3551 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 22:58:39.975731 kubelet[3551]: E1123 22:58:39.975673 3551 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:58:39.975731 kubelet[3551]: W1123 22:58:39.975712 3551 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:58:39.975951 kubelet[3551]: E1123 22:58:39.975745 3551 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:58:39.979313 kubelet[3551]: E1123 22:58:39.979223 3551 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:58:39.979313 kubelet[3551]: W1123 22:58:39.979293 3551 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:58:39.979521 kubelet[3551]: E1123 22:58:39.979426 3551 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:58:39.980535 kubelet[3551]: E1123 22:58:39.980127 3551 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:58:39.980535 kubelet[3551]: W1123 22:58:39.980185 3551 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:58:39.980535 kubelet[3551]: E1123 22:58:39.980216 3551 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:58:40.001168 kubelet[3551]: E1123 22:58:40.001104 3551 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:58:40.001168 kubelet[3551]: W1123 22:58:40.001145 3551 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:58:40.001375 kubelet[3551]: E1123 22:58:40.001178 3551 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 22:58:40.152610 containerd[1993]: time="2025-11-23T22:58:40.152522986Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-tbn84,Uid:e2f1ef19-961e-4d0a-9a4c-35f8cd00d32d,Namespace:calico-system,Attempt:0,} returns sandbox id \"11b5e85bd27a5ab683cdbf69cbbe833b0a0a2f68758d094eb9e942e9f97e3dc6\"" Nov 23 22:58:40.158397 containerd[1993]: time="2025-11-23T22:58:40.158339482Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Nov 23 22:58:40.235857 containerd[1993]: time="2025-11-23T22:58:40.234024730Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-784968df64-w4pgg,Uid:9bef7efd-4184-4730-a039-e1b10ca79748,Namespace:calico-system,Attempt:0,} returns sandbox id \"7acd54c10a43b8d6c5e7841c8a2259c77044c908b8f4faac42aac4f9ec1fa0da\"" Nov 23 22:58:40.857290 kubelet[3551]: E1123 22:58:40.857157 3551 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ssk7t" podUID="caf53fdf-fed6-43b9-8878-f61f79709f6c" Nov 23 22:58:41.248614 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3265787640.mount: Deactivated successfully. Nov 23 22:58:41.384174 containerd[1993]: time="2025-11-23T22:58:41.384083004Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 22:58:41.386097 containerd[1993]: time="2025-11-23T22:58:41.386023896Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=5636570" Nov 23 22:58:41.388663 containerd[1993]: time="2025-11-23T22:58:41.388556796Z" level=info msg="ImageCreate event name:\"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 22:58:41.392836 containerd[1993]: time="2025-11-23T22:58:41.392758656Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 22:58:41.394177 containerd[1993]: time="2025-11-23T22:58:41.393964212Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5636392\" in 1.235565582s" Nov 23 22:58:41.394177 containerd[1993]: time="2025-11-23T22:58:41.394019952Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\"" Nov 23 22:58:41.398979 containerd[1993]: time="2025-11-23T22:58:41.397460556Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Nov 23 22:58:41.409169 containerd[1993]: time="2025-11-23T22:58:41.409103964Z" level=info msg="CreateContainer within sandbox \"11b5e85bd27a5ab683cdbf69cbbe833b0a0a2f68758d094eb9e942e9f97e3dc6\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Nov 23 22:58:41.431610 containerd[1993]: 
time="2025-11-23T22:58:41.429343176Z" level=info msg="Container e2f90bed6ecff7d59dd1384e7aae4c41805eabd6ce5a0ce753a8eaddb5c46dd3: CDI devices from CRI Config.CDIDevices: []" Nov 23 22:58:41.443578 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2766520522.mount: Deactivated successfully. Nov 23 22:58:41.455772 containerd[1993]: time="2025-11-23T22:58:41.455671681Z" level=info msg="CreateContainer within sandbox \"11b5e85bd27a5ab683cdbf69cbbe833b0a0a2f68758d094eb9e942e9f97e3dc6\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"e2f90bed6ecff7d59dd1384e7aae4c41805eabd6ce5a0ce753a8eaddb5c46dd3\"" Nov 23 22:58:41.459643 containerd[1993]: time="2025-11-23T22:58:41.458856997Z" level=info msg="StartContainer for \"e2f90bed6ecff7d59dd1384e7aae4c41805eabd6ce5a0ce753a8eaddb5c46dd3\"" Nov 23 22:58:41.462780 containerd[1993]: time="2025-11-23T22:58:41.462327757Z" level=info msg="connecting to shim e2f90bed6ecff7d59dd1384e7aae4c41805eabd6ce5a0ce753a8eaddb5c46dd3" address="unix:///run/containerd/s/7f76e112ba8a8d5691c245a32b6dfd20ed71b5d35eb808792a47b220bc8a8440" protocol=ttrpc version=3 Nov 23 22:58:41.499887 systemd[1]: Started cri-containerd-e2f90bed6ecff7d59dd1384e7aae4c41805eabd6ce5a0ce753a8eaddb5c46dd3.scope - libcontainer container e2f90bed6ecff7d59dd1384e7aae4c41805eabd6ce5a0ce753a8eaddb5c46dd3. Nov 23 22:58:41.617333 containerd[1993]: time="2025-11-23T22:58:41.617286889Z" level=info msg="StartContainer for \"e2f90bed6ecff7d59dd1384e7aae4c41805eabd6ce5a0ce753a8eaddb5c46dd3\" returns successfully" Nov 23 22:58:41.671567 systemd[1]: cri-containerd-e2f90bed6ecff7d59dd1384e7aae4c41805eabd6ce5a0ce753a8eaddb5c46dd3.scope: Deactivated successfully. Nov 23 22:58:41.682905 containerd[1993]: time="2025-11-23T22:58:41.682768886Z" level=info msg="received container exit event container_id:\"e2f90bed6ecff7d59dd1384e7aae4c41805eabd6ce5a0ce753a8eaddb5c46dd3\" id:\"e2f90bed6ecff7d59dd1384e7aae4c41805eabd6ce5a0ce753a8eaddb5c46dd3\" pid:4152 exited_at:{seconds:1763938721 nanos:681036338}" Nov 23 22:58:41.728462 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e2f90bed6ecff7d59dd1384e7aae4c41805eabd6ce5a0ce753a8eaddb5c46dd3-rootfs.mount: Deactivated successfully. 
Nov 23 22:58:42.856734 kubelet[3551]: E1123 22:58:42.856525 3551 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ssk7t" podUID="caf53fdf-fed6-43b9-8878-f61f79709f6c" Nov 23 22:58:43.411147 containerd[1993]: time="2025-11-23T22:58:43.411091874Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 22:58:43.413850 containerd[1993]: time="2025-11-23T22:58:43.413809826Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=31720858" Nov 23 22:58:43.416422 containerd[1993]: time="2025-11-23T22:58:43.416269622Z" level=info msg="ImageCreate event name:\"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 22:58:43.422002 containerd[1993]: time="2025-11-23T22:58:43.421933646Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 22:58:43.423538 containerd[1993]: time="2025-11-23T22:58:43.423497774Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"33090541\" in 2.025975034s" Nov 23 22:58:43.423737 containerd[1993]: time="2025-11-23T22:58:43.423708398Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\"" Nov 23 22:58:43.425912 containerd[1993]: time="2025-11-23T22:58:43.425856602Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Nov 23 22:58:43.456827 containerd[1993]: time="2025-11-23T22:58:43.456775178Z" level=info msg="CreateContainer within sandbox \"7acd54c10a43b8d6c5e7841c8a2259c77044c908b8f4faac42aac4f9ec1fa0da\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Nov 23 22:58:43.474613 containerd[1993]: time="2025-11-23T22:58:43.474263199Z" level=info msg="Container d5d54ac74692c14bbbbf6bf8eeca5c0aff1ed4bf0859506da3292cdfee58b5ab: CDI devices from CRI Config.CDIDevices: []" Nov 23 22:58:43.483539 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3970768637.mount: Deactivated successfully. 
Nov 23 22:58:43.498001 containerd[1993]: time="2025-11-23T22:58:43.497926023Z" level=info msg="CreateContainer within sandbox \"7acd54c10a43b8d6c5e7841c8a2259c77044c908b8f4faac42aac4f9ec1fa0da\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"d5d54ac74692c14bbbbf6bf8eeca5c0aff1ed4bf0859506da3292cdfee58b5ab\"" Nov 23 22:58:43.500751 containerd[1993]: time="2025-11-23T22:58:43.500202003Z" level=info msg="StartContainer for \"d5d54ac74692c14bbbbf6bf8eeca5c0aff1ed4bf0859506da3292cdfee58b5ab\"" Nov 23 22:58:43.503077 containerd[1993]: time="2025-11-23T22:58:43.502932183Z" level=info msg="connecting to shim d5d54ac74692c14bbbbf6bf8eeca5c0aff1ed4bf0859506da3292cdfee58b5ab" address="unix:///run/containerd/s/71b2cc3f7f7ee4c6d6fdd1c81ac4e600221e142f7832c8b91602167d0d75af35" protocol=ttrpc version=3 Nov 23 22:58:43.552891 systemd[1]: Started cri-containerd-d5d54ac74692c14bbbbf6bf8eeca5c0aff1ed4bf0859506da3292cdfee58b5ab.scope - libcontainer container d5d54ac74692c14bbbbf6bf8eeca5c0aff1ed4bf0859506da3292cdfee58b5ab. Nov 23 22:58:43.651466 containerd[1993]: time="2025-11-23T22:58:43.651383079Z" level=info msg="StartContainer for \"d5d54ac74692c14bbbbf6bf8eeca5c0aff1ed4bf0859506da3292cdfee58b5ab\" returns successfully" Nov 23 22:58:44.133670 kubelet[3551]: I1123 22:58:44.133453 3551 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-784968df64-w4pgg" podStartSLOduration=1.947175098 podStartE2EDuration="5.13339637s" podCreationTimestamp="2025-11-23 22:58:39 +0000 UTC" firstStartedPulling="2025-11-23 22:58:40.239184118 +0000 UTC m=+33.707754024" lastFinishedPulling="2025-11-23 22:58:43.425405402 +0000 UTC m=+36.893975296" observedRunningTime="2025-11-23 22:58:44.132561494 +0000 UTC m=+37.601131424" watchObservedRunningTime="2025-11-23 22:58:44.13339637 +0000 UTC m=+37.601966276" Nov 23 22:58:44.857313 kubelet[3551]: E1123 22:58:44.857008 3551 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ssk7t" podUID="caf53fdf-fed6-43b9-8878-f61f79709f6c" Nov 23 22:58:46.495637 containerd[1993]: time="2025-11-23T22:58:46.495551070Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 22:58:46.497564 containerd[1993]: time="2025-11-23T22:58:46.497513238Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=65925816" Nov 23 22:58:46.500089 containerd[1993]: time="2025-11-23T22:58:46.500012118Z" level=info msg="ImageCreate event name:\"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 22:58:46.505012 containerd[1993]: time="2025-11-23T22:58:46.504926826Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 22:58:46.507628 containerd[1993]: time="2025-11-23T22:58:46.507090810Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest 
\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"67295507\" in 3.081172864s" Nov 23 22:58:46.507628 containerd[1993]: time="2025-11-23T22:58:46.507157230Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\"" Nov 23 22:58:46.520254 containerd[1993]: time="2025-11-23T22:58:46.519904650Z" level=info msg="CreateContainer within sandbox \"11b5e85bd27a5ab683cdbf69cbbe833b0a0a2f68758d094eb9e942e9f97e3dc6\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Nov 23 22:58:46.550948 containerd[1993]: time="2025-11-23T22:58:46.550876854Z" level=info msg="Container 656f6150a2093881c46a4ac0cc8935c5b5e8558ad19bcb3e7c1cb48067865162: CDI devices from CRI Config.CDIDevices: []" Nov 23 22:58:46.573982 containerd[1993]: time="2025-11-23T22:58:46.573917022Z" level=info msg="CreateContainer within sandbox \"11b5e85bd27a5ab683cdbf69cbbe833b0a0a2f68758d094eb9e942e9f97e3dc6\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"656f6150a2093881c46a4ac0cc8935c5b5e8558ad19bcb3e7c1cb48067865162\"" Nov 23 22:58:46.575497 containerd[1993]: time="2025-11-23T22:58:46.575435886Z" level=info msg="StartContainer for \"656f6150a2093881c46a4ac0cc8935c5b5e8558ad19bcb3e7c1cb48067865162\"" Nov 23 22:58:46.578624 containerd[1993]: time="2025-11-23T22:58:46.578455338Z" level=info msg="connecting to shim 656f6150a2093881c46a4ac0cc8935c5b5e8558ad19bcb3e7c1cb48067865162" address="unix:///run/containerd/s/7f76e112ba8a8d5691c245a32b6dfd20ed71b5d35eb808792a47b220bc8a8440" protocol=ttrpc version=3 Nov 23 22:58:46.626899 systemd[1]: Started cri-containerd-656f6150a2093881c46a4ac0cc8935c5b5e8558ad19bcb3e7c1cb48067865162.scope - libcontainer container 656f6150a2093881c46a4ac0cc8935c5b5e8558ad19bcb3e7c1cb48067865162. Nov 23 22:58:46.755844 containerd[1993]: time="2025-11-23T22:58:46.754338211Z" level=info msg="StartContainer for \"656f6150a2093881c46a4ac0cc8935c5b5e8558ad19bcb3e7c1cb48067865162\" returns successfully" Nov 23 22:58:46.856333 kubelet[3551]: E1123 22:58:46.856233 3551 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ssk7t" podUID="caf53fdf-fed6-43b9-8878-f61f79709f6c" Nov 23 22:58:47.849423 containerd[1993]: time="2025-11-23T22:58:47.849272588Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 23 22:58:47.854192 systemd[1]: cri-containerd-656f6150a2093881c46a4ac0cc8935c5b5e8558ad19bcb3e7c1cb48067865162.scope: Deactivated successfully. Nov 23 22:58:47.855222 systemd[1]: cri-containerd-656f6150a2093881c46a4ac0cc8935c5b5e8558ad19bcb3e7c1cb48067865162.scope: Consumed 926ms CPU time, 185.4M memory peak, 165.9M written to disk. 
Nov 23 22:58:47.858727 kubelet[3551]: E1123 22:58:47.857062 3551 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ssk7t" podUID="caf53fdf-fed6-43b9-8878-f61f79709f6c" Nov 23 22:58:47.865055 containerd[1993]: time="2025-11-23T22:58:47.864164336Z" level=info msg="received container exit event container_id:\"656f6150a2093881c46a4ac0cc8935c5b5e8558ad19bcb3e7c1cb48067865162\" id:\"656f6150a2093881c46a4ac0cc8935c5b5e8558ad19bcb3e7c1cb48067865162\" pid:4261 exited_at:{seconds:1763938727 nanos:863642072}" Nov 23 22:58:47.910506 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-656f6150a2093881c46a4ac0cc8935c5b5e8558ad19bcb3e7c1cb48067865162-rootfs.mount: Deactivated successfully. Nov 23 22:58:47.921240 kubelet[3551]: I1123 22:58:47.921176 3551 kubelet_node_status.go:439] "Fast updating node status as it just became ready" Nov 23 22:58:48.072257 systemd[1]: Created slice kubepods-besteffort-pod7e5adfe5_9d16_48ee_9a26_9bc3918748b8.slice - libcontainer container kubepods-besteffort-pod7e5adfe5_9d16_48ee_9a26_9bc3918748b8.slice. Nov 23 22:58:48.134607 kubelet[3551]: I1123 22:58:48.134454 3551 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/7e5adfe5-9d16-48ee-9a26-9bc3918748b8-whisker-backend-key-pair\") pod \"whisker-745c74794-nx2nx\" (UID: \"7e5adfe5-9d16-48ee-9a26-9bc3918748b8\") " pod="calico-system/whisker-745c74794-nx2nx" Nov 23 22:58:48.135754 kubelet[3551]: I1123 22:58:48.135684 3551 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6927t\" (UniqueName: \"kubernetes.io/projected/7e5adfe5-9d16-48ee-9a26-9bc3918748b8-kube-api-access-6927t\") pod \"whisker-745c74794-nx2nx\" (UID: \"7e5adfe5-9d16-48ee-9a26-9bc3918748b8\") " pod="calico-system/whisker-745c74794-nx2nx" Nov 23 22:58:48.154437 kubelet[3551]: I1123 22:58:48.137641 3551 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7e5adfe5-9d16-48ee-9a26-9bc3918748b8-whisker-ca-bundle\") pod \"whisker-745c74794-nx2nx\" (UID: \"7e5adfe5-9d16-48ee-9a26-9bc3918748b8\") " pod="calico-system/whisker-745c74794-nx2nx" Nov 23 22:58:48.142184 systemd[1]: Created slice kubepods-burstable-podf5f68273_555b_4e0e_98f1_1cec4181626f.slice - libcontainer container kubepods-burstable-podf5f68273_555b_4e0e_98f1_1cec4181626f.slice. Nov 23 22:58:48.185819 systemd[1]: Created slice kubepods-burstable-pode1f14fe2_00f7_46fe_a466_aade4137bcc9.slice - libcontainer container kubepods-burstable-pode1f14fe2_00f7_46fe_a466_aade4137bcc9.slice. 
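The containerd message a few lines above about reloading CNI configuration after a write to /etc/cni/net.d/calico-kubeconfig fits the usual Calico bring-up order: the install-cni container drops its kubeconfig first, and until an actual network config list exists in /etc/cni/net.d (typically a *.conflist file; the file name does not appear in this log and is an assumption), the runtime keeps reporting the CNI plugin as not initialized. A small sketch of the kind of directory scan involved, not containerd's actual loader:

package main

import (
	"fmt"
	"path/filepath"
)

func main() {
	// Directory and error wording are taken from the log; the extension list
	// is the conventional set a CNI config loader considers.
	dir := "/etc/cni/net.d"
	found := 0
	for _, pattern := range []string{"*.conf", "*.conflist", "*.json"} {
		matches, err := filepath.Glob(filepath.Join(dir, pattern))
		if err != nil {
			continue // only possible for a malformed pattern
		}
		for _, m := range matches {
			found++
			fmt.Println("candidate CNI config:", m)
		}
	}
	if found == 0 {
		// A directory holding only calico-kubeconfig ends up here, matching
		// "no network config found in /etc/cni/net.d" from the log.
		fmt.Println("no network config found in", dir)
	}
}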
Nov 23 22:58:48.238525 kubelet[3551]: I1123 22:58:48.238349 3551 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e1f14fe2-00f7-46fe-a466-aade4137bcc9-config-volume\") pod \"coredns-66bc5c9577-s6g9f\" (UID: \"e1f14fe2-00f7-46fe-a466-aade4137bcc9\") " pod="kube-system/coredns-66bc5c9577-s6g9f" Nov 23 22:58:48.278872 kubelet[3551]: I1123 22:58:48.239838 3551 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vzgjr\" (UniqueName: \"kubernetes.io/projected/f5f68273-555b-4e0e-98f1-1cec4181626f-kube-api-access-vzgjr\") pod \"coredns-66bc5c9577-9hk4l\" (UID: \"f5f68273-555b-4e0e-98f1-1cec4181626f\") " pod="kube-system/coredns-66bc5c9577-9hk4l" Nov 23 22:58:48.278872 kubelet[3551]: I1123 22:58:48.239954 3551 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zk4cd\" (UniqueName: \"kubernetes.io/projected/e1f14fe2-00f7-46fe-a466-aade4137bcc9-kube-api-access-zk4cd\") pod \"coredns-66bc5c9577-s6g9f\" (UID: \"e1f14fe2-00f7-46fe-a466-aade4137bcc9\") " pod="kube-system/coredns-66bc5c9577-s6g9f" Nov 23 22:58:48.278872 kubelet[3551]: I1123 22:58:48.240086 3551 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f5f68273-555b-4e0e-98f1-1cec4181626f-config-volume\") pod \"coredns-66bc5c9577-9hk4l\" (UID: \"f5f68273-555b-4e0e-98f1-1cec4181626f\") " pod="kube-system/coredns-66bc5c9577-9hk4l" Nov 23 22:58:48.240452 systemd[1]: Created slice kubepods-besteffort-poded04d8fd_f316_436f_a1ef_e581bd3f494a.slice - libcontainer container kubepods-besteffort-poded04d8fd_f316_436f_a1ef_e581bd3f494a.slice. Nov 23 22:58:48.341178 kubelet[3551]: I1123 22:58:48.340704 3551 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-24g76\" (UniqueName: \"kubernetes.io/projected/ed04d8fd-f316-436f-a1ef-e581bd3f494a-kube-api-access-24g76\") pod \"calico-kube-controllers-7f8f8556cf-9rc8j\" (UID: \"ed04d8fd-f316-436f-a1ef-e581bd3f494a\") " pod="calico-system/calico-kube-controllers-7f8f8556cf-9rc8j" Nov 23 22:58:48.341178 kubelet[3551]: I1123 22:58:48.340883 3551 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ed04d8fd-f316-436f-a1ef-e581bd3f494a-tigera-ca-bundle\") pod \"calico-kube-controllers-7f8f8556cf-9rc8j\" (UID: \"ed04d8fd-f316-436f-a1ef-e581bd3f494a\") " pod="calico-system/calico-kube-controllers-7f8f8556cf-9rc8j" Nov 23 22:58:48.371876 systemd[1]: Created slice kubepods-besteffort-pode89989bc_946c_40b9_a2fe_b6be9daeb141.slice - libcontainer container kubepods-besteffort-pode89989bc_946c_40b9_a2fe_b6be9daeb141.slice. Nov 23 22:58:48.435006 systemd[1]: Created slice kubepods-besteffort-pod8dcbc37e_9145_4538_9ae4_0ee44fb84086.slice - libcontainer container kubepods-besteffort-pod8dcbc37e_9145_4538_9ae4_0ee44fb84086.slice. 
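Every RunPodSandbox failure that follows reports the same Calico precondition: stat /var/lib/calico/nodename: no such file or directory. Per the error text itself, that file is only present once the calico/node container is running and has mounted /var/lib/calico/, so sandbox creation for the newly admitted pods keeps failing until then. A minimal sketch of that check, with the path taken from the log rather than from Calico's source:

package main

import (
	"fmt"
	"os"
)

func main() {
	const nodenameFile = "/var/lib/calico/nodename" // path as reported in the log
	data, err := os.ReadFile(nodenameFile)
	if err != nil {
		// Matches the situation in the log: calico/node has not written the
		// file yet, so pod networking cannot be set up.
		fmt.Printf("%v: check that the calico/node container is running\n", err)
		return
	}
	fmt.Println("calico node name:", string(data))
}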
Nov 23 22:58:48.441935 kubelet[3551]: I1123 22:58:48.441336 3551 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j4tgs\" (UniqueName: \"kubernetes.io/projected/e89989bc-946c-40b9-a2fe-b6be9daeb141-kube-api-access-j4tgs\") pod \"calico-apiserver-786df89cbb-lh757\" (UID: \"e89989bc-946c-40b9-a2fe-b6be9daeb141\") " pod="calico-apiserver/calico-apiserver-786df89cbb-lh757" Nov 23 22:58:48.445083 kubelet[3551]: I1123 22:58:48.444724 3551 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/e89989bc-946c-40b9-a2fe-b6be9daeb141-calico-apiserver-certs\") pod \"calico-apiserver-786df89cbb-lh757\" (UID: \"e89989bc-946c-40b9-a2fe-b6be9daeb141\") " pod="calico-apiserver/calico-apiserver-786df89cbb-lh757" Nov 23 22:58:48.498439 containerd[1993]: time="2025-11-23T22:58:48.497969623Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-745c74794-nx2nx,Uid:7e5adfe5-9d16-48ee-9a26-9bc3918748b8,Namespace:calico-system,Attempt:0,}" Nov 23 22:58:48.516405 systemd[1]: Created slice kubepods-besteffort-podbb092924_5640_4734_8a43_16aa063b77ae.slice - libcontainer container kubepods-besteffort-podbb092924_5640_4734_8a43_16aa063b77ae.slice. Nov 23 22:58:48.546105 kubelet[3551]: I1123 22:58:48.545545 3551 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/8dcbc37e-9145-4538-9ae4-0ee44fb84086-calico-apiserver-certs\") pod \"calico-apiserver-786df89cbb-lrpdp\" (UID: \"8dcbc37e-9145-4538-9ae4-0ee44fb84086\") " pod="calico-apiserver/calico-apiserver-786df89cbb-lrpdp" Nov 23 22:58:48.546105 kubelet[3551]: I1123 22:58:48.545649 3551 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-97mdm\" (UniqueName: \"kubernetes.io/projected/8dcbc37e-9145-4538-9ae4-0ee44fb84086-kube-api-access-97mdm\") pod \"calico-apiserver-786df89cbb-lrpdp\" (UID: \"8dcbc37e-9145-4538-9ae4-0ee44fb84086\") " pod="calico-apiserver/calico-apiserver-786df89cbb-lrpdp" Nov 23 22:58:48.596045 containerd[1993]: time="2025-11-23T22:58:48.595754528Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-9hk4l,Uid:f5f68273-555b-4e0e-98f1-1cec4181626f,Namespace:kube-system,Attempt:0,}" Nov 23 22:58:48.647353 kubelet[3551]: I1123 22:58:48.646663 3551 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bb092924-5640-4734-8a43-16aa063b77ae-goldmane-ca-bundle\") pod \"goldmane-7c778bb748-8jgv8\" (UID: \"bb092924-5640-4734-8a43-16aa063b77ae\") " pod="calico-system/goldmane-7c778bb748-8jgv8" Nov 23 22:58:48.647353 kubelet[3551]: I1123 22:58:48.646732 3551 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b8c7p\" (UniqueName: \"kubernetes.io/projected/bb092924-5640-4734-8a43-16aa063b77ae-kube-api-access-b8c7p\") pod \"goldmane-7c778bb748-8jgv8\" (UID: \"bb092924-5640-4734-8a43-16aa063b77ae\") " pod="calico-system/goldmane-7c778bb748-8jgv8" Nov 23 22:58:48.647353 kubelet[3551]: I1123 22:58:48.646795 3551 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bb092924-5640-4734-8a43-16aa063b77ae-config\") pod \"goldmane-7c778bb748-8jgv8\" (UID: 
\"bb092924-5640-4734-8a43-16aa063b77ae\") " pod="calico-system/goldmane-7c778bb748-8jgv8" Nov 23 22:58:48.648160 kubelet[3551]: I1123 22:58:48.647409 3551 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/bb092924-5640-4734-8a43-16aa063b77ae-goldmane-key-pair\") pod \"goldmane-7c778bb748-8jgv8\" (UID: \"bb092924-5640-4734-8a43-16aa063b77ae\") " pod="calico-system/goldmane-7c778bb748-8jgv8" Nov 23 22:58:48.656074 containerd[1993]: time="2025-11-23T22:58:48.655729304Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-s6g9f,Uid:e1f14fe2-00f7-46fe-a466-aade4137bcc9,Namespace:kube-system,Attempt:0,}" Nov 23 22:58:48.677376 systemd[1]: Created slice kubepods-besteffort-pod1ce6cd5f_eadc_464f_af0e_bacaebe7e59a.slice - libcontainer container kubepods-besteffort-pod1ce6cd5f_eadc_464f_af0e_bacaebe7e59a.slice. Nov 23 22:58:48.740663 containerd[1993]: time="2025-11-23T22:58:48.740504589Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7f8f8556cf-9rc8j,Uid:ed04d8fd-f316-436f-a1ef-e581bd3f494a,Namespace:calico-system,Attempt:0,}" Nov 23 22:58:48.750836 kubelet[3551]: I1123 22:58:48.749831 3551 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/1ce6cd5f-eadc-464f-af0e-bacaebe7e59a-calico-apiserver-certs\") pod \"calico-apiserver-855cfc6487-7qv5g\" (UID: \"1ce6cd5f-eadc-464f-af0e-bacaebe7e59a\") " pod="calico-apiserver/calico-apiserver-855cfc6487-7qv5g" Nov 23 22:58:48.750836 kubelet[3551]: I1123 22:58:48.749912 3551 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9rprl\" (UniqueName: \"kubernetes.io/projected/1ce6cd5f-eadc-464f-af0e-bacaebe7e59a-kube-api-access-9rprl\") pod \"calico-apiserver-855cfc6487-7qv5g\" (UID: \"1ce6cd5f-eadc-464f-af0e-bacaebe7e59a\") " pod="calico-apiserver/calico-apiserver-855cfc6487-7qv5g" Nov 23 22:58:48.808636 containerd[1993]: time="2025-11-23T22:58:48.807941061Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-786df89cbb-lh757,Uid:e89989bc-946c-40b9-a2fe-b6be9daeb141,Namespace:calico-apiserver,Attempt:0,}" Nov 23 22:58:48.834941 containerd[1993]: time="2025-11-23T22:58:48.834855801Z" level=error msg="Failed to destroy network for sandbox \"8d484a0c3df2bea345959e2e2370ed06f802a76d14f55c050b3f0931daab8d8f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 22:58:48.914314 containerd[1993]: time="2025-11-23T22:58:48.914233114Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-786df89cbb-lrpdp,Uid:8dcbc37e-9145-4538-9ae4-0ee44fb84086,Namespace:calico-apiserver,Attempt:0,}" Nov 23 22:58:48.926709 containerd[1993]: time="2025-11-23T22:58:48.926624050Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-8jgv8,Uid:bb092924-5640-4734-8a43-16aa063b77ae,Namespace:calico-system,Attempt:0,}" Nov 23 22:58:48.951937 containerd[1993]: time="2025-11-23T22:58:48.951009430Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-745c74794-nx2nx,Uid:7e5adfe5-9d16-48ee-9a26-9bc3918748b8,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"8d484a0c3df2bea345959e2e2370ed06f802a76d14f55c050b3f0931daab8d8f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 22:58:48.956391 kubelet[3551]: E1123 22:58:48.955786 3551 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8d484a0c3df2bea345959e2e2370ed06f802a76d14f55c050b3f0931daab8d8f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 22:58:48.956391 kubelet[3551]: E1123 22:58:48.955892 3551 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8d484a0c3df2bea345959e2e2370ed06f802a76d14f55c050b3f0931daab8d8f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-745c74794-nx2nx" Nov 23 22:58:48.956391 kubelet[3551]: E1123 22:58:48.955943 3551 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8d484a0c3df2bea345959e2e2370ed06f802a76d14f55c050b3f0931daab8d8f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-745c74794-nx2nx" Nov 23 22:58:48.959549 kubelet[3551]: E1123 22:58:48.956019 3551 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-745c74794-nx2nx_calico-system(7e5adfe5-9d16-48ee-9a26-9bc3918748b8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-745c74794-nx2nx_calico-system(7e5adfe5-9d16-48ee-9a26-9bc3918748b8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8d484a0c3df2bea345959e2e2370ed06f802a76d14f55c050b3f0931daab8d8f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-745c74794-nx2nx" podUID="7e5adfe5-9d16-48ee-9a26-9bc3918748b8" Nov 23 22:58:49.025768 containerd[1993]: time="2025-11-23T22:58:49.025348278Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-855cfc6487-7qv5g,Uid:1ce6cd5f-eadc-464f-af0e-bacaebe7e59a,Namespace:calico-apiserver,Attempt:0,}" Nov 23 22:58:49.166563 containerd[1993]: time="2025-11-23T22:58:49.166491703Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Nov 23 22:58:49.256975 containerd[1993]: time="2025-11-23T22:58:49.256901431Z" level=error msg="Failed to destroy network for sandbox \"0f5f14fef27a25bb896c9ecad9e77513756e54bc526eea0b5572c79422534bd8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 22:58:49.263043 containerd[1993]: time="2025-11-23T22:58:49.262958695Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-s6g9f,Uid:e1f14fe2-00f7-46fe-a466-aade4137bcc9,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc 
= failed to setup network for sandbox \"0f5f14fef27a25bb896c9ecad9e77513756e54bc526eea0b5572c79422534bd8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 22:58:49.264096 kubelet[3551]: E1123 22:58:49.263285 3551 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0f5f14fef27a25bb896c9ecad9e77513756e54bc526eea0b5572c79422534bd8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 22:58:49.264096 kubelet[3551]: E1123 22:58:49.263356 3551 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0f5f14fef27a25bb896c9ecad9e77513756e54bc526eea0b5572c79422534bd8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-s6g9f" Nov 23 22:58:49.264096 kubelet[3551]: E1123 22:58:49.263388 3551 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0f5f14fef27a25bb896c9ecad9e77513756e54bc526eea0b5572c79422534bd8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-s6g9f" Nov 23 22:58:49.264337 kubelet[3551]: E1123 22:58:49.263467 3551 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-s6g9f_kube-system(e1f14fe2-00f7-46fe-a466-aade4137bcc9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-s6g9f_kube-system(e1f14fe2-00f7-46fe-a466-aade4137bcc9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0f5f14fef27a25bb896c9ecad9e77513756e54bc526eea0b5572c79422534bd8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-s6g9f" podUID="e1f14fe2-00f7-46fe-a466-aade4137bcc9" Nov 23 22:58:49.327234 containerd[1993]: time="2025-11-23T22:58:49.327156200Z" level=error msg="Failed to destroy network for sandbox \"e11828adc975a218d566906494b5773a8e78269a469d50346a19b1f8accf9620\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 22:58:49.331511 containerd[1993]: time="2025-11-23T22:58:49.331406840Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-855cfc6487-7qv5g,Uid:1ce6cd5f-eadc-464f-af0e-bacaebe7e59a,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e11828adc975a218d566906494b5773a8e78269a469d50346a19b1f8accf9620\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 22:58:49.332820 kubelet[3551]: E1123 22:58:49.332341 3551 
log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e11828adc975a218d566906494b5773a8e78269a469d50346a19b1f8accf9620\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 22:58:49.332820 kubelet[3551]: E1123 22:58:49.332442 3551 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e11828adc975a218d566906494b5773a8e78269a469d50346a19b1f8accf9620\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-855cfc6487-7qv5g" Nov 23 22:58:49.332820 kubelet[3551]: E1123 22:58:49.332501 3551 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e11828adc975a218d566906494b5773a8e78269a469d50346a19b1f8accf9620\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-855cfc6487-7qv5g" Nov 23 22:58:49.333622 kubelet[3551]: E1123 22:58:49.332657 3551 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-855cfc6487-7qv5g_calico-apiserver(1ce6cd5f-eadc-464f-af0e-bacaebe7e59a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-855cfc6487-7qv5g_calico-apiserver(1ce6cd5f-eadc-464f-af0e-bacaebe7e59a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e11828adc975a218d566906494b5773a8e78269a469d50346a19b1f8accf9620\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-855cfc6487-7qv5g" podUID="1ce6cd5f-eadc-464f-af0e-bacaebe7e59a" Nov 23 22:58:49.358514 containerd[1993]: time="2025-11-23T22:58:49.358435880Z" level=error msg="Failed to destroy network for sandbox \"719dc2e2455e29ceedd34680a266d9455432bff9499d6f233340212829f57005\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 22:58:49.362778 containerd[1993]: time="2025-11-23T22:58:49.362357156Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-9hk4l,Uid:f5f68273-555b-4e0e-98f1-1cec4181626f,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"719dc2e2455e29ceedd34680a266d9455432bff9499d6f233340212829f57005\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 22:58:49.364076 kubelet[3551]: E1123 22:58:49.363972 3551 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"719dc2e2455e29ceedd34680a266d9455432bff9499d6f233340212829f57005\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 22:58:49.364473 kubelet[3551]: E1123 22:58:49.364305 3551 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"719dc2e2455e29ceedd34680a266d9455432bff9499d6f233340212829f57005\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-9hk4l" Nov 23 22:58:49.365312 kubelet[3551]: E1123 22:58:49.364445 3551 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"719dc2e2455e29ceedd34680a266d9455432bff9499d6f233340212829f57005\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-9hk4l" Nov 23 22:58:49.366165 kubelet[3551]: E1123 22:58:49.364958 3551 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-9hk4l_kube-system(f5f68273-555b-4e0e-98f1-1cec4181626f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-9hk4l_kube-system(f5f68273-555b-4e0e-98f1-1cec4181626f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"719dc2e2455e29ceedd34680a266d9455432bff9499d6f233340212829f57005\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-9hk4l" podUID="f5f68273-555b-4e0e-98f1-1cec4181626f" Nov 23 22:58:49.397327 containerd[1993]: time="2025-11-23T22:58:49.397263572Z" level=error msg="Failed to destroy network for sandbox \"2a73360886a37a18b19665876b34d05349b030f961bcf6062ab88e95beef0291\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 22:58:49.402494 containerd[1993]: time="2025-11-23T22:58:49.402302804Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-8jgv8,Uid:bb092924-5640-4734-8a43-16aa063b77ae,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"2a73360886a37a18b19665876b34d05349b030f961bcf6062ab88e95beef0291\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 22:58:49.403090 kubelet[3551]: E1123 22:58:49.402684 3551 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2a73360886a37a18b19665876b34d05349b030f961bcf6062ab88e95beef0291\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 22:58:49.403090 kubelet[3551]: E1123 22:58:49.402758 3551 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2a73360886a37a18b19665876b34d05349b030f961bcf6062ab88e95beef0291\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-8jgv8" Nov 23 22:58:49.403090 kubelet[3551]: E1123 22:58:49.402885 3551 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2a73360886a37a18b19665876b34d05349b030f961bcf6062ab88e95beef0291\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-8jgv8" Nov 23 22:58:49.403748 kubelet[3551]: E1123 22:58:49.402983 3551 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7c778bb748-8jgv8_calico-system(bb092924-5640-4734-8a43-16aa063b77ae)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7c778bb748-8jgv8_calico-system(bb092924-5640-4734-8a43-16aa063b77ae)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2a73360886a37a18b19665876b34d05349b030f961bcf6062ab88e95beef0291\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7c778bb748-8jgv8" podUID="bb092924-5640-4734-8a43-16aa063b77ae" Nov 23 22:58:49.425939 containerd[1993]: time="2025-11-23T22:58:49.425842724Z" level=error msg="Failed to destroy network for sandbox \"e88b4f4d9b163545615e90c9d6aedab581d50fda66b123d605a9504a6d880891\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 22:58:49.430245 containerd[1993]: time="2025-11-23T22:58:49.429804644Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-786df89cbb-lh757,Uid:e89989bc-946c-40b9-a2fe-b6be9daeb141,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e88b4f4d9b163545615e90c9d6aedab581d50fda66b123d605a9504a6d880891\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 22:58:49.431426 kubelet[3551]: E1123 22:58:49.430162 3551 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e88b4f4d9b163545615e90c9d6aedab581d50fda66b123d605a9504a6d880891\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 22:58:49.431426 kubelet[3551]: E1123 22:58:49.430255 3551 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e88b4f4d9b163545615e90c9d6aedab581d50fda66b123d605a9504a6d880891\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-786df89cbb-lh757" Nov 23 22:58:49.431426 kubelet[3551]: E1123 22:58:49.430288 3551 kuberuntime_manager.go:1343] 
"CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e88b4f4d9b163545615e90c9d6aedab581d50fda66b123d605a9504a6d880891\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-786df89cbb-lh757" Nov 23 22:58:49.432229 kubelet[3551]: E1123 22:58:49.430379 3551 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-786df89cbb-lh757_calico-apiserver(e89989bc-946c-40b9-a2fe-b6be9daeb141)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-786df89cbb-lh757_calico-apiserver(e89989bc-946c-40b9-a2fe-b6be9daeb141)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e88b4f4d9b163545615e90c9d6aedab581d50fda66b123d605a9504a6d880891\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-786df89cbb-lh757" podUID="e89989bc-946c-40b9-a2fe-b6be9daeb141" Nov 23 22:58:49.432744 containerd[1993]: time="2025-11-23T22:58:49.432656084Z" level=error msg="Failed to destroy network for sandbox \"873b2bbf73db62f4a9d63caacb03fd8c18dbfa697dcc8babb245194850ca0fe2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 22:58:49.436516 containerd[1993]: time="2025-11-23T22:58:49.435534704Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7f8f8556cf-9rc8j,Uid:ed04d8fd-f316-436f-a1ef-e581bd3f494a,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"873b2bbf73db62f4a9d63caacb03fd8c18dbfa697dcc8babb245194850ca0fe2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 22:58:49.437550 kubelet[3551]: E1123 22:58:49.436987 3551 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"873b2bbf73db62f4a9d63caacb03fd8c18dbfa697dcc8babb245194850ca0fe2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 22:58:49.437550 kubelet[3551]: E1123 22:58:49.437058 3551 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"873b2bbf73db62f4a9d63caacb03fd8c18dbfa697dcc8babb245194850ca0fe2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7f8f8556cf-9rc8j" Nov 23 22:58:49.437550 kubelet[3551]: E1123 22:58:49.437092 3551 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"873b2bbf73db62f4a9d63caacb03fd8c18dbfa697dcc8babb245194850ca0fe2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7f8f8556cf-9rc8j" Nov 23 22:58:49.438296 kubelet[3551]: E1123 22:58:49.437177 3551 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7f8f8556cf-9rc8j_calico-system(ed04d8fd-f316-436f-a1ef-e581bd3f494a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7f8f8556cf-9rc8j_calico-system(ed04d8fd-f316-436f-a1ef-e581bd3f494a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"873b2bbf73db62f4a9d63caacb03fd8c18dbfa697dcc8babb245194850ca0fe2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7f8f8556cf-9rc8j" podUID="ed04d8fd-f316-436f-a1ef-e581bd3f494a" Nov 23 22:58:49.446950 containerd[1993]: time="2025-11-23T22:58:49.446867300Z" level=error msg="Failed to destroy network for sandbox \"f163dd0df62c78b96651f686f679af82ecffb08d8119954f0159c62a06298178\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 22:58:49.449509 containerd[1993]: time="2025-11-23T22:58:49.449439224Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-786df89cbb-lrpdp,Uid:8dcbc37e-9145-4538-9ae4-0ee44fb84086,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f163dd0df62c78b96651f686f679af82ecffb08d8119954f0159c62a06298178\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 22:58:49.450681 kubelet[3551]: E1123 22:58:49.449791 3551 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f163dd0df62c78b96651f686f679af82ecffb08d8119954f0159c62a06298178\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 22:58:49.450681 kubelet[3551]: E1123 22:58:49.449865 3551 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f163dd0df62c78b96651f686f679af82ecffb08d8119954f0159c62a06298178\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-786df89cbb-lrpdp" Nov 23 22:58:49.450681 kubelet[3551]: E1123 22:58:49.449899 3551 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f163dd0df62c78b96651f686f679af82ecffb08d8119954f0159c62a06298178\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-786df89cbb-lrpdp" Nov 23 22:58:49.450938 kubelet[3551]: E1123 22:58:49.449998 3551 pod_workers.go:1324] "Error syncing pod, skipping" 
err="failed to \"CreatePodSandbox\" for \"calico-apiserver-786df89cbb-lrpdp_calico-apiserver(8dcbc37e-9145-4538-9ae4-0ee44fb84086)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-786df89cbb-lrpdp_calico-apiserver(8dcbc37e-9145-4538-9ae4-0ee44fb84086)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f163dd0df62c78b96651f686f679af82ecffb08d8119954f0159c62a06298178\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-786df89cbb-lrpdp" podUID="8dcbc37e-9145-4538-9ae4-0ee44fb84086" Nov 23 22:58:49.867757 systemd[1]: Created slice kubepods-besteffort-podcaf53fdf_fed6_43b9_8878_f61f79709f6c.slice - libcontainer container kubepods-besteffort-podcaf53fdf_fed6_43b9_8878_f61f79709f6c.slice. Nov 23 22:58:49.877444 containerd[1993]: time="2025-11-23T22:58:49.877362358Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ssk7t,Uid:caf53fdf-fed6-43b9-8878-f61f79709f6c,Namespace:calico-system,Attempt:0,}" Nov 23 22:58:49.914358 systemd[1]: run-netns-cni\x2d4f13341b\x2d793b\x2d8e44\x2d260f\x2d155a2d923472.mount: Deactivated successfully. Nov 23 22:58:49.915063 systemd[1]: run-netns-cni\x2dc528b51b\x2d55e0\x2d004e\x2d6e6b\x2d5c39d1be877e.mount: Deactivated successfully. Nov 23 22:58:49.915310 systemd[1]: run-netns-cni\x2dfca52373\x2d6756\x2dbb90\x2dfd01\x2dea8bdb34e689.mount: Deactivated successfully. Nov 23 22:58:49.915548 systemd[1]: run-netns-cni\x2de3b3f6f7\x2d9bca\x2dd389\x2d87d4\x2df559076a73a7.mount: Deactivated successfully. Nov 23 22:58:49.915849 systemd[1]: run-netns-cni\x2d1a4903c6\x2d55b9\x2d991b\x2d4c2c\x2dfa6d30e1d734.mount: Deactivated successfully. Nov 23 22:58:49.916101 systemd[1]: run-netns-cni\x2dff835127\x2d230a\x2d476c\x2d9757\x2d3f7633fa3b82.mount: Deactivated successfully. Nov 23 22:58:49.992935 containerd[1993]: time="2025-11-23T22:58:49.992853947Z" level=error msg="Failed to destroy network for sandbox \"e5721c2a3216eafe875a5b761461ced292d75034e33897da1d257cab9e3b857c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 22:58:49.997451 systemd[1]: run-netns-cni\x2d6f7f5bf8\x2df014\x2d635f\x2d12b8\x2d1c109f893c53.mount: Deactivated successfully. 
Nov 23 22:58:50.008020 containerd[1993]: time="2025-11-23T22:58:50.007933903Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ssk7t,Uid:caf53fdf-fed6-43b9-8878-f61f79709f6c,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e5721c2a3216eafe875a5b761461ced292d75034e33897da1d257cab9e3b857c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 22:58:50.008625 kubelet[3551]: E1123 22:58:50.008488 3551 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e5721c2a3216eafe875a5b761461ced292d75034e33897da1d257cab9e3b857c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 22:58:50.008625 kubelet[3551]: E1123 22:58:50.008562 3551 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e5721c2a3216eafe875a5b761461ced292d75034e33897da1d257cab9e3b857c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-ssk7t" Nov 23 22:58:50.009670 kubelet[3551]: E1123 22:58:50.009185 3551 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e5721c2a3216eafe875a5b761461ced292d75034e33897da1d257cab9e3b857c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-ssk7t" Nov 23 22:58:50.009670 kubelet[3551]: E1123 22:58:50.009294 3551 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-ssk7t_calico-system(caf53fdf-fed6-43b9-8878-f61f79709f6c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-ssk7t_calico-system(caf53fdf-fed6-43b9-8878-f61f79709f6c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e5721c2a3216eafe875a5b761461ced292d75034e33897da1d257cab9e3b857c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-ssk7t" podUID="caf53fdf-fed6-43b9-8878-f61f79709f6c" Nov 23 22:58:55.415724 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1799750323.mount: Deactivated successfully. 
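Every sandbox failure above reduces to the same missing file: the Calico CNI plugin stats /var/lib/calico/nodename, which calico/node only writes once it is running with /var/lib/calico mounted from the host. A minimal Go sketch of that check, assuming only the path quoted in the errors (the real plugin does considerably more); in this log the condition clears once the calico/node image finishes pulling and the container starts a few entries further down.

```go
package main

import (
	"fmt"
	"os"
)

// nodenamePath is the file the errors above report as missing; calico/node
// writes it after it starts with /var/lib/calico mounted from the host.
const nodenamePath = "/var/lib/calico/nodename"

func main() {
	data, err := os.ReadFile(nodenamePath)
	if os.IsNotExist(err) {
		// This is the state the kubelet log reflects: CNI ADD/DEL calls keep
		// failing until calico-node has produced the file.
		fmt.Println("nodename file missing: calico/node has not initialised yet")
		return
	}
	if err != nil {
		fmt.Println("stat/read failed:", err)
		return
	}
	fmt.Printf("node registered as %q\n", string(data))
}
```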
Nov 23 22:58:55.470727 containerd[1993]: time="2025-11-23T22:58:55.470660474Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 22:58:55.473262 containerd[1993]: time="2025-11-23T22:58:55.473191394Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=150934562" Nov 23 22:58:55.475445 containerd[1993]: time="2025-11-23T22:58:55.475371686Z" level=info msg="ImageCreate event name:\"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 22:58:55.481599 containerd[1993]: time="2025-11-23T22:58:55.481473986Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 22:58:55.483297 containerd[1993]: time="2025-11-23T22:58:55.482514494Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"150934424\" in 6.315946771s" Nov 23 22:58:55.483297 containerd[1993]: time="2025-11-23T22:58:55.482573030Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\"" Nov 23 22:58:55.535307 containerd[1993]: time="2025-11-23T22:58:55.535249490Z" level=info msg="CreateContainer within sandbox \"11b5e85bd27a5ab683cdbf69cbbe833b0a0a2f68758d094eb9e942e9f97e3dc6\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Nov 23 22:58:55.555623 containerd[1993]: time="2025-11-23T22:58:55.554930823Z" level=info msg="Container b7ffc6901c35bcb991592479af37e067cf294ff9878d60943c0888a37740170c: CDI devices from CRI Config.CDIDevices: []" Nov 23 22:58:55.580029 containerd[1993]: time="2025-11-23T22:58:55.579974463Z" level=info msg="CreateContainer within sandbox \"11b5e85bd27a5ab683cdbf69cbbe833b0a0a2f68758d094eb9e942e9f97e3dc6\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"b7ffc6901c35bcb991592479af37e067cf294ff9878d60943c0888a37740170c\"" Nov 23 22:58:55.581492 containerd[1993]: time="2025-11-23T22:58:55.581433447Z" level=info msg="StartContainer for \"b7ffc6901c35bcb991592479af37e067cf294ff9878d60943c0888a37740170c\"" Nov 23 22:58:55.585084 containerd[1993]: time="2025-11-23T22:58:55.585008883Z" level=info msg="connecting to shim b7ffc6901c35bcb991592479af37e067cf294ff9878d60943c0888a37740170c" address="unix:///run/containerd/s/7f76e112ba8a8d5691c245a32b6dfd20ed71b5d35eb808792a47b220bc8a8440" protocol=ttrpc version=3 Nov 23 22:58:55.661927 systemd[1]: Started cri-containerd-b7ffc6901c35bcb991592479af37e067cf294ff9878d60943c0888a37740170c.scope - libcontainer container b7ffc6901c35bcb991592479af37e067cf294ff9878d60943c0888a37740170c. Nov 23 22:58:55.801479 containerd[1993]: time="2025-11-23T22:58:55.801413800Z" level=info msg="StartContainer for \"b7ffc6901c35bcb991592479af37e067cf294ff9878d60943c0888a37740170c\" returns successfully" Nov 23 22:58:56.071465 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Nov 23 22:58:56.072209 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . 
All Rights Reserved. Nov 23 22:58:56.279085 kubelet[3551]: I1123 22:58:56.278827 3551 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-tbn84" podStartSLOduration=1.95082947 podStartE2EDuration="17.278763974s" podCreationTimestamp="2025-11-23 22:58:39 +0000 UTC" firstStartedPulling="2025-11-23 22:58:40.156383794 +0000 UTC m=+33.624953700" lastFinishedPulling="2025-11-23 22:58:55.48431831 +0000 UTC m=+48.952888204" observedRunningTime="2025-11-23 22:58:56.27620477 +0000 UTC m=+49.744774724" watchObservedRunningTime="2025-11-23 22:58:56.278763974 +0000 UTC m=+49.747333892" Nov 23 22:58:56.513289 kubelet[3551]: I1123 22:58:56.511891 3551 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7e5adfe5-9d16-48ee-9a26-9bc3918748b8-whisker-ca-bundle\") pod \"7e5adfe5-9d16-48ee-9a26-9bc3918748b8\" (UID: \"7e5adfe5-9d16-48ee-9a26-9bc3918748b8\") " Nov 23 22:58:56.514115 kubelet[3551]: I1123 22:58:56.513716 3551 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/7e5adfe5-9d16-48ee-9a26-9bc3918748b8-whisker-backend-key-pair\") pod \"7e5adfe5-9d16-48ee-9a26-9bc3918748b8\" (UID: \"7e5adfe5-9d16-48ee-9a26-9bc3918748b8\") " Nov 23 22:58:56.514311 kubelet[3551]: I1123 22:58:56.514285 3551 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6927t\" (UniqueName: \"kubernetes.io/projected/7e5adfe5-9d16-48ee-9a26-9bc3918748b8-kube-api-access-6927t\") pod \"7e5adfe5-9d16-48ee-9a26-9bc3918748b8\" (UID: \"7e5adfe5-9d16-48ee-9a26-9bc3918748b8\") " Nov 23 22:58:56.523450 kubelet[3551]: I1123 22:58:56.513240 3551 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7e5adfe5-9d16-48ee-9a26-9bc3918748b8-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "7e5adfe5-9d16-48ee-9a26-9bc3918748b8" (UID: "7e5adfe5-9d16-48ee-9a26-9bc3918748b8"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Nov 23 22:58:56.523059 systemd[1]: var-lib-kubelet-pods-7e5adfe5\x2d9d16\x2d48ee\x2d9a26\x2d9bc3918748b8-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Nov 23 22:58:56.526734 kubelet[3551]: I1123 22:58:56.525079 3551 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e5adfe5-9d16-48ee-9a26-9bc3918748b8-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "7e5adfe5-9d16-48ee-9a26-9bc3918748b8" (UID: "7e5adfe5-9d16-48ee-9a26-9bc3918748b8"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Nov 23 22:58:56.533910 kubelet[3551]: I1123 22:58:56.533732 3551 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7e5adfe5-9d16-48ee-9a26-9bc3918748b8-kube-api-access-6927t" (OuterVolumeSpecName: "kube-api-access-6927t") pod "7e5adfe5-9d16-48ee-9a26-9bc3918748b8" (UID: "7e5adfe5-9d16-48ee-9a26-9bc3918748b8"). InnerVolumeSpecName "kube-api-access-6927t". PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 23 22:58:56.537503 systemd[1]: var-lib-kubelet-pods-7e5adfe5\x2d9d16\x2d48ee\x2d9a26\x2d9bc3918748b8-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d6927t.mount: Deactivated successfully. 
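The pod_startup_latency_tracker entry above reports both podStartE2EDuration and podStartSLOduration for calico-node-tbn84; the two differ by the time spent pulling the calico/node image. A short, self-contained check of that arithmetic, with the timestamps copied from the log (the exact SLO formula inside the kubelet is assumed, not quoted):

```go
package main

import (
	"fmt"
	"time"
)

// Layout matching the timestamps printed in the kubelet entry above.
const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

func mustParse(s string) time.Time {
	t, err := time.Parse(layout, s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	created := mustParse("2025-11-23 22:58:39 +0000 UTC")
	pullStart := mustParse("2025-11-23 22:58:40.156383794 +0000 UTC")
	pullEnd := mustParse("2025-11-23 22:58:55.48431831 +0000 UTC")
	running := mustParse("2025-11-23 22:58:56.278763974 +0000 UTC")

	e2e := running.Sub(created)       // matches podStartE2EDuration: 17.278763974s
	pulling := pullEnd.Sub(pullStart) // time spent pulling ghcr.io/flatcar/calico/node
	fmt.Println("E2E:", e2e)
	fmt.Println("excluding pull:", e2e-pulling) // ~1.95s, close to the reported podStartSLOduration
}
```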
Nov 23 22:58:56.615608 kubelet[3551]: I1123 22:58:56.615529 3551 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7e5adfe5-9d16-48ee-9a26-9bc3918748b8-whisker-ca-bundle\") on node \"ip-172-31-24-18\" DevicePath \"\"" Nov 23 22:58:56.615742 kubelet[3551]: I1123 22:58:56.615628 3551 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/7e5adfe5-9d16-48ee-9a26-9bc3918748b8-whisker-backend-key-pair\") on node \"ip-172-31-24-18\" DevicePath \"\"" Nov 23 22:58:56.615742 kubelet[3551]: I1123 22:58:56.615656 3551 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6927t\" (UniqueName: \"kubernetes.io/projected/7e5adfe5-9d16-48ee-9a26-9bc3918748b8-kube-api-access-6927t\") on node \"ip-172-31-24-18\" DevicePath \"\"" Nov 23 22:58:56.883269 systemd[1]: Removed slice kubepods-besteffort-pod7e5adfe5_9d16_48ee_9a26_9bc3918748b8.slice - libcontainer container kubepods-besteffort-pod7e5adfe5_9d16_48ee_9a26_9bc3918748b8.slice. Nov 23 22:58:57.444504 systemd[1]: Created slice kubepods-besteffort-pod73ce4778_08c3_48f4_84c6_854c5b7e542f.slice - libcontainer container kubepods-besteffort-pod73ce4778_08c3_48f4_84c6_854c5b7e542f.slice. Nov 23 22:58:57.527297 kubelet[3551]: I1123 22:58:57.527193 3551 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/73ce4778-08c3-48f4-84c6-854c5b7e542f-whisker-backend-key-pair\") pod \"whisker-dd89954d4-v45np\" (UID: \"73ce4778-08c3-48f4-84c6-854c5b7e542f\") " pod="calico-system/whisker-dd89954d4-v45np" Nov 23 22:58:57.528987 kubelet[3551]: I1123 22:58:57.528764 3551 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-55wx8\" (UniqueName: \"kubernetes.io/projected/73ce4778-08c3-48f4-84c6-854c5b7e542f-kube-api-access-55wx8\") pod \"whisker-dd89954d4-v45np\" (UID: \"73ce4778-08c3-48f4-84c6-854c5b7e542f\") " pod="calico-system/whisker-dd89954d4-v45np" Nov 23 22:58:57.528987 kubelet[3551]: I1123 22:58:57.528849 3551 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/73ce4778-08c3-48f4-84c6-854c5b7e542f-whisker-ca-bundle\") pod \"whisker-dd89954d4-v45np\" (UID: \"73ce4778-08c3-48f4-84c6-854c5b7e542f\") " pod="calico-system/whisker-dd89954d4-v45np" Nov 23 22:58:57.763878 containerd[1993]: time="2025-11-23T22:58:57.763214610Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-dd89954d4-v45np,Uid:73ce4778-08c3-48f4-84c6-854c5b7e542f,Namespace:calico-system,Attempt:0,}" Nov 23 22:58:58.216876 (udev-worker)[4584]: Network interface NamePolicy= disabled on kernel command line. 
Nov 23 22:58:58.221268 systemd-networkd[1849]: cali5e8d1d7b3a4: Link UP Nov 23 22:58:58.230133 systemd-networkd[1849]: cali5e8d1d7b3a4: Gained carrier Nov 23 22:58:58.293550 containerd[1993]: 2025-11-23 22:58:57.861 [INFO][4756] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 23 22:58:58.293550 containerd[1993]: 2025-11-23 22:58:57.948 [INFO][4756] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--24--18-k8s-whisker--dd89954d4--v45np-eth0 whisker-dd89954d4- calico-system 73ce4778-08c3-48f4-84c6-854c5b7e542f 926 0 2025-11-23 22:58:57 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:dd89954d4 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ip-172-31-24-18 whisker-dd89954d4-v45np eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali5e8d1d7b3a4 [] [] }} ContainerID="dc6205964416c626c2d2ef9533f4aba4c8462afec1da8490c05c7e27bbb388b4" Namespace="calico-system" Pod="whisker-dd89954d4-v45np" WorkloadEndpoint="ip--172--31--24--18-k8s-whisker--dd89954d4--v45np-" Nov 23 22:58:58.293550 containerd[1993]: 2025-11-23 22:58:57.948 [INFO][4756] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="dc6205964416c626c2d2ef9533f4aba4c8462afec1da8490c05c7e27bbb388b4" Namespace="calico-system" Pod="whisker-dd89954d4-v45np" WorkloadEndpoint="ip--172--31--24--18-k8s-whisker--dd89954d4--v45np-eth0" Nov 23 22:58:58.293550 containerd[1993]: 2025-11-23 22:58:58.091 [INFO][4781] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="dc6205964416c626c2d2ef9533f4aba4c8462afec1da8490c05c7e27bbb388b4" HandleID="k8s-pod-network.dc6205964416c626c2d2ef9533f4aba4c8462afec1da8490c05c7e27bbb388b4" Workload="ip--172--31--24--18-k8s-whisker--dd89954d4--v45np-eth0" Nov 23 22:58:58.294000 containerd[1993]: 2025-11-23 22:58:58.093 [INFO][4781] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="dc6205964416c626c2d2ef9533f4aba4c8462afec1da8490c05c7e27bbb388b4" HandleID="k8s-pod-network.dc6205964416c626c2d2ef9533f4aba4c8462afec1da8490c05c7e27bbb388b4" Workload="ip--172--31--24--18-k8s-whisker--dd89954d4--v45np-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000330300), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-24-18", "pod":"whisker-dd89954d4-v45np", "timestamp":"2025-11-23 22:58:58.091330431 +0000 UTC"}, Hostname:"ip-172-31-24-18", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 23 22:58:58.294000 containerd[1993]: 2025-11-23 22:58:58.093 [INFO][4781] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 23 22:58:58.294000 containerd[1993]: 2025-11-23 22:58:58.093 [INFO][4781] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 23 22:58:58.294000 containerd[1993]: 2025-11-23 22:58:58.093 [INFO][4781] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-24-18' Nov 23 22:58:58.294000 containerd[1993]: 2025-11-23 22:58:58.112 [INFO][4781] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.dc6205964416c626c2d2ef9533f4aba4c8462afec1da8490c05c7e27bbb388b4" host="ip-172-31-24-18" Nov 23 22:58:58.294000 containerd[1993]: 2025-11-23 22:58:58.122 [INFO][4781] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-24-18" Nov 23 22:58:58.294000 containerd[1993]: 2025-11-23 22:58:58.139 [INFO][4781] ipam/ipam.go 511: Trying affinity for 192.168.42.128/26 host="ip-172-31-24-18" Nov 23 22:58:58.294000 containerd[1993]: 2025-11-23 22:58:58.143 [INFO][4781] ipam/ipam.go 158: Attempting to load block cidr=192.168.42.128/26 host="ip-172-31-24-18" Nov 23 22:58:58.294000 containerd[1993]: 2025-11-23 22:58:58.149 [INFO][4781] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.42.128/26 host="ip-172-31-24-18" Nov 23 22:58:58.294000 containerd[1993]: 2025-11-23 22:58:58.149 [INFO][4781] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.42.128/26 handle="k8s-pod-network.dc6205964416c626c2d2ef9533f4aba4c8462afec1da8490c05c7e27bbb388b4" host="ip-172-31-24-18" Nov 23 22:58:58.295508 containerd[1993]: 2025-11-23 22:58:58.152 [INFO][4781] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.dc6205964416c626c2d2ef9533f4aba4c8462afec1da8490c05c7e27bbb388b4 Nov 23 22:58:58.295508 containerd[1993]: 2025-11-23 22:58:58.161 [INFO][4781] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.42.128/26 handle="k8s-pod-network.dc6205964416c626c2d2ef9533f4aba4c8462afec1da8490c05c7e27bbb388b4" host="ip-172-31-24-18" Nov 23 22:58:58.295508 containerd[1993]: 2025-11-23 22:58:58.175 [INFO][4781] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.42.129/26] block=192.168.42.128/26 handle="k8s-pod-network.dc6205964416c626c2d2ef9533f4aba4c8462afec1da8490c05c7e27bbb388b4" host="ip-172-31-24-18" Nov 23 22:58:58.295508 containerd[1993]: 2025-11-23 22:58:58.175 [INFO][4781] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.42.129/26] handle="k8s-pod-network.dc6205964416c626c2d2ef9533f4aba4c8462afec1da8490c05c7e27bbb388b4" host="ip-172-31-24-18" Nov 23 22:58:58.295508 containerd[1993]: 2025-11-23 22:58:58.176 [INFO][4781] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 23 22:58:58.295508 containerd[1993]: 2025-11-23 22:58:58.176 [INFO][4781] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.42.129/26] IPv6=[] ContainerID="dc6205964416c626c2d2ef9533f4aba4c8462afec1da8490c05c7e27bbb388b4" HandleID="k8s-pod-network.dc6205964416c626c2d2ef9533f4aba4c8462afec1da8490c05c7e27bbb388b4" Workload="ip--172--31--24--18-k8s-whisker--dd89954d4--v45np-eth0" Nov 23 22:58:58.295890 containerd[1993]: 2025-11-23 22:58:58.194 [INFO][4756] cni-plugin/k8s.go 418: Populated endpoint ContainerID="dc6205964416c626c2d2ef9533f4aba4c8462afec1da8490c05c7e27bbb388b4" Namespace="calico-system" Pod="whisker-dd89954d4-v45np" WorkloadEndpoint="ip--172--31--24--18-k8s-whisker--dd89954d4--v45np-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--18-k8s-whisker--dd89954d4--v45np-eth0", GenerateName:"whisker-dd89954d4-", Namespace:"calico-system", SelfLink:"", UID:"73ce4778-08c3-48f4-84c6-854c5b7e542f", ResourceVersion:"926", Generation:0, CreationTimestamp:time.Date(2025, time.November, 23, 22, 58, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"dd89954d4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-18", ContainerID:"", Pod:"whisker-dd89954d4-v45np", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.42.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali5e8d1d7b3a4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 23 22:58:58.295890 containerd[1993]: 2025-11-23 22:58:58.194 [INFO][4756] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.42.129/32] ContainerID="dc6205964416c626c2d2ef9533f4aba4c8462afec1da8490c05c7e27bbb388b4" Namespace="calico-system" Pod="whisker-dd89954d4-v45np" WorkloadEndpoint="ip--172--31--24--18-k8s-whisker--dd89954d4--v45np-eth0" Nov 23 22:58:58.296077 containerd[1993]: 2025-11-23 22:58:58.194 [INFO][4756] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5e8d1d7b3a4 ContainerID="dc6205964416c626c2d2ef9533f4aba4c8462afec1da8490c05c7e27bbb388b4" Namespace="calico-system" Pod="whisker-dd89954d4-v45np" WorkloadEndpoint="ip--172--31--24--18-k8s-whisker--dd89954d4--v45np-eth0" Nov 23 22:58:58.296077 containerd[1993]: 2025-11-23 22:58:58.232 [INFO][4756] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="dc6205964416c626c2d2ef9533f4aba4c8462afec1da8490c05c7e27bbb388b4" Namespace="calico-system" Pod="whisker-dd89954d4-v45np" WorkloadEndpoint="ip--172--31--24--18-k8s-whisker--dd89954d4--v45np-eth0" Nov 23 22:58:58.296261 containerd[1993]: 2025-11-23 22:58:58.234 [INFO][4756] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="dc6205964416c626c2d2ef9533f4aba4c8462afec1da8490c05c7e27bbb388b4" Namespace="calico-system" Pod="whisker-dd89954d4-v45np" 
WorkloadEndpoint="ip--172--31--24--18-k8s-whisker--dd89954d4--v45np-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--18-k8s-whisker--dd89954d4--v45np-eth0", GenerateName:"whisker-dd89954d4-", Namespace:"calico-system", SelfLink:"", UID:"73ce4778-08c3-48f4-84c6-854c5b7e542f", ResourceVersion:"926", Generation:0, CreationTimestamp:time.Date(2025, time.November, 23, 22, 58, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"dd89954d4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-18", ContainerID:"dc6205964416c626c2d2ef9533f4aba4c8462afec1da8490c05c7e27bbb388b4", Pod:"whisker-dd89954d4-v45np", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.42.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali5e8d1d7b3a4", MAC:"f2:6f:83:79:dc:32", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 23 22:58:58.296376 containerd[1993]: 2025-11-23 22:58:58.284 [INFO][4756] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="dc6205964416c626c2d2ef9533f4aba4c8462afec1da8490c05c7e27bbb388b4" Namespace="calico-system" Pod="whisker-dd89954d4-v45np" WorkloadEndpoint="ip--172--31--24--18-k8s-whisker--dd89954d4--v45np-eth0" Nov 23 22:58:58.425957 containerd[1993]: time="2025-11-23T22:58:58.424180217Z" level=info msg="connecting to shim dc6205964416c626c2d2ef9533f4aba4c8462afec1da8490c05c7e27bbb388b4" address="unix:///run/containerd/s/f5c6a6ceb33cc03f2b91e32c00a528c6c74f46ffed2fa555c29e5c4704acecdb" namespace=k8s.io protocol=ttrpc version=3 Nov 23 22:58:58.499550 systemd[1]: Started cri-containerd-dc6205964416c626c2d2ef9533f4aba4c8462afec1da8490c05c7e27bbb388b4.scope - libcontainer container dc6205964416c626c2d2ef9533f4aba4c8462afec1da8490c05c7e27bbb388b4. Nov 23 22:58:58.635397 containerd[1993]: time="2025-11-23T22:58:58.635257014Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-dd89954d4-v45np,Uid:73ce4778-08c3-48f4-84c6-854c5b7e542f,Namespace:calico-system,Attempt:0,} returns sandbox id \"dc6205964416c626c2d2ef9533f4aba4c8462afec1da8490c05c7e27bbb388b4\"" Nov 23 22:58:58.640235 containerd[1993]: time="2025-11-23T22:58:58.639852834Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 23 22:58:58.871041 kubelet[3551]: I1123 22:58:58.870974 3551 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7e5adfe5-9d16-48ee-9a26-9bc3918748b8" path="/var/lib/kubelet/pods/7e5adfe5-9d16-48ee-9a26-9bc3918748b8/volumes" Nov 23 22:58:58.911419 (udev-worker)[4585]: Network interface NamePolicy= disabled on kernel command line. 
Nov 23 22:58:58.912152 containerd[1993]: time="2025-11-23T22:58:58.911906419Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 22:58:58.915965 containerd[1993]: time="2025-11-23T22:58:58.914081935Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 23 22:58:58.915965 containerd[1993]: time="2025-11-23T22:58:58.914250211Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 23 22:58:58.917741 kubelet[3551]: E1123 22:58:58.914462 3551 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 23 22:58:58.917741 kubelet[3551]: E1123 22:58:58.914530 3551 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 23 22:58:58.917741 kubelet[3551]: E1123 22:58:58.914983 3551 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-dd89954d4-v45np_calico-system(73ce4778-08c3-48f4-84c6-854c5b7e542f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 23 22:58:58.922299 containerd[1993]: time="2025-11-23T22:58:58.922068955Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 23 22:58:58.942829 systemd-networkd[1849]: vxlan.calico: Link UP Nov 23 22:58:58.943039 systemd-networkd[1849]: vxlan.calico: Gained carrier Nov 23 22:58:59.205344 containerd[1993]: time="2025-11-23T22:58:59.205182017Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 22:58:59.208195 containerd[1993]: time="2025-11-23T22:58:59.208077845Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 23 22:58:59.208654 containerd[1993]: time="2025-11-23T22:58:59.208116977Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 23 22:58:59.208911 kubelet[3551]: E1123 22:58:59.208466 3551 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 23 
22:58:59.209376 kubelet[3551]: E1123 22:58:59.209223 3551 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 23 22:58:59.210609 kubelet[3551]: E1123 22:58:59.210043 3551 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-dd89954d4-v45np_calico-system(73ce4778-08c3-48f4-84c6-854c5b7e542f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 23 22:58:59.211940 kubelet[3551]: E1123 22:58:59.211757 3551 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-dd89954d4-v45np" podUID="73ce4778-08c3-48f4-84c6-854c5b7e542f" Nov 23 22:58:59.262917 kubelet[3551]: E1123 22:58:59.262638 3551 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-dd89954d4-v45np" podUID="73ce4778-08c3-48f4-84c6-854c5b7e542f" Nov 23 22:58:59.393841 systemd-networkd[1849]: cali5e8d1d7b3a4: Gained IPv6LL Nov 23 22:58:59.862861 containerd[1993]: time="2025-11-23T22:58:59.862746176Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-s6g9f,Uid:e1f14fe2-00f7-46fe-a466-aade4137bcc9,Namespace:kube-system,Attempt:0,}" Nov 23 22:58:59.866571 containerd[1993]: time="2025-11-23T22:58:59.866414984Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-855cfc6487-7qv5g,Uid:1ce6cd5f-eadc-464f-af0e-bacaebe7e59a,Namespace:calico-apiserver,Attempt:0,}" Nov 23 22:59:00.218857 (udev-worker)[4891]: Network interface NamePolicy= disabled on kernel command line. 
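Both whisker pulls fail with a registry 404, which containerd surfaces as NotFound and the kubelet escalates to ErrImagePull and then ImagePullBackOff. A hedged sketch of reproducing the same resolution failure directly against the node's containerd socket, assuming the containerd 1.x Go client import path; the image reference is the one taken from the log:

```go
package main

import (
	"context"
	"fmt"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	// Talk to the same containerd instance the kubelet uses.
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		panic(err)
	}
	defer client.Close()

	// Kubernetes-managed images live in the k8s.io namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	// The reference the log shows failing to resolve (tag not published).
	_, err = client.Pull(ctx, "ghcr.io/flatcar/calico/whisker:v3.30.4")
	fmt.Println("pull result:", err) // expected: a "not found" resolution error, as in the log
}
```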
Nov 23 22:59:00.221515 systemd-networkd[1849]: cali9e34eaa28db: Link UP Nov 23 22:59:00.225692 systemd-networkd[1849]: cali9e34eaa28db: Gained carrier Nov 23 22:59:00.259538 kubelet[3551]: E1123 22:59:00.257391 3551 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-dd89954d4-v45np" podUID="73ce4778-08c3-48f4-84c6-854c5b7e542f" Nov 23 22:59:00.272573 containerd[1993]: 2025-11-23 22:59:00.024 [INFO][4930] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--24--18-k8s-coredns--66bc5c9577--s6g9f-eth0 coredns-66bc5c9577- kube-system e1f14fe2-00f7-46fe-a466-aade4137bcc9 853 0 2025-11-23 22:58:11 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-24-18 coredns-66bc5c9577-s6g9f eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali9e34eaa28db [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="b3cef8595d04b62dfb7878cae5261c7c13b539987e7c31d5cdb5608703b1add7" Namespace="kube-system" Pod="coredns-66bc5c9577-s6g9f" WorkloadEndpoint="ip--172--31--24--18-k8s-coredns--66bc5c9577--s6g9f-" Nov 23 22:59:00.272573 containerd[1993]: 2025-11-23 22:59:00.024 [INFO][4930] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b3cef8595d04b62dfb7878cae5261c7c13b539987e7c31d5cdb5608703b1add7" Namespace="kube-system" Pod="coredns-66bc5c9577-s6g9f" WorkloadEndpoint="ip--172--31--24--18-k8s-coredns--66bc5c9577--s6g9f-eth0" Nov 23 22:59:00.272573 containerd[1993]: 2025-11-23 22:59:00.117 [INFO][4959] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b3cef8595d04b62dfb7878cae5261c7c13b539987e7c31d5cdb5608703b1add7" HandleID="k8s-pod-network.b3cef8595d04b62dfb7878cae5261c7c13b539987e7c31d5cdb5608703b1add7" Workload="ip--172--31--24--18-k8s-coredns--66bc5c9577--s6g9f-eth0" Nov 23 22:59:00.273398 containerd[1993]: 2025-11-23 22:59:00.118 [INFO][4959] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="b3cef8595d04b62dfb7878cae5261c7c13b539987e7c31d5cdb5608703b1add7" HandleID="k8s-pod-network.b3cef8595d04b62dfb7878cae5261c7c13b539987e7c31d5cdb5608703b1add7" Workload="ip--172--31--24--18-k8s-coredns--66bc5c9577--s6g9f-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400010fe50), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-24-18", "pod":"coredns-66bc5c9577-s6g9f", "timestamp":"2025-11-23 22:59:00.117914561 +0000 UTC"}, Hostname:"ip-172-31-24-18", 
IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 23 22:59:00.273398 containerd[1993]: 2025-11-23 22:59:00.118 [INFO][4959] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 23 22:59:00.273398 containerd[1993]: 2025-11-23 22:59:00.118 [INFO][4959] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 23 22:59:00.273398 containerd[1993]: 2025-11-23 22:59:00.119 [INFO][4959] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-24-18' Nov 23 22:59:00.273398 containerd[1993]: 2025-11-23 22:59:00.141 [INFO][4959] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b3cef8595d04b62dfb7878cae5261c7c13b539987e7c31d5cdb5608703b1add7" host="ip-172-31-24-18" Nov 23 22:59:00.273398 containerd[1993]: 2025-11-23 22:59:00.148 [INFO][4959] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-24-18" Nov 23 22:59:00.273398 containerd[1993]: 2025-11-23 22:59:00.159 [INFO][4959] ipam/ipam.go 511: Trying affinity for 192.168.42.128/26 host="ip-172-31-24-18" Nov 23 22:59:00.273398 containerd[1993]: 2025-11-23 22:59:00.162 [INFO][4959] ipam/ipam.go 158: Attempting to load block cidr=192.168.42.128/26 host="ip-172-31-24-18" Nov 23 22:59:00.273398 containerd[1993]: 2025-11-23 22:59:00.168 [INFO][4959] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.42.128/26 host="ip-172-31-24-18" Nov 23 22:59:00.273398 containerd[1993]: 2025-11-23 22:59:00.168 [INFO][4959] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.42.128/26 handle="k8s-pod-network.b3cef8595d04b62dfb7878cae5261c7c13b539987e7c31d5cdb5608703b1add7" host="ip-172-31-24-18" Nov 23 22:59:00.276709 containerd[1993]: 2025-11-23 22:59:00.171 [INFO][4959] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.b3cef8595d04b62dfb7878cae5261c7c13b539987e7c31d5cdb5608703b1add7 Nov 23 22:59:00.276709 containerd[1993]: 2025-11-23 22:59:00.194 [INFO][4959] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.42.128/26 handle="k8s-pod-network.b3cef8595d04b62dfb7878cae5261c7c13b539987e7c31d5cdb5608703b1add7" host="ip-172-31-24-18" Nov 23 22:59:00.276709 containerd[1993]: 2025-11-23 22:59:00.210 [INFO][4959] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.42.130/26] block=192.168.42.128/26 handle="k8s-pod-network.b3cef8595d04b62dfb7878cae5261c7c13b539987e7c31d5cdb5608703b1add7" host="ip-172-31-24-18" Nov 23 22:59:00.276709 containerd[1993]: 2025-11-23 22:59:00.210 [INFO][4959] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.42.130/26] handle="k8s-pod-network.b3cef8595d04b62dfb7878cae5261c7c13b539987e7c31d5cdb5608703b1add7" host="ip-172-31-24-18" Nov 23 22:59:00.276709 containerd[1993]: 2025-11-23 22:59:00.210 [INFO][4959] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 23 22:59:00.276709 containerd[1993]: 2025-11-23 22:59:00.210 [INFO][4959] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.42.130/26] IPv6=[] ContainerID="b3cef8595d04b62dfb7878cae5261c7c13b539987e7c31d5cdb5608703b1add7" HandleID="k8s-pod-network.b3cef8595d04b62dfb7878cae5261c7c13b539987e7c31d5cdb5608703b1add7" Workload="ip--172--31--24--18-k8s-coredns--66bc5c9577--s6g9f-eth0" Nov 23 22:59:00.277752 containerd[1993]: 2025-11-23 22:59:00.214 [INFO][4930] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b3cef8595d04b62dfb7878cae5261c7c13b539987e7c31d5cdb5608703b1add7" Namespace="kube-system" Pod="coredns-66bc5c9577-s6g9f" WorkloadEndpoint="ip--172--31--24--18-k8s-coredns--66bc5c9577--s6g9f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--18-k8s-coredns--66bc5c9577--s6g9f-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"e1f14fe2-00f7-46fe-a466-aade4137bcc9", ResourceVersion:"853", Generation:0, CreationTimestamp:time.Date(2025, time.November, 23, 22, 58, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-18", ContainerID:"", Pod:"coredns-66bc5c9577-s6g9f", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.42.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9e34eaa28db", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 23 22:59:00.277752 containerd[1993]: 2025-11-23 22:59:00.215 [INFO][4930] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.42.130/32] ContainerID="b3cef8595d04b62dfb7878cae5261c7c13b539987e7c31d5cdb5608703b1add7" Namespace="kube-system" Pod="coredns-66bc5c9577-s6g9f" WorkloadEndpoint="ip--172--31--24--18-k8s-coredns--66bc5c9577--s6g9f-eth0" Nov 23 22:59:00.277752 containerd[1993]: 2025-11-23 22:59:00.215 [INFO][4930] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9e34eaa28db ContainerID="b3cef8595d04b62dfb7878cae5261c7c13b539987e7c31d5cdb5608703b1add7" Namespace="kube-system" Pod="coredns-66bc5c9577-s6g9f" WorkloadEndpoint="ip--172--31--24--18-k8s-coredns--66bc5c9577--s6g9f-eth0" Nov 23 
22:59:00.277752 containerd[1993]: 2025-11-23 22:59:00.225 [INFO][4930] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b3cef8595d04b62dfb7878cae5261c7c13b539987e7c31d5cdb5608703b1add7" Namespace="kube-system" Pod="coredns-66bc5c9577-s6g9f" WorkloadEndpoint="ip--172--31--24--18-k8s-coredns--66bc5c9577--s6g9f-eth0" Nov 23 22:59:00.277752 containerd[1993]: 2025-11-23 22:59:00.228 [INFO][4930] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b3cef8595d04b62dfb7878cae5261c7c13b539987e7c31d5cdb5608703b1add7" Namespace="kube-system" Pod="coredns-66bc5c9577-s6g9f" WorkloadEndpoint="ip--172--31--24--18-k8s-coredns--66bc5c9577--s6g9f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--18-k8s-coredns--66bc5c9577--s6g9f-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"e1f14fe2-00f7-46fe-a466-aade4137bcc9", ResourceVersion:"853", Generation:0, CreationTimestamp:time.Date(2025, time.November, 23, 22, 58, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-18", ContainerID:"b3cef8595d04b62dfb7878cae5261c7c13b539987e7c31d5cdb5608703b1add7", Pod:"coredns-66bc5c9577-s6g9f", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.42.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9e34eaa28db", MAC:"66:6d:2e:bb:ef:bd", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 23 22:59:00.277752 containerd[1993]: 2025-11-23 22:59:00.264 [INFO][4930] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b3cef8595d04b62dfb7878cae5261c7c13b539987e7c31d5cdb5608703b1add7" Namespace="kube-system" Pod="coredns-66bc5c9577-s6g9f" WorkloadEndpoint="ip--172--31--24--18-k8s-coredns--66bc5c9577--s6g9f-eth0" Nov 23 22:59:00.372378 containerd[1993]: time="2025-11-23T22:59:00.372154146Z" level=info msg="connecting to shim b3cef8595d04b62dfb7878cae5261c7c13b539987e7c31d5cdb5608703b1add7" address="unix:///run/containerd/s/2cecfcf4a1460e7e9276168cc3abcb9b3be14cb95ca6d56fed16c359a9a1d0dc" namespace=k8s.io protocol=ttrpc version=3 Nov 23 
22:59:00.437757 systemd-networkd[1849]: cali15b4d8df00c: Link UP Nov 23 22:59:00.457491 systemd-networkd[1849]: cali15b4d8df00c: Gained carrier Nov 23 22:59:00.487577 systemd[1]: Started cri-containerd-b3cef8595d04b62dfb7878cae5261c7c13b539987e7c31d5cdb5608703b1add7.scope - libcontainer container b3cef8595d04b62dfb7878cae5261c7c13b539987e7c31d5cdb5608703b1add7. Nov 23 22:59:00.512785 containerd[1993]: 2025-11-23 22:59:00.001 [INFO][4932] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--24--18-k8s-calico--apiserver--855cfc6487--7qv5g-eth0 calico-apiserver-855cfc6487- calico-apiserver 1ce6cd5f-eadc-464f-af0e-bacaebe7e59a 860 0 2025-11-23 22:58:28 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:855cfc6487 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-24-18 calico-apiserver-855cfc6487-7qv5g eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali15b4d8df00c [] [] }} ContainerID="1037fa64d732bb971f3c4d1031add48b620b1aa796a378bc1f888c9c0556555f" Namespace="calico-apiserver" Pod="calico-apiserver-855cfc6487-7qv5g" WorkloadEndpoint="ip--172--31--24--18-k8s-calico--apiserver--855cfc6487--7qv5g-" Nov 23 22:59:00.512785 containerd[1993]: 2025-11-23 22:59:00.002 [INFO][4932] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1037fa64d732bb971f3c4d1031add48b620b1aa796a378bc1f888c9c0556555f" Namespace="calico-apiserver" Pod="calico-apiserver-855cfc6487-7qv5g" WorkloadEndpoint="ip--172--31--24--18-k8s-calico--apiserver--855cfc6487--7qv5g-eth0" Nov 23 22:59:00.512785 containerd[1993]: 2025-11-23 22:59:00.129 [INFO][4954] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1037fa64d732bb971f3c4d1031add48b620b1aa796a378bc1f888c9c0556555f" HandleID="k8s-pod-network.1037fa64d732bb971f3c4d1031add48b620b1aa796a378bc1f888c9c0556555f" Workload="ip--172--31--24--18-k8s-calico--apiserver--855cfc6487--7qv5g-eth0" Nov 23 22:59:00.512785 containerd[1993]: 2025-11-23 22:59:00.130 [INFO][4954] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="1037fa64d732bb971f3c4d1031add48b620b1aa796a378bc1f888c9c0556555f" HandleID="k8s-pod-network.1037fa64d732bb971f3c4d1031add48b620b1aa796a378bc1f888c9c0556555f" Workload="ip--172--31--24--18-k8s-calico--apiserver--855cfc6487--7qv5g-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400030f790), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-24-18", "pod":"calico-apiserver-855cfc6487-7qv5g", "timestamp":"2025-11-23 22:59:00.129188129 +0000 UTC"}, Hostname:"ip-172-31-24-18", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 23 22:59:00.512785 containerd[1993]: 2025-11-23 22:59:00.131 [INFO][4954] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 23 22:59:00.512785 containerd[1993]: 2025-11-23 22:59:00.210 [INFO][4954] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 23 22:59:00.512785 containerd[1993]: 2025-11-23 22:59:00.211 [INFO][4954] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-24-18' Nov 23 22:59:00.512785 containerd[1993]: 2025-11-23 22:59:00.265 [INFO][4954] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1037fa64d732bb971f3c4d1031add48b620b1aa796a378bc1f888c9c0556555f" host="ip-172-31-24-18" Nov 23 22:59:00.512785 containerd[1993]: 2025-11-23 22:59:00.303 [INFO][4954] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-24-18" Nov 23 22:59:00.512785 containerd[1993]: 2025-11-23 22:59:00.343 [INFO][4954] ipam/ipam.go 511: Trying affinity for 192.168.42.128/26 host="ip-172-31-24-18" Nov 23 22:59:00.512785 containerd[1993]: 2025-11-23 22:59:00.352 [INFO][4954] ipam/ipam.go 158: Attempting to load block cidr=192.168.42.128/26 host="ip-172-31-24-18" Nov 23 22:59:00.512785 containerd[1993]: 2025-11-23 22:59:00.364 [INFO][4954] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.42.128/26 host="ip-172-31-24-18" Nov 23 22:59:00.512785 containerd[1993]: 2025-11-23 22:59:00.365 [INFO][4954] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.42.128/26 handle="k8s-pod-network.1037fa64d732bb971f3c4d1031add48b620b1aa796a378bc1f888c9c0556555f" host="ip-172-31-24-18" Nov 23 22:59:00.512785 containerd[1993]: 2025-11-23 22:59:00.370 [INFO][4954] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.1037fa64d732bb971f3c4d1031add48b620b1aa796a378bc1f888c9c0556555f Nov 23 22:59:00.512785 containerd[1993]: 2025-11-23 22:59:00.384 [INFO][4954] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.42.128/26 handle="k8s-pod-network.1037fa64d732bb971f3c4d1031add48b620b1aa796a378bc1f888c9c0556555f" host="ip-172-31-24-18" Nov 23 22:59:00.512785 containerd[1993]: 2025-11-23 22:59:00.405 [INFO][4954] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.42.131/26] block=192.168.42.128/26 handle="k8s-pod-network.1037fa64d732bb971f3c4d1031add48b620b1aa796a378bc1f888c9c0556555f" host="ip-172-31-24-18" Nov 23 22:59:00.512785 containerd[1993]: 2025-11-23 22:59:00.411 [INFO][4954] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.42.131/26] handle="k8s-pod-network.1037fa64d732bb971f3c4d1031add48b620b1aa796a378bc1f888c9c0556555f" host="ip-172-31-24-18" Nov 23 22:59:00.512785 containerd[1993]: 2025-11-23 22:59:00.411 [INFO][4954] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 23 22:59:00.512785 containerd[1993]: 2025-11-23 22:59:00.411 [INFO][4954] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.42.131/26] IPv6=[] ContainerID="1037fa64d732bb971f3c4d1031add48b620b1aa796a378bc1f888c9c0556555f" HandleID="k8s-pod-network.1037fa64d732bb971f3c4d1031add48b620b1aa796a378bc1f888c9c0556555f" Workload="ip--172--31--24--18-k8s-calico--apiserver--855cfc6487--7qv5g-eth0" Nov 23 22:59:00.514211 containerd[1993]: 2025-11-23 22:59:00.423 [INFO][4932] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1037fa64d732bb971f3c4d1031add48b620b1aa796a378bc1f888c9c0556555f" Namespace="calico-apiserver" Pod="calico-apiserver-855cfc6487-7qv5g" WorkloadEndpoint="ip--172--31--24--18-k8s-calico--apiserver--855cfc6487--7qv5g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--18-k8s-calico--apiserver--855cfc6487--7qv5g-eth0", GenerateName:"calico-apiserver-855cfc6487-", Namespace:"calico-apiserver", SelfLink:"", UID:"1ce6cd5f-eadc-464f-af0e-bacaebe7e59a", ResourceVersion:"860", Generation:0, CreationTimestamp:time.Date(2025, time.November, 23, 22, 58, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"855cfc6487", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-18", ContainerID:"", Pod:"calico-apiserver-855cfc6487-7qv5g", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.42.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali15b4d8df00c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 23 22:59:00.514211 containerd[1993]: 2025-11-23 22:59:00.424 [INFO][4932] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.42.131/32] ContainerID="1037fa64d732bb971f3c4d1031add48b620b1aa796a378bc1f888c9c0556555f" Namespace="calico-apiserver" Pod="calico-apiserver-855cfc6487-7qv5g" WorkloadEndpoint="ip--172--31--24--18-k8s-calico--apiserver--855cfc6487--7qv5g-eth0" Nov 23 22:59:00.514211 containerd[1993]: 2025-11-23 22:59:00.424 [INFO][4932] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali15b4d8df00c ContainerID="1037fa64d732bb971f3c4d1031add48b620b1aa796a378bc1f888c9c0556555f" Namespace="calico-apiserver" Pod="calico-apiserver-855cfc6487-7qv5g" WorkloadEndpoint="ip--172--31--24--18-k8s-calico--apiserver--855cfc6487--7qv5g-eth0" Nov 23 22:59:00.514211 containerd[1993]: 2025-11-23 22:59:00.463 [INFO][4932] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1037fa64d732bb971f3c4d1031add48b620b1aa796a378bc1f888c9c0556555f" Namespace="calico-apiserver" Pod="calico-apiserver-855cfc6487-7qv5g" WorkloadEndpoint="ip--172--31--24--18-k8s-calico--apiserver--855cfc6487--7qv5g-eth0" Nov 23 22:59:00.514211 containerd[1993]: 2025-11-23 22:59:00.470 [INFO][4932] cni-plugin/k8s.go 446: Added Mac, interface name, 
and active container ID to endpoint ContainerID="1037fa64d732bb971f3c4d1031add48b620b1aa796a378bc1f888c9c0556555f" Namespace="calico-apiserver" Pod="calico-apiserver-855cfc6487-7qv5g" WorkloadEndpoint="ip--172--31--24--18-k8s-calico--apiserver--855cfc6487--7qv5g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--18-k8s-calico--apiserver--855cfc6487--7qv5g-eth0", GenerateName:"calico-apiserver-855cfc6487-", Namespace:"calico-apiserver", SelfLink:"", UID:"1ce6cd5f-eadc-464f-af0e-bacaebe7e59a", ResourceVersion:"860", Generation:0, CreationTimestamp:time.Date(2025, time.November, 23, 22, 58, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"855cfc6487", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-18", ContainerID:"1037fa64d732bb971f3c4d1031add48b620b1aa796a378bc1f888c9c0556555f", Pod:"calico-apiserver-855cfc6487-7qv5g", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.42.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali15b4d8df00c", MAC:"56:f5:99:3a:b8:e8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 23 22:59:00.514211 containerd[1993]: 2025-11-23 22:59:00.502 [INFO][4932] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1037fa64d732bb971f3c4d1031add48b620b1aa796a378bc1f888c9c0556555f" Namespace="calico-apiserver" Pod="calico-apiserver-855cfc6487-7qv5g" WorkloadEndpoint="ip--172--31--24--18-k8s-calico--apiserver--855cfc6487--7qv5g-eth0" Nov 23 22:59:00.586935 containerd[1993]: time="2025-11-23T22:59:00.586862516Z" level=info msg="connecting to shim 1037fa64d732bb971f3c4d1031add48b620b1aa796a378bc1f888c9c0556555f" address="unix:///run/containerd/s/3e7c6c039fc124249b2d6a5a48df090b5bafce435006a015b813bb65fcba3ab1" namespace=k8s.io protocol=ttrpc version=3 Nov 23 22:59:00.644721 containerd[1993]: time="2025-11-23T22:59:00.644562020Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-s6g9f,Uid:e1f14fe2-00f7-46fe-a466-aade4137bcc9,Namespace:kube-system,Attempt:0,} returns sandbox id \"b3cef8595d04b62dfb7878cae5261c7c13b539987e7c31d5cdb5608703b1add7\"" Nov 23 22:59:00.665967 containerd[1993]: time="2025-11-23T22:59:00.665915240Z" level=info msg="CreateContainer within sandbox \"b3cef8595d04b62dfb7878cae5261c7c13b539987e7c31d5cdb5608703b1add7\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 23 22:59:00.674348 systemd-networkd[1849]: vxlan.calico: Gained IPv6LL Nov 23 22:59:00.708093 systemd[1]: Started cri-containerd-1037fa64d732bb971f3c4d1031add48b620b1aa796a378bc1f888c9c0556555f.scope - libcontainer container 1037fa64d732bb971f3c4d1031add48b620b1aa796a378bc1f888c9c0556555f. 
Nov 23 22:59:00.720569 containerd[1993]: time="2025-11-23T22:59:00.720508880Z" level=info msg="Container 9bee2cfca152e991a4118f9472cfe521747ae3cfd56a14aebb2450ce435543d3: CDI devices from CRI Config.CDIDevices: []" Nov 23 22:59:00.735073 containerd[1993]: time="2025-11-23T22:59:00.734996456Z" level=info msg="CreateContainer within sandbox \"b3cef8595d04b62dfb7878cae5261c7c13b539987e7c31d5cdb5608703b1add7\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"9bee2cfca152e991a4118f9472cfe521747ae3cfd56a14aebb2450ce435543d3\"" Nov 23 22:59:00.739333 containerd[1993]: time="2025-11-23T22:59:00.739161692Z" level=info msg="StartContainer for \"9bee2cfca152e991a4118f9472cfe521747ae3cfd56a14aebb2450ce435543d3\"" Nov 23 22:59:00.743477 containerd[1993]: time="2025-11-23T22:59:00.743403404Z" level=info msg="connecting to shim 9bee2cfca152e991a4118f9472cfe521747ae3cfd56a14aebb2450ce435543d3" address="unix:///run/containerd/s/2cecfcf4a1460e7e9276168cc3abcb9b3be14cb95ca6d56fed16c359a9a1d0dc" protocol=ttrpc version=3 Nov 23 22:59:00.806031 systemd[1]: Started cri-containerd-9bee2cfca152e991a4118f9472cfe521747ae3cfd56a14aebb2450ce435543d3.scope - libcontainer container 9bee2cfca152e991a4118f9472cfe521747ae3cfd56a14aebb2450ce435543d3. Nov 23 22:59:00.865674 containerd[1993]: time="2025-11-23T22:59:00.865384221Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7f8f8556cf-9rc8j,Uid:ed04d8fd-f316-436f-a1ef-e581bd3f494a,Namespace:calico-system,Attempt:0,}" Nov 23 22:59:00.977474 containerd[1993]: time="2025-11-23T22:59:00.977400837Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-855cfc6487-7qv5g,Uid:1ce6cd5f-eadc-464f-af0e-bacaebe7e59a,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"1037fa64d732bb971f3c4d1031add48b620b1aa796a378bc1f888c9c0556555f\"" Nov 23 22:59:00.986045 containerd[1993]: time="2025-11-23T22:59:00.985969510Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 23 22:59:00.991688 containerd[1993]: time="2025-11-23T22:59:00.990902998Z" level=info msg="StartContainer for \"9bee2cfca152e991a4118f9472cfe521747ae3cfd56a14aebb2450ce435543d3\" returns successfully" Nov 23 22:59:01.186666 systemd-networkd[1849]: cali1637b27026e: Link UP Nov 23 22:59:01.189356 systemd-networkd[1849]: cali1637b27026e: Gained carrier Nov 23 22:59:01.223504 containerd[1993]: 2025-11-23 22:59:01.038 [INFO][5098] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--24--18-k8s-calico--kube--controllers--7f8f8556cf--9rc8j-eth0 calico-kube-controllers-7f8f8556cf- calico-system ed04d8fd-f316-436f-a1ef-e581bd3f494a 854 0 2025-11-23 22:58:40 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7f8f8556cf projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ip-172-31-24-18 calico-kube-controllers-7f8f8556cf-9rc8j eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali1637b27026e [] [] }} ContainerID="b47e8984150ca1773653a98b4b2e3bd8e5972fafc191a9b8b70a67556d5b6404" Namespace="calico-system" Pod="calico-kube-controllers-7f8f8556cf-9rc8j" WorkloadEndpoint="ip--172--31--24--18-k8s-calico--kube--controllers--7f8f8556cf--9rc8j-" Nov 23 22:59:01.223504 containerd[1993]: 2025-11-23 22:59:01.039 [INFO][5098] cni-plugin/k8s.go 74: Extracted identifiers 
for CmdAddK8s ContainerID="b47e8984150ca1773653a98b4b2e3bd8e5972fafc191a9b8b70a67556d5b6404" Namespace="calico-system" Pod="calico-kube-controllers-7f8f8556cf-9rc8j" WorkloadEndpoint="ip--172--31--24--18-k8s-calico--kube--controllers--7f8f8556cf--9rc8j-eth0" Nov 23 22:59:01.223504 containerd[1993]: 2025-11-23 22:59:01.102 [INFO][5118] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b47e8984150ca1773653a98b4b2e3bd8e5972fafc191a9b8b70a67556d5b6404" HandleID="k8s-pod-network.b47e8984150ca1773653a98b4b2e3bd8e5972fafc191a9b8b70a67556d5b6404" Workload="ip--172--31--24--18-k8s-calico--kube--controllers--7f8f8556cf--9rc8j-eth0" Nov 23 22:59:01.223504 containerd[1993]: 2025-11-23 22:59:01.102 [INFO][5118] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="b47e8984150ca1773653a98b4b2e3bd8e5972fafc191a9b8b70a67556d5b6404" HandleID="k8s-pod-network.b47e8984150ca1773653a98b4b2e3bd8e5972fafc191a9b8b70a67556d5b6404" Workload="ip--172--31--24--18-k8s-calico--kube--controllers--7f8f8556cf--9rc8j-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d2fe0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-24-18", "pod":"calico-kube-controllers-7f8f8556cf-9rc8j", "timestamp":"2025-11-23 22:59:01.10222947 +0000 UTC"}, Hostname:"ip-172-31-24-18", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 23 22:59:01.223504 containerd[1993]: 2025-11-23 22:59:01.102 [INFO][5118] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 23 22:59:01.223504 containerd[1993]: 2025-11-23 22:59:01.102 [INFO][5118] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 23 22:59:01.223504 containerd[1993]: 2025-11-23 22:59:01.102 [INFO][5118] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-24-18' Nov 23 22:59:01.223504 containerd[1993]: 2025-11-23 22:59:01.117 [INFO][5118] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b47e8984150ca1773653a98b4b2e3bd8e5972fafc191a9b8b70a67556d5b6404" host="ip-172-31-24-18" Nov 23 22:59:01.223504 containerd[1993]: 2025-11-23 22:59:01.125 [INFO][5118] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-24-18" Nov 23 22:59:01.223504 containerd[1993]: 2025-11-23 22:59:01.137 [INFO][5118] ipam/ipam.go 511: Trying affinity for 192.168.42.128/26 host="ip-172-31-24-18" Nov 23 22:59:01.223504 containerd[1993]: 2025-11-23 22:59:01.141 [INFO][5118] ipam/ipam.go 158: Attempting to load block cidr=192.168.42.128/26 host="ip-172-31-24-18" Nov 23 22:59:01.223504 containerd[1993]: 2025-11-23 22:59:01.147 [INFO][5118] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.42.128/26 host="ip-172-31-24-18" Nov 23 22:59:01.223504 containerd[1993]: 2025-11-23 22:59:01.148 [INFO][5118] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.42.128/26 handle="k8s-pod-network.b47e8984150ca1773653a98b4b2e3bd8e5972fafc191a9b8b70a67556d5b6404" host="ip-172-31-24-18" Nov 23 22:59:01.223504 containerd[1993]: 2025-11-23 22:59:01.151 [INFO][5118] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.b47e8984150ca1773653a98b4b2e3bd8e5972fafc191a9b8b70a67556d5b6404 Nov 23 22:59:01.223504 containerd[1993]: 2025-11-23 22:59:01.162 [INFO][5118] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.42.128/26 handle="k8s-pod-network.b47e8984150ca1773653a98b4b2e3bd8e5972fafc191a9b8b70a67556d5b6404" host="ip-172-31-24-18" Nov 23 22:59:01.223504 containerd[1993]: 2025-11-23 22:59:01.175 [INFO][5118] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.42.132/26] block=192.168.42.128/26 handle="k8s-pod-network.b47e8984150ca1773653a98b4b2e3bd8e5972fafc191a9b8b70a67556d5b6404" host="ip-172-31-24-18" Nov 23 22:59:01.223504 containerd[1993]: 2025-11-23 22:59:01.175 [INFO][5118] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.42.132/26] handle="k8s-pod-network.b47e8984150ca1773653a98b4b2e3bd8e5972fafc191a9b8b70a67556d5b6404" host="ip-172-31-24-18" Nov 23 22:59:01.223504 containerd[1993]: 2025-11-23 22:59:01.175 [INFO][5118] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 23 22:59:01.223504 containerd[1993]: 2025-11-23 22:59:01.175 [INFO][5118] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.42.132/26] IPv6=[] ContainerID="b47e8984150ca1773653a98b4b2e3bd8e5972fafc191a9b8b70a67556d5b6404" HandleID="k8s-pod-network.b47e8984150ca1773653a98b4b2e3bd8e5972fafc191a9b8b70a67556d5b6404" Workload="ip--172--31--24--18-k8s-calico--kube--controllers--7f8f8556cf--9rc8j-eth0" Nov 23 22:59:01.226207 containerd[1993]: 2025-11-23 22:59:01.180 [INFO][5098] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b47e8984150ca1773653a98b4b2e3bd8e5972fafc191a9b8b70a67556d5b6404" Namespace="calico-system" Pod="calico-kube-controllers-7f8f8556cf-9rc8j" WorkloadEndpoint="ip--172--31--24--18-k8s-calico--kube--controllers--7f8f8556cf--9rc8j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--18-k8s-calico--kube--controllers--7f8f8556cf--9rc8j-eth0", GenerateName:"calico-kube-controllers-7f8f8556cf-", Namespace:"calico-system", SelfLink:"", UID:"ed04d8fd-f316-436f-a1ef-e581bd3f494a", ResourceVersion:"854", Generation:0, CreationTimestamp:time.Date(2025, time.November, 23, 22, 58, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7f8f8556cf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-18", ContainerID:"", Pod:"calico-kube-controllers-7f8f8556cf-9rc8j", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.42.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali1637b27026e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 23 22:59:01.226207 containerd[1993]: 2025-11-23 22:59:01.180 [INFO][5098] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.42.132/32] ContainerID="b47e8984150ca1773653a98b4b2e3bd8e5972fafc191a9b8b70a67556d5b6404" Namespace="calico-system" Pod="calico-kube-controllers-7f8f8556cf-9rc8j" WorkloadEndpoint="ip--172--31--24--18-k8s-calico--kube--controllers--7f8f8556cf--9rc8j-eth0" Nov 23 22:59:01.226207 containerd[1993]: 2025-11-23 22:59:01.180 [INFO][5098] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1637b27026e ContainerID="b47e8984150ca1773653a98b4b2e3bd8e5972fafc191a9b8b70a67556d5b6404" Namespace="calico-system" Pod="calico-kube-controllers-7f8f8556cf-9rc8j" WorkloadEndpoint="ip--172--31--24--18-k8s-calico--kube--controllers--7f8f8556cf--9rc8j-eth0" Nov 23 22:59:01.226207 containerd[1993]: 2025-11-23 22:59:01.191 [INFO][5098] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b47e8984150ca1773653a98b4b2e3bd8e5972fafc191a9b8b70a67556d5b6404" Namespace="calico-system" Pod="calico-kube-controllers-7f8f8556cf-9rc8j" WorkloadEndpoint="ip--172--31--24--18-k8s-calico--kube--controllers--7f8f8556cf--9rc8j-eth0" Nov 23 22:59:01.226207 containerd[1993]: 
2025-11-23 22:59:01.192 [INFO][5098] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b47e8984150ca1773653a98b4b2e3bd8e5972fafc191a9b8b70a67556d5b6404" Namespace="calico-system" Pod="calico-kube-controllers-7f8f8556cf-9rc8j" WorkloadEndpoint="ip--172--31--24--18-k8s-calico--kube--controllers--7f8f8556cf--9rc8j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--18-k8s-calico--kube--controllers--7f8f8556cf--9rc8j-eth0", GenerateName:"calico-kube-controllers-7f8f8556cf-", Namespace:"calico-system", SelfLink:"", UID:"ed04d8fd-f316-436f-a1ef-e581bd3f494a", ResourceVersion:"854", Generation:0, CreationTimestamp:time.Date(2025, time.November, 23, 22, 58, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7f8f8556cf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-18", ContainerID:"b47e8984150ca1773653a98b4b2e3bd8e5972fafc191a9b8b70a67556d5b6404", Pod:"calico-kube-controllers-7f8f8556cf-9rc8j", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.42.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali1637b27026e", MAC:"ba:d3:50:aa:28:e5", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 23 22:59:01.226207 containerd[1993]: 2025-11-23 22:59:01.218 [INFO][5098] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b47e8984150ca1773653a98b4b2e3bd8e5972fafc191a9b8b70a67556d5b6404" Namespace="calico-system" Pod="calico-kube-controllers-7f8f8556cf-9rc8j" WorkloadEndpoint="ip--172--31--24--18-k8s-calico--kube--controllers--7f8f8556cf--9rc8j-eth0" Nov 23 22:59:01.267945 containerd[1993]: time="2025-11-23T22:59:01.267789187Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 22:59:01.274541 containerd[1993]: time="2025-11-23T22:59:01.273708739Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 23 22:59:01.274541 containerd[1993]: time="2025-11-23T22:59:01.274103839Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 23 22:59:01.276690 kubelet[3551]: E1123 22:59:01.274702 3551 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 23 22:59:01.276690 
kubelet[3551]: E1123 22:59:01.276487 3551 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 23 22:59:01.279577 kubelet[3551]: E1123 22:59:01.276830 3551 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-855cfc6487-7qv5g_calico-apiserver(1ce6cd5f-eadc-464f-af0e-bacaebe7e59a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 23 22:59:01.279577 kubelet[3551]: E1123 22:59:01.277307 3551 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-855cfc6487-7qv5g" podUID="1ce6cd5f-eadc-464f-af0e-bacaebe7e59a" Nov 23 22:59:01.306019 kubelet[3551]: E1123 22:59:01.305892 3551 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-855cfc6487-7qv5g" podUID="1ce6cd5f-eadc-464f-af0e-bacaebe7e59a" Nov 23 22:59:01.310021 containerd[1993]: time="2025-11-23T22:59:01.309937447Z" level=info msg="connecting to shim b47e8984150ca1773653a98b4b2e3bd8e5972fafc191a9b8b70a67556d5b6404" address="unix:///run/containerd/s/54f5778abde081854deed1c426beb641408af0224475cf80384adf8f8c819d17" namespace=k8s.io protocol=ttrpc version=3 Nov 23 22:59:01.354343 kubelet[3551]: I1123 22:59:01.354251 3551 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-s6g9f" podStartSLOduration=50.354227623 podStartE2EDuration="50.354227623s" podCreationTimestamp="2025-11-23 22:58:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 22:59:01.348015823 +0000 UTC m=+54.816585753" watchObservedRunningTime="2025-11-23 22:59:01.354227623 +0000 UTC m=+54.822797517" Nov 23 22:59:01.562097 systemd[1]: Started cri-containerd-b47e8984150ca1773653a98b4b2e3bd8e5972fafc191a9b8b70a67556d5b6404.scope - libcontainer container b47e8984150ca1773653a98b4b2e3bd8e5972fafc191a9b8b70a67556d5b6404. 
Nov 23 22:59:01.728674 containerd[1993]: time="2025-11-23T22:59:01.728563413Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7f8f8556cf-9rc8j,Uid:ed04d8fd-f316-436f-a1ef-e581bd3f494a,Namespace:calico-system,Attempt:0,} returns sandbox id \"b47e8984150ca1773653a98b4b2e3bd8e5972fafc191a9b8b70a67556d5b6404\"" Nov 23 22:59:01.731240 containerd[1993]: time="2025-11-23T22:59:01.731196501Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 23 22:59:01.826935 systemd-networkd[1849]: cali15b4d8df00c: Gained IPv6LL Nov 23 22:59:01.864398 containerd[1993]: time="2025-11-23T22:59:01.864324886Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ssk7t,Uid:caf53fdf-fed6-43b9-8878-f61f79709f6c,Namespace:calico-system,Attempt:0,}" Nov 23 22:59:02.008315 containerd[1993]: time="2025-11-23T22:59:02.008254303Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 22:59:02.011176 containerd[1993]: time="2025-11-23T22:59:02.011099803Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 23 22:59:02.011353 containerd[1993]: time="2025-11-23T22:59:02.011239339Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 23 22:59:02.011981 kubelet[3551]: E1123 22:59:02.011919 3551 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 23 22:59:02.012148 kubelet[3551]: E1123 22:59:02.011994 3551 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 23 22:59:02.012575 kubelet[3551]: E1123 22:59:02.012527 3551 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-7f8f8556cf-9rc8j_calico-system(ed04d8fd-f316-436f-a1ef-e581bd3f494a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 23 22:59:02.015343 kubelet[3551]: E1123 22:59:02.014292 3551 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" 
pod="calico-system/calico-kube-controllers-7f8f8556cf-9rc8j" podUID="ed04d8fd-f316-436f-a1ef-e581bd3f494a" Nov 23 22:59:02.145909 systemd-networkd[1849]: cali9e34eaa28db: Gained IPv6LL Nov 23 22:59:02.173558 systemd-networkd[1849]: cali4dac36a6bad: Link UP Nov 23 22:59:02.175275 systemd-networkd[1849]: cali4dac36a6bad: Gained carrier Nov 23 22:59:02.205848 containerd[1993]: 2025-11-23 22:59:02.000 [INFO][5187] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--24--18-k8s-csi--node--driver--ssk7t-eth0 csi-node-driver- calico-system caf53fdf-fed6-43b9-8878-f61f79709f6c 744 0 2025-11-23 22:58:39 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:9d99788f7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ip-172-31-24-18 csi-node-driver-ssk7t eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali4dac36a6bad [] [] }} ContainerID="e12c30cc36ee0763966f1184ba38afa1ab87e45c67c8f64ee77a5583f8d7b93e" Namespace="calico-system" Pod="csi-node-driver-ssk7t" WorkloadEndpoint="ip--172--31--24--18-k8s-csi--node--driver--ssk7t-" Nov 23 22:59:02.205848 containerd[1993]: 2025-11-23 22:59:02.001 [INFO][5187] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e12c30cc36ee0763966f1184ba38afa1ab87e45c67c8f64ee77a5583f8d7b93e" Namespace="calico-system" Pod="csi-node-driver-ssk7t" WorkloadEndpoint="ip--172--31--24--18-k8s-csi--node--driver--ssk7t-eth0" Nov 23 22:59:02.205848 containerd[1993]: 2025-11-23 22:59:02.072 [INFO][5198] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e12c30cc36ee0763966f1184ba38afa1ab87e45c67c8f64ee77a5583f8d7b93e" HandleID="k8s-pod-network.e12c30cc36ee0763966f1184ba38afa1ab87e45c67c8f64ee77a5583f8d7b93e" Workload="ip--172--31--24--18-k8s-csi--node--driver--ssk7t-eth0" Nov 23 22:59:02.205848 containerd[1993]: 2025-11-23 22:59:02.072 [INFO][5198] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="e12c30cc36ee0763966f1184ba38afa1ab87e45c67c8f64ee77a5583f8d7b93e" HandleID="k8s-pod-network.e12c30cc36ee0763966f1184ba38afa1ab87e45c67c8f64ee77a5583f8d7b93e" Workload="ip--172--31--24--18-k8s-csi--node--driver--ssk7t-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b5f0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-24-18", "pod":"csi-node-driver-ssk7t", "timestamp":"2025-11-23 22:59:02.072392947 +0000 UTC"}, Hostname:"ip-172-31-24-18", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 23 22:59:02.205848 containerd[1993]: 2025-11-23 22:59:02.073 [INFO][5198] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 23 22:59:02.205848 containerd[1993]: 2025-11-23 22:59:02.073 [INFO][5198] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 23 22:59:02.205848 containerd[1993]: 2025-11-23 22:59:02.073 [INFO][5198] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-24-18' Nov 23 22:59:02.205848 containerd[1993]: 2025-11-23 22:59:02.095 [INFO][5198] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e12c30cc36ee0763966f1184ba38afa1ab87e45c67c8f64ee77a5583f8d7b93e" host="ip-172-31-24-18" Nov 23 22:59:02.205848 containerd[1993]: 2025-11-23 22:59:02.104 [INFO][5198] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-24-18" Nov 23 22:59:02.205848 containerd[1993]: 2025-11-23 22:59:02.113 [INFO][5198] ipam/ipam.go 511: Trying affinity for 192.168.42.128/26 host="ip-172-31-24-18" Nov 23 22:59:02.205848 containerd[1993]: 2025-11-23 22:59:02.116 [INFO][5198] ipam/ipam.go 158: Attempting to load block cidr=192.168.42.128/26 host="ip-172-31-24-18" Nov 23 22:59:02.205848 containerd[1993]: 2025-11-23 22:59:02.121 [INFO][5198] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.42.128/26 host="ip-172-31-24-18" Nov 23 22:59:02.205848 containerd[1993]: 2025-11-23 22:59:02.122 [INFO][5198] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.42.128/26 handle="k8s-pod-network.e12c30cc36ee0763966f1184ba38afa1ab87e45c67c8f64ee77a5583f8d7b93e" host="ip-172-31-24-18" Nov 23 22:59:02.205848 containerd[1993]: 2025-11-23 22:59:02.125 [INFO][5198] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.e12c30cc36ee0763966f1184ba38afa1ab87e45c67c8f64ee77a5583f8d7b93e Nov 23 22:59:02.205848 containerd[1993]: 2025-11-23 22:59:02.139 [INFO][5198] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.42.128/26 handle="k8s-pod-network.e12c30cc36ee0763966f1184ba38afa1ab87e45c67c8f64ee77a5583f8d7b93e" host="ip-172-31-24-18" Nov 23 22:59:02.205848 containerd[1993]: 2025-11-23 22:59:02.160 [INFO][5198] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.42.133/26] block=192.168.42.128/26 handle="k8s-pod-network.e12c30cc36ee0763966f1184ba38afa1ab87e45c67c8f64ee77a5583f8d7b93e" host="ip-172-31-24-18" Nov 23 22:59:02.205848 containerd[1993]: 2025-11-23 22:59:02.160 [INFO][5198] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.42.133/26] handle="k8s-pod-network.e12c30cc36ee0763966f1184ba38afa1ab87e45c67c8f64ee77a5583f8d7b93e" host="ip-172-31-24-18" Nov 23 22:59:02.205848 containerd[1993]: 2025-11-23 22:59:02.160 [INFO][5198] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 23 22:59:02.205848 containerd[1993]: 2025-11-23 22:59:02.160 [INFO][5198] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.42.133/26] IPv6=[] ContainerID="e12c30cc36ee0763966f1184ba38afa1ab87e45c67c8f64ee77a5583f8d7b93e" HandleID="k8s-pod-network.e12c30cc36ee0763966f1184ba38afa1ab87e45c67c8f64ee77a5583f8d7b93e" Workload="ip--172--31--24--18-k8s-csi--node--driver--ssk7t-eth0" Nov 23 22:59:02.208702 containerd[1993]: 2025-11-23 22:59:02.163 [INFO][5187] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e12c30cc36ee0763966f1184ba38afa1ab87e45c67c8f64ee77a5583f8d7b93e" Namespace="calico-system" Pod="csi-node-driver-ssk7t" WorkloadEndpoint="ip--172--31--24--18-k8s-csi--node--driver--ssk7t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--18-k8s-csi--node--driver--ssk7t-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"caf53fdf-fed6-43b9-8878-f61f79709f6c", ResourceVersion:"744", Generation:0, CreationTimestamp:time.Date(2025, time.November, 23, 22, 58, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-18", ContainerID:"", Pod:"csi-node-driver-ssk7t", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.42.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali4dac36a6bad", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 23 22:59:02.208702 containerd[1993]: 2025-11-23 22:59:02.164 [INFO][5187] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.42.133/32] ContainerID="e12c30cc36ee0763966f1184ba38afa1ab87e45c67c8f64ee77a5583f8d7b93e" Namespace="calico-system" Pod="csi-node-driver-ssk7t" WorkloadEndpoint="ip--172--31--24--18-k8s-csi--node--driver--ssk7t-eth0" Nov 23 22:59:02.208702 containerd[1993]: 2025-11-23 22:59:02.164 [INFO][5187] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4dac36a6bad ContainerID="e12c30cc36ee0763966f1184ba38afa1ab87e45c67c8f64ee77a5583f8d7b93e" Namespace="calico-system" Pod="csi-node-driver-ssk7t" WorkloadEndpoint="ip--172--31--24--18-k8s-csi--node--driver--ssk7t-eth0" Nov 23 22:59:02.208702 containerd[1993]: 2025-11-23 22:59:02.177 [INFO][5187] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e12c30cc36ee0763966f1184ba38afa1ab87e45c67c8f64ee77a5583f8d7b93e" Namespace="calico-system" Pod="csi-node-driver-ssk7t" WorkloadEndpoint="ip--172--31--24--18-k8s-csi--node--driver--ssk7t-eth0" Nov 23 22:59:02.208702 containerd[1993]: 2025-11-23 22:59:02.181 [INFO][5187] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e12c30cc36ee0763966f1184ba38afa1ab87e45c67c8f64ee77a5583f8d7b93e" 
Namespace="calico-system" Pod="csi-node-driver-ssk7t" WorkloadEndpoint="ip--172--31--24--18-k8s-csi--node--driver--ssk7t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--18-k8s-csi--node--driver--ssk7t-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"caf53fdf-fed6-43b9-8878-f61f79709f6c", ResourceVersion:"744", Generation:0, CreationTimestamp:time.Date(2025, time.November, 23, 22, 58, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-18", ContainerID:"e12c30cc36ee0763966f1184ba38afa1ab87e45c67c8f64ee77a5583f8d7b93e", Pod:"csi-node-driver-ssk7t", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.42.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali4dac36a6bad", MAC:"6a:2f:ac:ba:97:9b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 23 22:59:02.208702 containerd[1993]: 2025-11-23 22:59:02.199 [INFO][5187] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e12c30cc36ee0763966f1184ba38afa1ab87e45c67c8f64ee77a5583f8d7b93e" Namespace="calico-system" Pod="csi-node-driver-ssk7t" WorkloadEndpoint="ip--172--31--24--18-k8s-csi--node--driver--ssk7t-eth0" Nov 23 22:59:02.262038 containerd[1993]: time="2025-11-23T22:59:02.261976124Z" level=info msg="connecting to shim e12c30cc36ee0763966f1184ba38afa1ab87e45c67c8f64ee77a5583f8d7b93e" address="unix:///run/containerd/s/12330c2a7a3c189c9792c8ab42b87cc60802099fc5ffd7e08a02197d9e4ac89b" namespace=k8s.io protocol=ttrpc version=3 Nov 23 22:59:02.316235 kubelet[3551]: E1123 22:59:02.316138 3551 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-855cfc6487-7qv5g" podUID="1ce6cd5f-eadc-464f-af0e-bacaebe7e59a" Nov 23 22:59:02.318220 kubelet[3551]: E1123 22:59:02.318020 3551 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": 
ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7f8f8556cf-9rc8j" podUID="ed04d8fd-f316-436f-a1ef-e581bd3f494a" Nov 23 22:59:02.319945 systemd[1]: Started cri-containerd-e12c30cc36ee0763966f1184ba38afa1ab87e45c67c8f64ee77a5583f8d7b93e.scope - libcontainer container e12c30cc36ee0763966f1184ba38afa1ab87e45c67c8f64ee77a5583f8d7b93e. Nov 23 22:59:02.420864 containerd[1993]: time="2025-11-23T22:59:02.420198189Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ssk7t,Uid:caf53fdf-fed6-43b9-8878-f61f79709f6c,Namespace:calico-system,Attempt:0,} returns sandbox id \"e12c30cc36ee0763966f1184ba38afa1ab87e45c67c8f64ee77a5583f8d7b93e\"" Nov 23 22:59:02.429948 containerd[1993]: time="2025-11-23T22:59:02.429558381Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 23 22:59:02.718335 containerd[1993]: time="2025-11-23T22:59:02.718076086Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 22:59:02.720779 containerd[1993]: time="2025-11-23T22:59:02.720618538Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 23 22:59:02.720779 containerd[1993]: time="2025-11-23T22:59:02.720634426Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 23 22:59:02.721058 kubelet[3551]: E1123 22:59:02.720991 3551 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 23 22:59:02.721165 kubelet[3551]: E1123 22:59:02.721061 3551 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 23 22:59:02.721317 kubelet[3551]: E1123 22:59:02.721166 3551 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-ssk7t_calico-system(caf53fdf-fed6-43b9-8878-f61f79709f6c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 23 22:59:02.722073 systemd-networkd[1849]: cali1637b27026e: Gained IPv6LL Nov 23 22:59:02.727706 containerd[1993]: time="2025-11-23T22:59:02.727561966Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 23 22:59:02.865762 containerd[1993]: time="2025-11-23T22:59:02.865683131Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-9hk4l,Uid:f5f68273-555b-4e0e-98f1-1cec4181626f,Namespace:kube-system,Attempt:0,}" Nov 23 22:59:02.869482 containerd[1993]: time="2025-11-23T22:59:02.869208923Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-8jgv8,Uid:bb092924-5640-4734-8a43-16aa063b77ae,Namespace:calico-system,Attempt:0,}" 
Nov 23 22:59:02.875473 containerd[1993]: time="2025-11-23T22:59:02.875411699Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-786df89cbb-lh757,Uid:e89989bc-946c-40b9-a2fe-b6be9daeb141,Namespace:calico-apiserver,Attempt:0,}" Nov 23 22:59:03.001834 containerd[1993]: time="2025-11-23T22:59:03.001620776Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 22:59:03.004883 containerd[1993]: time="2025-11-23T22:59:03.004794080Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 23 22:59:03.010779 containerd[1993]: time="2025-11-23T22:59:03.004902284Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 23 22:59:03.011063 kubelet[3551]: E1123 22:59:03.010995 3551 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 23 22:59:03.011761 kubelet[3551]: E1123 22:59:03.011055 3551 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 23 22:59:03.011761 kubelet[3551]: E1123 22:59:03.011196 3551 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-ssk7t_calico-system(caf53fdf-fed6-43b9-8878-f61f79709f6c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 23 22:59:03.011761 kubelet[3551]: E1123 22:59:03.011267 3551 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-ssk7t" podUID="caf53fdf-fed6-43b9-8878-f61f79709f6c" Nov 23 22:59:03.242290 systemd-networkd[1849]: cali7e703f6ff9b: Link UP Nov 23 22:59:03.248661 systemd-networkd[1849]: cali7e703f6ff9b: Gained 
carrier Nov 23 22:59:03.290748 containerd[1993]: 2025-11-23 22:59:03.040 [INFO][5261] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--24--18-k8s-coredns--66bc5c9577--9hk4l-eth0 coredns-66bc5c9577- kube-system f5f68273-555b-4e0e-98f1-1cec4181626f 852 0 2025-11-23 22:58:11 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-24-18 coredns-66bc5c9577-9hk4l eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali7e703f6ff9b [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="0e90ad44f97575f17d13c9a3a9fcf0a2cd2d211361c707732901539e7491781f" Namespace="kube-system" Pod="coredns-66bc5c9577-9hk4l" WorkloadEndpoint="ip--172--31--24--18-k8s-coredns--66bc5c9577--9hk4l-" Nov 23 22:59:03.290748 containerd[1993]: 2025-11-23 22:59:03.041 [INFO][5261] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0e90ad44f97575f17d13c9a3a9fcf0a2cd2d211361c707732901539e7491781f" Namespace="kube-system" Pod="coredns-66bc5c9577-9hk4l" WorkloadEndpoint="ip--172--31--24--18-k8s-coredns--66bc5c9577--9hk4l-eth0" Nov 23 22:59:03.290748 containerd[1993]: 2025-11-23 22:59:03.134 [INFO][5302] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0e90ad44f97575f17d13c9a3a9fcf0a2cd2d211361c707732901539e7491781f" HandleID="k8s-pod-network.0e90ad44f97575f17d13c9a3a9fcf0a2cd2d211361c707732901539e7491781f" Workload="ip--172--31--24--18-k8s-coredns--66bc5c9577--9hk4l-eth0" Nov 23 22:59:03.290748 containerd[1993]: 2025-11-23 22:59:03.136 [INFO][5302] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="0e90ad44f97575f17d13c9a3a9fcf0a2cd2d211361c707732901539e7491781f" HandleID="k8s-pod-network.0e90ad44f97575f17d13c9a3a9fcf0a2cd2d211361c707732901539e7491781f" Workload="ip--172--31--24--18-k8s-coredns--66bc5c9577--9hk4l-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d35e0), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-24-18", "pod":"coredns-66bc5c9577-9hk4l", "timestamp":"2025-11-23 22:59:03.134036936 +0000 UTC"}, Hostname:"ip-172-31-24-18", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 23 22:59:03.290748 containerd[1993]: 2025-11-23 22:59:03.136 [INFO][5302] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 23 22:59:03.290748 containerd[1993]: 2025-11-23 22:59:03.136 [INFO][5302] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 23 22:59:03.290748 containerd[1993]: 2025-11-23 22:59:03.136 [INFO][5302] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-24-18' Nov 23 22:59:03.290748 containerd[1993]: 2025-11-23 22:59:03.164 [INFO][5302] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0e90ad44f97575f17d13c9a3a9fcf0a2cd2d211361c707732901539e7491781f" host="ip-172-31-24-18" Nov 23 22:59:03.290748 containerd[1993]: 2025-11-23 22:59:03.176 [INFO][5302] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-24-18" Nov 23 22:59:03.290748 containerd[1993]: 2025-11-23 22:59:03.188 [INFO][5302] ipam/ipam.go 511: Trying affinity for 192.168.42.128/26 host="ip-172-31-24-18" Nov 23 22:59:03.290748 containerd[1993]: 2025-11-23 22:59:03.193 [INFO][5302] ipam/ipam.go 158: Attempting to load block cidr=192.168.42.128/26 host="ip-172-31-24-18" Nov 23 22:59:03.290748 containerd[1993]: 2025-11-23 22:59:03.199 [INFO][5302] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.42.128/26 host="ip-172-31-24-18" Nov 23 22:59:03.290748 containerd[1993]: 2025-11-23 22:59:03.200 [INFO][5302] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.42.128/26 handle="k8s-pod-network.0e90ad44f97575f17d13c9a3a9fcf0a2cd2d211361c707732901539e7491781f" host="ip-172-31-24-18" Nov 23 22:59:03.290748 containerd[1993]: 2025-11-23 22:59:03.203 [INFO][5302] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.0e90ad44f97575f17d13c9a3a9fcf0a2cd2d211361c707732901539e7491781f Nov 23 22:59:03.290748 containerd[1993]: 2025-11-23 22:59:03.212 [INFO][5302] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.42.128/26 handle="k8s-pod-network.0e90ad44f97575f17d13c9a3a9fcf0a2cd2d211361c707732901539e7491781f" host="ip-172-31-24-18" Nov 23 22:59:03.290748 containerd[1993]: 2025-11-23 22:59:03.226 [INFO][5302] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.42.134/26] block=192.168.42.128/26 handle="k8s-pod-network.0e90ad44f97575f17d13c9a3a9fcf0a2cd2d211361c707732901539e7491781f" host="ip-172-31-24-18" Nov 23 22:59:03.290748 containerd[1993]: 2025-11-23 22:59:03.226 [INFO][5302] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.42.134/26] handle="k8s-pod-network.0e90ad44f97575f17d13c9a3a9fcf0a2cd2d211361c707732901539e7491781f" host="ip-172-31-24-18" Nov 23 22:59:03.290748 containerd[1993]: 2025-11-23 22:59:03.227 [INFO][5302] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 23 22:59:03.290748 containerd[1993]: 2025-11-23 22:59:03.227 [INFO][5302] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.42.134/26] IPv6=[] ContainerID="0e90ad44f97575f17d13c9a3a9fcf0a2cd2d211361c707732901539e7491781f" HandleID="k8s-pod-network.0e90ad44f97575f17d13c9a3a9fcf0a2cd2d211361c707732901539e7491781f" Workload="ip--172--31--24--18-k8s-coredns--66bc5c9577--9hk4l-eth0" Nov 23 22:59:03.293059 containerd[1993]: 2025-11-23 22:59:03.233 [INFO][5261] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0e90ad44f97575f17d13c9a3a9fcf0a2cd2d211361c707732901539e7491781f" Namespace="kube-system" Pod="coredns-66bc5c9577-9hk4l" WorkloadEndpoint="ip--172--31--24--18-k8s-coredns--66bc5c9577--9hk4l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--18-k8s-coredns--66bc5c9577--9hk4l-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"f5f68273-555b-4e0e-98f1-1cec4181626f", ResourceVersion:"852", Generation:0, CreationTimestamp:time.Date(2025, time.November, 23, 22, 58, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-18", ContainerID:"", Pod:"coredns-66bc5c9577-9hk4l", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.42.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7e703f6ff9b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 23 22:59:03.293059 containerd[1993]: 2025-11-23 22:59:03.233 [INFO][5261] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.42.134/32] ContainerID="0e90ad44f97575f17d13c9a3a9fcf0a2cd2d211361c707732901539e7491781f" Namespace="kube-system" Pod="coredns-66bc5c9577-9hk4l" WorkloadEndpoint="ip--172--31--24--18-k8s-coredns--66bc5c9577--9hk4l-eth0" Nov 23 22:59:03.293059 containerd[1993]: 2025-11-23 22:59:03.233 [INFO][5261] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7e703f6ff9b ContainerID="0e90ad44f97575f17d13c9a3a9fcf0a2cd2d211361c707732901539e7491781f" Namespace="kube-system" Pod="coredns-66bc5c9577-9hk4l" WorkloadEndpoint="ip--172--31--24--18-k8s-coredns--66bc5c9577--9hk4l-eth0" Nov 23 
22:59:03.293059 containerd[1993]: 2025-11-23 22:59:03.253 [INFO][5261] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0e90ad44f97575f17d13c9a3a9fcf0a2cd2d211361c707732901539e7491781f" Namespace="kube-system" Pod="coredns-66bc5c9577-9hk4l" WorkloadEndpoint="ip--172--31--24--18-k8s-coredns--66bc5c9577--9hk4l-eth0" Nov 23 22:59:03.293059 containerd[1993]: 2025-11-23 22:59:03.254 [INFO][5261] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0e90ad44f97575f17d13c9a3a9fcf0a2cd2d211361c707732901539e7491781f" Namespace="kube-system" Pod="coredns-66bc5c9577-9hk4l" WorkloadEndpoint="ip--172--31--24--18-k8s-coredns--66bc5c9577--9hk4l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--18-k8s-coredns--66bc5c9577--9hk4l-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"f5f68273-555b-4e0e-98f1-1cec4181626f", ResourceVersion:"852", Generation:0, CreationTimestamp:time.Date(2025, time.November, 23, 22, 58, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-18", ContainerID:"0e90ad44f97575f17d13c9a3a9fcf0a2cd2d211361c707732901539e7491781f", Pod:"coredns-66bc5c9577-9hk4l", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.42.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7e703f6ff9b", MAC:"4a:4a:a3:9c:80:92", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 23 22:59:03.293059 containerd[1993]: 2025-11-23 22:59:03.287 [INFO][5261] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0e90ad44f97575f17d13c9a3a9fcf0a2cd2d211361c707732901539e7491781f" Namespace="kube-system" Pod="coredns-66bc5c9577-9hk4l" WorkloadEndpoint="ip--172--31--24--18-k8s-coredns--66bc5c9577--9hk4l-eth0" Nov 23 22:59:03.332689 kubelet[3551]: E1123 22:59:03.332473 3551 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound 
desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7f8f8556cf-9rc8j" podUID="ed04d8fd-f316-436f-a1ef-e581bd3f494a" Nov 23 22:59:03.337827 kubelet[3551]: E1123 22:59:03.335993 3551 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-ssk7t" podUID="caf53fdf-fed6-43b9-8878-f61f79709f6c" Nov 23 22:59:03.389166 containerd[1993]: time="2025-11-23T22:59:03.389072229Z" level=info msg="connecting to shim 0e90ad44f97575f17d13c9a3a9fcf0a2cd2d211361c707732901539e7491781f" address="unix:///run/containerd/s/4dd609134b2cba04d0b8de3de212d7f69552b217997decb3bd99ba2de6c10442" namespace=k8s.io protocol=ttrpc version=3 Nov 23 22:59:03.411873 systemd-networkd[1849]: cali72b0a38c41a: Link UP Nov 23 22:59:03.414752 systemd-networkd[1849]: cali72b0a38c41a: Gained carrier Nov 23 22:59:03.470376 containerd[1993]: 2025-11-23 22:59:03.079 [INFO][5282] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--24--18-k8s-calico--apiserver--786df89cbb--lh757-eth0 calico-apiserver-786df89cbb- calico-apiserver e89989bc-946c-40b9-a2fe-b6be9daeb141 855 0 2025-11-23 22:58:25 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:786df89cbb projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-24-18 calico-apiserver-786df89cbb-lh757 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali72b0a38c41a [] [] }} ContainerID="849ce7853087dc8c6dfd353273e5dfd46aa482bf3986b65fe2b1a2314725c7d9" Namespace="calico-apiserver" Pod="calico-apiserver-786df89cbb-lh757" WorkloadEndpoint="ip--172--31--24--18-k8s-calico--apiserver--786df89cbb--lh757-" Nov 23 22:59:03.470376 containerd[1993]: 2025-11-23 22:59:03.079 [INFO][5282] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="849ce7853087dc8c6dfd353273e5dfd46aa482bf3986b65fe2b1a2314725c7d9" Namespace="calico-apiserver" Pod="calico-apiserver-786df89cbb-lh757" WorkloadEndpoint="ip--172--31--24--18-k8s-calico--apiserver--786df89cbb--lh757-eth0" Nov 23 22:59:03.470376 containerd[1993]: 2025-11-23 22:59:03.173 [INFO][5307] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="849ce7853087dc8c6dfd353273e5dfd46aa482bf3986b65fe2b1a2314725c7d9" 
HandleID="k8s-pod-network.849ce7853087dc8c6dfd353273e5dfd46aa482bf3986b65fe2b1a2314725c7d9" Workload="ip--172--31--24--18-k8s-calico--apiserver--786df89cbb--lh757-eth0" Nov 23 22:59:03.470376 containerd[1993]: 2025-11-23 22:59:03.173 [INFO][5307] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="849ce7853087dc8c6dfd353273e5dfd46aa482bf3986b65fe2b1a2314725c7d9" HandleID="k8s-pod-network.849ce7853087dc8c6dfd353273e5dfd46aa482bf3986b65fe2b1a2314725c7d9" Workload="ip--172--31--24--18-k8s-calico--apiserver--786df89cbb--lh757-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40004b0a10), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-24-18", "pod":"calico-apiserver-786df89cbb-lh757", "timestamp":"2025-11-23 22:59:03.173060468 +0000 UTC"}, Hostname:"ip-172-31-24-18", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 23 22:59:03.470376 containerd[1993]: 2025-11-23 22:59:03.173 [INFO][5307] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 23 22:59:03.470376 containerd[1993]: 2025-11-23 22:59:03.227 [INFO][5307] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 23 22:59:03.470376 containerd[1993]: 2025-11-23 22:59:03.227 [INFO][5307] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-24-18' Nov 23 22:59:03.470376 containerd[1993]: 2025-11-23 22:59:03.264 [INFO][5307] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.849ce7853087dc8c6dfd353273e5dfd46aa482bf3986b65fe2b1a2314725c7d9" host="ip-172-31-24-18" Nov 23 22:59:03.470376 containerd[1993]: 2025-11-23 22:59:03.280 [INFO][5307] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-24-18" Nov 23 22:59:03.470376 containerd[1993]: 2025-11-23 22:59:03.299 [INFO][5307] ipam/ipam.go 511: Trying affinity for 192.168.42.128/26 host="ip-172-31-24-18" Nov 23 22:59:03.470376 containerd[1993]: 2025-11-23 22:59:03.306 [INFO][5307] ipam/ipam.go 158: Attempting to load block cidr=192.168.42.128/26 host="ip-172-31-24-18" Nov 23 22:59:03.470376 containerd[1993]: 2025-11-23 22:59:03.316 [INFO][5307] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.42.128/26 host="ip-172-31-24-18" Nov 23 22:59:03.470376 containerd[1993]: 2025-11-23 22:59:03.317 [INFO][5307] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.42.128/26 handle="k8s-pod-network.849ce7853087dc8c6dfd353273e5dfd46aa482bf3986b65fe2b1a2314725c7d9" host="ip-172-31-24-18" Nov 23 22:59:03.470376 containerd[1993]: 2025-11-23 22:59:03.324 [INFO][5307] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.849ce7853087dc8c6dfd353273e5dfd46aa482bf3986b65fe2b1a2314725c7d9 Nov 23 22:59:03.470376 containerd[1993]: 2025-11-23 22:59:03.336 [INFO][5307] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.42.128/26 handle="k8s-pod-network.849ce7853087dc8c6dfd353273e5dfd46aa482bf3986b65fe2b1a2314725c7d9" host="ip-172-31-24-18" Nov 23 22:59:03.470376 containerd[1993]: 2025-11-23 22:59:03.375 [INFO][5307] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.42.135/26] block=192.168.42.128/26 handle="k8s-pod-network.849ce7853087dc8c6dfd353273e5dfd46aa482bf3986b65fe2b1a2314725c7d9" host="ip-172-31-24-18" Nov 23 22:59:03.470376 containerd[1993]: 2025-11-23 22:59:03.375 [INFO][5307] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.42.135/26] 
handle="k8s-pod-network.849ce7853087dc8c6dfd353273e5dfd46aa482bf3986b65fe2b1a2314725c7d9" host="ip-172-31-24-18" Nov 23 22:59:03.470376 containerd[1993]: 2025-11-23 22:59:03.375 [INFO][5307] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 23 22:59:03.470376 containerd[1993]: 2025-11-23 22:59:03.375 [INFO][5307] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.42.135/26] IPv6=[] ContainerID="849ce7853087dc8c6dfd353273e5dfd46aa482bf3986b65fe2b1a2314725c7d9" HandleID="k8s-pod-network.849ce7853087dc8c6dfd353273e5dfd46aa482bf3986b65fe2b1a2314725c7d9" Workload="ip--172--31--24--18-k8s-calico--apiserver--786df89cbb--lh757-eth0" Nov 23 22:59:03.474697 containerd[1993]: 2025-11-23 22:59:03.395 [INFO][5282] cni-plugin/k8s.go 418: Populated endpoint ContainerID="849ce7853087dc8c6dfd353273e5dfd46aa482bf3986b65fe2b1a2314725c7d9" Namespace="calico-apiserver" Pod="calico-apiserver-786df89cbb-lh757" WorkloadEndpoint="ip--172--31--24--18-k8s-calico--apiserver--786df89cbb--lh757-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--18-k8s-calico--apiserver--786df89cbb--lh757-eth0", GenerateName:"calico-apiserver-786df89cbb-", Namespace:"calico-apiserver", SelfLink:"", UID:"e89989bc-946c-40b9-a2fe-b6be9daeb141", ResourceVersion:"855", Generation:0, CreationTimestamp:time.Date(2025, time.November, 23, 22, 58, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"786df89cbb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-18", ContainerID:"", Pod:"calico-apiserver-786df89cbb-lh757", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.42.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali72b0a38c41a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 23 22:59:03.474697 containerd[1993]: 2025-11-23 22:59:03.396 [INFO][5282] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.42.135/32] ContainerID="849ce7853087dc8c6dfd353273e5dfd46aa482bf3986b65fe2b1a2314725c7d9" Namespace="calico-apiserver" Pod="calico-apiserver-786df89cbb-lh757" WorkloadEndpoint="ip--172--31--24--18-k8s-calico--apiserver--786df89cbb--lh757-eth0" Nov 23 22:59:03.474697 containerd[1993]: 2025-11-23 22:59:03.396 [INFO][5282] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali72b0a38c41a ContainerID="849ce7853087dc8c6dfd353273e5dfd46aa482bf3986b65fe2b1a2314725c7d9" Namespace="calico-apiserver" Pod="calico-apiserver-786df89cbb-lh757" WorkloadEndpoint="ip--172--31--24--18-k8s-calico--apiserver--786df89cbb--lh757-eth0" Nov 23 22:59:03.474697 containerd[1993]: 2025-11-23 22:59:03.417 [INFO][5282] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="849ce7853087dc8c6dfd353273e5dfd46aa482bf3986b65fe2b1a2314725c7d9" Namespace="calico-apiserver" 
Pod="calico-apiserver-786df89cbb-lh757" WorkloadEndpoint="ip--172--31--24--18-k8s-calico--apiserver--786df89cbb--lh757-eth0" Nov 23 22:59:03.474697 containerd[1993]: 2025-11-23 22:59:03.424 [INFO][5282] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="849ce7853087dc8c6dfd353273e5dfd46aa482bf3986b65fe2b1a2314725c7d9" Namespace="calico-apiserver" Pod="calico-apiserver-786df89cbb-lh757" WorkloadEndpoint="ip--172--31--24--18-k8s-calico--apiserver--786df89cbb--lh757-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--18-k8s-calico--apiserver--786df89cbb--lh757-eth0", GenerateName:"calico-apiserver-786df89cbb-", Namespace:"calico-apiserver", SelfLink:"", UID:"e89989bc-946c-40b9-a2fe-b6be9daeb141", ResourceVersion:"855", Generation:0, CreationTimestamp:time.Date(2025, time.November, 23, 22, 58, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"786df89cbb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-18", ContainerID:"849ce7853087dc8c6dfd353273e5dfd46aa482bf3986b65fe2b1a2314725c7d9", Pod:"calico-apiserver-786df89cbb-lh757", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.42.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali72b0a38c41a", MAC:"aa:34:21:88:6e:bc", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 23 22:59:03.474697 containerd[1993]: 2025-11-23 22:59:03.458 [INFO][5282] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="849ce7853087dc8c6dfd353273e5dfd46aa482bf3986b65fe2b1a2314725c7d9" Namespace="calico-apiserver" Pod="calico-apiserver-786df89cbb-lh757" WorkloadEndpoint="ip--172--31--24--18-k8s-calico--apiserver--786df89cbb--lh757-eth0" Nov 23 22:59:03.490042 systemd-networkd[1849]: cali4dac36a6bad: Gained IPv6LL Nov 23 22:59:03.544935 systemd[1]: Started cri-containerd-0e90ad44f97575f17d13c9a3a9fcf0a2cd2d211361c707732901539e7491781f.scope - libcontainer container 0e90ad44f97575f17d13c9a3a9fcf0a2cd2d211361c707732901539e7491781f. Nov 23 22:59:03.593152 containerd[1993]: time="2025-11-23T22:59:03.593081698Z" level=info msg="connecting to shim 849ce7853087dc8c6dfd353273e5dfd46aa482bf3986b65fe2b1a2314725c7d9" address="unix:///run/containerd/s/b303c37134311a9b0d19d1474481e9f01a95da261e71e9cdbc1aee9717d56b0b" namespace=k8s.io protocol=ttrpc version=3 Nov 23 22:59:03.635310 systemd-networkd[1849]: cali7ddd59b9b02: Link UP Nov 23 22:59:03.640230 systemd-networkd[1849]: cali7ddd59b9b02: Gained carrier Nov 23 22:59:03.716080 systemd[1]: Started cri-containerd-849ce7853087dc8c6dfd353273e5dfd46aa482bf3986b65fe2b1a2314725c7d9.scope - libcontainer container 849ce7853087dc8c6dfd353273e5dfd46aa482bf3986b65fe2b1a2314725c7d9. 
Nov 23 22:59:03.721792 containerd[1993]: 2025-11-23 22:59:03.091 [INFO][5263] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--24--18-k8s-goldmane--7c778bb748--8jgv8-eth0 goldmane-7c778bb748- calico-system bb092924-5640-4734-8a43-16aa063b77ae 858 0 2025-11-23 22:58:37 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:7c778bb748 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ip-172-31-24-18 goldmane-7c778bb748-8jgv8 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali7ddd59b9b02 [] [] }} ContainerID="80e831557b07efe8e74e8ed6c7948051ab470bd9ac81b27c729ad4a8119500a4" Namespace="calico-system" Pod="goldmane-7c778bb748-8jgv8" WorkloadEndpoint="ip--172--31--24--18-k8s-goldmane--7c778bb748--8jgv8-" Nov 23 22:59:03.721792 containerd[1993]: 2025-11-23 22:59:03.092 [INFO][5263] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="80e831557b07efe8e74e8ed6c7948051ab470bd9ac81b27c729ad4a8119500a4" Namespace="calico-system" Pod="goldmane-7c778bb748-8jgv8" WorkloadEndpoint="ip--172--31--24--18-k8s-goldmane--7c778bb748--8jgv8-eth0" Nov 23 22:59:03.721792 containerd[1993]: 2025-11-23 22:59:03.205 [INFO][5312] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="80e831557b07efe8e74e8ed6c7948051ab470bd9ac81b27c729ad4a8119500a4" HandleID="k8s-pod-network.80e831557b07efe8e74e8ed6c7948051ab470bd9ac81b27c729ad4a8119500a4" Workload="ip--172--31--24--18-k8s-goldmane--7c778bb748--8jgv8-eth0" Nov 23 22:59:03.721792 containerd[1993]: 2025-11-23 22:59:03.206 [INFO][5312] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="80e831557b07efe8e74e8ed6c7948051ab470bd9ac81b27c729ad4a8119500a4" HandleID="k8s-pod-network.80e831557b07efe8e74e8ed6c7948051ab470bd9ac81b27c729ad4a8119500a4" Workload="ip--172--31--24--18-k8s-goldmane--7c778bb748--8jgv8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40001036a0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-24-18", "pod":"goldmane-7c778bb748-8jgv8", "timestamp":"2025-11-23 22:59:03.205570665 +0000 UTC"}, Hostname:"ip-172-31-24-18", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 23 22:59:03.721792 containerd[1993]: 2025-11-23 22:59:03.206 [INFO][5312] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 23 22:59:03.721792 containerd[1993]: 2025-11-23 22:59:03.376 [INFO][5312] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 23 22:59:03.721792 containerd[1993]: 2025-11-23 22:59:03.376 [INFO][5312] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-24-18' Nov 23 22:59:03.721792 containerd[1993]: 2025-11-23 22:59:03.466 [INFO][5312] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.80e831557b07efe8e74e8ed6c7948051ab470bd9ac81b27c729ad4a8119500a4" host="ip-172-31-24-18" Nov 23 22:59:03.721792 containerd[1993]: 2025-11-23 22:59:03.481 [INFO][5312] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-24-18" Nov 23 22:59:03.721792 containerd[1993]: 2025-11-23 22:59:03.501 [INFO][5312] ipam/ipam.go 511: Trying affinity for 192.168.42.128/26 host="ip-172-31-24-18" Nov 23 22:59:03.721792 containerd[1993]: 2025-11-23 22:59:03.509 [INFO][5312] ipam/ipam.go 158: Attempting to load block cidr=192.168.42.128/26 host="ip-172-31-24-18" Nov 23 22:59:03.721792 containerd[1993]: 2025-11-23 22:59:03.526 [INFO][5312] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.42.128/26 host="ip-172-31-24-18" Nov 23 22:59:03.721792 containerd[1993]: 2025-11-23 22:59:03.530 [INFO][5312] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.42.128/26 handle="k8s-pod-network.80e831557b07efe8e74e8ed6c7948051ab470bd9ac81b27c729ad4a8119500a4" host="ip-172-31-24-18" Nov 23 22:59:03.721792 containerd[1993]: 2025-11-23 22:59:03.542 [INFO][5312] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.80e831557b07efe8e74e8ed6c7948051ab470bd9ac81b27c729ad4a8119500a4 Nov 23 22:59:03.721792 containerd[1993]: 2025-11-23 22:59:03.569 [INFO][5312] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.42.128/26 handle="k8s-pod-network.80e831557b07efe8e74e8ed6c7948051ab470bd9ac81b27c729ad4a8119500a4" host="ip-172-31-24-18" Nov 23 22:59:03.721792 containerd[1993]: 2025-11-23 22:59:03.603 [INFO][5312] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.42.136/26] block=192.168.42.128/26 handle="k8s-pod-network.80e831557b07efe8e74e8ed6c7948051ab470bd9ac81b27c729ad4a8119500a4" host="ip-172-31-24-18" Nov 23 22:59:03.721792 containerd[1993]: 2025-11-23 22:59:03.603 [INFO][5312] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.42.136/26] handle="k8s-pod-network.80e831557b07efe8e74e8ed6c7948051ab470bd9ac81b27c729ad4a8119500a4" host="ip-172-31-24-18" Nov 23 22:59:03.721792 containerd[1993]: 2025-11-23 22:59:03.603 [INFO][5312] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 23 22:59:03.721792 containerd[1993]: 2025-11-23 22:59:03.603 [INFO][5312] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.42.136/26] IPv6=[] ContainerID="80e831557b07efe8e74e8ed6c7948051ab470bd9ac81b27c729ad4a8119500a4" HandleID="k8s-pod-network.80e831557b07efe8e74e8ed6c7948051ab470bd9ac81b27c729ad4a8119500a4" Workload="ip--172--31--24--18-k8s-goldmane--7c778bb748--8jgv8-eth0" Nov 23 22:59:03.722865 containerd[1993]: 2025-11-23 22:59:03.615 [INFO][5263] cni-plugin/k8s.go 418: Populated endpoint ContainerID="80e831557b07efe8e74e8ed6c7948051ab470bd9ac81b27c729ad4a8119500a4" Namespace="calico-system" Pod="goldmane-7c778bb748-8jgv8" WorkloadEndpoint="ip--172--31--24--18-k8s-goldmane--7c778bb748--8jgv8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--18-k8s-goldmane--7c778bb748--8jgv8-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"bb092924-5640-4734-8a43-16aa063b77ae", ResourceVersion:"858", Generation:0, CreationTimestamp:time.Date(2025, time.November, 23, 22, 58, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-18", ContainerID:"", Pod:"goldmane-7c778bb748-8jgv8", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.42.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali7ddd59b9b02", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 23 22:59:03.722865 containerd[1993]: 2025-11-23 22:59:03.616 [INFO][5263] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.42.136/32] ContainerID="80e831557b07efe8e74e8ed6c7948051ab470bd9ac81b27c729ad4a8119500a4" Namespace="calico-system" Pod="goldmane-7c778bb748-8jgv8" WorkloadEndpoint="ip--172--31--24--18-k8s-goldmane--7c778bb748--8jgv8-eth0" Nov 23 22:59:03.722865 containerd[1993]: 2025-11-23 22:59:03.617 [INFO][5263] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7ddd59b9b02 ContainerID="80e831557b07efe8e74e8ed6c7948051ab470bd9ac81b27c729ad4a8119500a4" Namespace="calico-system" Pod="goldmane-7c778bb748-8jgv8" WorkloadEndpoint="ip--172--31--24--18-k8s-goldmane--7c778bb748--8jgv8-eth0" Nov 23 22:59:03.722865 containerd[1993]: 2025-11-23 22:59:03.646 [INFO][5263] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="80e831557b07efe8e74e8ed6c7948051ab470bd9ac81b27c729ad4a8119500a4" Namespace="calico-system" Pod="goldmane-7c778bb748-8jgv8" WorkloadEndpoint="ip--172--31--24--18-k8s-goldmane--7c778bb748--8jgv8-eth0" Nov 23 22:59:03.722865 containerd[1993]: 2025-11-23 22:59:03.653 [INFO][5263] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="80e831557b07efe8e74e8ed6c7948051ab470bd9ac81b27c729ad4a8119500a4" Namespace="calico-system" Pod="goldmane-7c778bb748-8jgv8" 
WorkloadEndpoint="ip--172--31--24--18-k8s-goldmane--7c778bb748--8jgv8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--18-k8s-goldmane--7c778bb748--8jgv8-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"bb092924-5640-4734-8a43-16aa063b77ae", ResourceVersion:"858", Generation:0, CreationTimestamp:time.Date(2025, time.November, 23, 22, 58, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-18", ContainerID:"80e831557b07efe8e74e8ed6c7948051ab470bd9ac81b27c729ad4a8119500a4", Pod:"goldmane-7c778bb748-8jgv8", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.42.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali7ddd59b9b02", MAC:"9e:11:46:98:a6:a1", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 23 22:59:03.722865 containerd[1993]: 2025-11-23 22:59:03.709 [INFO][5263] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="80e831557b07efe8e74e8ed6c7948051ab470bd9ac81b27c729ad4a8119500a4" Namespace="calico-system" Pod="goldmane-7c778bb748-8jgv8" WorkloadEndpoint="ip--172--31--24--18-k8s-goldmane--7c778bb748--8jgv8-eth0" Nov 23 22:59:03.818099 containerd[1993]: time="2025-11-23T22:59:03.817170168Z" level=info msg="connecting to shim 80e831557b07efe8e74e8ed6c7948051ab470bd9ac81b27c729ad4a8119500a4" address="unix:///run/containerd/s/741bff51ef08130865a6a7935fcff50bf46fea7212166fc65c9b6aaca45663c4" namespace=k8s.io protocol=ttrpc version=3 Nov 23 22:59:03.861230 containerd[1993]: time="2025-11-23T22:59:03.861105588Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-9hk4l,Uid:f5f68273-555b-4e0e-98f1-1cec4181626f,Namespace:kube-system,Attempt:0,} returns sandbox id \"0e90ad44f97575f17d13c9a3a9fcf0a2cd2d211361c707732901539e7491781f\"" Nov 23 22:59:03.886185 containerd[1993]: time="2025-11-23T22:59:03.886018128Z" level=info msg="CreateContainer within sandbox \"0e90ad44f97575f17d13c9a3a9fcf0a2cd2d211361c707732901539e7491781f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 23 22:59:03.978037 systemd[1]: Started cri-containerd-80e831557b07efe8e74e8ed6c7948051ab470bd9ac81b27c729ad4a8119500a4.scope - libcontainer container 80e831557b07efe8e74e8ed6c7948051ab470bd9ac81b27c729ad4a8119500a4. Nov 23 22:59:03.998631 containerd[1993]: time="2025-11-23T22:59:03.995731404Z" level=info msg="Container 518e0c73169538a6016b6e29362299d203d9b222f5593fb9b060e2488eba7aa9: CDI devices from CRI Config.CDIDevices: []" Nov 23 22:59:04.005690 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2065934064.mount: Deactivated successfully. 
Nov 23 22:59:04.027519 containerd[1993]: time="2025-11-23T22:59:04.026869125Z" level=info msg="CreateContainer within sandbox \"0e90ad44f97575f17d13c9a3a9fcf0a2cd2d211361c707732901539e7491781f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"518e0c73169538a6016b6e29362299d203d9b222f5593fb9b060e2488eba7aa9\"" Nov 23 22:59:04.029373 containerd[1993]: time="2025-11-23T22:59:04.029311833Z" level=info msg="StartContainer for \"518e0c73169538a6016b6e29362299d203d9b222f5593fb9b060e2488eba7aa9\"" Nov 23 22:59:04.034454 containerd[1993]: time="2025-11-23T22:59:04.034341441Z" level=info msg="connecting to shim 518e0c73169538a6016b6e29362299d203d9b222f5593fb9b060e2488eba7aa9" address="unix:///run/containerd/s/4dd609134b2cba04d0b8de3de212d7f69552b217997decb3bd99ba2de6c10442" protocol=ttrpc version=3 Nov 23 22:59:04.107980 systemd[1]: Started cri-containerd-518e0c73169538a6016b6e29362299d203d9b222f5593fb9b060e2488eba7aa9.scope - libcontainer container 518e0c73169538a6016b6e29362299d203d9b222f5593fb9b060e2488eba7aa9. Nov 23 22:59:04.177079 containerd[1993]: time="2025-11-23T22:59:04.176983149Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-786df89cbb-lh757,Uid:e89989bc-946c-40b9-a2fe-b6be9daeb141,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"849ce7853087dc8c6dfd353273e5dfd46aa482bf3986b65fe2b1a2314725c7d9\"" Nov 23 22:59:04.185661 containerd[1993]: time="2025-11-23T22:59:04.185156013Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 23 22:59:04.262379 containerd[1993]: time="2025-11-23T22:59:04.262204666Z" level=info msg="StartContainer for \"518e0c73169538a6016b6e29362299d203d9b222f5593fb9b060e2488eba7aa9\" returns successfully" Nov 23 22:59:04.288711 containerd[1993]: time="2025-11-23T22:59:04.288657850Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-8jgv8,Uid:bb092924-5640-4734-8a43-16aa063b77ae,Namespace:calico-system,Attempt:0,} returns sandbox id \"80e831557b07efe8e74e8ed6c7948051ab470bd9ac81b27c729ad4a8119500a4\"" Nov 23 22:59:04.345068 kubelet[3551]: E1123 22:59:04.344948 3551 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-ssk7t" podUID="caf53fdf-fed6-43b9-8878-f61f79709f6c" Nov 23 22:59:04.415076 kubelet[3551]: I1123 22:59:04.414615 3551 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-9hk4l" podStartSLOduration=53.414574307 podStartE2EDuration="53.414574307s" podCreationTimestamp="2025-11-23 22:58:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-11-23 22:59:04.385752526 +0000 UTC m=+57.854322444" watchObservedRunningTime="2025-11-23 22:59:04.414574307 +0000 UTC m=+57.883144201" Nov 23 22:59:04.519275 containerd[1993]: time="2025-11-23T22:59:04.518959427Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 22:59:04.522494 containerd[1993]: time="2025-11-23T22:59:04.522396635Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 23 22:59:04.522754 containerd[1993]: time="2025-11-23T22:59:04.522479111Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 23 22:59:04.523065 kubelet[3551]: E1123 22:59:04.523008 3551 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 23 22:59:04.523065 kubelet[3551]: E1123 22:59:04.523078 3551 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 23 22:59:04.523890 containerd[1993]: time="2025-11-23T22:59:04.523772615Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 23 22:59:04.524211 kubelet[3551]: E1123 22:59:04.524149 3551 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-786df89cbb-lh757_calico-apiserver(e89989bc-946c-40b9-a2fe-b6be9daeb141): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 23 22:59:04.524298 kubelet[3551]: E1123 22:59:04.524238 3551 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-786df89cbb-lh757" podUID="e89989bc-946c-40b9-a2fe-b6be9daeb141" Nov 23 22:59:04.771090 containerd[1993]: time="2025-11-23T22:59:04.770472792Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 22:59:04.772814 containerd[1993]: time="2025-11-23T22:59:04.772748988Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 23 22:59:04.772924 containerd[1993]: 
time="2025-11-23T22:59:04.772881720Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 23 22:59:04.773207 kubelet[3551]: E1123 22:59:04.773142 3551 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 23 22:59:04.773314 kubelet[3551]: E1123 22:59:04.773219 3551 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 23 22:59:04.773372 kubelet[3551]: E1123 22:59:04.773331 3551 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-8jgv8_calico-system(bb092924-5640-4734-8a43-16aa063b77ae): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 23 22:59:04.773438 kubelet[3551]: E1123 22:59:04.773381 3551 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-8jgv8" podUID="bb092924-5640-4734-8a43-16aa063b77ae" Nov 23 22:59:04.862219 containerd[1993]: time="2025-11-23T22:59:04.861989113Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-786df89cbb-lrpdp,Uid:8dcbc37e-9145-4538-9ae4-0ee44fb84086,Namespace:calico-apiserver,Attempt:0,}" Nov 23 22:59:04.899984 systemd-networkd[1849]: cali7ddd59b9b02: Gained IPv6LL Nov 23 22:59:04.962013 systemd-networkd[1849]: cali7e703f6ff9b: Gained IPv6LL Nov 23 22:59:05.159045 systemd-networkd[1849]: cali571f46d39d0: Link UP Nov 23 22:59:05.163904 systemd-networkd[1849]: cali571f46d39d0: Gained carrier Nov 23 22:59:05.196264 containerd[1993]: 2025-11-23 22:59:04.964 [INFO][5532] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--24--18-k8s-calico--apiserver--786df89cbb--lrpdp-eth0 calico-apiserver-786df89cbb- calico-apiserver 8dcbc37e-9145-4538-9ae4-0ee44fb84086 856 0 2025-11-23 22:58:25 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:786df89cbb projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-24-18 calico-apiserver-786df89cbb-lrpdp eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali571f46d39d0 [] [] }} ContainerID="5098c9c7f27fd62c2a68adc36e5e16f16d651155837716b709bf7fb6bed1275a" Namespace="calico-apiserver" Pod="calico-apiserver-786df89cbb-lrpdp" 
WorkloadEndpoint="ip--172--31--24--18-k8s-calico--apiserver--786df89cbb--lrpdp-" Nov 23 22:59:05.196264 containerd[1993]: 2025-11-23 22:59:04.965 [INFO][5532] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5098c9c7f27fd62c2a68adc36e5e16f16d651155837716b709bf7fb6bed1275a" Namespace="calico-apiserver" Pod="calico-apiserver-786df89cbb-lrpdp" WorkloadEndpoint="ip--172--31--24--18-k8s-calico--apiserver--786df89cbb--lrpdp-eth0" Nov 23 22:59:05.196264 containerd[1993]: 2025-11-23 22:59:05.051 [INFO][5544] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5098c9c7f27fd62c2a68adc36e5e16f16d651155837716b709bf7fb6bed1275a" HandleID="k8s-pod-network.5098c9c7f27fd62c2a68adc36e5e16f16d651155837716b709bf7fb6bed1275a" Workload="ip--172--31--24--18-k8s-calico--apiserver--786df89cbb--lrpdp-eth0" Nov 23 22:59:05.196264 containerd[1993]: 2025-11-23 22:59:05.053 [INFO][5544] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="5098c9c7f27fd62c2a68adc36e5e16f16d651155837716b709bf7fb6bed1275a" HandleID="k8s-pod-network.5098c9c7f27fd62c2a68adc36e5e16f16d651155837716b709bf7fb6bed1275a" Workload="ip--172--31--24--18-k8s-calico--apiserver--786df89cbb--lrpdp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000341be0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-24-18", "pod":"calico-apiserver-786df89cbb-lrpdp", "timestamp":"2025-11-23 22:59:05.05156803 +0000 UTC"}, Hostname:"ip-172-31-24-18", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 23 22:59:05.196264 containerd[1993]: 2025-11-23 22:59:05.053 [INFO][5544] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 23 22:59:05.196264 containerd[1993]: 2025-11-23 22:59:05.053 [INFO][5544] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 23 22:59:05.196264 containerd[1993]: 2025-11-23 22:59:05.053 [INFO][5544] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-24-18' Nov 23 22:59:05.196264 containerd[1993]: 2025-11-23 22:59:05.072 [INFO][5544] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5098c9c7f27fd62c2a68adc36e5e16f16d651155837716b709bf7fb6bed1275a" host="ip-172-31-24-18" Nov 23 22:59:05.196264 containerd[1993]: 2025-11-23 22:59:05.080 [INFO][5544] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-24-18" Nov 23 22:59:05.196264 containerd[1993]: 2025-11-23 22:59:05.088 [INFO][5544] ipam/ipam.go 511: Trying affinity for 192.168.42.128/26 host="ip-172-31-24-18" Nov 23 22:59:05.196264 containerd[1993]: 2025-11-23 22:59:05.091 [INFO][5544] ipam/ipam.go 158: Attempting to load block cidr=192.168.42.128/26 host="ip-172-31-24-18" Nov 23 22:59:05.196264 containerd[1993]: 2025-11-23 22:59:05.097 [INFO][5544] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.42.128/26 host="ip-172-31-24-18" Nov 23 22:59:05.196264 containerd[1993]: 2025-11-23 22:59:05.097 [INFO][5544] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.42.128/26 handle="k8s-pod-network.5098c9c7f27fd62c2a68adc36e5e16f16d651155837716b709bf7fb6bed1275a" host="ip-172-31-24-18" Nov 23 22:59:05.196264 containerd[1993]: 2025-11-23 22:59:05.101 [INFO][5544] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.5098c9c7f27fd62c2a68adc36e5e16f16d651155837716b709bf7fb6bed1275a Nov 23 22:59:05.196264 containerd[1993]: 2025-11-23 22:59:05.113 [INFO][5544] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.42.128/26 handle="k8s-pod-network.5098c9c7f27fd62c2a68adc36e5e16f16d651155837716b709bf7fb6bed1275a" host="ip-172-31-24-18" Nov 23 22:59:05.196264 containerd[1993]: 2025-11-23 22:59:05.142 [INFO][5544] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.42.137/26] block=192.168.42.128/26 handle="k8s-pod-network.5098c9c7f27fd62c2a68adc36e5e16f16d651155837716b709bf7fb6bed1275a" host="ip-172-31-24-18" Nov 23 22:59:05.196264 containerd[1993]: 2025-11-23 22:59:05.142 [INFO][5544] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.42.137/26] handle="k8s-pod-network.5098c9c7f27fd62c2a68adc36e5e16f16d651155837716b709bf7fb6bed1275a" host="ip-172-31-24-18" Nov 23 22:59:05.196264 containerd[1993]: 2025-11-23 22:59:05.142 [INFO][5544] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 23 22:59:05.196264 containerd[1993]: 2025-11-23 22:59:05.143 [INFO][5544] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.42.137/26] IPv6=[] ContainerID="5098c9c7f27fd62c2a68adc36e5e16f16d651155837716b709bf7fb6bed1275a" HandleID="k8s-pod-network.5098c9c7f27fd62c2a68adc36e5e16f16d651155837716b709bf7fb6bed1275a" Workload="ip--172--31--24--18-k8s-calico--apiserver--786df89cbb--lrpdp-eth0" Nov 23 22:59:05.199864 containerd[1993]: 2025-11-23 22:59:05.147 [INFO][5532] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5098c9c7f27fd62c2a68adc36e5e16f16d651155837716b709bf7fb6bed1275a" Namespace="calico-apiserver" Pod="calico-apiserver-786df89cbb-lrpdp" WorkloadEndpoint="ip--172--31--24--18-k8s-calico--apiserver--786df89cbb--lrpdp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--18-k8s-calico--apiserver--786df89cbb--lrpdp-eth0", GenerateName:"calico-apiserver-786df89cbb-", Namespace:"calico-apiserver", SelfLink:"", UID:"8dcbc37e-9145-4538-9ae4-0ee44fb84086", ResourceVersion:"856", Generation:0, CreationTimestamp:time.Date(2025, time.November, 23, 22, 58, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"786df89cbb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-18", ContainerID:"", Pod:"calico-apiserver-786df89cbb-lrpdp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.42.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali571f46d39d0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 23 22:59:05.199864 containerd[1993]: 2025-11-23 22:59:05.148 [INFO][5532] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.42.137/32] ContainerID="5098c9c7f27fd62c2a68adc36e5e16f16d651155837716b709bf7fb6bed1275a" Namespace="calico-apiserver" Pod="calico-apiserver-786df89cbb-lrpdp" WorkloadEndpoint="ip--172--31--24--18-k8s-calico--apiserver--786df89cbb--lrpdp-eth0" Nov 23 22:59:05.199864 containerd[1993]: 2025-11-23 22:59:05.148 [INFO][5532] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali571f46d39d0 ContainerID="5098c9c7f27fd62c2a68adc36e5e16f16d651155837716b709bf7fb6bed1275a" Namespace="calico-apiserver" Pod="calico-apiserver-786df89cbb-lrpdp" WorkloadEndpoint="ip--172--31--24--18-k8s-calico--apiserver--786df89cbb--lrpdp-eth0" Nov 23 22:59:05.199864 containerd[1993]: 2025-11-23 22:59:05.166 [INFO][5532] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5098c9c7f27fd62c2a68adc36e5e16f16d651155837716b709bf7fb6bed1275a" Namespace="calico-apiserver" Pod="calico-apiserver-786df89cbb-lrpdp" WorkloadEndpoint="ip--172--31--24--18-k8s-calico--apiserver--786df89cbb--lrpdp-eth0" Nov 23 22:59:05.199864 containerd[1993]: 2025-11-23 22:59:05.166 [INFO][5532] cni-plugin/k8s.go 446: Added Mac, interface name, 
and active container ID to endpoint ContainerID="5098c9c7f27fd62c2a68adc36e5e16f16d651155837716b709bf7fb6bed1275a" Namespace="calico-apiserver" Pod="calico-apiserver-786df89cbb-lrpdp" WorkloadEndpoint="ip--172--31--24--18-k8s-calico--apiserver--786df89cbb--lrpdp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--18-k8s-calico--apiserver--786df89cbb--lrpdp-eth0", GenerateName:"calico-apiserver-786df89cbb-", Namespace:"calico-apiserver", SelfLink:"", UID:"8dcbc37e-9145-4538-9ae4-0ee44fb84086", ResourceVersion:"856", Generation:0, CreationTimestamp:time.Date(2025, time.November, 23, 22, 58, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"786df89cbb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-18", ContainerID:"5098c9c7f27fd62c2a68adc36e5e16f16d651155837716b709bf7fb6bed1275a", Pod:"calico-apiserver-786df89cbb-lrpdp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.42.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali571f46d39d0", MAC:"22:5d:15:fb:48:9b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 23 22:59:05.199864 containerd[1993]: 2025-11-23 22:59:05.190 [INFO][5532] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5098c9c7f27fd62c2a68adc36e5e16f16d651155837716b709bf7fb6bed1275a" Namespace="calico-apiserver" Pod="calico-apiserver-786df89cbb-lrpdp" WorkloadEndpoint="ip--172--31--24--18-k8s-calico--apiserver--786df89cbb--lrpdp-eth0" Nov 23 22:59:05.271365 containerd[1993]: time="2025-11-23T22:59:05.271088399Z" level=info msg="connecting to shim 5098c9c7f27fd62c2a68adc36e5e16f16d651155837716b709bf7fb6bed1275a" address="unix:///run/containerd/s/bf6bf62bd4b0ff5a8e1203bc507bc72285eb6821065a91859fd451c0d4a35e02" namespace=k8s.io protocol=ttrpc version=3 Nov 23 22:59:05.346684 systemd-networkd[1849]: cali72b0a38c41a: Gained IPv6LL Nov 23 22:59:05.350540 systemd[1]: Started cri-containerd-5098c9c7f27fd62c2a68adc36e5e16f16d651155837716b709bf7fb6bed1275a.scope - libcontainer container 5098c9c7f27fd62c2a68adc36e5e16f16d651155837716b709bf7fb6bed1275a. 
Nov 23 22:59:05.366736 kubelet[3551]: E1123 22:59:05.366534 3551 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-8jgv8" podUID="bb092924-5640-4734-8a43-16aa063b77ae" Nov 23 22:59:05.373070 kubelet[3551]: E1123 22:59:05.372992 3551 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-786df89cbb-lh757" podUID="e89989bc-946c-40b9-a2fe-b6be9daeb141" Nov 23 22:59:05.630783 containerd[1993]: time="2025-11-23T22:59:05.628923337Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-786df89cbb-lrpdp,Uid:8dcbc37e-9145-4538-9ae4-0ee44fb84086,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"5098c9c7f27fd62c2a68adc36e5e16f16d651155837716b709bf7fb6bed1275a\"" Nov 23 22:59:05.636325 containerd[1993]: time="2025-11-23T22:59:05.636123841Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 23 22:59:05.909385 containerd[1993]: time="2025-11-23T22:59:05.908863862Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 22:59:05.911805 containerd[1993]: time="2025-11-23T22:59:05.911428070Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 23 22:59:05.911805 containerd[1993]: time="2025-11-23T22:59:05.911600858Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 23 22:59:05.912293 kubelet[3551]: E1123 22:59:05.912188 3551 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 23 22:59:05.912391 kubelet[3551]: E1123 22:59:05.912297 3551 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 23 22:59:05.912692 kubelet[3551]: E1123 22:59:05.912637 3551 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod 
calico-apiserver-786df89cbb-lrpdp_calico-apiserver(8dcbc37e-9145-4538-9ae4-0ee44fb84086): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 23 22:59:05.913025 kubelet[3551]: E1123 22:59:05.912841 3551 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-786df89cbb-lrpdp" podUID="8dcbc37e-9145-4538-9ae4-0ee44fb84086" Nov 23 22:59:06.366579 kubelet[3551]: E1123 22:59:06.365778 3551 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-786df89cbb-lrpdp" podUID="8dcbc37e-9145-4538-9ae4-0ee44fb84086" Nov 23 22:59:06.964021 systemd-networkd[1849]: cali571f46d39d0: Gained IPv6LL Nov 23 22:59:07.366422 kubelet[3551]: E1123 22:59:07.366362 3551 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-786df89cbb-lrpdp" podUID="8dcbc37e-9145-4538-9ae4-0ee44fb84086" Nov 23 22:59:09.137212 ntpd[2168]: Listen normally on 6 vxlan.calico 192.168.42.128:123 Nov 23 22:59:09.137869 ntpd[2168]: 23 Nov 22:59:09 ntpd[2168]: Listen normally on 6 vxlan.calico 192.168.42.128:123 Nov 23 22:59:09.137869 ntpd[2168]: 23 Nov 22:59:09 ntpd[2168]: Listen normally on 7 cali5e8d1d7b3a4 [fe80::ecee:eeff:feee:eeee%4]:123 Nov 23 22:59:09.137869 ntpd[2168]: 23 Nov 22:59:09 ntpd[2168]: Listen normally on 8 vxlan.calico [fe80::647b:9fff:fe94:ae40%5]:123 Nov 23 22:59:09.137869 ntpd[2168]: 23 Nov 22:59:09 ntpd[2168]: Listen normally on 9 cali9e34eaa28db [fe80::ecee:eeff:feee:eeee%8]:123 Nov 23 22:59:09.137869 ntpd[2168]: 23 Nov 22:59:09 ntpd[2168]: Listen normally on 10 cali15b4d8df00c [fe80::ecee:eeff:feee:eeee%9]:123 Nov 23 22:59:09.137869 ntpd[2168]: 23 Nov 22:59:09 ntpd[2168]: Listen normally on 11 cali1637b27026e [fe80::ecee:eeff:feee:eeee%10]:123 Nov 23 22:59:09.137869 ntpd[2168]: 23 Nov 22:59:09 ntpd[2168]: Listen normally on 12 cali4dac36a6bad [fe80::ecee:eeff:feee:eeee%11]:123 Nov 23 22:59:09.137869 ntpd[2168]: 23 Nov 22:59:09 ntpd[2168]: Listen normally on 13 cali7e703f6ff9b [fe80::ecee:eeff:feee:eeee%12]:123 Nov 23 22:59:09.137869 ntpd[2168]: 23 Nov 22:59:09 ntpd[2168]: Listen normally on 14 cali72b0a38c41a [fe80::ecee:eeff:feee:eeee%13]:123 
Nov 23 22:59:09.137869 ntpd[2168]: 23 Nov 22:59:09 ntpd[2168]: Listen normally on 15 cali7ddd59b9b02 [fe80::ecee:eeff:feee:eeee%14]:123 Nov 23 22:59:09.137869 ntpd[2168]: 23 Nov 22:59:09 ntpd[2168]: Listen normally on 16 cali571f46d39d0 [fe80::ecee:eeff:feee:eeee%15]:123 Nov 23 22:59:09.137302 ntpd[2168]: Listen normally on 7 cali5e8d1d7b3a4 [fe80::ecee:eeff:feee:eeee%4]:123 Nov 23 22:59:09.137351 ntpd[2168]: Listen normally on 8 vxlan.calico [fe80::647b:9fff:fe94:ae40%5]:123 Nov 23 22:59:09.137402 ntpd[2168]: Listen normally on 9 cali9e34eaa28db [fe80::ecee:eeff:feee:eeee%8]:123 Nov 23 22:59:09.137445 ntpd[2168]: Listen normally on 10 cali15b4d8df00c [fe80::ecee:eeff:feee:eeee%9]:123 Nov 23 22:59:09.137489 ntpd[2168]: Listen normally on 11 cali1637b27026e [fe80::ecee:eeff:feee:eeee%10]:123 Nov 23 22:59:09.137533 ntpd[2168]: Listen normally on 12 cali4dac36a6bad [fe80::ecee:eeff:feee:eeee%11]:123 Nov 23 22:59:09.137575 ntpd[2168]: Listen normally on 13 cali7e703f6ff9b [fe80::ecee:eeff:feee:eeee%12]:123 Nov 23 22:59:09.137648 ntpd[2168]: Listen normally on 14 cali72b0a38c41a [fe80::ecee:eeff:feee:eeee%13]:123 Nov 23 22:59:09.137696 ntpd[2168]: Listen normally on 15 cali7ddd59b9b02 [fe80::ecee:eeff:feee:eeee%14]:123 Nov 23 22:59:09.137739 ntpd[2168]: Listen normally on 16 cali571f46d39d0 [fe80::ecee:eeff:feee:eeee%15]:123 Nov 23 22:59:09.870242 systemd[1]: Started sshd@9-172.31.24.18:22-139.178.68.195:46892.service - OpenSSH per-connection server daemon (139.178.68.195:46892). Nov 23 22:59:10.082200 sshd[5629]: Accepted publickey for core from 139.178.68.195 port 46892 ssh2: RSA SHA256:U+pqkkjujCqSWzNqlLC5FwY85x7/HjFaUhdBkqR7ZEA Nov 23 22:59:10.088346 sshd-session[5629]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 22:59:10.098992 systemd-logind[1977]: New session 10 of user core. Nov 23 22:59:10.105847 systemd[1]: Started session-10.scope - Session 10 of User core. Nov 23 22:59:10.461953 sshd[5632]: Connection closed by 139.178.68.195 port 46892 Nov 23 22:59:10.464869 sshd-session[5629]: pam_unix(sshd:session): session closed for user core Nov 23 22:59:10.477488 systemd[1]: sshd@9-172.31.24.18:22-139.178.68.195:46892.service: Deactivated successfully. Nov 23 22:59:10.477930 systemd-logind[1977]: Session 10 logged out. Waiting for processes to exit. Nov 23 22:59:10.484935 systemd[1]: session-10.scope: Deactivated successfully. Nov 23 22:59:10.490817 systemd-logind[1977]: Removed session 10. 
Nov 23 22:59:10.863243 containerd[1993]: time="2025-11-23T22:59:10.863163667Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 23 22:59:11.136107 containerd[1993]: time="2025-11-23T22:59:11.134968672Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 22:59:11.139266 containerd[1993]: time="2025-11-23T22:59:11.139111456Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 23 22:59:11.139266 containerd[1993]: time="2025-11-23T22:59:11.139175368Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 23 22:59:11.140051 kubelet[3551]: E1123 22:59:11.139657 3551 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 23 22:59:11.140051 kubelet[3551]: E1123 22:59:11.139724 3551 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 23 22:59:11.140051 kubelet[3551]: E1123 22:59:11.139874 3551 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-dd89954d4-v45np_calico-system(73ce4778-08c3-48f4-84c6-854c5b7e542f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 23 22:59:11.144931 containerd[1993]: time="2025-11-23T22:59:11.144853516Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 23 22:59:11.416675 containerd[1993]: time="2025-11-23T22:59:11.416359817Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 22:59:11.419219 containerd[1993]: time="2025-11-23T22:59:11.419047337Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 23 22:59:11.419219 containerd[1993]: time="2025-11-23T22:59:11.419074049Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 23 22:59:11.419655 kubelet[3551]: E1123 22:59:11.419570 3551 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 23 
22:59:11.420309 kubelet[3551]: E1123 22:59:11.419942 3551 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 23 22:59:11.420309 kubelet[3551]: E1123 22:59:11.420083 3551 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-dd89954d4-v45np_calico-system(73ce4778-08c3-48f4-84c6-854c5b7e542f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 23 22:59:11.420309 kubelet[3551]: E1123 22:59:11.420150 3551 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-dd89954d4-v45np" podUID="73ce4778-08c3-48f4-84c6-854c5b7e542f" Nov 23 22:59:13.859948 containerd[1993]: time="2025-11-23T22:59:13.859836729Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 23 22:59:14.137795 containerd[1993]: time="2025-11-23T22:59:14.137575411Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 22:59:14.140705 containerd[1993]: time="2025-11-23T22:59:14.140569519Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 23 22:59:14.140900 containerd[1993]: time="2025-11-23T22:59:14.140611291Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 23 22:59:14.141207 kubelet[3551]: E1123 22:59:14.141146 3551 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 23 22:59:14.142134 kubelet[3551]: E1123 22:59:14.141798 3551 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 23 22:59:14.142134 
kubelet[3551]: E1123 22:59:14.141984 3551 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-855cfc6487-7qv5g_calico-apiserver(1ce6cd5f-eadc-464f-af0e-bacaebe7e59a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 23 22:59:14.142134 kubelet[3551]: E1123 22:59:14.142060 3551 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-855cfc6487-7qv5g" podUID="1ce6cd5f-eadc-464f-af0e-bacaebe7e59a" Nov 23 22:59:15.505494 systemd[1]: Started sshd@10-172.31.24.18:22-139.178.68.195:43804.service - OpenSSH per-connection server daemon (139.178.68.195:43804). Nov 23 22:59:15.716515 sshd[5651]: Accepted publickey for core from 139.178.68.195 port 43804 ssh2: RSA SHA256:U+pqkkjujCqSWzNqlLC5FwY85x7/HjFaUhdBkqR7ZEA Nov 23 22:59:15.719497 sshd-session[5651]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 22:59:15.727204 systemd-logind[1977]: New session 11 of user core. Nov 23 22:59:15.736844 systemd[1]: Started session-11.scope - Session 11 of User core. Nov 23 22:59:15.860104 containerd[1993]: time="2025-11-23T22:59:15.860039831Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 23 22:59:15.998927 sshd[5654]: Connection closed by 139.178.68.195 port 43804 Nov 23 22:59:15.998872 sshd-session[5651]: pam_unix(sshd:session): session closed for user core Nov 23 22:59:16.006475 systemd[1]: sshd@10-172.31.24.18:22-139.178.68.195:43804.service: Deactivated successfully. Nov 23 22:59:16.010317 systemd[1]: session-11.scope: Deactivated successfully. Nov 23 22:59:16.012982 systemd-logind[1977]: Session 11 logged out. Waiting for processes to exit. Nov 23 22:59:16.016247 systemd-logind[1977]: Removed session 11. 
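Editor's note: every PullImage failure in this section is the same underlying event, a 404 from ghcr.io that containerd surfaces as a NotFound RPC error and kubelet then wraps in ErrImagePull and, on retry, ImagePullBackOff. A minimal Go sketch follows that attempts one of the failing pulls directly against the node's containerd in the same "k8s.io" namespace, to reproduce the error outside kubelet; the socket path, namespace, and image reference are taken from the log, and the containerd Go client module path is an assumption (containerd 1.x layout).

```go
package main

import (
	"context"
	"log"

	"github.com/containerd/containerd"            // containerd 1.x Go client (assumed module path)
	"github.com/containerd/containerd/namespaces" // namespace helper
)

func main() {
	// Connect to the same containerd instance kubelet uses on this node.
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatalf("connect: %v", err)
	}
	defer client.Close()

	// kubelet/CRI pulls happen in the "k8s.io" namespace (matches namespace=k8s.io in the log).
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	// Expected to fail the same way the log shows:
	// ghcr.io/flatcar/calico/apiserver:v3.30.4: not found (registry answered 404).
	img, err := client.Pull(ctx, "ghcr.io/flatcar/calico/apiserver:v3.30.4", containerd.WithPullUnpack)
	if err != nil {
		log.Fatalf("pull failed: %v", err)
	}
	log.Printf("pulled %s", img.Name())
}
```

As long as the tag stays unresolvable from the node, kubelet's back-off simply repeats, which is what the later retries at 22:59:13, 22:59:17, 22:59:18 and onward show.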
Nov 23 22:59:16.150365 containerd[1993]: time="2025-11-23T22:59:16.149896593Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 22:59:16.152362 containerd[1993]: time="2025-11-23T22:59:16.152291457Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 23 22:59:16.152519 containerd[1993]: time="2025-11-23T22:59:16.152412129Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 23 22:59:16.152877 kubelet[3551]: E1123 22:59:16.152819 3551 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 23 22:59:16.153352 kubelet[3551]: E1123 22:59:16.152889 3551 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 23 22:59:16.153352 kubelet[3551]: E1123 22:59:16.153000 3551 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-ssk7t_calico-system(caf53fdf-fed6-43b9-8878-f61f79709f6c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 23 22:59:16.155299 containerd[1993]: time="2025-11-23T22:59:16.155115237Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 23 22:59:16.416977 containerd[1993]: time="2025-11-23T22:59:16.416817058Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 22:59:16.419871 containerd[1993]: time="2025-11-23T22:59:16.419760658Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 23 22:59:16.420353 containerd[1993]: time="2025-11-23T22:59:16.419826322Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 23 22:59:16.420751 kubelet[3551]: E1123 22:59:16.420634 3551 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 23 22:59:16.420861 kubelet[3551]: E1123 22:59:16.420771 3551 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed 
to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 23 22:59:16.422013 kubelet[3551]: E1123 22:59:16.421944 3551 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-ssk7t_calico-system(caf53fdf-fed6-43b9-8878-f61f79709f6c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 23 22:59:16.422229 kubelet[3551]: E1123 22:59:16.422042 3551 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-ssk7t" podUID="caf53fdf-fed6-43b9-8878-f61f79709f6c" Nov 23 22:59:17.859867 containerd[1993]: time="2025-11-23T22:59:17.859671553Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 23 22:59:18.122717 containerd[1993]: time="2025-11-23T22:59:18.122503451Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 22:59:18.124833 containerd[1993]: time="2025-11-23T22:59:18.124763627Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 23 22:59:18.124927 containerd[1993]: time="2025-11-23T22:59:18.124903007Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 23 22:59:18.125218 kubelet[3551]: E1123 22:59:18.125155 3551 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 23 22:59:18.125759 kubelet[3551]: E1123 22:59:18.125231 3551 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 23 22:59:18.125759 kubelet[3551]: E1123 22:59:18.125333 3551 kuberuntime_manager.go:1449] "Unhandled Error" 
err="container goldmane start failed in pod goldmane-7c778bb748-8jgv8_calico-system(bb092924-5640-4734-8a43-16aa063b77ae): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 23 22:59:18.125759 kubelet[3551]: E1123 22:59:18.125382 3551 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-8jgv8" podUID="bb092924-5640-4734-8a43-16aa063b77ae" Nov 23 22:59:18.861809 containerd[1993]: time="2025-11-23T22:59:18.860945294Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 23 22:59:19.140531 containerd[1993]: time="2025-11-23T22:59:19.140261112Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 22:59:19.142636 containerd[1993]: time="2025-11-23T22:59:19.142484436Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 23 22:59:19.142760 containerd[1993]: time="2025-11-23T22:59:19.142541388Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 23 22:59:19.143046 kubelet[3551]: E1123 22:59:19.142988 3551 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 23 22:59:19.144125 kubelet[3551]: E1123 22:59:19.143057 3551 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 23 22:59:19.144125 kubelet[3551]: E1123 22:59:19.143550 3551 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-7f8f8556cf-9rc8j_calico-system(ed04d8fd-f316-436f-a1ef-e581bd3f494a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 23 22:59:19.144125 kubelet[3551]: E1123 22:59:19.143713 3551 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and 
unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7f8f8556cf-9rc8j" podUID="ed04d8fd-f316-436f-a1ef-e581bd3f494a" Nov 23 22:59:19.144550 containerd[1993]: time="2025-11-23T22:59:19.143476008Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 23 22:59:19.391329 containerd[1993]: time="2025-11-23T22:59:19.391103401Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 22:59:19.393300 containerd[1993]: time="2025-11-23T22:59:19.393234277Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 23 22:59:19.393460 containerd[1993]: time="2025-11-23T22:59:19.393363817Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 23 22:59:19.393981 kubelet[3551]: E1123 22:59:19.393701 3551 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 23 22:59:19.393981 kubelet[3551]: E1123 22:59:19.393764 3551 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 23 22:59:19.394182 kubelet[3551]: E1123 22:59:19.393988 3551 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-786df89cbb-lrpdp_calico-apiserver(8dcbc37e-9145-4538-9ae4-0ee44fb84086): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 23 22:59:19.394182 kubelet[3551]: E1123 22:59:19.394095 3551 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-786df89cbb-lrpdp" podUID="8dcbc37e-9145-4538-9ae4-0ee44fb84086" Nov 23 22:59:19.395138 containerd[1993]: time="2025-11-23T22:59:19.394669669Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 23 22:59:19.673701 containerd[1993]: time="2025-11-23T22:59:19.673520114Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 22:59:19.676212 containerd[1993]: time="2025-11-23T22:59:19.676110374Z" level=info msg="stop pulling image 
ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 23 22:59:19.676514 containerd[1993]: time="2025-11-23T22:59:19.676311614Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 23 22:59:19.677255 kubelet[3551]: E1123 22:59:19.676956 3551 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 23 22:59:19.677255 kubelet[3551]: E1123 22:59:19.677021 3551 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 23 22:59:19.677255 kubelet[3551]: E1123 22:59:19.677149 3551 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-786df89cbb-lh757_calico-apiserver(e89989bc-946c-40b9-a2fe-b6be9daeb141): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 23 22:59:19.677255 kubelet[3551]: E1123 22:59:19.677201 3551 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-786df89cbb-lh757" podUID="e89989bc-946c-40b9-a2fe-b6be9daeb141" Nov 23 22:59:21.044229 systemd[1]: Started sshd@11-172.31.24.18:22-139.178.68.195:53604.service - OpenSSH per-connection server daemon (139.178.68.195:53604). Nov 23 22:59:21.246559 sshd[5675]: Accepted publickey for core from 139.178.68.195 port 53604 ssh2: RSA SHA256:U+pqkkjujCqSWzNqlLC5FwY85x7/HjFaUhdBkqR7ZEA Nov 23 22:59:21.249032 sshd-session[5675]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 22:59:21.256776 systemd-logind[1977]: New session 12 of user core. Nov 23 22:59:21.261980 systemd[1]: Started session-12.scope - Session 12 of User core. Nov 23 22:59:21.516806 sshd[5678]: Connection closed by 139.178.68.195 port 53604 Nov 23 22:59:21.517843 sshd-session[5675]: pam_unix(sshd:session): session closed for user core Nov 23 22:59:21.528885 systemd-logind[1977]: Session 12 logged out. Waiting for processes to exit. Nov 23 22:59:21.530579 systemd[1]: sshd@11-172.31.24.18:22-139.178.68.195:53604.service: Deactivated successfully. Nov 23 22:59:21.535171 systemd[1]: session-12.scope: Deactivated successfully. Nov 23 22:59:21.554478 systemd-logind[1977]: Removed session 12. 
Nov 23 22:59:21.557048 systemd[1]: Started sshd@12-172.31.24.18:22-139.178.68.195:53612.service - OpenSSH per-connection server daemon (139.178.68.195:53612). Nov 23 22:59:21.751003 sshd[5691]: Accepted publickey for core from 139.178.68.195 port 53612 ssh2: RSA SHA256:U+pqkkjujCqSWzNqlLC5FwY85x7/HjFaUhdBkqR7ZEA Nov 23 22:59:21.753455 sshd-session[5691]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 22:59:21.763438 systemd-logind[1977]: New session 13 of user core. Nov 23 22:59:21.768196 systemd[1]: Started session-13.scope - Session 13 of User core. Nov 23 22:59:22.105815 sshd[5694]: Connection closed by 139.178.68.195 port 53612 Nov 23 22:59:22.109823 sshd-session[5691]: pam_unix(sshd:session): session closed for user core Nov 23 22:59:22.122188 systemd[1]: sshd@12-172.31.24.18:22-139.178.68.195:53612.service: Deactivated successfully. Nov 23 22:59:22.134241 systemd[1]: session-13.scope: Deactivated successfully. Nov 23 22:59:22.139489 systemd-logind[1977]: Session 13 logged out. Waiting for processes to exit. Nov 23 22:59:22.165519 systemd[1]: Started sshd@13-172.31.24.18:22-139.178.68.195:53628.service - OpenSSH per-connection server daemon (139.178.68.195:53628). Nov 23 22:59:22.168202 systemd-logind[1977]: Removed session 13. Nov 23 22:59:22.375640 sshd[5704]: Accepted publickey for core from 139.178.68.195 port 53628 ssh2: RSA SHA256:U+pqkkjujCqSWzNqlLC5FwY85x7/HjFaUhdBkqR7ZEA Nov 23 22:59:22.377517 sshd-session[5704]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 22:59:22.386688 systemd-logind[1977]: New session 14 of user core. Nov 23 22:59:22.394854 systemd[1]: Started session-14.scope - Session 14 of User core. Nov 23 22:59:22.654382 sshd[5707]: Connection closed by 139.178.68.195 port 53628 Nov 23 22:59:22.655210 sshd-session[5704]: pam_unix(sshd:session): session closed for user core Nov 23 22:59:22.665242 systemd[1]: sshd@13-172.31.24.18:22-139.178.68.195:53628.service: Deactivated successfully. Nov 23 22:59:22.671375 systemd[1]: session-14.scope: Deactivated successfully. Nov 23 22:59:22.673671 systemd-logind[1977]: Session 14 logged out. Waiting for processes to exit. Nov 23 22:59:22.677406 systemd-logind[1977]: Removed session 14. Nov 23 22:59:24.862875 kubelet[3551]: E1123 22:59:24.862518 3551 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-dd89954d4-v45np" podUID="73ce4778-08c3-48f4-84c6-854c5b7e542f" Nov 23 22:59:27.694891 systemd[1]: Started sshd@14-172.31.24.18:22-139.178.68.195:53636.service - OpenSSH per-connection server daemon (139.178.68.195:53636). 
Nov 23 22:59:27.890491 sshd[5752]: Accepted publickey for core from 139.178.68.195 port 53636 ssh2: RSA SHA256:U+pqkkjujCqSWzNqlLC5FwY85x7/HjFaUhdBkqR7ZEA Nov 23 22:59:27.893094 sshd-session[5752]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 22:59:27.902177 systemd-logind[1977]: New session 15 of user core. Nov 23 22:59:27.910955 systemd[1]: Started session-15.scope - Session 15 of User core. Nov 23 22:59:28.168517 sshd[5755]: Connection closed by 139.178.68.195 port 53636 Nov 23 22:59:28.169030 sshd-session[5752]: pam_unix(sshd:session): session closed for user core Nov 23 22:59:28.175524 systemd[1]: session-15.scope: Deactivated successfully. Nov 23 22:59:28.177202 systemd[1]: sshd@14-172.31.24.18:22-139.178.68.195:53636.service: Deactivated successfully. Nov 23 22:59:28.184047 systemd-logind[1977]: Session 15 logged out. Waiting for processes to exit. Nov 23 22:59:28.187845 systemd-logind[1977]: Removed session 15. Nov 23 22:59:28.861935 kubelet[3551]: E1123 22:59:28.861741 3551 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-855cfc6487-7qv5g" podUID="1ce6cd5f-eadc-464f-af0e-bacaebe7e59a" Nov 23 22:59:30.861682 kubelet[3551]: E1123 22:59:30.860465 3551 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-786df89cbb-lrpdp" podUID="8dcbc37e-9145-4538-9ae4-0ee44fb84086" Nov 23 22:59:30.864504 kubelet[3551]: E1123 22:59:30.863911 3551 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-8jgv8" podUID="bb092924-5640-4734-8a43-16aa063b77ae" Nov 23 22:59:31.858996 kubelet[3551]: E1123 22:59:31.858911 3551 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-786df89cbb-lh757" podUID="e89989bc-946c-40b9-a2fe-b6be9daeb141" Nov 23 22:59:31.862650 kubelet[3551]: E1123 
22:59:31.862067 3551 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-ssk7t" podUID="caf53fdf-fed6-43b9-8878-f61f79709f6c" Nov 23 22:59:32.860185 kubelet[3551]: E1123 22:59:32.859558 3551 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7f8f8556cf-9rc8j" podUID="ed04d8fd-f316-436f-a1ef-e581bd3f494a" Nov 23 22:59:33.211177 systemd[1]: Started sshd@15-172.31.24.18:22-139.178.68.195:49028.service - OpenSSH per-connection server daemon (139.178.68.195:49028). Nov 23 22:59:33.413651 sshd[5768]: Accepted publickey for core from 139.178.68.195 port 49028 ssh2: RSA SHA256:U+pqkkjujCqSWzNqlLC5FwY85x7/HjFaUhdBkqR7ZEA Nov 23 22:59:33.415061 sshd-session[5768]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 22:59:33.423222 systemd-logind[1977]: New session 16 of user core. Nov 23 22:59:33.438901 systemd[1]: Started session-16.scope - Session 16 of User core. Nov 23 22:59:33.708065 sshd[5771]: Connection closed by 139.178.68.195 port 49028 Nov 23 22:59:33.709628 sshd-session[5768]: pam_unix(sshd:session): session closed for user core Nov 23 22:59:33.717218 systemd[1]: sshd@15-172.31.24.18:22-139.178.68.195:49028.service: Deactivated successfully. Nov 23 22:59:33.726540 systemd[1]: session-16.scope: Deactivated successfully. Nov 23 22:59:33.731152 systemd-logind[1977]: Session 16 logged out. Waiting for processes to exit. Nov 23 22:59:33.735460 systemd-logind[1977]: Removed session 16. 
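Editor's note: the recurring "Error syncing pod, skipping" entries are kubelet's pod workers giving up for that sync iteration; the same ImagePullBackOff state is also visible from the API server in each pod's container statuses. Below is a minimal client-go sketch, assuming in-cluster credentials, that lists containers currently waiting in ImagePullBackOff in the two namespaces that appear in this log; everything beyond those namespace names is illustrative.

```go
package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	// Assumes the program runs inside the cluster; use clientcmd with a kubeconfig otherwise.
	cfg, err := rest.InClusterConfig()
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	// Namespaces seen in the log above.
	for _, ns := range []string{"calico-system", "calico-apiserver"} {
		pods, err := cs.CoreV1().Pods(ns).List(context.Background(), metav1.ListOptions{})
		if err != nil {
			log.Fatal(err)
		}
		for _, p := range pods.Items {
			for _, st := range p.Status.ContainerStatuses {
				if st.State.Waiting != nil && st.State.Waiting.Reason == "ImagePullBackOff" {
					fmt.Printf("%s/%s container %s: %s\n", ns, p.Name, st.Name, st.State.Waiting.Message)
				}
			}
		}
	}
}
```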
Nov 23 22:59:35.861822 containerd[1993]: time="2025-11-23T22:59:35.860769955Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 23 22:59:36.135185 containerd[1993]: time="2025-11-23T22:59:36.134853244Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 22:59:36.137278 containerd[1993]: time="2025-11-23T22:59:36.137074684Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 23 22:59:36.137577 containerd[1993]: time="2025-11-23T22:59:36.137213908Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 23 22:59:36.138209 kubelet[3551]: E1123 22:59:36.137987 3551 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 23 22:59:36.138209 kubelet[3551]: E1123 22:59:36.138058 3551 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 23 22:59:36.139868 kubelet[3551]: E1123 22:59:36.138174 3551 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-dd89954d4-v45np_calico-system(73ce4778-08c3-48f4-84c6-854c5b7e542f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 23 22:59:36.144265 containerd[1993]: time="2025-11-23T22:59:36.144089596Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 23 22:59:36.384363 containerd[1993]: time="2025-11-23T22:59:36.384286865Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 22:59:36.386713 containerd[1993]: time="2025-11-23T22:59:36.386421341Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 23 22:59:36.386713 containerd[1993]: time="2025-11-23T22:59:36.386541485Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 23 22:59:36.386881 kubelet[3551]: E1123 22:59:36.386768 3551 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 23 
22:59:36.386881 kubelet[3551]: E1123 22:59:36.386828 3551 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 23 22:59:36.387024 kubelet[3551]: E1123 22:59:36.386933 3551 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-dd89954d4-v45np_calico-system(73ce4778-08c3-48f4-84c6-854c5b7e542f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 23 22:59:36.387087 kubelet[3551]: E1123 22:59:36.386999 3551 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-dd89954d4-v45np" podUID="73ce4778-08c3-48f4-84c6-854c5b7e542f" Nov 23 22:59:38.757607 systemd[1]: Started sshd@16-172.31.24.18:22-139.178.68.195:49044.service - OpenSSH per-connection server daemon (139.178.68.195:49044). Nov 23 22:59:38.972335 sshd[5786]: Accepted publickey for core from 139.178.68.195 port 49044 ssh2: RSA SHA256:U+pqkkjujCqSWzNqlLC5FwY85x7/HjFaUhdBkqR7ZEA Nov 23 22:59:38.974720 sshd-session[5786]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 22:59:38.982994 systemd-logind[1977]: New session 17 of user core. Nov 23 22:59:38.991919 systemd[1]: Started session-17.scope - Session 17 of User core. Nov 23 22:59:39.484711 sshd[5789]: Connection closed by 139.178.68.195 port 49044 Nov 23 22:59:39.485188 sshd-session[5786]: pam_unix(sshd:session): session closed for user core Nov 23 22:59:39.492014 systemd[1]: sshd@16-172.31.24.18:22-139.178.68.195:49044.service: Deactivated successfully. Nov 23 22:59:39.497488 systemd[1]: session-17.scope: Deactivated successfully. Nov 23 22:59:39.500729 systemd-logind[1977]: Session 17 logged out. Waiting for processes to exit. Nov 23 22:59:39.504680 systemd-logind[1977]: Removed session 17. 
Nov 23 22:59:42.863566 containerd[1993]: time="2025-11-23T22:59:42.863505950Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 23 22:59:43.159979 containerd[1993]: time="2025-11-23T22:59:43.159396599Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 22:59:43.162143 containerd[1993]: time="2025-11-23T22:59:43.161965463Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 23 22:59:43.162143 containerd[1993]: time="2025-11-23T22:59:43.162089423Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 23 22:59:43.162504 kubelet[3551]: E1123 22:59:43.162377 3551 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 23 22:59:43.163070 kubelet[3551]: E1123 22:59:43.162524 3551 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 23 22:59:43.163070 kubelet[3551]: E1123 22:59:43.162705 3551 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-8jgv8_calico-system(bb092924-5640-4734-8a43-16aa063b77ae): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 23 22:59:43.163070 kubelet[3551]: E1123 22:59:43.162757 3551 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-8jgv8" podUID="bb092924-5640-4734-8a43-16aa063b77ae" Nov 23 22:59:43.860252 containerd[1993]: time="2025-11-23T22:59:43.860000714Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 23 22:59:44.138178 containerd[1993]: time="2025-11-23T22:59:44.138019212Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 22:59:44.140410 containerd[1993]: time="2025-11-23T22:59:44.140293704Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 23 22:59:44.140410 containerd[1993]: time="2025-11-23T22:59:44.140356788Z" 
level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 23 22:59:44.140900 kubelet[3551]: E1123 22:59:44.140824 3551 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 23 22:59:44.140995 kubelet[3551]: E1123 22:59:44.140899 3551 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 23 22:59:44.141553 containerd[1993]: time="2025-11-23T22:59:44.141497316Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 23 22:59:44.141700 kubelet[3551]: E1123 22:59:44.141657 3551 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-7f8f8556cf-9rc8j_calico-system(ed04d8fd-f316-436f-a1ef-e581bd3f494a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 23 22:59:44.141819 kubelet[3551]: E1123 22:59:44.141724 3551 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7f8f8556cf-9rc8j" podUID="ed04d8fd-f316-436f-a1ef-e581bd3f494a" Nov 23 22:59:44.286954 systemd[1]: Started sshd@17-172.31.24.18:22-139.178.68.195:39804.service - OpenSSH per-connection server daemon (139.178.68.195:39804). 
Nov 23 22:59:44.372879 containerd[1993]: time="2025-11-23T22:59:44.372786169Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 22:59:44.375120 containerd[1993]: time="2025-11-23T22:59:44.374977093Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 23 22:59:44.375120 containerd[1993]: time="2025-11-23T22:59:44.375056029Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 23 22:59:44.378089 kubelet[3551]: E1123 22:59:44.378014 3551 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 23 22:59:44.379637 kubelet[3551]: E1123 22:59:44.378728 3551 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 23 22:59:44.379637 kubelet[3551]: E1123 22:59:44.379171 3551 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-ssk7t_calico-system(caf53fdf-fed6-43b9-8878-f61f79709f6c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 23 22:59:44.380632 containerd[1993]: time="2025-11-23T22:59:44.380347309Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 23 22:59:44.498315 sshd[5808]: Accepted publickey for core from 139.178.68.195 port 39804 ssh2: RSA SHA256:U+pqkkjujCqSWzNqlLC5FwY85x7/HjFaUhdBkqR7ZEA Nov 23 22:59:44.503166 sshd-session[5808]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 22:59:44.516675 systemd-logind[1977]: New session 18 of user core. Nov 23 22:59:44.521025 systemd[1]: Started session-18.scope - Session 18 of User core. 
Nov 23 22:59:44.646805 containerd[1993]: time="2025-11-23T22:59:44.646742174Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 22:59:44.649069 containerd[1993]: time="2025-11-23T22:59:44.648959714Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 23 22:59:44.649069 containerd[1993]: time="2025-11-23T22:59:44.649026842Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 23 22:59:44.649865 kubelet[3551]: E1123 22:59:44.649680 3551 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 23 22:59:44.650086 kubelet[3551]: E1123 22:59:44.649839 3551 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 23 22:59:44.650715 kubelet[3551]: E1123 22:59:44.650567 3551 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-855cfc6487-7qv5g_calico-apiserver(1ce6cd5f-eadc-464f-af0e-bacaebe7e59a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 23 22:59:44.650913 kubelet[3551]: E1123 22:59:44.650678 3551 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-855cfc6487-7qv5g" podUID="1ce6cd5f-eadc-464f-af0e-bacaebe7e59a" Nov 23 22:59:44.652651 containerd[1993]: time="2025-11-23T22:59:44.652192958Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 23 22:59:44.834027 sshd[5811]: Connection closed by 139.178.68.195 port 39804 Nov 23 22:59:44.834517 sshd-session[5808]: pam_unix(sshd:session): session closed for user core Nov 23 22:59:44.842870 systemd[1]: sshd@17-172.31.24.18:22-139.178.68.195:39804.service: Deactivated successfully. Nov 23 22:59:44.848457 systemd[1]: session-18.scope: Deactivated successfully. Nov 23 22:59:44.852561 systemd-logind[1977]: Session 18 logged out. Waiting for processes to exit. Nov 23 22:59:44.857008 systemd-logind[1977]: Removed session 18. Nov 23 22:59:44.878439 systemd[1]: Started sshd@18-172.31.24.18:22-139.178.68.195:39806.service - OpenSSH per-connection server daemon (139.178.68.195:39806). 
Nov 23 22:59:44.921911 containerd[1993]: time="2025-11-23T22:59:44.921329560Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 22:59:44.925440 containerd[1993]: time="2025-11-23T22:59:44.925101136Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 23 22:59:44.926868 containerd[1993]: time="2025-11-23T22:59:44.925467112Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 23 22:59:44.929505 kubelet[3551]: E1123 22:59:44.929422 3551 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 23 22:59:44.929734 kubelet[3551]: E1123 22:59:44.929506 3551 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 23 22:59:44.932105 kubelet[3551]: E1123 22:59:44.932038 3551 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-ssk7t_calico-system(caf53fdf-fed6-43b9-8878-f61f79709f6c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 23 22:59:44.932783 containerd[1993]: time="2025-11-23T22:59:44.932386780Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 23 22:59:44.934284 kubelet[3551]: E1123 22:59:44.933636 3551 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-ssk7t" podUID="caf53fdf-fed6-43b9-8878-f61f79709f6c" Nov 23 22:59:45.106351 sshd[5823]: Accepted publickey for core from 139.178.68.195 port 39806 ssh2: RSA SHA256:U+pqkkjujCqSWzNqlLC5FwY85x7/HjFaUhdBkqR7ZEA Nov 23 22:59:45.112965 sshd-session[5823]: pam_unix(sshd:session): session opened for user 
core(uid=500) by core(uid=0) Nov 23 22:59:45.129511 systemd-logind[1977]: New session 19 of user core. Nov 23 22:59:45.136915 systemd[1]: Started session-19.scope - Session 19 of User core. Nov 23 22:59:45.197179 containerd[1993]: time="2025-11-23T22:59:45.196952017Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 22:59:45.199450 containerd[1993]: time="2025-11-23T22:59:45.199258513Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 23 22:59:45.199450 containerd[1993]: time="2025-11-23T22:59:45.199393573Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 23 22:59:45.199846 kubelet[3551]: E1123 22:59:45.199738 3551 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 23 22:59:45.199928 kubelet[3551]: E1123 22:59:45.199839 3551 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 23 22:59:45.200362 kubelet[3551]: E1123 22:59:45.200000 3551 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-786df89cbb-lrpdp_calico-apiserver(8dcbc37e-9145-4538-9ae4-0ee44fb84086): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 23 22:59:45.200624 kubelet[3551]: E1123 22:59:45.200508 3551 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-786df89cbb-lrpdp" podUID="8dcbc37e-9145-4538-9ae4-0ee44fb84086" Nov 23 22:59:45.693069 sshd[5826]: Connection closed by 139.178.68.195 port 39806 Nov 23 22:59:45.694056 sshd-session[5823]: pam_unix(sshd:session): session closed for user core Nov 23 22:59:45.701574 systemd[1]: sshd@18-172.31.24.18:22-139.178.68.195:39806.service: Deactivated successfully. Nov 23 22:59:45.706737 systemd[1]: session-19.scope: Deactivated successfully. Nov 23 22:59:45.708888 systemd-logind[1977]: Session 19 logged out. Waiting for processes to exit. Nov 23 22:59:45.712699 systemd-logind[1977]: Removed session 19. Nov 23 22:59:45.732058 systemd[1]: Started sshd@19-172.31.24.18:22-139.178.68.195:39810.service - OpenSSH per-connection server daemon (139.178.68.195:39810). 
Nov 23 22:59:45.860969 containerd[1993]: time="2025-11-23T22:59:45.860899384Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 23 22:59:45.940371 sshd[5838]: Accepted publickey for core from 139.178.68.195 port 39810 ssh2: RSA SHA256:U+pqkkjujCqSWzNqlLC5FwY85x7/HjFaUhdBkqR7ZEA Nov 23 22:59:45.942799 sshd-session[5838]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 22:59:45.951052 systemd-logind[1977]: New session 20 of user core. Nov 23 22:59:45.957867 systemd[1]: Started session-20.scope - Session 20 of User core. Nov 23 22:59:46.114476 containerd[1993]: time="2025-11-23T22:59:46.114420650Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 22:59:46.116984 containerd[1993]: time="2025-11-23T22:59:46.116912702Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 23 22:59:46.117242 containerd[1993]: time="2025-11-23T22:59:46.116968106Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 23 22:59:46.117478 kubelet[3551]: E1123 22:59:46.117423 3551 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 23 22:59:46.118628 kubelet[3551]: E1123 22:59:46.117490 3551 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 23 22:59:46.119798 kubelet[3551]: E1123 22:59:46.118334 3551 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-786df89cbb-lh757_calico-apiserver(e89989bc-946c-40b9-a2fe-b6be9daeb141): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 23 22:59:46.119951 kubelet[3551]: E1123 22:59:46.119823 3551 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-786df89cbb-lh757" podUID="e89989bc-946c-40b9-a2fe-b6be9daeb141" Nov 23 22:59:47.061015 systemd[1]: Started sshd@20-172.31.24.18:22-139.178.68.195:39816.service - OpenSSH per-connection server daemon (139.178.68.195:39816). 
Nov 23 22:59:47.253061 sshd[5841]: Connection closed by 139.178.68.195 port 39810 Nov 23 22:59:47.255982 sshd-session[5838]: pam_unix(sshd:session): session closed for user core Nov 23 22:59:47.265210 systemd[1]: sshd@19-172.31.24.18:22-139.178.68.195:39810.service: Deactivated successfully. Nov 23 22:59:47.270545 systemd[1]: session-20.scope: Deactivated successfully. Nov 23 22:59:47.273771 systemd-logind[1977]: Session 20 logged out. Waiting for processes to exit. Nov 23 22:59:47.279495 systemd-logind[1977]: Removed session 20. Nov 23 22:59:47.291914 sshd[5855]: Accepted publickey for core from 139.178.68.195 port 39816 ssh2: RSA SHA256:U+pqkkjujCqSWzNqlLC5FwY85x7/HjFaUhdBkqR7ZEA Nov 23 22:59:47.294387 sshd-session[5855]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 22:59:47.303757 systemd-logind[1977]: New session 21 of user core. Nov 23 22:59:47.314877 systemd[1]: Started session-21.scope - Session 21 of User core. Nov 23 22:59:47.951923 systemd[1]: Started sshd@21-172.31.24.18:22-139.178.68.195:39818.service - OpenSSH per-connection server daemon (139.178.68.195:39818). Nov 23 22:59:48.141086 sshd[5861]: Connection closed by 139.178.68.195 port 39816 Nov 23 22:59:48.141502 sshd-session[5855]: pam_unix(sshd:session): session closed for user core Nov 23 22:59:48.151399 systemd[1]: sshd@20-172.31.24.18:22-139.178.68.195:39816.service: Deactivated successfully. Nov 23 22:59:48.152935 sshd[5868]: Accepted publickey for core from 139.178.68.195 port 39818 ssh2: RSA SHA256:U+pqkkjujCqSWzNqlLC5FwY85x7/HjFaUhdBkqR7ZEA Nov 23 22:59:48.156216 sshd-session[5868]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 22:59:48.157195 systemd[1]: session-21.scope: Deactivated successfully. Nov 23 22:59:48.162565 systemd-logind[1977]: Session 21 logged out. Waiting for processes to exit. Nov 23 22:59:48.166899 systemd-logind[1977]: Removed session 21. Nov 23 22:59:48.174063 systemd-logind[1977]: New session 22 of user core. Nov 23 22:59:48.183923 systemd[1]: Started session-22.scope - Session 22 of User core. Nov 23 22:59:48.668683 sshd[5876]: Connection closed by 139.178.68.195 port 39818 Nov 23 22:59:48.669157 sshd-session[5868]: pam_unix(sshd:session): session closed for user core Nov 23 22:59:48.676212 systemd-logind[1977]: Session 22 logged out. Waiting for processes to exit. Nov 23 22:59:48.676410 systemd[1]: sshd@21-172.31.24.18:22-139.178.68.195:39818.service: Deactivated successfully. Nov 23 22:59:48.680740 systemd[1]: session-22.scope: Deactivated successfully. Nov 23 22:59:48.685309 systemd-logind[1977]: Removed session 22. 
Nov 23 22:59:48.861423 kubelet[3551]: E1123 22:59:48.861337 3551 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-dd89954d4-v45np" podUID="73ce4778-08c3-48f4-84c6-854c5b7e542f" Nov 23 22:59:53.478339 systemd[1]: Started sshd@22-172.31.24.18:22-139.178.68.195:52380.service - OpenSSH per-connection server daemon (139.178.68.195:52380). Nov 23 22:59:53.694531 sshd[5888]: Accepted publickey for core from 139.178.68.195 port 52380 ssh2: RSA SHA256:U+pqkkjujCqSWzNqlLC5FwY85x7/HjFaUhdBkqR7ZEA Nov 23 22:59:53.696903 sshd-session[5888]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 22:59:53.705943 systemd-logind[1977]: New session 23 of user core. Nov 23 22:59:53.714889 systemd[1]: Started session-23.scope - Session 23 of User core. Nov 23 22:59:53.956425 sshd[5891]: Connection closed by 139.178.68.195 port 52380 Nov 23 22:59:53.957296 sshd-session[5888]: pam_unix(sshd:session): session closed for user core Nov 23 22:59:53.968345 systemd[1]: sshd@22-172.31.24.18:22-139.178.68.195:52380.service: Deactivated successfully. Nov 23 22:59:53.975075 systemd[1]: session-23.scope: Deactivated successfully. Nov 23 22:59:53.977849 systemd-logind[1977]: Session 23 logged out. Waiting for processes to exit. Nov 23 22:59:53.981461 systemd-logind[1977]: Removed session 23. 
Nov 23 22:59:55.859637 kubelet[3551]: E1123 22:59:55.859445 3551 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7f8f8556cf-9rc8j" podUID="ed04d8fd-f316-436f-a1ef-e581bd3f494a" Nov 23 22:59:56.863261 kubelet[3551]: E1123 22:59:56.862526 3551 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-8jgv8" podUID="bb092924-5640-4734-8a43-16aa063b77ae" Nov 23 22:59:56.866627 kubelet[3551]: E1123 22:59:56.866267 3551 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-855cfc6487-7qv5g" podUID="1ce6cd5f-eadc-464f-af0e-bacaebe7e59a" Nov 23 22:59:56.866627 kubelet[3551]: E1123 22:59:56.866453 3551 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-ssk7t" podUID="caf53fdf-fed6-43b9-8878-f61f79709f6c" Nov 23 22:59:58.859748 kubelet[3551]: E1123 22:59:58.859207 3551 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" 
pod="calico-apiserver/calico-apiserver-786df89cbb-lrpdp" podUID="8dcbc37e-9145-4538-9ae4-0ee44fb84086" Nov 23 22:59:58.862700 kubelet[3551]: E1123 22:59:58.862577 3551 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-786df89cbb-lh757" podUID="e89989bc-946c-40b9-a2fe-b6be9daeb141" Nov 23 22:59:59.002083 systemd[1]: Started sshd@23-172.31.24.18:22-139.178.68.195:52384.service - OpenSSH per-connection server daemon (139.178.68.195:52384). Nov 23 22:59:59.210262 sshd[5926]: Accepted publickey for core from 139.178.68.195 port 52384 ssh2: RSA SHA256:U+pqkkjujCqSWzNqlLC5FwY85x7/HjFaUhdBkqR7ZEA Nov 23 22:59:59.212855 sshd-session[5926]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 22:59:59.221667 systemd-logind[1977]: New session 24 of user core. Nov 23 22:59:59.229892 systemd[1]: Started session-24.scope - Session 24 of User core. Nov 23 22:59:59.757045 sshd[5929]: Connection closed by 139.178.68.195 port 52384 Nov 23 22:59:59.758473 sshd-session[5926]: pam_unix(sshd:session): session closed for user core Nov 23 22:59:59.769536 systemd[1]: sshd@23-172.31.24.18:22-139.178.68.195:52384.service: Deactivated successfully. Nov 23 22:59:59.775216 systemd[1]: session-24.scope: Deactivated successfully. Nov 23 22:59:59.777640 systemd-logind[1977]: Session 24 logged out. Waiting for processes to exit. Nov 23 22:59:59.781964 systemd-logind[1977]: Removed session 24. Nov 23 22:59:59.861055 kubelet[3551]: E1123 22:59:59.860924 3551 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-dd89954d4-v45np" podUID="73ce4778-08c3-48f4-84c6-854c5b7e542f" Nov 23 23:00:04.575778 systemd[1]: Started sshd@24-172.31.24.18:22-139.178.68.195:34464.service - OpenSSH per-connection server daemon (139.178.68.195:34464). Nov 23 23:00:04.798170 sshd[5946]: Accepted publickey for core from 139.178.68.195 port 34464 ssh2: RSA SHA256:U+pqkkjujCqSWzNqlLC5FwY85x7/HjFaUhdBkqR7ZEA Nov 23 23:00:04.801042 sshd-session[5946]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 23:00:04.815692 systemd-logind[1977]: New session 25 of user core. Nov 23 23:00:04.821032 systemd[1]: Started session-25.scope - Session 25 of User core. 
Nov 23 23:00:05.144535 sshd[5949]: Connection closed by 139.178.68.195 port 34464 Nov 23 23:00:05.145998 sshd-session[5946]: pam_unix(sshd:session): session closed for user core Nov 23 23:00:05.157965 systemd[1]: sshd@24-172.31.24.18:22-139.178.68.195:34464.service: Deactivated successfully. Nov 23 23:00:05.167138 systemd[1]: session-25.scope: Deactivated successfully. Nov 23 23:00:05.173348 systemd-logind[1977]: Session 25 logged out. Waiting for processes to exit. Nov 23 23:00:05.176651 systemd-logind[1977]: Removed session 25. Nov 23 23:00:08.862044 kubelet[3551]: E1123 23:00:08.861406 3551 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7f8f8556cf-9rc8j" podUID="ed04d8fd-f316-436f-a1ef-e581bd3f494a" Nov 23 23:00:09.860176 kubelet[3551]: E1123 23:00:09.859794 3551 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-855cfc6487-7qv5g" podUID="1ce6cd5f-eadc-464f-af0e-bacaebe7e59a" Nov 23 23:00:09.861974 kubelet[3551]: E1123 23:00:09.861881 3551 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-ssk7t" podUID="caf53fdf-fed6-43b9-8878-f61f79709f6c" Nov 23 23:00:10.194500 systemd[1]: Started sshd@25-172.31.24.18:22-139.178.68.195:34474.service - OpenSSH per-connection server daemon (139.178.68.195:34474). Nov 23 23:00:10.419510 sshd[5965]: Accepted publickey for core from 139.178.68.195 port 34474 ssh2: RSA SHA256:U+pqkkjujCqSWzNqlLC5FwY85x7/HjFaUhdBkqR7ZEA Nov 23 23:00:10.422484 sshd-session[5965]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 23:00:10.432658 systemd-logind[1977]: New session 26 of user core. Nov 23 23:00:10.444953 systemd[1]: Started session-26.scope - Session 26 of User core. 
Nov 23 23:00:10.793509 sshd[5968]: Connection closed by 139.178.68.195 port 34474 Nov 23 23:00:10.796903 sshd-session[5965]: pam_unix(sshd:session): session closed for user core Nov 23 23:00:10.807490 systemd[1]: session-26.scope: Deactivated successfully. Nov 23 23:00:10.809561 systemd-logind[1977]: Session 26 logged out. Waiting for processes to exit. Nov 23 23:00:10.810879 systemd[1]: sshd@25-172.31.24.18:22-139.178.68.195:34474.service: Deactivated successfully. Nov 23 23:00:10.822944 systemd-logind[1977]: Removed session 26. Nov 23 23:00:10.863150 kubelet[3551]: E1123 23:00:10.862733 3551 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-dd89954d4-v45np" podUID="73ce4778-08c3-48f4-84c6-854c5b7e542f" Nov 23 23:00:11.859334 kubelet[3551]: E1123 23:00:11.859258 3551 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-8jgv8" podUID="bb092924-5640-4734-8a43-16aa063b77ae" Nov 23 23:00:13.859486 kubelet[3551]: E1123 23:00:13.859394 3551 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-786df89cbb-lrpdp" podUID="8dcbc37e-9145-4538-9ae4-0ee44fb84086" Nov 23 23:00:13.860885 kubelet[3551]: E1123 23:00:13.860742 3551 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-786df89cbb-lh757" podUID="e89989bc-946c-40b9-a2fe-b6be9daeb141" Nov 23 23:00:15.831742 systemd[1]: Started 
sshd@26-172.31.24.18:22-139.178.68.195:44872.service - OpenSSH per-connection server daemon (139.178.68.195:44872). Nov 23 23:00:16.039561 sshd[5982]: Accepted publickey for core from 139.178.68.195 port 44872 ssh2: RSA SHA256:U+pqkkjujCqSWzNqlLC5FwY85x7/HjFaUhdBkqR7ZEA Nov 23 23:00:16.043472 sshd-session[5982]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 23:00:16.055029 systemd-logind[1977]: New session 27 of user core. Nov 23 23:00:16.061899 systemd[1]: Started session-27.scope - Session 27 of User core. Nov 23 23:00:16.418101 sshd[5985]: Connection closed by 139.178.68.195 port 44872 Nov 23 23:00:16.419319 sshd-session[5982]: pam_unix(sshd:session): session closed for user core Nov 23 23:00:16.427049 systemd-logind[1977]: Session 27 logged out. Waiting for processes to exit. Nov 23 23:00:16.427689 systemd[1]: sshd@26-172.31.24.18:22-139.178.68.195:44872.service: Deactivated successfully. Nov 23 23:00:16.432855 systemd[1]: session-27.scope: Deactivated successfully. Nov 23 23:00:16.442071 systemd-logind[1977]: Removed session 27. Nov 23 23:00:21.461119 systemd[1]: Started sshd@27-172.31.24.18:22-139.178.68.195:57258.service - OpenSSH per-connection server daemon (139.178.68.195:57258). Nov 23 23:00:21.683865 sshd[6004]: Accepted publickey for core from 139.178.68.195 port 57258 ssh2: RSA SHA256:U+pqkkjujCqSWzNqlLC5FwY85x7/HjFaUhdBkqR7ZEA Nov 23 23:00:21.687950 sshd-session[6004]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 23:00:21.700055 systemd-logind[1977]: New session 28 of user core. Nov 23 23:00:21.709898 systemd[1]: Started session-28.scope - Session 28 of User core. Nov 23 23:00:21.996264 sshd[6007]: Connection closed by 139.178.68.195 port 57258 Nov 23 23:00:21.997146 sshd-session[6004]: pam_unix(sshd:session): session closed for user core Nov 23 23:00:22.006774 systemd[1]: sshd@27-172.31.24.18:22-139.178.68.195:57258.service: Deactivated successfully. Nov 23 23:00:22.014088 systemd[1]: session-28.scope: Deactivated successfully. Nov 23 23:00:22.020182 systemd-logind[1977]: Session 28 logged out. Waiting for processes to exit. Nov 23 23:00:22.024691 systemd-logind[1977]: Removed session 28. 
Nov 23 23:00:22.858162 kubelet[3551]: E1123 23:00:22.857995 3551 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7f8f8556cf-9rc8j" podUID="ed04d8fd-f316-436f-a1ef-e581bd3f494a" Nov 23 23:00:23.860613 kubelet[3551]: E1123 23:00:23.860498 3551 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-855cfc6487-7qv5g" podUID="1ce6cd5f-eadc-464f-af0e-bacaebe7e59a" Nov 23 23:00:23.862612 kubelet[3551]: E1123 23:00:23.862384 3551 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-ssk7t" podUID="caf53fdf-fed6-43b9-8878-f61f79709f6c" Nov 23 23:00:23.865865 containerd[1993]: time="2025-11-23T23:00:23.865799921Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 23 23:00:24.129836 containerd[1993]: time="2025-11-23T23:00:24.129678723Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:00:24.132050 containerd[1993]: time="2025-11-23T23:00:24.131850747Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 23 23:00:24.132050 containerd[1993]: time="2025-11-23T23:00:24.131989563Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 23 23:00:24.133278 kubelet[3551]: E1123 23:00:24.132770 3551 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 23 23:00:24.133278 kubelet[3551]: E1123 23:00:24.132836 3551 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 23 23:00:24.133278 kubelet[3551]: E1123 23:00:24.132951 3551 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-8jgv8_calico-system(bb092924-5640-4734-8a43-16aa063b77ae): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 23 23:00:24.133278 kubelet[3551]: E1123 23:00:24.132999 3551 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-8jgv8" podUID="bb092924-5640-4734-8a43-16aa063b77ae" Nov 23 23:00:24.862028 kubelet[3551]: E1123 23:00:24.861953 3551 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-786df89cbb-lh757" podUID="e89989bc-946c-40b9-a2fe-b6be9daeb141" Nov 23 23:00:24.863700 kubelet[3551]: E1123 23:00:24.863626 3551 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-786df89cbb-lrpdp" podUID="8dcbc37e-9145-4538-9ae4-0ee44fb84086" Nov 23 23:00:24.871805 containerd[1993]: time="2025-11-23T23:00:24.871727190Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 23 23:00:25.134819 containerd[1993]: time="2025-11-23T23:00:25.134500779Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:00:25.136938 containerd[1993]: time="2025-11-23T23:00:25.136762888Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not 
found" Nov 23 23:00:25.136938 containerd[1993]: time="2025-11-23T23:00:25.136880584Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 23 23:00:25.137244 kubelet[3551]: E1123 23:00:25.137187 3551 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 23 23:00:25.137362 kubelet[3551]: E1123 23:00:25.137277 3551 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 23 23:00:25.137941 kubelet[3551]: E1123 23:00:25.137457 3551 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-dd89954d4-v45np_calico-system(73ce4778-08c3-48f4-84c6-854c5b7e542f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 23 23:00:25.140140 containerd[1993]: time="2025-11-23T23:00:25.140073460Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 23 23:00:25.422163 containerd[1993]: time="2025-11-23T23:00:25.421867925Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:00:25.424166 containerd[1993]: time="2025-11-23T23:00:25.424093229Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 23 23:00:25.424512 containerd[1993]: time="2025-11-23T23:00:25.424153169Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 23 23:00:25.424825 kubelet[3551]: E1123 23:00:25.424739 3551 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 23 23:00:25.424944 kubelet[3551]: E1123 23:00:25.424831 3551 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 23 23:00:25.425188 kubelet[3551]: E1123 23:00:25.425003 3551 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-dd89954d4-v45np_calico-system(73ce4778-08c3-48f4-84c6-854c5b7e542f): ErrImagePull: rpc error: 
code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 23 23:00:25.425298 kubelet[3551]: E1123 23:00:25.425220 3551 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-dd89954d4-v45np" podUID="73ce4778-08c3-48f4-84c6-854c5b7e542f" Nov 23 23:00:33.309810 update_engine[1978]: I20251123 23:00:33.309722 1978 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Nov 23 23:00:33.309810 update_engine[1978]: I20251123 23:00:33.309800 1978 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Nov 23 23:00:33.310480 update_engine[1978]: I20251123 23:00:33.310253 1978 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Nov 23 23:00:33.312274 update_engine[1978]: I20251123 23:00:33.312216 1978 omaha_request_params.cc:62] Current group set to beta Nov 23 23:00:33.312653 update_engine[1978]: I20251123 23:00:33.312377 1978 update_attempter.cc:499] Already updated boot flags. Skipping. Nov 23 23:00:33.312653 update_engine[1978]: I20251123 23:00:33.312407 1978 update_attempter.cc:643] Scheduling an action processor start. Nov 23 23:00:33.312653 update_engine[1978]: I20251123 23:00:33.312443 1978 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Nov 23 23:00:33.312653 update_engine[1978]: I20251123 23:00:33.312500 1978 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Nov 23 23:00:33.312653 update_engine[1978]: I20251123 23:00:33.312641 1978 omaha_request_action.cc:271] Posting an Omaha request to disabled Nov 23 23:00:33.312928 update_engine[1978]: I20251123 23:00:33.312664 1978 omaha_request_action.cc:272] Request: Nov 23 23:00:33.312928 update_engine[1978]: Nov 23 23:00:33.312928 update_engine[1978]: Nov 23 23:00:33.312928 update_engine[1978]: Nov 23 23:00:33.312928 update_engine[1978]: Nov 23 23:00:33.312928 update_engine[1978]: Nov 23 23:00:33.312928 update_engine[1978]: Nov 23 23:00:33.312928 update_engine[1978]: Nov 23 23:00:33.312928 update_engine[1978]: Nov 23 23:00:33.312928 update_engine[1978]: I20251123 23:00:33.312678 1978 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Nov 23 23:00:33.313730 locksmithd[2033]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Nov 23 23:00:33.317786 update_engine[1978]: I20251123 23:00:33.317729 1978 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Nov 23 23:00:33.319261 update_engine[1978]: I20251123 23:00:33.319137 1978 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Nov 23 23:00:33.348045 update_engine[1978]: E20251123 23:00:33.347985 1978 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Nov 23 23:00:33.348306 update_engine[1978]: I20251123 23:00:33.348276 1978 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Nov 23 23:00:34.859260 containerd[1993]: time="2025-11-23T23:00:34.859066684Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 23 23:00:35.125700 containerd[1993]: time="2025-11-23T23:00:35.125530921Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:00:35.128040 containerd[1993]: time="2025-11-23T23:00:35.127951609Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 23 23:00:35.128157 containerd[1993]: time="2025-11-23T23:00:35.128112817Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 23 23:00:35.128477 kubelet[3551]: E1123 23:00:35.128411 3551 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 23 23:00:35.129726 kubelet[3551]: E1123 23:00:35.128487 3551 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 23 23:00:35.129726 kubelet[3551]: E1123 23:00:35.128622 3551 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-ssk7t_calico-system(caf53fdf-fed6-43b9-8878-f61f79709f6c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 23 23:00:35.130167 containerd[1993]: time="2025-11-23T23:00:35.130120693Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 23 23:00:35.458008 containerd[1993]: time="2025-11-23T23:00:35.457777047Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:00:35.460030 containerd[1993]: time="2025-11-23T23:00:35.459963975Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 23 23:00:35.460186 containerd[1993]: time="2025-11-23T23:00:35.460096947Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 23 23:00:35.460489 kubelet[3551]: E1123 23:00:35.460427 3551 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull 
and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 23 23:00:35.460618 kubelet[3551]: E1123 23:00:35.460499 3551 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 23 23:00:35.460696 kubelet[3551]: E1123 23:00:35.460645 3551 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-ssk7t_calico-system(caf53fdf-fed6-43b9-8878-f61f79709f6c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 23 23:00:35.460819 kubelet[3551]: E1123 23:00:35.460712 3551 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-ssk7t" podUID="caf53fdf-fed6-43b9-8878-f61f79709f6c" Nov 23 23:00:35.858580 containerd[1993]: time="2025-11-23T23:00:35.858523085Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 23 23:00:36.151275 containerd[1993]: time="2025-11-23T23:00:36.151116314Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:00:36.153437 containerd[1993]: time="2025-11-23T23:00:36.153362138Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 23 23:00:36.153524 containerd[1993]: time="2025-11-23T23:00:36.153485162Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 23 23:00:36.154056 kubelet[3551]: E1123 23:00:36.153753 3551 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" 
Nov 23 23:00:36.154056 kubelet[3551]: E1123 23:00:36.153818 3551 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Nov 23 23:00:36.154056 kubelet[3551]: E1123 23:00:36.153952 3551 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-7f8f8556cf-9rc8j_calico-system(ed04d8fd-f316-436f-a1ef-e581bd3f494a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError"
Nov 23 23:00:36.154056 kubelet[3551]: E1123 23:00:36.154006 3551 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7f8f8556cf-9rc8j" podUID="ed04d8fd-f316-436f-a1ef-e581bd3f494a"
Nov 23 23:00:36.861978 kubelet[3551]: E1123 23:00:36.861889 3551 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-8jgv8" podUID="bb092924-5640-4734-8a43-16aa063b77ae"
Nov 23 23:00:36.862435 containerd[1993]: time="2025-11-23T23:00:36.862385838Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Nov 23 23:00:36.864808 kubelet[3551]: E1123 23:00:36.864734 3551 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-dd89954d4-v45np" podUID="73ce4778-08c3-48f4-84c6-854c5b7e542f"
Nov 23 23:00:37.022373 systemd[1]: cri-containerd-892db5c1db1dd14050157abadf8ae310c0c8cde72d0084f87779ba918e5b7bfb.scope: Deactivated successfully.
Nov 23 23:00:37.022983 systemd[1]: cri-containerd-892db5c1db1dd14050157abadf8ae310c0c8cde72d0084f87779ba918e5b7bfb.scope: Consumed 28.902s CPU time, 106.6M memory peak.
Nov 23 23:00:37.029852 containerd[1993]: time="2025-11-23T23:00:37.029557491Z" level=info msg="received container exit event container_id:\"892db5c1db1dd14050157abadf8ae310c0c8cde72d0084f87779ba918e5b7bfb\" id:\"892db5c1db1dd14050157abadf8ae310c0c8cde72d0084f87779ba918e5b7bfb\" pid:3878 exit_status:1 exited_at:{seconds:1763938837 nanos:27569103}"
Nov 23 23:00:37.050186 systemd[1]: cri-containerd-8ce8c92ae8fb277f72216ff0aeaaec8b7aed3d280003e0d334fcaf908ea92b76.scope: Deactivated successfully.
Nov 23 23:00:37.050763 systemd[1]: cri-containerd-8ce8c92ae8fb277f72216ff0aeaaec8b7aed3d280003e0d334fcaf908ea92b76.scope: Consumed 7.197s CPU time, 59.2M memory peak, 144K read from disk.
Nov 23 23:00:37.067224 containerd[1993]: time="2025-11-23T23:00:37.067145115Z" level=info msg="received container exit event container_id:\"8ce8c92ae8fb277f72216ff0aeaaec8b7aed3d280003e0d334fcaf908ea92b76\" id:\"8ce8c92ae8fb277f72216ff0aeaaec8b7aed3d280003e0d334fcaf908ea92b76\" pid:3202 exit_status:1 exited_at:{seconds:1763938837 nanos:66818319}"
Nov 23 23:00:37.114851 containerd[1993]: time="2025-11-23T23:00:37.113750091Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Nov 23 23:00:37.117131 containerd[1993]: time="2025-11-23T23:00:37.117047031Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Nov 23 23:00:37.118896 containerd[1993]: time="2025-11-23T23:00:37.117187227Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77"
Nov 23 23:00:37.117622 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-892db5c1db1dd14050157abadf8ae310c0c8cde72d0084f87779ba918e5b7bfb-rootfs.mount: Deactivated successfully.
Nov 23 23:00:37.120362 kubelet[3551]: E1123 23:00:37.119230 3551 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Nov 23 23:00:37.120362 kubelet[3551]: E1123 23:00:37.119287 3551 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Nov 23 23:00:37.121035 kubelet[3551]: E1123 23:00:37.119552 3551 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-786df89cbb-lrpdp_calico-apiserver(8dcbc37e-9145-4538-9ae4-0ee44fb84086): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Nov 23 23:00:37.121931 kubelet[3551]: E1123 23:00:37.121213 3551 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-786df89cbb-lrpdp" podUID="8dcbc37e-9145-4538-9ae4-0ee44fb84086"
Nov 23 23:00:37.122109 containerd[1993]: time="2025-11-23T23:00:37.121409031Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Nov 23 23:00:37.143433 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8ce8c92ae8fb277f72216ff0aeaaec8b7aed3d280003e0d334fcaf908ea92b76-rootfs.mount: Deactivated successfully.
Nov 23 23:00:37.358036 containerd[1993]: time="2025-11-23T23:00:37.357859096Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Nov 23 23:00:37.362859 containerd[1993]: time="2025-11-23T23:00:37.362698552Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Nov 23 23:00:37.362859 containerd[1993]: time="2025-11-23T23:00:37.362780536Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77"
Nov 23 23:00:37.363071 kubelet[3551]: E1123 23:00:37.363002 3551 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Nov 23 23:00:37.363654 kubelet[3551]: E1123 23:00:37.363062 3551 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Nov 23 23:00:37.363654 kubelet[3551]: E1123 23:00:37.363162 3551 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-855cfc6487-7qv5g_calico-apiserver(1ce6cd5f-eadc-464f-af0e-bacaebe7e59a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Nov 23 23:00:37.363654 kubelet[3551]: E1123 23:00:37.363211 3551 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-855cfc6487-7qv5g" podUID="1ce6cd5f-eadc-464f-af0e-bacaebe7e59a"
Nov 23 23:00:37.716926 kubelet[3551]: I1123 23:00:37.716117 3551 scope.go:117] "RemoveContainer" containerID="892db5c1db1dd14050157abadf8ae310c0c8cde72d0084f87779ba918e5b7bfb"
Nov 23 23:00:37.720940 containerd[1993]: time="2025-11-23T23:00:37.720896298Z" level=info msg="CreateContainer within sandbox \"5ef1efc101a0fd6c541d0feb1d9eaa220df0384e3346ac422a08521e527498fa\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}"
Nov 23 23:00:37.722271 kubelet[3551]: I1123 23:00:37.722220 3551 scope.go:117] "RemoveContainer" containerID="8ce8c92ae8fb277f72216ff0aeaaec8b7aed3d280003e0d334fcaf908ea92b76"
Nov 23 23:00:37.725556 containerd[1993]: time="2025-11-23T23:00:37.725506266Z" level=info msg="CreateContainer within sandbox \"cff99bde8c1b6009752e3877474fdf0e2c8b9c6dd16d3e06f75bdd3b513c7907\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Nov 23 23:00:37.740195 containerd[1993]: time="2025-11-23T23:00:37.740132226Z" level=info msg="Container ec9e040f3b7cc2a11da373793cd1140bc1dee8c4fab036b84c0b15323400dcb0: CDI devices from CRI Config.CDIDevices: []"
Nov 23 23:00:37.751219 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount549529075.mount: Deactivated successfully.
Nov 23 23:00:37.761988 containerd[1993]: time="2025-11-23T23:00:37.761919006Z" level=info msg="Container c35b6905b1562a9ee332fae323b8aec49cc66fd46912ed3fd223fa5203812dfb: CDI devices from CRI Config.CDIDevices: []"
Nov 23 23:00:37.768410 containerd[1993]: time="2025-11-23T23:00:37.768229206Z" level=info msg="CreateContainer within sandbox \"5ef1efc101a0fd6c541d0feb1d9eaa220df0384e3346ac422a08521e527498fa\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"ec9e040f3b7cc2a11da373793cd1140bc1dee8c4fab036b84c0b15323400dcb0\""
Nov 23 23:00:37.769349 containerd[1993]: time="2025-11-23T23:00:37.769290918Z" level=info msg="StartContainer for \"ec9e040f3b7cc2a11da373793cd1140bc1dee8c4fab036b84c0b15323400dcb0\""
Nov 23 23:00:37.770898 containerd[1993]: time="2025-11-23T23:00:37.770835330Z" level=info msg="connecting to shim ec9e040f3b7cc2a11da373793cd1140bc1dee8c4fab036b84c0b15323400dcb0" address="unix:///run/containerd/s/4190ac2d4d0a0111daf8baa1d37e754228311c3c93a0c4e79b25a8d20255586e" protocol=ttrpc version=3
Nov 23 23:00:37.781136 containerd[1993]: time="2025-11-23T23:00:37.781069974Z" level=info msg="CreateContainer within sandbox \"cff99bde8c1b6009752e3877474fdf0e2c8b9c6dd16d3e06f75bdd3b513c7907\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"c35b6905b1562a9ee332fae323b8aec49cc66fd46912ed3fd223fa5203812dfb\""
Nov 23 23:00:37.782226 containerd[1993]: time="2025-11-23T23:00:37.782184378Z" level=info msg="StartContainer for \"c35b6905b1562a9ee332fae323b8aec49cc66fd46912ed3fd223fa5203812dfb\""
Nov 23 23:00:37.785239 containerd[1993]: time="2025-11-23T23:00:37.785156682Z" level=info msg="connecting to shim c35b6905b1562a9ee332fae323b8aec49cc66fd46912ed3fd223fa5203812dfb" address="unix:///run/containerd/s/93250cfb3b06685c57339fc4a7bec5c4e9c4415c04b506dd50b066d260095a96" protocol=ttrpc version=3
Nov 23 23:00:37.815988 systemd[1]: Started cri-containerd-ec9e040f3b7cc2a11da373793cd1140bc1dee8c4fab036b84c0b15323400dcb0.scope - libcontainer container ec9e040f3b7cc2a11da373793cd1140bc1dee8c4fab036b84c0b15323400dcb0.
Nov 23 23:00:37.839437 systemd[1]: Started cri-containerd-c35b6905b1562a9ee332fae323b8aec49cc66fd46912ed3fd223fa5203812dfb.scope - libcontainer container c35b6905b1562a9ee332fae323b8aec49cc66fd46912ed3fd223fa5203812dfb.
Nov 23 23:00:37.922681 containerd[1993]: time="2025-11-23T23:00:37.922633831Z" level=info msg="StartContainer for \"ec9e040f3b7cc2a11da373793cd1140bc1dee8c4fab036b84c0b15323400dcb0\" returns successfully"
Nov 23 23:00:37.965975 containerd[1993]: time="2025-11-23T23:00:37.965798011Z" level=info msg="StartContainer for \"c35b6905b1562a9ee332fae323b8aec49cc66fd46912ed3fd223fa5203812dfb\" returns successfully"
Nov 23 23:00:38.862144 containerd[1993]: time="2025-11-23T23:00:38.862081424Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Nov 23 23:00:39.123100 containerd[1993]: time="2025-11-23T23:00:39.122937485Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Nov 23 23:00:39.125602 containerd[1993]: time="2025-11-23T23:00:39.125498549Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Nov 23 23:00:39.125728 containerd[1993]: time="2025-11-23T23:00:39.125654729Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77"
Nov 23 23:00:39.126041 kubelet[3551]: E1123 23:00:39.125934 3551 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Nov 23 23:00:39.126524 kubelet[3551]: E1123 23:00:39.126069 3551 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Nov 23 23:00:39.126524 kubelet[3551]: E1123 23:00:39.126289 3551 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-786df89cbb-lh757_calico-apiserver(e89989bc-946c-40b9-a2fe-b6be9daeb141): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Nov 23 23:00:39.126524 kubelet[3551]: E1123 23:00:39.126364 3551 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-786df89cbb-lh757" podUID="e89989bc-946c-40b9-a2fe-b6be9daeb141"
Nov 23 23:00:39.404259 kubelet[3551]: E1123 23:00:39.404088 3551 controller.go:195] "Failed to update lease" err="Put \"https://172.31.24.18:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-24-18?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Nov 23 23:00:41.097366 systemd[1]: cri-containerd-6dc7aa67f09d225e29a8bab16a0a208debe171d3ca66b29b555af20ec20496ed.scope: Deactivated successfully.
Nov 23 23:00:41.098194 systemd[1]: cri-containerd-6dc7aa67f09d225e29a8bab16a0a208debe171d3ca66b29b555af20ec20496ed.scope: Consumed 3.520s CPU time, 23M memory peak.
Nov 23 23:00:41.104780 containerd[1993]: time="2025-11-23T23:00:41.104560807Z" level=info msg="received container exit event container_id:\"6dc7aa67f09d225e29a8bab16a0a208debe171d3ca66b29b555af20ec20496ed\" id:\"6dc7aa67f09d225e29a8bab16a0a208debe171d3ca66b29b555af20ec20496ed\" pid:3210 exit_status:1 exited_at:{seconds:1763938841 nanos:103431103}"
Nov 23 23:00:41.157045 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6dc7aa67f09d225e29a8bab16a0a208debe171d3ca66b29b555af20ec20496ed-rootfs.mount: Deactivated successfully.
Nov 23 23:00:41.755993 kubelet[3551]: I1123 23:00:41.755943 3551 scope.go:117] "RemoveContainer" containerID="6dc7aa67f09d225e29a8bab16a0a208debe171d3ca66b29b555af20ec20496ed"
Nov 23 23:00:41.761942 containerd[1993]: time="2025-11-23T23:00:41.761893006Z" level=info msg="CreateContainer within sandbox \"dab19b24e7cf166d59914a0a2b49c862e4ddd144bc99b60c4b4404d57f603ad7\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Nov 23 23:00:41.787644 containerd[1993]: time="2025-11-23T23:00:41.785868490Z" level=info msg="Container 4d3398a6c2d9db80dd10b72eaaa16452c31be6329178f2f16f7db13439454d6d: CDI devices from CRI Config.CDIDevices: []"
Nov 23 23:00:41.814297 containerd[1993]: time="2025-11-23T23:00:41.814238314Z" level=info msg="CreateContainer within sandbox \"dab19b24e7cf166d59914a0a2b49c862e4ddd144bc99b60c4b4404d57f603ad7\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"4d3398a6c2d9db80dd10b72eaaa16452c31be6329178f2f16f7db13439454d6d\""
Nov 23 23:00:41.816659 containerd[1993]: time="2025-11-23T23:00:41.815412262Z" level=info msg="StartContainer for \"4d3398a6c2d9db80dd10b72eaaa16452c31be6329178f2f16f7db13439454d6d\""
Nov 23 23:00:41.817952 containerd[1993]: time="2025-11-23T23:00:41.817842418Z" level=info msg="connecting to shim 4d3398a6c2d9db80dd10b72eaaa16452c31be6329178f2f16f7db13439454d6d" address="unix:///run/containerd/s/a505a269beec9df41ab994e475b7fcc45c6f4ea063af27fdfec761bd3959b464" protocol=ttrpc version=3
Nov 23 23:00:41.887929 systemd[1]: Started cri-containerd-4d3398a6c2d9db80dd10b72eaaa16452c31be6329178f2f16f7db13439454d6d.scope - libcontainer container 4d3398a6c2d9db80dd10b72eaaa16452c31be6329178f2f16f7db13439454d6d.
Nov 23 23:00:41.992836 containerd[1993]: time="2025-11-23T23:00:41.992774279Z" level=info msg="StartContainer for \"4d3398a6c2d9db80dd10b72eaaa16452c31be6329178f2f16f7db13439454d6d\" returns successfully"
Nov 23 23:00:43.307719 update_engine[1978]: I20251123 23:00:43.307622 1978 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Nov 23 23:00:43.308329 update_engine[1978]: I20251123 23:00:43.307746 1978 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Nov 23 23:00:43.308329 update_engine[1978]: I20251123 23:00:43.308274 1978 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Nov 23 23:00:43.309839 update_engine[1978]: E20251123 23:00:43.309686 1978 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Nov 23 23:00:43.309839 update_engine[1978]: I20251123 23:00:43.309796 1978 libcurl_http_fetcher.cc:283] No HTTP response, retry 2
Nov 23 23:00:47.858600 kubelet[3551]: E1123 23:00:47.858414 3551 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-786df89cbb-lrpdp" podUID="8dcbc37e-9145-4538-9ae4-0ee44fb84086"
Nov 23 23:00:47.859883 kubelet[3551]: E1123 23:00:47.859792 3551 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-ssk7t" podUID="caf53fdf-fed6-43b9-8878-f61f79709f6c"
Nov 23 23:00:48.858026 kubelet[3551]: E1123 23:00:48.857879 3551 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7f8f8556cf-9rc8j" podUID="ed04d8fd-f316-436f-a1ef-e581bd3f494a"
Nov 23 23:00:49.405085 kubelet[3551]: E1123 23:00:49.404740 3551 controller.go:195] "Failed to update lease" err="Put \"https://172.31.24.18:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-24-18?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Nov 23 23:00:49.528265 systemd[1]: cri-containerd-ec9e040f3b7cc2a11da373793cd1140bc1dee8c4fab036b84c0b15323400dcb0.scope: Deactivated successfully.
Nov 23 23:00:49.531851 containerd[1993]: time="2025-11-23T23:00:49.531480101Z" level=info msg="received container exit event container_id:\"ec9e040f3b7cc2a11da373793cd1140bc1dee8c4fab036b84c0b15323400dcb0\" id:\"ec9e040f3b7cc2a11da373793cd1140bc1dee8c4fab036b84c0b15323400dcb0\" pid:6116 exit_status:1 exited_at:{seconds:1763938849 nanos:531109745}"
Nov 23 23:00:49.575153 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ec9e040f3b7cc2a11da373793cd1140bc1dee8c4fab036b84c0b15323400dcb0-rootfs.mount: Deactivated successfully.
Nov 23 23:00:49.793562 kubelet[3551]: I1123 23:00:49.792997 3551 scope.go:117] "RemoveContainer" containerID="892db5c1db1dd14050157abadf8ae310c0c8cde72d0084f87779ba918e5b7bfb"
Nov 23 23:00:49.793859 kubelet[3551]: I1123 23:00:49.793802 3551 scope.go:117] "RemoveContainer" containerID="ec9e040f3b7cc2a11da373793cd1140bc1dee8c4fab036b84c0b15323400dcb0"
Nov 23 23:00:49.794280 kubelet[3551]: E1123 23:00:49.794195 3551 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tigera-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=tigera-operator pod=tigera-operator-65cdcdfd6d-bmnh8_tigera-operator(a1946201-d01e-494d-b4b6-2663716e7c01)\"" pod="tigera-operator/tigera-operator-65cdcdfd6d-bmnh8" podUID="a1946201-d01e-494d-b4b6-2663716e7c01"
Nov 23 23:00:49.797940 containerd[1993]: time="2025-11-23T23:00:49.797547042Z" level=info msg="RemoveContainer for \"892db5c1db1dd14050157abadf8ae310c0c8cde72d0084f87779ba918e5b7bfb\""
Nov 23 23:00:49.808218 containerd[1993]: time="2025-11-23T23:00:49.808107174Z" level=info msg="RemoveContainer for \"892db5c1db1dd14050157abadf8ae310c0c8cde72d0084f87779ba918e5b7bfb\" returns successfully"
Nov 23 23:00:49.858830 kubelet[3551]: E1123 23:00:49.858737 3551 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-855cfc6487-7qv5g" podUID="1ce6cd5f-eadc-464f-af0e-bacaebe7e59a"
Nov 23 23:00:50.859555 kubelet[3551]: E1123 23:00:50.859296 3551 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-786df89cbb-lh757" podUID="e89989bc-946c-40b9-a2fe-b6be9daeb141"
Nov 23 23:00:50.859555 kubelet[3551]: E1123 23:00:50.859495 3551 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-8jgv8" podUID="bb092924-5640-4734-8a43-16aa063b77ae"
Nov 23 23:00:50.861988 kubelet[3551]: E1123 23:00:50.861767 3551 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-dd89954d4-v45np" podUID="73ce4778-08c3-48f4-84c6-854c5b7e542f"
Nov 23 23:00:53.309670 update_engine[1978]: I20251123 23:00:53.309083 1978 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Nov 23 23:00:53.309670 update_engine[1978]: I20251123 23:00:53.309191 1978 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Nov 23 23:00:53.310438 update_engine[1978]: I20251123 23:00:53.309806 1978 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Nov 23 23:00:53.311036 update_engine[1978]: E20251123 23:00:53.310969 1978 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Nov 23 23:00:53.311119 update_engine[1978]: I20251123 23:00:53.311097 1978 libcurl_http_fetcher.cc:283] No HTTP response, retry 3