Nov 5 23:43:05.139785 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083]
Nov 5 23:43:05.139829 kernel: Linux version 6.12.54-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT Wed Nov 5 22:12:41 -00 2025
Nov 5 23:43:05.139854 kernel: KASLR disabled due to lack of seed
Nov 5 23:43:05.139870 kernel: efi: EFI v2.7 by EDK II
Nov 5 23:43:05.139885 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7a731a98 MEMRESERVE=0x78557598
Nov 5 23:43:05.139900 kernel: secureboot: Secure boot disabled
Nov 5 23:43:05.139917 kernel: ACPI: Early table checksum verification disabled
Nov 5 23:43:05.139932 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON)
Nov 5 23:43:05.139947 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013)
Nov 5 23:43:05.139962 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001)
Nov 5 23:43:05.139978 kernel: ACPI: DSDT 0x0000000078640000 00159D (v02 AMAZON AMZNDSDT 00000001 INTL 20160527)
Nov 5 23:43:05.139997 kernel: ACPI: FACS 0x0000000078630000 000040
Nov 5 23:43:05.140012 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Nov 5 23:43:05.140027 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001)
Nov 5 23:43:05.140055 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001)
Nov 5 23:43:05.140078 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001)
Nov 5 23:43:05.140100 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Nov 5 23:43:05.140117 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001)
Nov 5 23:43:05.140133 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001)
Nov 5 23:43:05.140149 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200
Nov 5 23:43:05.140165 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200')
Nov 5 23:43:05.140180 kernel: printk: legacy bootconsole [uart0] enabled
Nov 5 23:43:05.140196 kernel: ACPI: Use ACPI SPCR as default console: No
Nov 5 23:43:05.140212 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff]
Nov 5 23:43:05.140228 kernel: NODE_DATA(0) allocated [mem 0x4b584da00-0x4b5854fff]
Nov 5 23:43:05.140244 kernel: Zone ranges:
Nov 5 23:43:05.140259 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Nov 5 23:43:05.140279 kernel: DMA32 empty
Nov 5 23:43:05.140295 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff]
Nov 5 23:43:05.140310 kernel: Device empty
Nov 5 23:43:05.140326 kernel: Movable zone start for each node
Nov 5 23:43:05.140341 kernel: Early memory node ranges
Nov 5 23:43:05.140357 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff]
Nov 5 23:43:05.140372 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff]
Nov 5 23:43:05.140388 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff]
Nov 5 23:43:05.140403 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff]
Nov 5 23:43:05.140419 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff]
Nov 5 23:43:05.140434 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff]
Nov 5 23:43:05.140450 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff]
Nov 5 23:43:05.140470 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff]
Nov 5 23:43:05.140493 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff]
Nov 5 23:43:05.140510 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges
Nov 5 23:43:05.140556 kernel: cma: Reserved 16 MiB at 0x000000007f000000 on node -1
Nov 5 23:43:05.140574 kernel: psci: probing for conduit method from ACPI.
Nov 5 23:43:05.140598 kernel: psci: PSCIv1.0 detected in firmware.
Nov 5 23:43:05.140615 kernel: psci: Using standard PSCI v0.2 function IDs
Nov 5 23:43:05.140632 kernel: psci: Trusted OS migration not required
Nov 5 23:43:05.140649 kernel: psci: SMC Calling Convention v1.1
Nov 5 23:43:05.140666 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000001)
Nov 5 23:43:05.140683 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168
Nov 5 23:43:05.140700 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096
Nov 5 23:43:05.140717 kernel: pcpu-alloc: [0] 0 [0] 1
Nov 5 23:43:05.140734 kernel: Detected PIPT I-cache on CPU0
Nov 5 23:43:05.140750 kernel: CPU features: detected: GIC system register CPU interface
Nov 5 23:43:05.140767 kernel: CPU features: detected: Spectre-v2
Nov 5 23:43:05.140787 kernel: CPU features: detected: Spectre-v3a
Nov 5 23:43:05.140804 kernel: CPU features: detected: Spectre-BHB
Nov 5 23:43:05.140821 kernel: CPU features: detected: ARM erratum 1742098
Nov 5 23:43:05.140837 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923
Nov 5 23:43:05.140853 kernel: alternatives: applying boot alternatives
Nov 5 23:43:05.140874 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=daaa5e51b65832b359eb98eae08cea627c611d87c128e20a83873de5c8d1aca5
Nov 5 23:43:05.140893 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Nov 5 23:43:05.140910 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Nov 5 23:43:05.140927 kernel: Fallback order for Node 0: 0
Nov 5 23:43:05.140943 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1007616
Nov 5 23:43:05.140960 kernel: Policy zone: Normal
Nov 5 23:43:05.140982 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Nov 5 23:43:05.140998 kernel: software IO TLB: area num 2.
Nov 5 23:43:05.141015 kernel: software IO TLB: mapped [mem 0x000000006c5f0000-0x00000000705f0000] (64MB)
Nov 5 23:43:05.141031 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Nov 5 23:43:05.141048 kernel: rcu: Preemptible hierarchical RCU implementation.
Nov 5 23:43:05.141065 kernel: rcu: RCU event tracing is enabled.
Nov 5 23:43:05.141083 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Nov 5 23:43:05.141100 kernel: Trampoline variant of Tasks RCU enabled.
Nov 5 23:43:05.141118 kernel: Tracing variant of Tasks RCU enabled.
Nov 5 23:43:05.141134 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Nov 5 23:43:05.141151 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Nov 5 23:43:05.141172 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Nov 5 23:43:05.141189 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Nov 5 23:43:05.141205 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Nov 5 23:43:05.141222 kernel: GICv3: 96 SPIs implemented
Nov 5 23:43:05.141238 kernel: GICv3: 0 Extended SPIs implemented
Nov 5 23:43:05.141255 kernel: Root IRQ handler: gic_handle_irq
Nov 5 23:43:05.141271 kernel: GICv3: GICv3 features: 16 PPIs
Nov 5 23:43:05.141288 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0
Nov 5 23:43:05.141304 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000
Nov 5 23:43:05.141320 kernel: ITS [mem 0x10080000-0x1009ffff]
Nov 5 23:43:05.141337 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000f0000 (indirect, esz 8, psz 64K, shr 1)
Nov 5 23:43:05.141355 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @400100000 (flat, esz 8, psz 64K, shr 1)
Nov 5 23:43:05.141376 kernel: GICv3: using LPI property table @0x0000000400110000
Nov 5 23:43:05.141393 kernel: ITS: Using hypervisor restricted LPI range [128]
Nov 5 23:43:05.141409 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000400120000
Nov 5 23:43:05.141426 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Nov 5 23:43:05.141443 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt).
Nov 5 23:43:05.141459 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns
Nov 5 23:43:05.141476 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns
Nov 5 23:43:05.141494 kernel: Console: colour dummy device 80x25
Nov 5 23:43:05.141511 kernel: printk: legacy console [tty1] enabled
Nov 5 23:43:05.141577 kernel: ACPI: Core revision 20240827
Nov 5 23:43:05.141605 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333)
Nov 5 23:43:05.141622 kernel: pid_max: default: 32768 minimum: 301
Nov 5 23:43:05.141640 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Nov 5 23:43:05.141657 kernel: landlock: Up and running.
Nov 5 23:43:05.141673 kernel: SELinux: Initializing.
Nov 5 23:43:05.141690 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Nov 5 23:43:05.141707 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Nov 5 23:43:05.141724 kernel: rcu: Hierarchical SRCU implementation.
Nov 5 23:43:05.141741 kernel: rcu: Max phase no-delay instances is 400.
Nov 5 23:43:05.141761 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Nov 5 23:43:05.141779 kernel: Remapping and enabling EFI services.
Nov 5 23:43:05.141795 kernel: smp: Bringing up secondary CPUs ...
Nov 5 23:43:05.141812 kernel: Detected PIPT I-cache on CPU1
Nov 5 23:43:05.141829 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000
Nov 5 23:43:05.141845 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000400130000
Nov 5 23:43:05.141862 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083]
Nov 5 23:43:05.141879 kernel: smp: Brought up 1 node, 2 CPUs
Nov 5 23:43:05.141896 kernel: SMP: Total of 2 processors activated.
Nov 5 23:43:05.141917 kernel: CPU: All CPU(s) started at EL1
Nov 5 23:43:05.141946 kernel: CPU features: detected: 32-bit EL0 Support
Nov 5 23:43:05.141964 kernel: CPU features: detected: 32-bit EL1 Support
Nov 5 23:43:05.141985 kernel: CPU features: detected: CRC32 instructions
Nov 5 23:43:05.142002 kernel: alternatives: applying system-wide alternatives
Nov 5 23:43:05.142021 kernel: Memory: 3796972K/4030464K available (11136K kernel code, 2450K rwdata, 9076K rodata, 38976K init, 1038K bss, 212148K reserved, 16384K cma-reserved)
Nov 5 23:43:05.142039 kernel: devtmpfs: initialized
Nov 5 23:43:05.142057 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Nov 5 23:43:05.142080 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Nov 5 23:43:05.142098 kernel: 17040 pages in range for non-PLT usage
Nov 5 23:43:05.142116 kernel: 508560 pages in range for PLT usage
Nov 5 23:43:05.142134 kernel: pinctrl core: initialized pinctrl subsystem
Nov 5 23:43:05.142151 kernel: SMBIOS 3.0.0 present.
Nov 5 23:43:05.142169 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018
Nov 5 23:43:05.142186 kernel: DMI: Memory slots populated: 0/0
Nov 5 23:43:05.142203 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Nov 5 23:43:05.142221 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Nov 5 23:43:05.142243 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Nov 5 23:43:05.142261 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Nov 5 23:43:05.142278 kernel: audit: initializing netlink subsys (disabled)
Nov 5 23:43:05.142296 kernel: audit: type=2000 audit(0.244:1): state=initialized audit_enabled=0 res=1
Nov 5 23:43:05.142313 kernel: thermal_sys: Registered thermal governor 'step_wise'
Nov 5 23:43:05.142331 kernel: cpuidle: using governor menu
Nov 5 23:43:05.142349 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Nov 5 23:43:05.142366 kernel: ASID allocator initialised with 65536 entries
Nov 5 23:43:05.142384 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Nov 5 23:43:05.142406 kernel: Serial: AMBA PL011 UART driver
Nov 5 23:43:05.142424 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Nov 5 23:43:05.142442 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Nov 5 23:43:05.142460 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Nov 5 23:43:05.142478 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Nov 5 23:43:05.142495 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Nov 5 23:43:05.142538 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Nov 5 23:43:05.142564 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Nov 5 23:43:05.142583 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Nov 5 23:43:05.142609 kernel: ACPI: Added _OSI(Module Device)
Nov 5 23:43:05.142627 kernel: ACPI: Added _OSI(Processor Device)
Nov 5 23:43:05.142645 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Nov 5 23:43:05.142663 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Nov 5 23:43:05.142681 kernel: ACPI: Interpreter enabled
Nov 5 23:43:05.142699 kernel: ACPI: Using GIC for interrupt routing
Nov 5 23:43:05.142717 kernel: ACPI: MCFG table detected, 1 entries
Nov 5 23:43:05.142735 kernel: ACPI: CPU0 has been hot-added
Nov 5 23:43:05.142753 kernel: ACPI: CPU1 has been hot-added
Nov 5 23:43:05.142775 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-0f])
Nov 5 23:43:05.143085 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Nov 5 23:43:05.143772 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Nov 5 23:43:05.143957 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Nov 5 23:43:05.144157 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x20ffffff] reserved by PNP0C02:00
Nov 5 23:43:05.144335 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x20ffffff] for [bus 00-0f]
Nov 5 23:43:05.144360 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window]
Nov 5 23:43:05.144388 kernel: acpiphp: Slot [1] registered
Nov 5 23:43:05.144406 kernel: acpiphp: Slot [2] registered
Nov 5 23:43:05.144424 kernel: acpiphp: Slot [3] registered
Nov 5 23:43:05.144442 kernel: acpiphp: Slot [4] registered
Nov 5 23:43:05.144459 kernel: acpiphp: Slot [5] registered
Nov 5 23:43:05.144477 kernel: acpiphp: Slot [6] registered
Nov 5 23:43:05.144495 kernel: acpiphp: Slot [7] registered
Nov 5 23:43:05.144512 kernel: acpiphp: Slot [8] registered
Nov 5 23:43:05.144897 kernel: acpiphp: Slot [9] registered
Nov 5 23:43:05.144922 kernel: acpiphp: Slot [10] registered
Nov 5 23:43:05.144940 kernel: acpiphp: Slot [11] registered
Nov 5 23:43:05.144958 kernel: acpiphp: Slot [12] registered
Nov 5 23:43:05.144975 kernel: acpiphp: Slot [13] registered
Nov 5 23:43:05.144993 kernel: acpiphp: Slot [14] registered
Nov 5 23:43:05.145010 kernel: acpiphp: Slot [15] registered
Nov 5 23:43:05.145027 kernel: acpiphp: Slot [16] registered
Nov 5 23:43:05.145046 kernel: acpiphp: Slot [17] registered
Nov 5 23:43:05.145064 kernel: acpiphp: Slot [18] registered
Nov 5 23:43:05.145082 kernel: acpiphp: Slot [19] registered
Nov 5 23:43:05.145105 kernel: acpiphp: Slot [20] registered
Nov 5 23:43:05.145124 kernel: acpiphp: Slot [21] registered
Nov 5 23:43:05.145141 kernel: acpiphp: Slot [22] registered
Nov 5 23:43:05.145158 kernel: acpiphp: Slot [23] registered
Nov 5 23:43:05.145176 kernel: acpiphp: Slot [24] registered
Nov 5 23:43:05.145193 kernel: acpiphp: Slot [25] registered
Nov 5 23:43:05.145211 kernel: acpiphp: Slot [26] registered
Nov 5 23:43:05.145229 kernel: acpiphp: Slot [27] registered
Nov 5 23:43:05.145247 kernel: acpiphp: Slot [28] registered
Nov 5 23:43:05.145268 kernel: acpiphp: Slot [29] registered
Nov 5 23:43:05.145286 kernel: acpiphp: Slot [30] registered
Nov 5 23:43:05.145304 kernel: acpiphp: Slot [31] registered
Nov 5 23:43:05.145321 kernel: PCI host bridge to bus 0000:00
Nov 5 23:43:05.146325 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window]
Nov 5 23:43:05.146596 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Nov 5 23:43:05.146777 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window]
Nov 5 23:43:05.146945 kernel: pci_bus 0000:00: root bus resource [bus 00-0f]
Nov 5 23:43:05.147186 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000 conventional PCI endpoint
Nov 5 23:43:05.147412 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003 conventional PCI endpoint
Nov 5 23:43:05.147681 kernel: pci 0000:00:01.0: BAR 0 [mem 0x80118000-0x80118fff]
Nov 5 23:43:05.147895 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 PCIe Root Complex Integrated Endpoint
Nov 5 23:43:05.148097 kernel: pci 0000:00:04.0: BAR 0 [mem 0x80114000-0x80117fff]
Nov 5 23:43:05.148286 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold
Nov 5 23:43:05.148491 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 PCIe Root Complex Integrated Endpoint
Nov 5 23:43:05.148842 kernel: pci 0000:00:05.0: BAR 0 [mem 0x80110000-0x80113fff]
Nov 5 23:43:05.149033 kernel: pci 0000:00:05.0: BAR 2 [mem 0x80000000-0x800fffff pref]
Nov 5 23:43:05.149227 kernel: pci 0000:00:05.0: BAR 4 [mem 0x80100000-0x8010ffff]
Nov 5 23:43:05.149413 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold
Nov 5 23:43:05.149631 kernel: pci 0000:00:05.0: BAR 2 [mem 0x80000000-0x800fffff pref]: assigned
Nov 5 23:43:05.149816 kernel: pci 0000:00:05.0: BAR 4 [mem 0x80100000-0x8010ffff]: assigned
Nov 5 23:43:05.150009 kernel: pci 0000:00:04.0: BAR 0 [mem 0x80110000-0x80113fff]: assigned
Nov 5 23:43:05.150194 kernel: pci 0000:00:05.0: BAR 0 [mem 0x80114000-0x80117fff]: assigned
Nov 5 23:43:05.150388 kernel: pci 0000:00:01.0: BAR 0 [mem 0x80118000-0x80118fff]: assigned
Nov 5 23:43:05.150652 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window]
Nov 5 23:43:05.150871 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Nov 5 23:43:05.151069 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window]
Nov 5 23:43:05.151101 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Nov 5 23:43:05.151133 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Nov 5 23:43:05.151305 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Nov 5 23:43:05.151328 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Nov 5 23:43:05.151347 kernel: iommu: Default domain type: Translated
Nov 5 23:43:05.151366 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Nov 5 23:43:05.151386 kernel: efivars: Registered efivars operations
Nov 5 23:43:05.151405 kernel: vgaarb: loaded
Nov 5 23:43:05.151424 kernel: clocksource: Switched to clocksource arch_sys_counter
Nov 5 23:43:05.151443 kernel: VFS: Disk quotas dquot_6.6.0
Nov 5 23:43:05.151471 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Nov 5 23:43:05.151491 kernel: pnp: PnP ACPI init
Nov 5 23:43:05.151845 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved
Nov 5 23:43:05.151886 kernel: pnp: PnP ACPI: found 1 devices
Nov 5 23:43:05.151906 kernel: NET: Registered PF_INET protocol family
Nov 5 23:43:05.151926 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Nov 5 23:43:05.151947 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Nov 5 23:43:05.151966 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Nov 5 23:43:05.151985 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Nov 5 23:43:05.152014 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Nov 5 23:43:05.152033 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Nov 5 23:43:05.152052 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Nov 5 23:43:05.152072 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Nov 5 23:43:05.152092 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Nov 5 23:43:05.152111 kernel: PCI: CLS 0 bytes, default 64
Nov 5 23:43:05.152131 kernel: kvm [1]: HYP mode not available
Nov 5 23:43:05.152148 kernel: Initialise system trusted keyrings
Nov 5 23:43:05.152166 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Nov 5 23:43:05.152189 kernel: Key type asymmetric registered
Nov 5 23:43:05.152208 kernel: Asymmetric key parser 'x509' registered
Nov 5 23:43:05.152226 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Nov 5 23:43:05.152245 kernel: io scheduler mq-deadline registered
Nov 5 23:43:05.152265 kernel: io scheduler kyber registered
Nov 5 23:43:05.152284 kernel: io scheduler bfq registered
Nov 5 23:43:05.152574 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered
Nov 5 23:43:05.152612 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Nov 5 23:43:05.152640 kernel: ACPI: button: Power Button [PWRB]
Nov 5 23:43:05.152660 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1
Nov 5 23:43:05.152680 kernel: ACPI: button: Sleep Button [SLPB]
Nov 5 23:43:05.152700 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Nov 5 23:43:05.152721 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Nov 5 23:43:05.152944 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012)
Nov 5 23:43:05.152974 kernel: printk: legacy console [ttyS0] disabled
Nov 5 23:43:05.152995 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A
Nov 5 23:43:05.153014 kernel: printk: legacy console [ttyS0] enabled
Nov 5 23:43:05.153040 kernel: printk: legacy bootconsole [uart0] disabled
Nov 5 23:43:05.153060 kernel: thunder_xcv, ver 1.0
Nov 5 23:43:05.153081 kernel: thunder_bgx, ver 1.0
Nov 5 23:43:05.153099 kernel: nicpf, ver 1.0
Nov 5 23:43:05.153118 kernel: nicvf, ver 1.0
Nov 5 23:43:05.153351 kernel: rtc-efi rtc-efi.0: registered as rtc0
Nov 5 23:43:05.153575 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-11-05T23:43:04 UTC (1762386184)
Nov 5 23:43:05.153607 kernel: hid: raw HID events driver (C) Jiri Kosina
Nov 5 23:43:05.153637 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 (0,80000003) counters available
Nov 5 23:43:05.153656 kernel: NET: Registered PF_INET6 protocol family
Nov 5 23:43:05.153674 kernel: watchdog: NMI not fully supported
Nov 5 23:43:05.153693 kernel: watchdog: Hard watchdog permanently disabled
Nov 5 23:43:05.153710 kernel: Segment Routing with IPv6
Nov 5 23:43:05.153728 kernel: In-situ OAM (IOAM) with IPv6
Nov 5 23:43:05.153746 kernel: NET: Registered PF_PACKET protocol family
Nov 5 23:43:05.153764 kernel: Key type dns_resolver registered
Nov 5 23:43:05.153783 kernel: registered taskstats version 1
Nov 5 23:43:05.153806 kernel: Loading compiled-in X.509 certificates
Nov 5 23:43:05.153826 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.54-flatcar: 9d5732f5af196e4cfd06fc38e62e061c2a702dfd'
Nov 5 23:43:05.153844 kernel: Demotion targets for Node 0: null
Nov 5 23:43:05.153864 kernel: Key type .fscrypt registered
Nov 5 23:43:05.153884 kernel: Key type fscrypt-provisioning registered
Nov 5 23:43:05.153902 kernel: ima: No TPM chip found, activating TPM-bypass!
Nov 5 23:43:05.153920 kernel: ima: Allocated hash algorithm: sha1
Nov 5 23:43:05.153939 kernel: ima: No architecture policies found
Nov 5 23:43:05.153958 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Nov 5 23:43:05.153980 kernel: clk: Disabling unused clocks
Nov 5 23:43:05.153999 kernel: PM: genpd: Disabling unused power domains
Nov 5 23:43:05.154019 kernel: Warning: unable to open an initial console.
Nov 5 23:43:05.154038 kernel: Freeing unused kernel memory: 38976K
Nov 5 23:43:05.154060 kernel: Run /init as init process
Nov 5 23:43:05.154080 kernel: with arguments:
Nov 5 23:43:05.154103 kernel: /init
Nov 5 23:43:05.154123 kernel: with environment:
Nov 5 23:43:05.154142 kernel: HOME=/
Nov 5 23:43:05.154170 kernel: TERM=linux
Nov 5 23:43:05.154193 systemd[1]: Successfully made /usr/ read-only.
Nov 5 23:43:05.154221 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Nov 5 23:43:05.154244 systemd[1]: Detected virtualization amazon.
Nov 5 23:43:05.154265 systemd[1]: Detected architecture arm64.
Nov 5 23:43:05.154285 systemd[1]: Running in initrd.
Nov 5 23:43:05.154306 systemd[1]: No hostname configured, using default hostname.
Nov 5 23:43:05.154334 systemd[1]: Hostname set to .
Nov 5 23:43:05.154354 systemd[1]: Initializing machine ID from VM UUID.
Nov 5 23:43:05.154374 systemd[1]: Queued start job for default target initrd.target.
Nov 5 23:43:05.154393 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 5 23:43:05.154413 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 5 23:43:05.154436 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Nov 5 23:43:05.154458 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Nov 5 23:43:05.154480 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Nov 5 23:43:05.154507 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Nov 5 23:43:05.154580 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Nov 5 23:43:05.154602 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Nov 5 23:43:05.154624 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 5 23:43:05.154645 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Nov 5 23:43:05.154665 systemd[1]: Reached target paths.target - Path Units.
Nov 5 23:43:05.154686 systemd[1]: Reached target slices.target - Slice Units.
Nov 5 23:43:05.154715 systemd[1]: Reached target swap.target - Swaps.
Nov 5 23:43:05.154735 systemd[1]: Reached target timers.target - Timer Units.
Nov 5 23:43:05.154754 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Nov 5 23:43:05.154773 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Nov 5 23:43:05.154793 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Nov 5 23:43:05.154814 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Nov 5 23:43:05.154834 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Nov 5 23:43:05.154855 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Nov 5 23:43:05.154875 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 5 23:43:05.154900 systemd[1]: Reached target sockets.target - Socket Units.
Nov 5 23:43:05.154920 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Nov 5 23:43:05.154941 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Nov 5 23:43:05.154960 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Nov 5 23:43:05.154980 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Nov 5 23:43:05.155001 systemd[1]: Starting systemd-fsck-usr.service...
Nov 5 23:43:05.155021 systemd[1]: Starting systemd-journald.service - Journal Service...
Nov 5 23:43:05.155041 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Nov 5 23:43:05.155068 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 5 23:43:05.155088 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Nov 5 23:43:05.155111 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 5 23:43:05.155132 systemd[1]: Finished systemd-fsck-usr.service.
Nov 5 23:43:05.155152 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Nov 5 23:43:05.155245 systemd-journald[259]: Collecting audit messages is disabled.
Nov 5 23:43:05.155293 systemd-journald[259]: Journal started
Nov 5 23:43:05.155336 systemd-journald[259]: Runtime Journal (/run/log/journal/ec259ef45d7b6ee109de856ed9742c43) is 8M, max 75.3M, 67.3M free.
Nov 5 23:43:05.119683 systemd-modules-load[260]: Inserted module 'overlay'
Nov 5 23:43:05.171237 systemd[1]: Started systemd-journald.service - Journal Service.
Nov 5 23:43:05.171355 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Nov 5 23:43:05.174619 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 5 23:43:05.182787 kernel: Bridge firewalling registered
Nov 5 23:43:05.175126 systemd-modules-load[260]: Inserted module 'br_netfilter'
Nov 5 23:43:05.185584 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Nov 5 23:43:05.188057 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Nov 5 23:43:05.194742 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 5 23:43:05.203782 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Nov 5 23:43:05.226952 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Nov 5 23:43:05.239047 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Nov 5 23:43:05.258638 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Nov 5 23:43:05.274815 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 5 23:43:05.289863 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Nov 5 23:43:05.299251 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 5 23:43:05.306752 systemd-tmpfiles[285]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Nov 5 23:43:05.327393 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 5 23:43:05.338819 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Nov 5 23:43:05.370698 dracut-cmdline[300]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=daaa5e51b65832b359eb98eae08cea627c611d87c128e20a83873de5c8d1aca5
Nov 5 23:43:05.443615 systemd-resolved[303]: Positive Trust Anchors:
Nov 5 23:43:05.443665 systemd-resolved[303]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Nov 5 23:43:05.443730 systemd-resolved[303]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Nov 5 23:43:05.550573 kernel: SCSI subsystem initialized
Nov 5 23:43:05.558576 kernel: Loading iSCSI transport class v2.0-870.
Nov 5 23:43:05.572582 kernel: iscsi: registered transport (tcp)
Nov 5 23:43:05.594732 kernel: iscsi: registered transport (qla4xxx)
Nov 5 23:43:05.594818 kernel: QLogic iSCSI HBA Driver
Nov 5 23:43:05.634781 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Nov 5 23:43:05.681469 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Nov 5 23:43:05.696036 systemd[1]: Reached target network-pre.target - Preparation for Network.
Nov 5 23:43:05.716586 kernel: random: crng init done
Nov 5 23:43:05.716116 systemd-resolved[303]: Defaulting to hostname 'linux'.
Nov 5 23:43:05.721115 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Nov 5 23:43:05.730220 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Nov 5 23:43:05.806627 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Nov 5 23:43:05.814213 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Nov 5 23:43:05.905582 kernel: raid6: neonx8 gen() 6479 MB/s
Nov 5 23:43:05.923566 kernel: raid6: neonx4 gen() 6476 MB/s
Nov 5 23:43:05.940568 kernel: raid6: neonx2 gen() 5418 MB/s
Nov 5 23:43:05.957570 kernel: raid6: neonx1 gen() 3927 MB/s
Nov 5 23:43:05.974569 kernel: raid6: int64x8 gen() 3616 MB/s
Nov 5 23:43:05.992562 kernel: raid6: int64x4 gen() 3663 MB/s
Nov 5 23:43:06.010562 kernel: raid6: int64x2 gen() 3584 MB/s
Nov 5 23:43:06.028848 kernel: raid6: int64x1 gen() 2734 MB/s
Nov 5 23:43:06.028921 kernel: raid6: using algorithm neonx8 gen() 6479 MB/s
Nov 5 23:43:06.048013 kernel: raid6: .... xor() 4686 MB/s, rmw enabled
Nov 5 23:43:06.048089 kernel: raid6: using neon recovery algorithm
Nov 5 23:43:06.057200 kernel: xor: measuring software checksum speed
Nov 5 23:43:06.057271 kernel: 8regs : 12593 MB/sec
Nov 5 23:43:06.058577 kernel: 32regs : 13019 MB/sec
Nov 5 23:43:06.059994 kernel: arm64_neon : 8921 MB/sec
Nov 5 23:43:06.060037 kernel: xor: using function: 32regs (13019 MB/sec)
Nov 5 23:43:06.154572 kernel: Btrfs loaded, zoned=no, fsverity=no
Nov 5 23:43:06.166474 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Nov 5 23:43:06.174265 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 5 23:43:06.229946 systemd-udevd[511]: Using default interface naming scheme 'v255'.
Nov 5 23:43:06.242843 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 5 23:43:06.251826 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Nov 5 23:43:06.303991 dracut-pre-trigger[517]: rd.md=0: removing MD RAID activation
Nov 5 23:43:06.353008 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Nov 5 23:43:06.360970 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Nov 5 23:43:06.493495 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 5 23:43:06.502777 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Nov 5 23:43:06.653620 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Nov 5 23:43:06.653717 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012)
Nov 5 23:43:06.672628 kernel: ena 0000:00:05.0: ENA device version: 0.10
Nov 5 23:43:06.673115 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35
Nov 5 23:43:06.673149 kernel: nvme nvme0: pci function 0000:00:04.0
Nov 5 23:43:06.676807 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Nov 5 23:43:06.690576 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Nov 5 23:43:06.695615 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80114000, mac addr 06:0f:6c:f9:bb:6f
Nov 5 23:43:06.700936 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Nov 5 23:43:06.701010 kernel: GPT:9289727 != 33554431
Nov 5 23:43:06.703462 kernel: GPT:Alternate GPT header not at the end of the disk.
Nov 5 23:43:06.705094 kernel: GPT:9289727 != 33554431
Nov 5 23:43:06.707283 kernel: GPT: Use GNU Parted to correct GPT errors.
Nov 5 23:43:06.709168 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Nov 5 23:43:06.719260 (udev-worker)[564]: Network interface NamePolicy= disabled on kernel command line.
Nov 5 23:43:06.746281 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 5 23:43:06.749245 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 5 23:43:06.754701 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Nov 5 23:43:06.760700 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 5 23:43:06.766601 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Nov 5 23:43:06.810613 kernel: nvme nvme0: using unchecked data buffer
Nov 5 23:43:06.814572 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 5 23:43:06.976129 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Nov 5 23:43:07.034486 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Nov 5 23:43:07.042433 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Nov 5 23:43:07.049379 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Nov 5 23:43:07.093445 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Nov 5 23:43:07.115464 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Nov 5 23:43:07.122093 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Nov 5 23:43:07.124962 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 5 23:43:07.127719 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Nov 5 23:43:07.136478 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Nov 5 23:43:07.144949 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Nov 5 23:43:07.176656 disk-uuid[691]: Primary Header is updated.
Nov 5 23:43:07.176656 disk-uuid[691]: Secondary Entries is updated.
Nov 5 23:43:07.176656 disk-uuid[691]: Secondary Header is updated.
Nov 5 23:43:07.190645 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Nov 5 23:43:07.201628 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Nov 5 23:43:08.209921 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Nov 5 23:43:08.212603 disk-uuid[692]: The operation has completed successfully.
Nov 5 23:43:08.415006 systemd[1]: disk-uuid.service: Deactivated successfully.
Nov 5 23:43:08.415620 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Nov 5 23:43:08.503333 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Nov 5 23:43:08.542831 sh[959]: Success
Nov 5 23:43:08.573061 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Nov 5 23:43:08.573138 kernel: device-mapper: uevent: version 1.0.3
Nov 5 23:43:08.575554 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Nov 5 23:43:08.588702 kernel: device-mapper: verity: sha256 using shash "sha256-ce"
Nov 5 23:43:08.697030 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Nov 5 23:43:08.710679 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Nov 5 23:43:08.724161 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Nov 5 23:43:08.747584 kernel: BTRFS: device fsid 223300c7-37a4-4131-896a-4d331c3aa134 devid 1 transid 37 /dev/mapper/usr (254:0) scanned by mount (982)
Nov 5 23:43:08.753114 kernel: BTRFS info (device dm-0): first mount of filesystem 223300c7-37a4-4131-896a-4d331c3aa134
Nov 5 23:43:08.753222 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Nov 5 23:43:08.901120 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Nov 5 23:43:08.901212 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Nov 5 23:43:08.902848 kernel: BTRFS info (device dm-0): enabling free space tree
Nov 5 23:43:08.929018 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Nov 5 23:43:08.934268 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Nov 5 23:43:08.940166 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Nov 5 23:43:08.946301 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Nov 5 23:43:08.954750 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Nov 5 23:43:09.012574 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1013)
Nov 5 23:43:09.017960 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 7724fea6-57ae-4252-b021-4aac39807031
Nov 5 23:43:09.018051 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Nov 5 23:43:09.037035 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Nov 5 23:43:09.037111 kernel: BTRFS info (device nvme0n1p6): enabling free space tree
Nov 5 23:43:09.046631 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 7724fea6-57ae-4252-b021-4aac39807031
Nov 5 23:43:09.049174 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Nov 5 23:43:09.064797 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Nov 5 23:43:09.163951 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Nov 5 23:43:09.173111 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Nov 5 23:43:09.267167 systemd-networkd[1151]: lo: Link UP
Nov 5 23:43:09.269289 systemd-networkd[1151]: lo: Gained carrier
Nov 5 23:43:09.273936 systemd-networkd[1151]: Enumeration completed
Nov 5 23:43:09.274678 systemd[1]: Started systemd-networkd.service - Network Configuration.
Nov 5 23:43:09.279950 systemd-networkd[1151]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 5 23:43:09.279958 systemd-networkd[1151]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Nov 5 23:43:09.282301 systemd[1]: Reached target network.target - Network.
Nov 5 23:43:09.298820 systemd-networkd[1151]: eth0: Link UP
Nov 5 23:43:09.298842 systemd-networkd[1151]: eth0: Gained carrier
Nov 5 23:43:09.298866 systemd-networkd[1151]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 5 23:43:09.319667 systemd-networkd[1151]: eth0: DHCPv4 address 172.31.26.188/20, gateway 172.31.16.1 acquired from 172.31.16.1
Nov 5 23:43:09.744787 ignition[1078]: Ignition 2.22.0
Nov 5 23:43:09.744820 ignition[1078]: Stage: fetch-offline
Nov 5 23:43:09.748749 ignition[1078]: no configs at "/usr/lib/ignition/base.d"
Nov 5 23:43:09.748792 ignition[1078]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Nov 5 23:43:09.754240 ignition[1078]: Ignition finished successfully
Nov 5 23:43:09.757564 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Nov 5 23:43:09.765308 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Nov 5 23:43:09.818296 ignition[1164]: Ignition 2.22.0
Nov 5 23:43:09.818885 ignition[1164]: Stage: fetch
Nov 5 23:43:09.819470 ignition[1164]: no configs at "/usr/lib/ignition/base.d"
Nov 5 23:43:09.819495 ignition[1164]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Nov 5 23:43:09.819690 ignition[1164]: PUT http://169.254.169.254/latest/api/token: attempt #1
Nov 5 23:43:09.844946 ignition[1164]: PUT result: OK
Nov 5 23:43:09.850662 ignition[1164]: parsed url from cmdline: ""
Nov 5 23:43:09.850689 ignition[1164]: no config URL provided
Nov 5 23:43:09.850707 ignition[1164]: reading system config file "/usr/lib/ignition/user.ign"
Nov 5 23:43:09.850737 ignition[1164]: no config at "/usr/lib/ignition/user.ign"
Nov 5 23:43:09.850779 ignition[1164]: PUT http://169.254.169.254/latest/api/token: attempt #1
Nov 5 23:43:09.855555 ignition[1164]: PUT result: OK
Nov 5 23:43:09.855683 ignition[1164]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Nov 5 23:43:09.862167 ignition[1164]: GET result: OK
Nov 5 23:43:09.862383 ignition[1164]: parsing config with SHA512: fa0000167571279a7969e7f10262b83cf8d7bac7f53de485147e474ce30d29eb2700c5126fae959b00fd99d8245579a0ba5ef69a665ab9a47a7212620e694667
Nov 5 23:43:09.877401 unknown[1164]: fetched base config from "system"
Nov 5 23:43:09.879214 unknown[1164]: fetched base config from "system"
Nov 5 23:43:09.879688 unknown[1164]: fetched user config from "aws"
Nov 5 23:43:09.885459 ignition[1164]: fetch: fetch complete
Nov 5 23:43:09.885485 ignition[1164]: fetch: fetch passed
Nov 5 23:43:09.885613 ignition[1164]: Ignition finished successfully
Nov 5 23:43:09.891306 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Nov 5 23:43:09.898716 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Nov 5 23:43:09.947870 ignition[1170]: Ignition 2.22.0
Nov 5 23:43:09.948393 ignition[1170]: Stage: kargs
Nov 5 23:43:09.949053 ignition[1170]: no configs at "/usr/lib/ignition/base.d"
Nov 5 23:43:09.949079 ignition[1170]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Nov 5 23:43:09.949246 ignition[1170]: PUT http://169.254.169.254/latest/api/token: attempt #1
Nov 5 23:43:09.959413 ignition[1170]: PUT result: OK
Nov 5 23:43:09.964164 ignition[1170]: kargs: kargs passed
Nov 5 23:43:09.964311 ignition[1170]: Ignition finished successfully
Nov 5 23:43:09.969425 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Nov 5 23:43:09.976127 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Nov 5 23:43:10.029578 ignition[1176]: Ignition 2.22.0
Nov 5 23:43:10.030073 ignition[1176]: Stage: disks
Nov 5 23:43:10.030636 ignition[1176]: no configs at "/usr/lib/ignition/base.d"
Nov 5 23:43:10.030658 ignition[1176]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Nov 5 23:43:10.030791 ignition[1176]: PUT http://169.254.169.254/latest/api/token: attempt #1
Nov 5 23:43:10.035474 ignition[1176]: PUT result: OK
Nov 5 23:43:10.050086 ignition[1176]: disks: disks passed
Nov 5 23:43:10.050420 ignition[1176]: Ignition finished successfully
Nov 5 23:43:10.056601 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Nov 5 23:43:10.057117 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Nov 5 23:43:10.063803 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Nov 5 23:43:10.071693 systemd[1]: Reached target local-fs.target - Local File Systems.
Nov 5 23:43:10.074070 systemd[1]: Reached target sysinit.target - System Initialization.
Nov 5 23:43:10.081116 systemd[1]: Reached target basic.target - Basic System.
Nov 5 23:43:10.086641 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Nov 5 23:43:10.153837 systemd-fsck[1185]: ROOT: clean, 15/553520 files, 52789/553472 blocks
Nov 5 23:43:10.161655 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Nov 5 23:43:10.169087 systemd[1]: Mounting sysroot.mount - /sysroot...
Nov 5 23:43:10.300572 kernel: EXT4-fs (nvme0n1p9): mounted filesystem de3d89fd-ab21-4d05-b3c1-f0d3e7ce9725 r/w with ordered data mode. Quota mode: none.
Nov 5 23:43:10.302702 systemd[1]: Mounted sysroot.mount - /sysroot.
Nov 5 23:43:10.307374 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Nov 5 23:43:10.312915 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Nov 5 23:43:10.321655 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Nov 5 23:43:10.327571 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Nov 5 23:43:10.327677 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Nov 5 23:43:10.327733 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Nov 5 23:43:10.358401 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Nov 5 23:43:10.365114 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Nov 5 23:43:10.376550 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1204)
Nov 5 23:43:10.381675 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 7724fea6-57ae-4252-b021-4aac39807031
Nov 5 23:43:10.381769 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Nov 5 23:43:10.391260 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Nov 5 23:43:10.391342 kernel: BTRFS info (device nvme0n1p6): enabling free space tree
Nov 5 23:43:10.394449 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Nov 5 23:43:10.627453 initrd-setup-root[1228]: cut: /sysroot/etc/passwd: No such file or directory
Nov 5 23:43:10.639110 initrd-setup-root[1235]: cut: /sysroot/etc/group: No such file or directory
Nov 5 23:43:10.647565 initrd-setup-root[1242]: cut: /sysroot/etc/shadow: No such file or directory
Nov 5 23:43:10.657844 initrd-setup-root[1249]: cut: /sysroot/etc/gshadow: No such file or directory
Nov 5 23:43:10.891391 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Nov 5 23:43:10.899800 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Nov 5 23:43:10.904679 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Nov 5 23:43:10.931722 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Nov 5 23:43:10.935060 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 7724fea6-57ae-4252-b021-4aac39807031
Nov 5 23:43:10.968622 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Nov 5 23:43:10.991263 ignition[1317]: INFO : Ignition 2.22.0
Nov 5 23:43:10.991263 ignition[1317]: INFO : Stage: mount
Nov 5 23:43:10.996213 ignition[1317]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 5 23:43:10.996213 ignition[1317]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Nov 5 23:43:10.996213 ignition[1317]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Nov 5 23:43:10.996213 ignition[1317]: INFO : PUT result: OK
Nov 5 23:43:11.011623 ignition[1317]: INFO : mount: mount passed
Nov 5 23:43:11.011623 ignition[1317]: INFO : Ignition finished successfully
Nov 5 23:43:11.016135 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Nov 5 23:43:11.023091 systemd[1]: Starting ignition-files.service - Ignition (files)...
Nov 5 23:43:11.307506 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Nov 5 23:43:11.343560 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1328)
Nov 5 23:43:11.347893 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 7724fea6-57ae-4252-b021-4aac39807031
Nov 5 23:43:11.347951 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Nov 5 23:43:11.355695 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Nov 5 23:43:11.355787 kernel: BTRFS info (device nvme0n1p6): enabling free space tree
Nov 5 23:43:11.359332 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Nov 5 23:43:11.372763 systemd-networkd[1151]: eth0: Gained IPv6LL
Nov 5 23:43:11.418748 ignition[1345]: INFO : Ignition 2.22.0
Nov 5 23:43:11.418748 ignition[1345]: INFO : Stage: files
Nov 5 23:43:11.424019 ignition[1345]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 5 23:43:11.424019 ignition[1345]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Nov 5 23:43:11.424019 ignition[1345]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Nov 5 23:43:11.424019 ignition[1345]: INFO : PUT result: OK
Nov 5 23:43:11.437244 ignition[1345]: DEBUG : files: compiled without relabeling support, skipping
Nov 5 23:43:11.450246 ignition[1345]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Nov 5 23:43:11.450246 ignition[1345]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Nov 5 23:43:11.461824 ignition[1345]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Nov 5 23:43:11.465586 ignition[1345]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Nov 5 23:43:11.469290 unknown[1345]: wrote ssh authorized keys file for user: core
Nov 5 23:43:11.472089 ignition[1345]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Nov 5 23:43:11.475540 ignition[1345]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Nov 5 23:43:11.475540 ignition[1345]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1
Nov 5 23:43:11.561786 ignition[1345]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Nov 5 23:43:11.698036 ignition[1345]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Nov 5 23:43:11.702551 ignition[1345]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Nov 5 23:43:11.702551 ignition[1345]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Nov 5 23:43:11.702551 ignition[1345]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Nov 5 23:43:11.702551 ignition[1345]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Nov 5 23:43:11.702551 ignition[1345]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Nov 5 23:43:11.702551 ignition[1345]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Nov 5 23:43:11.702551 ignition[1345]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Nov 5 23:43:11.702551 ignition[1345]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Nov 5 23:43:11.737510 ignition[1345]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Nov 5 23:43:11.742679 ignition[1345]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Nov 5 23:43:11.742679 ignition[1345]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Nov 5 23:43:11.754568 ignition[1345]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Nov 5 23:43:11.754568 ignition[1345]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Nov 5 23:43:11.766637 ignition[1345]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-arm64.raw: attempt #1
Nov 5 23:43:12.082740 ignition[1345]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Nov 5 23:43:12.535833 ignition[1345]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Nov 5 23:43:12.535833 ignition[1345]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Nov 5 23:43:12.546410 ignition[1345]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Nov 5 23:43:12.556295 ignition[1345]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Nov 5 23:43:12.556295 ignition[1345]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Nov 5 23:43:12.556295 ignition[1345]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Nov 5 23:43:12.570901 ignition[1345]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Nov 5 23:43:12.570901 ignition[1345]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Nov 5 23:43:12.570901 ignition[1345]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Nov 5 23:43:12.570901 ignition[1345]: INFO : files: files passed
Nov 5 23:43:12.570901 ignition[1345]: INFO : Ignition finished successfully
Nov 5 23:43:12.571557 systemd[1]: Finished ignition-files.service - Ignition (files).
Nov 5 23:43:12.578177 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Nov 5 23:43:12.587363 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Nov 5 23:43:12.627715 systemd[1]: ignition-quench.service: Deactivated successfully.
Nov 5 23:43:12.629049 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Nov 5 23:43:12.650457 initrd-setup-root-after-ignition[1375]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Nov 5 23:43:12.650457 initrd-setup-root-after-ignition[1375]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Nov 5 23:43:12.661892 initrd-setup-root-after-ignition[1379]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Nov 5 23:43:12.670383 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Nov 5 23:43:12.674177 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Nov 5 23:43:12.688084 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Nov 5 23:43:12.785039 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Nov 5 23:43:12.785247 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Nov 5 23:43:12.789339 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Nov 5 23:43:12.790894 systemd[1]: Reached target initrd.target - Initrd Default Target. Nov 5 23:43:12.798830 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Nov 5 23:43:12.800491 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Nov 5 23:43:12.858879 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 5 23:43:12.864695 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Nov 5 23:43:12.903665 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Nov 5 23:43:12.910343 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 5 23:43:12.916874 systemd[1]: Stopped target timers.target - Timer Units. Nov 5 23:43:12.920734 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Nov 5 23:43:12.921026 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 5 23:43:12.926474 systemd[1]: Stopped target initrd.target - Initrd Default Target. Nov 5 23:43:12.936172 systemd[1]: Stopped target basic.target - Basic System. Nov 5 23:43:12.941819 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Nov 5 23:43:12.945248 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Nov 5 23:43:12.950757 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Nov 5 23:43:12.959007 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Nov 5 23:43:12.963255 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Nov 5 23:43:12.970671 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Nov 5 23:43:12.973930 systemd[1]: Stopped target sysinit.target - System Initialization. Nov 5 23:43:12.977681 systemd[1]: Stopped target local-fs.target - Local File Systems. Nov 5 23:43:12.984477 systemd[1]: Stopped target swap.target - Swaps. Nov 5 23:43:12.991679 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Nov 5 23:43:12.992096 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Nov 5 23:43:12.997558 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Nov 5 23:43:13.002823 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 5 23:43:13.006729 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Nov 5 23:43:13.013436 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 5 23:43:13.016863 systemd[1]: dracut-initqueue.service: Deactivated successfully. Nov 5 23:43:13.017210 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Nov 5 23:43:13.026191 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Nov 5 23:43:13.026511 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 5 23:43:13.031983 systemd[1]: ignition-files.service: Deactivated successfully. Nov 5 23:43:13.032381 systemd[1]: Stopped ignition-files.service - Ignition (files). Nov 5 23:43:13.043949 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Nov 5 23:43:13.049010 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... 
Nov 5 23:43:13.065738 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Nov 5 23:43:13.069086 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Nov 5 23:43:13.075329 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Nov 5 23:43:13.076653 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Nov 5 23:43:13.095356 systemd[1]: initrd-cleanup.service: Deactivated successfully. Nov 5 23:43:13.097865 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Nov 5 23:43:13.124103 systemd[1]: sysroot-boot.mount: Deactivated successfully. Nov 5 23:43:13.137914 systemd[1]: sysroot-boot.service: Deactivated successfully. Nov 5 23:43:13.140932 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Nov 5 23:43:13.148619 ignition[1399]: INFO : Ignition 2.22.0 Nov 5 23:43:13.150741 ignition[1399]: INFO : Stage: umount Nov 5 23:43:13.150741 ignition[1399]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 5 23:43:13.150741 ignition[1399]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Nov 5 23:43:13.150741 ignition[1399]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Nov 5 23:43:13.161492 ignition[1399]: INFO : PUT result: OK Nov 5 23:43:13.169098 ignition[1399]: INFO : umount: umount passed Nov 5 23:43:13.169098 ignition[1399]: INFO : Ignition finished successfully Nov 5 23:43:13.173641 systemd[1]: ignition-mount.service: Deactivated successfully. Nov 5 23:43:13.176040 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Nov 5 23:43:13.181468 systemd[1]: ignition-disks.service: Deactivated successfully. Nov 5 23:43:13.184039 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Nov 5 23:43:13.188960 systemd[1]: ignition-kargs.service: Deactivated successfully. Nov 5 23:43:13.189092 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Nov 5 23:43:13.193770 systemd[1]: ignition-fetch.service: Deactivated successfully. Nov 5 23:43:13.193937 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Nov 5 23:43:13.199422 systemd[1]: Stopped target network.target - Network. Nov 5 23:43:13.204900 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Nov 5 23:43:13.205170 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Nov 5 23:43:13.209897 systemd[1]: Stopped target paths.target - Path Units. Nov 5 23:43:13.214479 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Nov 5 23:43:13.217261 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 5 23:43:13.220708 systemd[1]: Stopped target slices.target - Slice Units. Nov 5 23:43:13.223957 systemd[1]: Stopped target sockets.target - Socket Units. Nov 5 23:43:13.228733 systemd[1]: iscsid.socket: Deactivated successfully. Nov 5 23:43:13.228823 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Nov 5 23:43:13.231965 systemd[1]: iscsiuio.socket: Deactivated successfully. Nov 5 23:43:13.232054 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 5 23:43:13.241082 systemd[1]: ignition-setup.service: Deactivated successfully. Nov 5 23:43:13.241208 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Nov 5 23:43:13.245098 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Nov 5 23:43:13.245203 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. 
Nov 5 23:43:13.266346 systemd[1]: initrd-setup-root.service: Deactivated successfully. Nov 5 23:43:13.266473 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Nov 5 23:43:13.269434 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Nov 5 23:43:13.272243 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Nov 5 23:43:13.284732 systemd[1]: systemd-resolved.service: Deactivated successfully. Nov 5 23:43:13.284996 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Nov 5 23:43:13.300592 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Nov 5 23:43:13.301071 systemd[1]: systemd-networkd.service: Deactivated successfully. Nov 5 23:43:13.301357 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Nov 5 23:43:13.316872 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Nov 5 23:43:13.318326 systemd[1]: Stopped target network-pre.target - Preparation for Network. Nov 5 23:43:13.325373 systemd[1]: systemd-networkd.socket: Deactivated successfully. Nov 5 23:43:13.325463 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Nov 5 23:43:13.335198 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Nov 5 23:43:13.344795 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Nov 5 23:43:13.344976 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 5 23:43:13.348205 systemd[1]: systemd-sysctl.service: Deactivated successfully. Nov 5 23:43:13.348328 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Nov 5 23:43:13.352113 systemd[1]: systemd-modules-load.service: Deactivated successfully. Nov 5 23:43:13.352231 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Nov 5 23:43:13.355049 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Nov 5 23:43:13.355166 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 5 23:43:13.358442 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 5 23:43:13.365919 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Nov 5 23:43:13.366075 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Nov 5 23:43:13.416021 systemd[1]: network-cleanup.service: Deactivated successfully. Nov 5 23:43:13.416322 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Nov 5 23:43:13.425010 systemd[1]: systemd-udevd.service: Deactivated successfully. Nov 5 23:43:13.425352 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 5 23:43:13.430851 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Nov 5 23:43:13.431002 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Nov 5 23:43:13.439351 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Nov 5 23:43:13.439445 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Nov 5 23:43:13.446779 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Nov 5 23:43:13.446900 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Nov 5 23:43:13.454363 systemd[1]: dracut-cmdline.service: Deactivated successfully. Nov 5 23:43:13.454498 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. 
Nov 5 23:43:13.467644 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Nov 5 23:43:13.467786 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 5 23:43:13.482958 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Nov 5 23:43:13.486415 systemd[1]: systemd-network-generator.service: Deactivated successfully. Nov 5 23:43:13.486591 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Nov 5 23:43:13.499993 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Nov 5 23:43:13.500476 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 5 23:43:13.514554 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Nov 5 23:43:13.514677 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 5 23:43:13.521254 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Nov 5 23:43:13.521368 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Nov 5 23:43:13.524826 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 5 23:43:13.524940 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 5 23:43:13.538439 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. Nov 5 23:43:13.538608 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully. Nov 5 23:43:13.538700 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Nov 5 23:43:13.538791 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Nov 5 23:43:13.571056 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Nov 5 23:43:13.571276 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Nov 5 23:43:13.575112 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Nov 5 23:43:13.588083 systemd[1]: Starting initrd-switch-root.service - Switch Root... Nov 5 23:43:13.636146 systemd[1]: Switching root. Nov 5 23:43:13.692994 systemd-journald[259]: Journal stopped Nov 5 23:43:15.995092 systemd-journald[259]: Received SIGTERM from PID 1 (systemd). Nov 5 23:43:15.995263 kernel: SELinux: policy capability network_peer_controls=1 Nov 5 23:43:15.995325 kernel: SELinux: policy capability open_perms=1 Nov 5 23:43:15.995369 kernel: SELinux: policy capability extended_socket_class=1 Nov 5 23:43:15.995403 kernel: SELinux: policy capability always_check_network=0 Nov 5 23:43:15.995432 kernel: SELinux: policy capability cgroup_seclabel=1 Nov 5 23:43:15.995464 kernel: SELinux: policy capability nnp_nosuid_transition=1 Nov 5 23:43:15.995496 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Nov 5 23:43:15.999659 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Nov 5 23:43:15.999714 kernel: SELinux: policy capability userspace_initial_context=0 Nov 5 23:43:15.999744 kernel: audit: type=1403 audit(1762386194.007:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Nov 5 23:43:15.999786 systemd[1]: Successfully loaded SELinux policy in 88.962ms. Nov 5 23:43:15.999848 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 15.879ms. 
Nov 5 23:43:15.999902 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Nov 5 23:43:15.999949 systemd[1]: Detected virtualization amazon. Nov 5 23:43:15.999978 systemd[1]: Detected architecture arm64. Nov 5 23:43:16.000008 systemd[1]: Detected first boot. Nov 5 23:43:16.000044 systemd[1]: Initializing machine ID from VM UUID. Nov 5 23:43:16.000079 zram_generator::config[1443]: No configuration found. Nov 5 23:43:16.000115 kernel: NET: Registered PF_VSOCK protocol family Nov 5 23:43:16.000161 systemd[1]: Populated /etc with preset unit settings. Nov 5 23:43:16.000197 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Nov 5 23:43:16.000230 systemd[1]: initrd-switch-root.service: Deactivated successfully. Nov 5 23:43:16.000262 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Nov 5 23:43:16.000294 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Nov 5 23:43:16.000328 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Nov 5 23:43:16.000360 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Nov 5 23:43:16.000392 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Nov 5 23:43:16.000423 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Nov 5 23:43:16.000459 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Nov 5 23:43:16.000492 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Nov 5 23:43:16.000567 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Nov 5 23:43:16.000609 systemd[1]: Created slice user.slice - User and Session Slice. Nov 5 23:43:16.000644 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 5 23:43:16.000676 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 5 23:43:16.000716 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Nov 5 23:43:16.000748 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Nov 5 23:43:16.000781 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Nov 5 23:43:16.000824 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Nov 5 23:43:16.000854 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Nov 5 23:43:16.000887 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 5 23:43:16.000920 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 5 23:43:16.000951 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Nov 5 23:43:16.000982 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Nov 5 23:43:16.001014 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Nov 5 23:43:16.001052 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. 
Nov 5 23:43:16.001085 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 5 23:43:16.001118 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 5 23:43:16.001148 systemd[1]: Reached target slices.target - Slice Units. Nov 5 23:43:16.001177 systemd[1]: Reached target swap.target - Swaps. Nov 5 23:43:16.001209 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Nov 5 23:43:16.001241 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Nov 5 23:43:16.001273 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Nov 5 23:43:16.001305 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Nov 5 23:43:16.001334 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 5 23:43:16.001369 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Nov 5 23:43:16.001405 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Nov 5 23:43:16.001434 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Nov 5 23:43:16.001463 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Nov 5 23:43:16.001497 systemd[1]: Mounting media.mount - External Media Directory... Nov 5 23:43:16.009613 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Nov 5 23:43:16.009680 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Nov 5 23:43:16.009710 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Nov 5 23:43:16.009754 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Nov 5 23:43:16.009784 systemd[1]: Reached target machines.target - Containers. Nov 5 23:43:16.009813 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Nov 5 23:43:16.009845 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 5 23:43:16.009873 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Nov 5 23:43:16.009907 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Nov 5 23:43:16.009936 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 5 23:43:16.009967 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 5 23:43:16.009996 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 5 23:43:16.010030 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Nov 5 23:43:16.010059 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 5 23:43:16.010090 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Nov 5 23:43:16.010120 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Nov 5 23:43:16.010151 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Nov 5 23:43:16.010188 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Nov 5 23:43:16.010217 systemd[1]: Stopped systemd-fsck-usr.service. 
Nov 5 23:43:16.010248 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 5 23:43:16.010287 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 5 23:43:16.010317 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Nov 5 23:43:16.010346 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Nov 5 23:43:16.010380 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Nov 5 23:43:16.010409 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Nov 5 23:43:16.010440 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Nov 5 23:43:16.010476 systemd[1]: verity-setup.service: Deactivated successfully. Nov 5 23:43:16.010506 systemd[1]: Stopped verity-setup.service. Nov 5 23:43:16.010576 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Nov 5 23:43:16.010617 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Nov 5 23:43:16.010656 systemd[1]: Mounted media.mount - External Media Directory. Nov 5 23:43:16.010687 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Nov 5 23:43:16.010716 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Nov 5 23:43:16.010745 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Nov 5 23:43:16.011188 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Nov 5 23:43:16.028062 systemd[1]: modprobe@configfs.service: Deactivated successfully. Nov 5 23:43:16.028132 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Nov 5 23:43:16.028164 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 5 23:43:16.028202 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 5 23:43:16.028245 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 5 23:43:16.028276 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 5 23:43:16.028305 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 5 23:43:16.028336 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Nov 5 23:43:16.028366 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 5 23:43:16.028397 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Nov 5 23:43:16.028427 kernel: loop: module loaded Nov 5 23:43:16.028456 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Nov 5 23:43:16.028486 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Nov 5 23:43:16.028551 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 5 23:43:16.028617 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 5 23:43:16.028749 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Nov 5 23:43:16.028793 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Nov 5 23:43:16.028825 systemd[1]: Reached target network-pre.target - Preparation for Network. 
Nov 5 23:43:16.028854 kernel: fuse: init (API version 7.41) Nov 5 23:43:16.028884 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Nov 5 23:43:16.028916 systemd[1]: Reached target local-fs.target - Local File Systems. Nov 5 23:43:16.029019 systemd-journald[1523]: Collecting audit messages is disabled. Nov 5 23:43:16.029084 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Nov 5 23:43:16.029120 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Nov 5 23:43:16.029150 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 5 23:43:16.029190 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Nov 5 23:43:16.029223 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 5 23:43:16.029258 systemd-journald[1523]: Journal started Nov 5 23:43:16.029305 systemd-journald[1523]: Runtime Journal (/run/log/journal/ec259ef45d7b6ee109de856ed9742c43) is 8M, max 75.3M, 67.3M free. Nov 5 23:43:15.181912 systemd[1]: Queued start job for default target multi-user.target. Nov 5 23:43:16.039986 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Nov 5 23:43:16.040039 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 5 23:43:15.209206 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Nov 5 23:43:15.210242 systemd[1]: systemd-journald.service: Deactivated successfully. Nov 5 23:43:16.051677 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Nov 5 23:43:16.073933 systemd[1]: Started systemd-journald.service - Journal Service. Nov 5 23:43:16.088825 systemd-tmpfiles[1547]: ACLs are not supported, ignoring. Nov 5 23:43:16.088868 systemd-tmpfiles[1547]: ACLs are not supported, ignoring. Nov 5 23:43:16.089647 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Nov 5 23:43:16.094174 systemd[1]: modprobe@fuse.service: Deactivated successfully. Nov 5 23:43:16.095367 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Nov 5 23:43:16.111651 kernel: ACPI: bus type drm_connector registered Nov 5 23:43:16.114259 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 5 23:43:16.116663 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 5 23:43:16.121768 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 5 23:43:16.141898 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 5 23:43:16.169116 kernel: loop0: detected capacity change from 0 to 119368 Nov 5 23:43:16.180973 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Nov 5 23:43:16.191064 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Nov 5 23:43:16.205490 systemd[1]: Starting systemd-sysusers.service - Create System Users... Nov 5 23:43:16.211633 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Nov 5 23:43:16.215937 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. 
Nov 5 23:43:16.226013 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Nov 5 23:43:16.231422 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Nov 5 23:43:16.287998 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Nov 5 23:43:16.331779 systemd-journald[1523]: Time spent on flushing to /var/log/journal/ec259ef45d7b6ee109de856ed9742c43 is 174.288ms for 942 entries. Nov 5 23:43:16.331779 systemd-journald[1523]: System Journal (/var/log/journal/ec259ef45d7b6ee109de856ed9742c43) is 8M, max 195.6M, 187.6M free. Nov 5 23:43:16.526162 kernel: loop1: detected capacity change from 0 to 100632 Nov 5 23:43:16.528237 systemd-journald[1523]: Received client request to flush runtime journal. Nov 5 23:43:16.528334 kernel: loop2: detected capacity change from 0 to 61264 Nov 5 23:43:16.417756 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 5 23:43:16.448812 systemd[1]: Finished systemd-sysusers.service - Create System Users. Nov 5 23:43:16.455837 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 5 23:43:16.519361 systemd-tmpfiles[1596]: ACLs are not supported, ignoring. Nov 5 23:43:16.519386 systemd-tmpfiles[1596]: ACLs are not supported, ignoring. Nov 5 23:43:16.528870 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 5 23:43:16.544323 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Nov 5 23:43:16.579679 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Nov 5 23:43:16.585608 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Nov 5 23:43:16.600601 kernel: loop3: detected capacity change from 0 to 211168 Nov 5 23:43:16.667597 kernel: loop4: detected capacity change from 0 to 119368 Nov 5 23:43:16.713566 kernel: loop5: detected capacity change from 0 to 100632 Nov 5 23:43:16.749582 kernel: loop6: detected capacity change from 0 to 61264 Nov 5 23:43:16.785593 kernel: loop7: detected capacity change from 0 to 211168 Nov 5 23:43:16.828376 (sd-merge)[1606]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'. Nov 5 23:43:16.829913 (sd-merge)[1606]: Merged extensions into '/usr'. Nov 5 23:43:16.842731 systemd[1]: Reload requested from client PID 1561 ('systemd-sysext') (unit systemd-sysext.service)... Nov 5 23:43:16.842763 systemd[1]: Reloading... Nov 5 23:43:17.080568 zram_generator::config[1632]: No configuration found. Nov 5 23:43:17.272136 ldconfig[1553]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Nov 5 23:43:17.553510 systemd[1]: Reloading finished in 709 ms. Nov 5 23:43:17.585638 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Nov 5 23:43:17.592811 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Nov 5 23:43:17.621853 systemd[1]: Starting ensure-sysext.service... Nov 5 23:43:17.631025 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 5 23:43:17.682422 systemd[1]: Reload requested from client PID 1684 ('systemctl') (unit ensure-sysext.service)... Nov 5 23:43:17.682474 systemd[1]: Reloading... Nov 5 23:43:17.711293 systemd-tmpfiles[1685]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. 
Nov 5 23:43:17.712215 systemd-tmpfiles[1685]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Nov 5 23:43:17.713217 systemd-tmpfiles[1685]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Nov 5 23:43:17.714063 systemd-tmpfiles[1685]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Nov 5 23:43:17.716230 systemd-tmpfiles[1685]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Nov 5 23:43:17.717142 systemd-tmpfiles[1685]: ACLs are not supported, ignoring. Nov 5 23:43:17.717542 systemd-tmpfiles[1685]: ACLs are not supported, ignoring. Nov 5 23:43:17.727593 systemd-tmpfiles[1685]: Detected autofs mount point /boot during canonicalization of boot. Nov 5 23:43:17.727883 systemd-tmpfiles[1685]: Skipping /boot Nov 5 23:43:17.748910 systemd-tmpfiles[1685]: Detected autofs mount point /boot during canonicalization of boot. Nov 5 23:43:17.749097 systemd-tmpfiles[1685]: Skipping /boot Nov 5 23:43:17.862568 zram_generator::config[1715]: No configuration found. Nov 5 23:43:18.282765 systemd[1]: Reloading finished in 599 ms. Nov 5 23:43:18.314039 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Nov 5 23:43:18.337051 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 5 23:43:18.357622 systemd[1]: Starting audit-rules.service - Load Audit Rules... Nov 5 23:43:18.367126 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Nov 5 23:43:18.375882 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Nov 5 23:43:18.388789 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Nov 5 23:43:18.396411 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 5 23:43:18.409312 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Nov 5 23:43:18.420754 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 5 23:43:18.428238 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 5 23:43:18.435580 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 5 23:43:18.456134 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 5 23:43:18.459034 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 5 23:43:18.459318 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 5 23:43:18.471757 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Nov 5 23:43:18.485132 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 5 23:43:18.485644 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 5 23:43:18.485914 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). 
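[Editor's note] The systemd-tmpfiles messages just above ("Duplicate line for path "/var/lib/nfs/sm"", "/root", "/var/log/journal", "/var/lib/systemd") mean the same path is declared by more than one tmpfiles.d entry; the first declaration wins and the later ones are ignored. A rough Python sketch that reproduces the check by collecting the path column from the files under /usr/lib/tmpfiles.d; it ignores specifier expansion and quoted paths, so treat it as an approximation of systemd's logic rather than an exact reimplementation.

#!/usr/bin/env python3
"""Find paths declared more than once under /usr/lib/tmpfiles.d (approximate)."""
from collections import defaultdict
from pathlib import Path

seen = defaultdict(list)  # path -> [(file name, line number), ...]

for conf in sorted(Path("/usr/lib/tmpfiles.d").glob("*.conf")):
    for lineno, line in enumerate(conf.read_text().splitlines(), start=1):
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        fields = line.split()
        if len(fields) >= 2:
            # tmpfiles.d lines are "Type Path Mode User Group Age Argument"; field 2 is the path.
            seen[fields[1]].append((conf.name, lineno))

for path, places in sorted(seen.items()):
    if len(places) > 1:
        locations = ", ".join(f"{name}:{lineno}" for name, lineno in places)
        print(f"duplicate entry for {path}: {locations}")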
Nov 5 23:43:18.488901 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 5 23:43:18.491711 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 5 23:43:18.496901 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 5 23:43:18.500784 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 5 23:43:18.504255 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 5 23:43:18.529990 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 5 23:43:18.546249 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 5 23:43:18.554566 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 5 23:43:18.565092 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 5 23:43:18.568411 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 5 23:43:18.568754 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 5 23:43:18.569126 systemd[1]: Reached target time-set.target - System Time Set. Nov 5 23:43:18.574831 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 5 23:43:18.575350 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 5 23:43:18.605169 systemd[1]: Finished ensure-sysext.service. Nov 5 23:43:18.608898 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Nov 5 23:43:18.619831 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Nov 5 23:43:18.629908 systemd[1]: Starting systemd-update-done.service - Update is Completed... Nov 5 23:43:18.686297 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 5 23:43:18.687018 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 5 23:43:18.696793 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 5 23:43:18.697872 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 5 23:43:18.701847 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 5 23:43:18.703847 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 5 23:43:18.709155 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 5 23:43:18.709376 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 5 23:43:18.732622 systemd[1]: Finished systemd-update-done.service - Update is Completed. Nov 5 23:43:18.756967 systemd-udevd[1770]: Using default interface naming scheme 'v255'. Nov 5 23:43:18.765920 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Nov 5 23:43:18.772430 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). 
Nov 5 23:43:18.806317 augenrules[1810]: No rules Nov 5 23:43:18.809450 systemd[1]: audit-rules.service: Deactivated successfully. Nov 5 23:43:18.813684 systemd[1]: Finished audit-rules.service - Load Audit Rules. Nov 5 23:43:18.817070 systemd[1]: Started systemd-userdbd.service - User Database Manager. Nov 5 23:43:18.858916 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 5 23:43:18.867373 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 5 23:43:19.189792 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Nov 5 23:43:19.190685 systemd-resolved[1769]: Positive Trust Anchors: Nov 5 23:43:19.190726 systemd-resolved[1769]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 5 23:43:19.190788 systemd-resolved[1769]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 5 23:43:19.216286 systemd-resolved[1769]: Defaulting to hostname 'linux'. Nov 5 23:43:19.223638 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 5 23:43:19.226554 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Nov 5 23:43:19.229497 systemd[1]: Reached target sysinit.target - System Initialization. Nov 5 23:43:19.232918 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Nov 5 23:43:19.236385 (udev-worker)[1835]: Network interface NamePolicy= disabled on kernel command line. Nov 5 23:43:19.236810 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Nov 5 23:43:19.240302 systemd[1]: Started logrotate.timer - Daily rotation of log files. Nov 5 23:43:19.243325 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Nov 5 23:43:19.246493 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Nov 5 23:43:19.249679 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Nov 5 23:43:19.249745 systemd[1]: Reached target paths.target - Path Units. Nov 5 23:43:19.252797 systemd[1]: Reached target timers.target - Timer Units. Nov 5 23:43:19.258840 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Nov 5 23:43:19.266566 systemd[1]: Starting docker.socket - Docker Socket for the API... Nov 5 23:43:19.278757 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Nov 5 23:43:19.282996 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Nov 5 23:43:19.286197 systemd[1]: Reached target ssh-access.target - SSH Access Available. Nov 5 23:43:19.313049 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Nov 5 23:43:19.316416 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. 
Nov 5 23:43:19.322898 systemd[1]: Listening on docker.socket - Docker Socket for the API. Nov 5 23:43:19.326268 systemd[1]: Reached target sockets.target - Socket Units. Nov 5 23:43:19.328871 systemd[1]: Reached target basic.target - Basic System. Nov 5 23:43:19.331977 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Nov 5 23:43:19.332054 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Nov 5 23:43:19.338066 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Nov 5 23:43:19.344974 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Nov 5 23:43:19.354959 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Nov 5 23:43:19.363933 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Nov 5 23:43:19.371047 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Nov 5 23:43:19.373755 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Nov 5 23:43:19.380101 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Nov 5 23:43:19.393018 systemd[1]: Started ntpd.service - Network Time Service. Nov 5 23:43:19.410910 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Nov 5 23:43:19.423361 systemd[1]: Starting setup-oem.service - Setup OEM... Nov 5 23:43:19.437975 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Nov 5 23:43:19.449778 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Nov 5 23:43:19.464224 systemd[1]: Starting systemd-logind.service - User Login Management... Nov 5 23:43:19.469745 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Nov 5 23:43:19.472859 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Nov 5 23:43:19.481998 systemd[1]: Starting update-engine.service - Update Engine... Nov 5 23:43:19.500725 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Nov 5 23:43:19.505135 systemd[1]: motdgen.service: Deactivated successfully. Nov 5 23:43:19.508078 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Nov 5 23:43:19.556626 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Nov 5 23:43:19.603025 jq[1867]: false Nov 5 23:43:19.685780 dbus-daemon[1864]: [system] SELinux support is enabled Nov 5 23:43:19.701277 systemd-networkd[1825]: lo: Link UP Nov 5 23:43:19.702008 systemd-networkd[1825]: lo: Gained carrier Nov 5 23:43:19.705423 systemd-networkd[1825]: Enumeration completed Nov 5 23:43:19.737624 jq[1884]: true Nov 5 23:43:19.756498 systemd[1]: Started dbus.service - D-Bus System Message Bus. Nov 5 23:43:19.766097 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 5 23:43:19.771769 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Nov 5 23:43:19.772416 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Nov 5 23:43:19.782166 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. 
Nov 5 23:43:19.782752 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Nov 5 23:43:19.800312 systemd[1]: Reached target network.target - Network. Nov 5 23:43:19.812884 systemd[1]: Starting containerd.service - containerd container runtime... Nov 5 23:43:19.815839 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Nov 5 23:43:19.815907 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Nov 5 23:43:19.830408 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Nov 5 23:43:19.833502 extend-filesystems[1868]: Found /dev/nvme0n1p6 Nov 5 23:43:19.843481 update_engine[1882]: I20251105 23:43:19.842875 1882 main.cc:92] Flatcar Update Engine starting Nov 5 23:43:19.844015 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Nov 5 23:43:19.852343 extend-filesystems[1868]: Found /dev/nvme0n1p9 Nov 5 23:43:19.847452 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Nov 5 23:43:19.847566 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Nov 5 23:43:19.860064 coreos-metadata[1863]: Nov 05 23:43:19.859 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Nov 5 23:43:19.873381 systemd[1]: Started update-engine.service - Update Engine. Nov 5 23:43:19.882568 extend-filesystems[1868]: Checking size of /dev/nvme0n1p9 Nov 5 23:43:19.895779 update_engine[1882]: I20251105 23:43:19.893718 1882 update_check_scheduler.cc:74] Next update check in 2m14s Nov 5 23:43:19.942699 jq[1926]: true Nov 5 23:43:19.945475 systemd[1]: Started locksmithd.service - Cluster reboot manager. Nov 5 23:43:19.955567 extend-filesystems[1868]: Resized partition /dev/nvme0n1p9 Nov 5 23:43:19.967844 tar[1899]: linux-arm64/LICENSE Nov 5 23:43:19.967844 tar[1899]: linux-arm64/helm Nov 5 23:43:19.973856 extend-filesystems[1965]: resize2fs 1.47.3 (8-Jul-2025) Nov 5 23:43:19.972743 systemd-networkd[1825]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 5 23:43:19.972752 systemd-networkd[1825]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 5 23:43:20.021553 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 3587067 blocks Nov 5 23:43:20.156568 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 3587067 Nov 5 23:43:20.173498 (ntainerd)[1978]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Nov 5 23:43:20.184048 systemd[1]: Finished setup-oem.service - Setup OEM. Nov 5 23:43:20.189895 extend-filesystems[1965]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Nov 5 23:43:20.189895 extend-filesystems[1965]: old_desc_blocks = 1, new_desc_blocks = 2 Nov 5 23:43:20.189895 extend-filesystems[1965]: The filesystem on /dev/nvme0n1p9 is now 3587067 (4k) blocks long. Nov 5 23:43:20.210703 extend-filesystems[1868]: Resized filesystem in /dev/nvme0n1p9 Nov 5 23:43:20.196204 systemd[1]: extend-filesystems.service: Deactivated successfully. 
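[Editor's note] The EXT4 and extend-filesystems messages above record an online resize of /dev/nvme0n1p9 from 553472 to 3587067 blocks at a 4 KiB block size, i.e. the root filesystem growing from roughly 2.1 GiB to roughly 13.7 GiB on first boot. The arithmetic, using only the figures from the log:

#!/usr/bin/env python3
"""Convert the EXT4 resize figures from the log into human-readable sizes."""
BLOCK_SIZE = 4096            # "(4k) blocks" per the kernel/resize2fs messages
OLD_BLOCKS = 553_472
NEW_BLOCKS = 3_587_067

def gib(blocks: int) -> float:
    return blocks * BLOCK_SIZE / 2**30

print(f"before: {gib(OLD_BLOCKS):.2f} GiB")   # ~2.11 GiB
print(f"after:  {gib(NEW_BLOCKS):.2f} GiB")   # ~13.68 GiB
print(f"growth: {gib(NEW_BLOCKS) - gib(OLD_BLOCKS):.2f} GiB")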
Nov 5 23:43:20.196785 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Nov 5 23:43:20.217631 systemd-networkd[1825]: eth0: Link UP Nov 5 23:43:20.226150 systemd-networkd[1825]: eth0: Gained carrier Nov 5 23:43:20.226200 systemd-networkd[1825]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 5 23:43:20.231426 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Nov 5 23:43:20.257583 systemd-networkd[1825]: eth0: DHCPv4 address 172.31.26.188/20, gateway 172.31.16.1 acquired from 172.31.16.1 Nov 5 23:43:20.258360 dbus-daemon[1864]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1825 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Nov 5 23:43:20.272437 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Nov 5 23:43:20.337383 bash[2006]: Updated "/home/core/.ssh/authorized_keys" Nov 5 23:43:20.348509 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Nov 5 23:43:20.357221 systemd[1]: Starting sshkeys.service... Nov 5 23:43:20.525379 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Nov 5 23:43:20.536788 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Nov 5 23:43:20.614433 ntpd[1870]: ntpd 4.2.8p18@1.4062-o Wed Nov 5 21:33:09 UTC 2025 (1): Starting Nov 5 23:43:20.624998 ntpd[1870]: 5 Nov 23:43:20 ntpd[1870]: ntpd 4.2.8p18@1.4062-o Wed Nov 5 21:33:09 UTC 2025 (1): Starting Nov 5 23:43:20.624998 ntpd[1870]: 5 Nov 23:43:20 ntpd[1870]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Nov 5 23:43:20.624998 ntpd[1870]: 5 Nov 23:43:20 ntpd[1870]: ---------------------------------------------------- Nov 5 23:43:20.624998 ntpd[1870]: 5 Nov 23:43:20 ntpd[1870]: ntp-4 is maintained by Network Time Foundation, Nov 5 23:43:20.624998 ntpd[1870]: 5 Nov 23:43:20 ntpd[1870]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Nov 5 23:43:20.624998 ntpd[1870]: 5 Nov 23:43:20 ntpd[1870]: corporation. Support and training for ntp-4 are Nov 5 23:43:20.624998 ntpd[1870]: 5 Nov 23:43:20 ntpd[1870]: available at https://www.nwtime.org/support Nov 5 23:43:20.624998 ntpd[1870]: 5 Nov 23:43:20 ntpd[1870]: ---------------------------------------------------- Nov 5 23:43:20.623473 ntpd[1870]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Nov 5 23:43:20.623494 ntpd[1870]: ---------------------------------------------------- Nov 5 23:43:20.623512 ntpd[1870]: ntp-4 is maintained by Network Time Foundation, Nov 5 23:43:20.623568 ntpd[1870]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Nov 5 23:43:20.623586 ntpd[1870]: corporation. 
Support and training for ntp-4 are Nov 5 23:43:20.623625 ntpd[1870]: available at https://www.nwtime.org/support Nov 5 23:43:20.623644 ntpd[1870]: ---------------------------------------------------- Nov 5 23:43:20.638106 ntpd[1870]: proto: precision = 0.096 usec (-23) Nov 5 23:43:20.640835 ntpd[1870]: 5 Nov 23:43:20 ntpd[1870]: proto: precision = 0.096 usec (-23) Nov 5 23:43:20.646758 ntpd[1870]: basedate set to 2025-10-24 Nov 5 23:43:20.649235 ntpd[1870]: 5 Nov 23:43:20 ntpd[1870]: basedate set to 2025-10-24 Nov 5 23:43:20.649235 ntpd[1870]: 5 Nov 23:43:20 ntpd[1870]: gps base set to 2025-10-26 (week 2390) Nov 5 23:43:20.649235 ntpd[1870]: 5 Nov 23:43:20 ntpd[1870]: Listen and drop on 0 v6wildcard [::]:123 Nov 5 23:43:20.649235 ntpd[1870]: 5 Nov 23:43:20 ntpd[1870]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Nov 5 23:43:20.646808 ntpd[1870]: gps base set to 2025-10-26 (week 2390) Nov 5 23:43:20.647066 ntpd[1870]: Listen and drop on 0 v6wildcard [::]:123 Nov 5 23:43:20.647129 ntpd[1870]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Nov 5 23:43:20.649842 ntpd[1870]: Listen normally on 2 lo 127.0.0.1:123 Nov 5 23:43:20.652741 ntpd[1870]: 5 Nov 23:43:20 ntpd[1870]: Listen normally on 2 lo 127.0.0.1:123 Nov 5 23:43:20.652741 ntpd[1870]: 5 Nov 23:43:20 ntpd[1870]: Listen normally on 3 eth0 172.31.26.188:123 Nov 5 23:43:20.652741 ntpd[1870]: 5 Nov 23:43:20 ntpd[1870]: Listen normally on 4 lo [::1]:123 Nov 5 23:43:20.652741 ntpd[1870]: 5 Nov 23:43:20 ntpd[1870]: bind(21) AF_INET6 [fe80::40f:6cff:fef9:bb6f%2]:123 flags 0x811 failed: Cannot assign requested address Nov 5 23:43:20.652741 ntpd[1870]: 5 Nov 23:43:20 ntpd[1870]: unable to create socket on eth0 (5) for [fe80::40f:6cff:fef9:bb6f%2]:123 Nov 5 23:43:20.649915 ntpd[1870]: Listen normally on 3 eth0 172.31.26.188:123 Nov 5 23:43:20.649976 ntpd[1870]: Listen normally on 4 lo [::1]:123 Nov 5 23:43:20.650030 ntpd[1870]: bind(21) AF_INET6 [fe80::40f:6cff:fef9:bb6f%2]:123 flags 0x811 failed: Cannot assign requested address Nov 5 23:43:20.650075 ntpd[1870]: unable to create socket on eth0 (5) for [fe80::40f:6cff:fef9:bb6f%2]:123 Nov 5 23:43:20.668784 systemd-coredump[2041]: Process 1870 (ntpd) of user 0 terminated abnormally with signal 11/SEGV, processing... Nov 5 23:43:20.680221 systemd[1]: Created slice system-systemd\x2dcoredump.slice - Slice /system/systemd-coredump. Nov 5 23:43:20.686758 locksmithd[1941]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Nov 5 23:43:20.690275 systemd[1]: Started systemd-coredump@0-2041-0.service - Process Core Dump (PID 2041/UID 0). Nov 5 23:43:20.746381 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Nov 5 23:43:20.799414 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Nov 5 23:43:20.846809 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Nov 5 23:43:20.923583 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. 
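[Editor's note] The systemd-coredump lines above show ntpd (PID 1870) exiting on SIGSEGV shortly after start-up, with systemd-coredump@0-2041-0.service spawned to capture the dump. Once the system is up, the capture can be inspected with coredumpctl; below is a small wrapper sketch that simply shells out to it, assuming coredumpctl is installed and the journal still holds the entry.

#!/usr/bin/env python3
"""Show the most recent ntpd core dump recorded by systemd-coredump."""
import subprocess
import sys

def main() -> int:
    try:
        # "coredumpctl info <match>" prints metadata (signal, timestamp, and a
        # backtrace when debug info is available) for matching dumps.
        return subprocess.run(["coredumpctl", "info", "ntpd"], check=False).returncode
    except FileNotFoundError:
        print("coredumpctl not found on this system", file=sys.stderr)
        return 1

if __name__ == "__main__":
    sys.exit(main())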
Nov 5 23:43:21.016696 coreos-metadata[1863]: Nov 05 23:43:21.016 INFO Putting http://169.254.169.254/latest/api/token: Attempt #2 Nov 5 23:43:21.031813 coreos-metadata[1863]: Nov 05 23:43:21.026 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Nov 5 23:43:21.037408 coreos-metadata[1863]: Nov 05 23:43:21.037 INFO Fetch successful Nov 5 23:43:21.037408 coreos-metadata[1863]: Nov 05 23:43:21.037 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Nov 5 23:43:21.044224 coreos-metadata[1863]: Nov 05 23:43:21.043 INFO Fetch successful Nov 5 23:43:21.044224 coreos-metadata[1863]: Nov 05 23:43:21.043 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Nov 5 23:43:21.050666 coreos-metadata[1863]: Nov 05 23:43:21.047 INFO Fetch successful Nov 5 23:43:21.050666 coreos-metadata[1863]: Nov 05 23:43:21.047 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Nov 5 23:43:21.052253 coreos-metadata[1863]: Nov 05 23:43:21.051 INFO Fetch successful Nov 5 23:43:21.052253 coreos-metadata[1863]: Nov 05 23:43:21.052 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Nov 5 23:43:21.056455 coreos-metadata[1863]: Nov 05 23:43:21.055 INFO Fetch failed with 404: resource not found Nov 5 23:43:21.056455 coreos-metadata[1863]: Nov 05 23:43:21.056 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Nov 5 23:43:21.060749 coreos-metadata[1863]: Nov 05 23:43:21.059 INFO Fetch successful Nov 5 23:43:21.060749 coreos-metadata[1863]: Nov 05 23:43:21.059 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Nov 5 23:43:21.061500 coreos-metadata[1863]: Nov 05 23:43:21.061 INFO Fetch successful Nov 5 23:43:21.064282 coreos-metadata[1863]: Nov 05 23:43:21.063 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Nov 5 23:43:21.076757 coreos-metadata[1863]: Nov 05 23:43:21.076 INFO Fetch successful Nov 5 23:43:21.077071 coreos-metadata[1863]: Nov 05 23:43:21.076 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Nov 5 23:43:21.080216 coreos-metadata[1863]: Nov 05 23:43:21.079 INFO Fetch successful Nov 5 23:43:21.080216 coreos-metadata[1863]: Nov 05 23:43:21.079 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Nov 5 23:43:21.084892 systemd-logind[1881]: New seat seat0. Nov 5 23:43:21.087054 coreos-metadata[1863]: Nov 05 23:43:21.085 INFO Fetch successful Nov 5 23:43:21.088488 systemd[1]: Started systemd-logind.service - User Login Management. Nov 5 23:43:21.122241 coreos-metadata[2031]: Nov 05 23:43:21.122 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Nov 5 23:43:21.131631 coreos-metadata[2031]: Nov 05 23:43:21.131 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Nov 5 23:43:21.133797 coreos-metadata[2031]: Nov 05 23:43:21.133 INFO Fetch successful Nov 5 23:43:21.133797 coreos-metadata[2031]: Nov 05 23:43:21.133 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Nov 5 23:43:21.136834 coreos-metadata[2031]: Nov 05 23:43:21.136 INFO Fetch successful Nov 5 23:43:21.144723 unknown[2031]: wrote ssh authorized keys file for user: core Nov 5 23:43:21.216670 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. 
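[Editor's note] The coreos-metadata entries above follow the same IMDSv2 pattern already seen during the Ignition stages: a PUT to http://169.254.169.254/latest/api/token to obtain a session token, then GETs against the individual meta-data paths with the token attached, treating a 404 (as for the ipv6 key here) as "not present" rather than a failure. A compact urllib sketch of that flow using the same 2021-01-03 endpoints as the log; it only works from inside an EC2 instance.

#!/usr/bin/env python3
"""Minimal IMDSv2 fetch mirroring the token PUT and metadata GETs seen in the log."""
import urllib.error
import urllib.request

IMDS = "http://169.254.169.254"

def imds_token(ttl_seconds: int = 21600) -> str:
    # Session token request: PUT with the TTL header, body is the token.
    req = urllib.request.Request(
        f"{IMDS}/latest/api/token",
        method="PUT",
        headers={"X-aws-ec2-metadata-token-ttl-seconds": str(ttl_seconds)},
    )
    with urllib.request.urlopen(req, timeout=2) as resp:
        return resp.read().decode()

def imds_get(path: str, token: str) -> str | None:
    req = urllib.request.Request(
        f"{IMDS}/2021-01-03/meta-data/{path}",
        headers={"X-aws-ec2-metadata-token": token},
    )
    try:
        with urllib.request.urlopen(req, timeout=2) as resp:
            return resp.read().decode()
    except urllib.error.HTTPError as err:
        if err.code == 404:          # e.g. the "ipv6" key on instances without one
            return None
        raise

if __name__ == "__main__":
    token = imds_token()
    for key in ("instance-id", "instance-type", "local-ipv4", "ipv6"):
        print(key, "=", imds_get(key, token))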
Nov 5 23:43:21.224970 containerd[1978]: time="2025-11-05T23:43:21Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Nov 5 23:43:21.229227 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Nov 5 23:43:21.234390 containerd[1978]: time="2025-11-05T23:43:21.233987411Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5 Nov 5 23:43:21.294208 update-ssh-keys[2059]: Updated "/home/core/.ssh/authorized_keys" Nov 5 23:43:21.299002 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Nov 5 23:43:21.306643 systemd[1]: Finished sshkeys.service. Nov 5 23:43:21.339796 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Nov 5 23:43:21.346073 dbus-daemon[1864]: [system] Successfully activated service 'org.freedesktop.hostname1' Nov 5 23:43:21.349046 dbus-daemon[1864]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.7' (uid=0 pid=2005 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Nov 5 23:43:21.353564 containerd[1978]: time="2025-11-05T23:43:21.351798539Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="17.532µs" Nov 5 23:43:21.353564 containerd[1978]: time="2025-11-05T23:43:21.351875003Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Nov 5 23:43:21.353564 containerd[1978]: time="2025-11-05T23:43:21.352000187Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Nov 5 23:43:21.353564 containerd[1978]: time="2025-11-05T23:43:21.352361555Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Nov 5 23:43:21.353564 containerd[1978]: time="2025-11-05T23:43:21.352409363Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Nov 5 23:43:21.353564 containerd[1978]: time="2025-11-05T23:43:21.352473695Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Nov 5 23:43:21.357635 containerd[1978]: time="2025-11-05T23:43:21.356840723Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Nov 5 23:43:21.357635 containerd[1978]: time="2025-11-05T23:43:21.356902943Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Nov 5 23:43:21.357635 containerd[1978]: time="2025-11-05T23:43:21.357314039Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Nov 5 23:43:21.357635 containerd[1978]: time="2025-11-05T23:43:21.357350327Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Nov 5 23:43:21.357635 containerd[1978]: time="2025-11-05T23:43:21.357379763Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" 
id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Nov 5 23:43:21.357635 containerd[1978]: time="2025-11-05T23:43:21.357403379Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Nov 5 23:43:21.360471 containerd[1978]: time="2025-11-05T23:43:21.359588051Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Nov 5 23:43:21.365577 systemd[1]: Starting polkit.service - Authorization Manager... Nov 5 23:43:21.368268 containerd[1978]: time="2025-11-05T23:43:21.366606755Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Nov 5 23:43:21.368268 containerd[1978]: time="2025-11-05T23:43:21.366716939Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Nov 5 23:43:21.368268 containerd[1978]: time="2025-11-05T23:43:21.366742919Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Nov 5 23:43:21.368268 containerd[1978]: time="2025-11-05T23:43:21.366820343Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Nov 5 23:43:21.368268 containerd[1978]: time="2025-11-05T23:43:21.367758455Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Nov 5 23:43:21.368268 containerd[1978]: time="2025-11-05T23:43:21.367947683Z" level=info msg="metadata content store policy set" policy=shared Nov 5 23:43:21.393123 containerd[1978]: time="2025-11-05T23:43:21.393022475Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Nov 5 23:43:21.393549 containerd[1978]: time="2025-11-05T23:43:21.393433943Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Nov 5 23:43:21.393549 containerd[1978]: time="2025-11-05T23:43:21.393541595Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Nov 5 23:43:21.393678 containerd[1978]: time="2025-11-05T23:43:21.393596735Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Nov 5 23:43:21.393678 containerd[1978]: time="2025-11-05T23:43:21.393633143Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Nov 5 23:43:21.393761 containerd[1978]: time="2025-11-05T23:43:21.393676799Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Nov 5 23:43:21.393761 containerd[1978]: time="2025-11-05T23:43:21.393719627Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Nov 5 23:43:21.393901 containerd[1978]: time="2025-11-05T23:43:21.393778439Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Nov 5 23:43:21.393901 containerd[1978]: time="2025-11-05T23:43:21.393813659Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Nov 5 23:43:21.393901 containerd[1978]: time="2025-11-05T23:43:21.393852899Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Nov 5 23:43:21.394018 containerd[1978]: 
time="2025-11-05T23:43:21.393932375Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Nov 5 23:43:21.394018 containerd[1978]: time="2025-11-05T23:43:21.393987443Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Nov 5 23:43:21.394361 containerd[1978]: time="2025-11-05T23:43:21.394285283Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Nov 5 23:43:21.394447 containerd[1978]: time="2025-11-05T23:43:21.394363055Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Nov 5 23:43:21.394495 containerd[1978]: time="2025-11-05T23:43:21.394424363Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Nov 5 23:43:21.394495 containerd[1978]: time="2025-11-05T23:43:21.394470359Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Nov 5 23:43:21.409872 containerd[1978]: time="2025-11-05T23:43:21.394510571Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Nov 5 23:43:21.410483 containerd[1978]: time="2025-11-05T23:43:21.409835591Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Nov 5 23:43:21.413256 containerd[1978]: time="2025-11-05T23:43:21.412930764Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Nov 5 23:43:21.417558 containerd[1978]: time="2025-11-05T23:43:21.416396976Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Nov 5 23:43:21.417558 containerd[1978]: time="2025-11-05T23:43:21.416962260Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Nov 5 23:43:21.418807 containerd[1978]: time="2025-11-05T23:43:21.418669908Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Nov 5 23:43:21.425035 containerd[1978]: time="2025-11-05T23:43:21.424910052Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Nov 5 23:43:21.425926 containerd[1978]: time="2025-11-05T23:43:21.425464188Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Nov 5 23:43:21.425926 containerd[1978]: time="2025-11-05T23:43:21.425576784Z" level=info msg="Start snapshots syncer" Nov 5 23:43:21.425926 containerd[1978]: time="2025-11-05T23:43:21.425644404Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Nov 5 23:43:21.426430 containerd[1978]: time="2025-11-05T23:43:21.426314268Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Nov 5 23:43:21.440192 containerd[1978]: time="2025-11-05T23:43:21.426496044Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Nov 5 23:43:21.442396 containerd[1978]: time="2025-11-05T23:43:21.441055392Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Nov 5 23:43:21.444256 containerd[1978]: time="2025-11-05T23:43:21.443027460Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Nov 5 23:43:21.444256 containerd[1978]: time="2025-11-05T23:43:21.443132856Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Nov 5 23:43:21.444256 containerd[1978]: time="2025-11-05T23:43:21.443168484Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Nov 5 23:43:21.444256 containerd[1978]: time="2025-11-05T23:43:21.443203284Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Nov 5 23:43:21.444256 containerd[1978]: time="2025-11-05T23:43:21.443238660Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Nov 5 23:43:21.444256 containerd[1978]: time="2025-11-05T23:43:21.443270544Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Nov 5 23:43:21.444256 containerd[1978]: time="2025-11-05T23:43:21.443302320Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Nov 5 23:43:21.444256 containerd[1978]: time="2025-11-05T23:43:21.443364708Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Nov 5 23:43:21.444256 containerd[1978]: 
time="2025-11-05T23:43:21.443396904Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Nov 5 23:43:21.444256 containerd[1978]: time="2025-11-05T23:43:21.443429208Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Nov 5 23:43:21.444256 containerd[1978]: time="2025-11-05T23:43:21.443494752Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Nov 5 23:43:21.444256 containerd[1978]: time="2025-11-05T23:43:21.443620620Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Nov 5 23:43:21.444256 containerd[1978]: time="2025-11-05T23:43:21.443658132Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Nov 5 23:43:21.445006 containerd[1978]: time="2025-11-05T23:43:21.443688876Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Nov 5 23:43:21.445006 containerd[1978]: time="2025-11-05T23:43:21.443713992Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Nov 5 23:43:21.445006 containerd[1978]: time="2025-11-05T23:43:21.443756148Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Nov 5 23:43:21.445006 containerd[1978]: time="2025-11-05T23:43:21.443789700Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Nov 5 23:43:21.445006 containerd[1978]: time="2025-11-05T23:43:21.443990436Z" level=info msg="runtime interface created" Nov 5 23:43:21.445006 containerd[1978]: time="2025-11-05T23:43:21.444016380Z" level=info msg="created NRI interface" Nov 5 23:43:21.445006 containerd[1978]: time="2025-11-05T23:43:21.444048384Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Nov 5 23:43:21.445006 containerd[1978]: time="2025-11-05T23:43:21.444092676Z" level=info msg="Connect containerd service" Nov 5 23:43:21.445006 containerd[1978]: time="2025-11-05T23:43:21.444185856Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Nov 5 23:43:21.449385 containerd[1978]: time="2025-11-05T23:43:21.446223960Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 5 23:43:21.529930 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 5 23:43:21.649585 systemd-logind[1881]: Watching system buttons on /dev/input/event1 (Sleep Button) Nov 5 23:43:21.702657 systemd-coredump[2042]: Process 1870 (ntpd) of user 0 dumped core. Module libnss_usrfiles.so.2 without build-id. Module libgcc_s.so.1 without build-id. Module libc.so.6 without build-id. Module libcrypto.so.3 without build-id. Module libm.so.6 without build-id. Module libcap.so.2 without build-id. Module ntpd without build-id. 
Stack trace of thread 1870: #0 0x0000aaaab37a0b5c n/a (ntpd + 0x60b5c) #1 0x0000aaaab374fe60 n/a (ntpd + 0xfe60) #2 0x0000aaaab3750240 n/a (ntpd + 0x10240) #3 0x0000aaaab374be14 n/a (ntpd + 0xbe14) #4 0x0000aaaab374d3ec n/a (ntpd + 0xd3ec) #5 0x0000aaaab3755a38 n/a (ntpd + 0x15a38) #6 0x0000aaaab374738c n/a (ntpd + 0x738c) #7 0x0000ffff8c1b2034 n/a (libc.so.6 + 0x22034) #8 0x0000ffff8c1b2118 __libc_start_main (libc.so.6 + 0x22118) #9 0x0000aaaab37473f0 n/a (ntpd + 0x73f0) ELF object binary architecture: AARCH64 Nov 5 23:43:21.727341 systemd[1]: ntpd.service: Main process exited, code=dumped, status=11/SEGV Nov 5 23:43:21.728261 systemd[1]: ntpd.service: Failed with result 'core-dump'. Nov 5 23:43:21.737780 systemd[1]: systemd-coredump@0-2041-0.service: Deactivated successfully. Nov 5 23:43:21.770279 systemd-logind[1881]: Watching system buttons on /dev/input/event0 (Power Button) Nov 5 23:43:21.889739 systemd[1]: ntpd.service: Scheduled restart job, restart counter is at 1. Nov 5 23:43:21.895230 systemd[1]: Started ntpd.service - Network Time Service. Nov 5 23:43:21.902407 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 5 23:43:21.932752 systemd-networkd[1825]: eth0: Gained IPv6LL Nov 5 23:43:21.941159 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Nov 5 23:43:21.948700 systemd[1]: Reached target network-online.target - Network is Online. Nov 5 23:43:21.955138 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Nov 5 23:43:21.963293 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 5 23:43:21.976310 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Nov 5 23:43:22.051637 containerd[1978]: time="2025-11-05T23:43:22.050717951Z" level=info msg="Start subscribing containerd event" Nov 5 23:43:22.051637 containerd[1978]: time="2025-11-05T23:43:22.050845943Z" level=info msg="Start recovering state" Nov 5 23:43:22.051637 containerd[1978]: time="2025-11-05T23:43:22.051015011Z" level=info msg="Start event monitor" Nov 5 23:43:22.051637 containerd[1978]: time="2025-11-05T23:43:22.051041459Z" level=info msg="Start cni network conf syncer for default" Nov 5 23:43:22.051637 containerd[1978]: time="2025-11-05T23:43:22.051059783Z" level=info msg="Start streaming server" Nov 5 23:43:22.051637 containerd[1978]: time="2025-11-05T23:43:22.051081059Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Nov 5 23:43:22.051637 containerd[1978]: time="2025-11-05T23:43:22.051097811Z" level=info msg="runtime interface starting up..." Nov 5 23:43:22.051637 containerd[1978]: time="2025-11-05T23:43:22.051113243Z" level=info msg="starting plugins..." Nov 5 23:43:22.051637 containerd[1978]: time="2025-11-05T23:43:22.051143999Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Nov 5 23:43:22.062214 containerd[1978]: time="2025-11-05T23:43:22.053210267Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Nov 5 23:43:22.062214 containerd[1978]: time="2025-11-05T23:43:22.053331047Z" level=info msg=serving... address=/run/containerd/containerd.sock Nov 5 23:43:22.053619 systemd[1]: Started containerd.service - containerd container runtime. 
Nov 5 23:43:22.064278 containerd[1978]: time="2025-11-05T23:43:22.064149035Z" level=info msg="containerd successfully booted in 0.847086s" Nov 5 23:43:22.170141 amazon-ssm-agent[2110]: Initializing new seelog logger Nov 5 23:43:22.171580 amazon-ssm-agent[2110]: New Seelog Logger Creation Complete Nov 5 23:43:22.173552 amazon-ssm-agent[2110]: 2025/11/05 23:43:22 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Nov 5 23:43:22.173552 amazon-ssm-agent[2110]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Nov 5 23:43:22.173552 amazon-ssm-agent[2110]: 2025/11/05 23:43:22 processing appconfig overrides Nov 5 23:43:22.175007 amazon-ssm-agent[2110]: 2025/11/05 23:43:22 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Nov 5 23:43:22.178568 amazon-ssm-agent[2110]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Nov 5 23:43:22.178568 amazon-ssm-agent[2110]: 2025/11/05 23:43:22 processing appconfig overrides Nov 5 23:43:22.178568 amazon-ssm-agent[2110]: 2025/11/05 23:43:22 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Nov 5 23:43:22.178568 amazon-ssm-agent[2110]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Nov 5 23:43:22.178568 amazon-ssm-agent[2110]: 2025/11/05 23:43:22 processing appconfig overrides Nov 5 23:43:22.179807 amazon-ssm-agent[2110]: 2025-11-05 23:43:22.1747 INFO Proxy environment variables: Nov 5 23:43:22.185412 amazon-ssm-agent[2110]: 2025/11/05 23:43:22 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Nov 5 23:43:22.185884 amazon-ssm-agent[2110]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Nov 5 23:43:22.186132 amazon-ssm-agent[2110]: 2025/11/05 23:43:22 processing appconfig overrides Nov 5 23:43:22.193713 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Nov 5 23:43:22.282594 amazon-ssm-agent[2110]: 2025-11-05 23:43:22.1749 INFO no_proxy: Nov 5 23:43:22.381999 amazon-ssm-agent[2110]: 2025-11-05 23:43:22.1749 INFO https_proxy: Nov 5 23:43:22.483578 amazon-ssm-agent[2110]: 2025-11-05 23:43:22.1749 INFO http_proxy: Nov 5 23:43:22.531880 tar[1899]: linux-arm64/README.md Nov 5 23:43:22.556483 ntpd[2103]: ntpd 4.2.8p18@1.4062-o Wed Nov 5 21:33:09 UTC 2025 (1): Starting Nov 5 23:43:22.561050 ntpd[2103]: 5 Nov 23:43:22 ntpd[2103]: ntpd 4.2.8p18@1.4062-o Wed Nov 5 21:33:09 UTC 2025 (1): Starting Nov 5 23:43:22.561050 ntpd[2103]: 5 Nov 23:43:22 ntpd[2103]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Nov 5 23:43:22.561050 ntpd[2103]: 5 Nov 23:43:22 ntpd[2103]: ---------------------------------------------------- Nov 5 23:43:22.561050 ntpd[2103]: 5 Nov 23:43:22 ntpd[2103]: ntp-4 is maintained by Network Time Foundation, Nov 5 23:43:22.561050 ntpd[2103]: 5 Nov 23:43:22 ntpd[2103]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Nov 5 23:43:22.561050 ntpd[2103]: 5 Nov 23:43:22 ntpd[2103]: corporation. Support and training for ntp-4 are Nov 5 23:43:22.561050 ntpd[2103]: 5 Nov 23:43:22 ntpd[2103]: available at https://www.nwtime.org/support Nov 5 23:43:22.561050 ntpd[2103]: 5 Nov 23:43:22 ntpd[2103]: ---------------------------------------------------- Nov 5 23:43:22.556621 ntpd[2103]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Nov 5 23:43:22.556642 ntpd[2103]: ---------------------------------------------------- Nov 5 23:43:22.556659 ntpd[2103]: ntp-4 is maintained by Network Time Foundation, Nov 5 23:43:22.556675 ntpd[2103]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Nov 5 23:43:22.556692 ntpd[2103]: corporation. 
Support and training for ntp-4 are Nov 5 23:43:22.556707 ntpd[2103]: available at https://www.nwtime.org/support Nov 5 23:43:22.556724 ntpd[2103]: ---------------------------------------------------- Nov 5 23:43:22.566873 ntpd[2103]: proto: precision = 0.096 usec (-23) Nov 5 23:43:22.569723 ntpd[2103]: 5 Nov 23:43:22 ntpd[2103]: proto: precision = 0.096 usec (-23) Nov 5 23:43:22.569723 ntpd[2103]: 5 Nov 23:43:22 ntpd[2103]: basedate set to 2025-10-24 Nov 5 23:43:22.569723 ntpd[2103]: 5 Nov 23:43:22 ntpd[2103]: gps base set to 2025-10-26 (week 2390) Nov 5 23:43:22.569723 ntpd[2103]: 5 Nov 23:43:22 ntpd[2103]: Listen and drop on 0 v6wildcard [::]:123 Nov 5 23:43:22.569723 ntpd[2103]: 5 Nov 23:43:22 ntpd[2103]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Nov 5 23:43:22.569723 ntpd[2103]: 5 Nov 23:43:22 ntpd[2103]: Listen normally on 2 lo 127.0.0.1:123 Nov 5 23:43:22.569723 ntpd[2103]: 5 Nov 23:43:22 ntpd[2103]: Listen normally on 3 eth0 172.31.26.188:123 Nov 5 23:43:22.569723 ntpd[2103]: 5 Nov 23:43:22 ntpd[2103]: Listen normally on 4 lo [::1]:123 Nov 5 23:43:22.569723 ntpd[2103]: 5 Nov 23:43:22 ntpd[2103]: Listen normally on 5 eth0 [fe80::40f:6cff:fef9:bb6f%2]:123 Nov 5 23:43:22.569723 ntpd[2103]: 5 Nov 23:43:22 ntpd[2103]: Listening on routing socket on fd #22 for interface updates Nov 5 23:43:22.567264 ntpd[2103]: basedate set to 2025-10-24 Nov 5 23:43:22.567306 ntpd[2103]: gps base set to 2025-10-26 (week 2390) Nov 5 23:43:22.567449 ntpd[2103]: Listen and drop on 0 v6wildcard [::]:123 Nov 5 23:43:22.567497 ntpd[2103]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Nov 5 23:43:22.567860 ntpd[2103]: Listen normally on 2 lo 127.0.0.1:123 Nov 5 23:43:22.567910 ntpd[2103]: Listen normally on 3 eth0 172.31.26.188:123 Nov 5 23:43:22.567958 ntpd[2103]: Listen normally on 4 lo [::1]:123 Nov 5 23:43:22.568006 ntpd[2103]: Listen normally on 5 eth0 [fe80::40f:6cff:fef9:bb6f%2]:123 Nov 5 23:43:22.568048 ntpd[2103]: Listening on routing socket on fd #22 for interface updates Nov 5 23:43:22.593636 amazon-ssm-agent[2110]: 2025-11-05 23:43:22.1767 INFO Checking if agent identity type OnPrem can be assumed Nov 5 23:43:22.601649 ntpd[2103]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Nov 5 23:43:22.601846 ntpd[2103]: 5 Nov 23:43:22 ntpd[2103]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Nov 5 23:43:22.607174 ntpd[2103]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Nov 5 23:43:22.608684 ntpd[2103]: 5 Nov 23:43:22 ntpd[2103]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Nov 5 23:43:22.659632 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. 
Nov 5 23:43:22.693633 amazon-ssm-agent[2110]: 2025-11-05 23:43:22.1768 INFO Checking if agent identity type EC2 can be assumed Nov 5 23:43:22.793409 amazon-ssm-agent[2110]: 2025-11-05 23:43:22.3407 INFO Agent will take identity from EC2 Nov 5 23:43:22.898539 amazon-ssm-agent[2110]: 2025-11-05 23:43:22.3472 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.3.0.0 Nov 5 23:43:22.995945 amazon-ssm-agent[2110]: 2025-11-05 23:43:22.3472 INFO [amazon-ssm-agent] OS: linux, Arch: arm64 Nov 5 23:43:23.078580 polkitd[2072]: Started polkitd version 126 Nov 5 23:43:23.098553 amazon-ssm-agent[2110]: 2025-11-05 23:43:22.3472 INFO [amazon-ssm-agent] Starting Core Agent Nov 5 23:43:23.100355 polkitd[2072]: Loading rules from directory /etc/polkit-1/rules.d Nov 5 23:43:23.103376 polkitd[2072]: Loading rules from directory /run/polkit-1/rules.d Nov 5 23:43:23.103656 polkitd[2072]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Nov 5 23:43:23.104330 polkitd[2072]: Loading rules from directory /usr/local/share/polkit-1/rules.d Nov 5 23:43:23.104388 polkitd[2072]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Nov 5 23:43:23.104471 polkitd[2072]: Loading rules from directory /usr/share/polkit-1/rules.d Nov 5 23:43:23.109932 polkitd[2072]: Finished loading, compiling and executing 2 rules Nov 5 23:43:23.112405 systemd[1]: Started polkit.service - Authorization Manager. Nov 5 23:43:23.119957 dbus-daemon[1864]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Nov 5 23:43:23.121039 polkitd[2072]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Nov 5 23:43:23.159147 systemd-resolved[1769]: System hostname changed to 'ip-172-31-26-188'. Nov 5 23:43:23.159859 systemd-hostnamed[2005]: Hostname set to (transient) Nov 5 23:43:23.198150 amazon-ssm-agent[2110]: 2025-11-05 23:43:22.3473 INFO [amazon-ssm-agent] Registrar detected. Attempting registration Nov 5 23:43:23.298644 amazon-ssm-agent[2110]: 2025-11-05 23:43:22.3473 INFO [Registrar] Starting registrar module Nov 5 23:43:23.398760 amazon-ssm-agent[2110]: 2025-11-05 23:43:22.3529 INFO [EC2Identity] Checking disk for registration info Nov 5 23:43:23.499131 amazon-ssm-agent[2110]: 2025-11-05 23:43:22.3530 INFO [EC2Identity] No registration info found for ec2 instance, attempting registration Nov 5 23:43:23.599463 amazon-ssm-agent[2110]: 2025-11-05 23:43:22.3530 INFO [EC2Identity] Generating registration keypair Nov 5 23:43:23.772590 sshd_keygen[1935]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Nov 5 23:43:23.853360 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Nov 5 23:43:23.862900 systemd[1]: Starting issuegen.service - Generate /run/issue... Nov 5 23:43:23.872041 systemd[1]: Started sshd@0-172.31.26.188:22-147.75.109.163:59796.service - OpenSSH per-connection server daemon (147.75.109.163:59796). Nov 5 23:43:23.900590 amazon-ssm-agent[2110]: 2025-11-05 23:43:23.8990 INFO [EC2Identity] Checking write access before registering Nov 5 23:43:23.916786 systemd[1]: issuegen.service: Deactivated successfully. Nov 5 23:43:23.917299 systemd[1]: Finished issuegen.service - Generate /run/issue. Nov 5 23:43:23.930826 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Nov 5 23:43:23.960386 amazon-ssm-agent[2110]: 2025/11/05 23:43:23 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. 
Nov 5 23:43:23.966027 amazon-ssm-agent[2110]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Nov 5 23:43:23.966027 amazon-ssm-agent[2110]: 2025/11/05 23:43:23 processing appconfig overrides Nov 5 23:43:23.989718 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Nov 5 23:43:24.000358 amazon-ssm-agent[2110]: 2025-11-05 23:43:23.9000 INFO [EC2Identity] Registering EC2 instance with Systems Manager Nov 5 23:43:24.001011 systemd[1]: Started getty@tty1.service - Getty on tty1. Nov 5 23:43:24.011173 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Nov 5 23:43:24.014230 systemd[1]: Reached target getty.target - Login Prompts. Nov 5 23:43:24.028373 amazon-ssm-agent[2110]: 2025-11-05 23:43:23.9596 INFO [EC2Identity] EC2 registration was successful. Nov 5 23:43:24.028587 amazon-ssm-agent[2110]: 2025-11-05 23:43:23.9601 INFO [amazon-ssm-agent] Registration attempted. Resuming core agent startup. Nov 5 23:43:24.028706 amazon-ssm-agent[2110]: 2025-11-05 23:43:23.9602 INFO [CredentialRefresher] credentialRefresher has started Nov 5 23:43:24.029896 amazon-ssm-agent[2110]: 2025-11-05 23:43:23.9602 INFO [CredentialRefresher] Starting credentials refresher loop Nov 5 23:43:24.029896 amazon-ssm-agent[2110]: 2025-11-05 23:43:24.0275 INFO EC2RoleProvider Successfully connected with instance profile role credentials Nov 5 23:43:24.029896 amazon-ssm-agent[2110]: 2025-11-05 23:43:24.0279 INFO [CredentialRefresher] Credentials ready Nov 5 23:43:24.099916 amazon-ssm-agent[2110]: 2025-11-05 23:43:24.0298 INFO [CredentialRefresher] Next credential rotation will be in 29.999961797483333 minutes Nov 5 23:43:24.165749 sshd[2230]: Accepted publickey for core from 147.75.109.163 port 59796 ssh2: RSA SHA256:RMubiTTBDsRj6wMnaDgb94uFku34NWf7aFjJruQworw Nov 5 23:43:24.173795 sshd-session[2230]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 23:43:24.213634 systemd-logind[1881]: New session 1 of user core. Nov 5 23:43:24.219243 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Nov 5 23:43:24.224250 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Nov 5 23:43:24.229243 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 5 23:43:24.236946 systemd[1]: Reached target multi-user.target - Multi-User System. Nov 5 23:43:24.252164 (kubelet)[2246]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 5 23:43:24.278586 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Nov 5 23:43:24.289021 systemd[1]: Starting user@500.service - User Manager for UID 500... Nov 5 23:43:24.311548 (systemd)[2249]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Nov 5 23:43:24.318172 systemd-logind[1881]: New session c1 of user core. Nov 5 23:43:24.619344 systemd[2249]: Queued start job for default target default.target. Nov 5 23:43:24.627704 systemd[2249]: Created slice app.slice - User Application Slice. Nov 5 23:43:24.627774 systemd[2249]: Reached target paths.target - Paths. Nov 5 23:43:24.627863 systemd[2249]: Reached target timers.target - Timers. Nov 5 23:43:24.630637 systemd[2249]: Starting dbus.socket - D-Bus User Message Bus Socket... Nov 5 23:43:24.665054 systemd[2249]: Listening on dbus.socket - D-Bus User Message Bus Socket. Nov 5 23:43:24.665326 systemd[2249]: Reached target sockets.target - Sockets. 
Nov 5 23:43:24.665440 systemd[2249]: Reached target basic.target - Basic System. Nov 5 23:43:24.665578 systemd[2249]: Reached target default.target - Main User Target. Nov 5 23:43:24.665646 systemd[2249]: Startup finished in 333ms. Nov 5 23:43:24.666014 systemd[1]: Started user@500.service - User Manager for UID 500. Nov 5 23:43:24.677968 systemd[1]: Started session-1.scope - Session 1 of User core. Nov 5 23:43:24.682337 systemd[1]: Startup finished in 3.852s (kernel) + 9.284s (initrd) + 10.764s (userspace) = 23.901s. Nov 5 23:43:24.845266 systemd[1]: Started sshd@1-172.31.26.188:22-147.75.109.163:59798.service - OpenSSH per-connection server daemon (147.75.109.163:59798). Nov 5 23:43:25.052897 sshd[2268]: Accepted publickey for core from 147.75.109.163 port 59798 ssh2: RSA SHA256:RMubiTTBDsRj6wMnaDgb94uFku34NWf7aFjJruQworw Nov 5 23:43:25.056626 sshd-session[2268]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 23:43:25.072620 systemd-logind[1881]: New session 2 of user core. Nov 5 23:43:25.078110 amazon-ssm-agent[2110]: 2025-11-05 23:43:25.0779 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Nov 5 23:43:25.079843 systemd[1]: Started session-2.scope - Session 2 of User core. Nov 5 23:43:25.180074 amazon-ssm-agent[2110]: 2025-11-05 23:43:25.0820 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2273) started Nov 5 23:43:25.280955 sshd[2274]: Connection closed by 147.75.109.163 port 59798 Nov 5 23:43:25.282509 amazon-ssm-agent[2110]: 2025-11-05 23:43:25.0821 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Nov 5 23:43:25.283012 sshd-session[2268]: pam_unix(sshd:session): session closed for user core Nov 5 23:43:25.294585 systemd[1]: sshd@1-172.31.26.188:22-147.75.109.163:59798.service: Deactivated successfully. Nov 5 23:43:25.300458 systemd[1]: session-2.scope: Deactivated successfully. Nov 5 23:43:25.311920 systemd-logind[1881]: Session 2 logged out. Waiting for processes to exit. Nov 5 23:43:25.332112 systemd[1]: Started sshd@2-172.31.26.188:22-147.75.109.163:59814.service - OpenSSH per-connection server daemon (147.75.109.163:59814). Nov 5 23:43:25.334779 systemd-logind[1881]: Removed session 2. Nov 5 23:43:25.447641 kubelet[2246]: E1105 23:43:25.447171 2246 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 5 23:43:25.457797 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 5 23:43:25.458103 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 5 23:43:25.459779 systemd[1]: kubelet.service: Consumed 1.546s CPU time, 260.9M memory peak. Nov 5 23:43:25.566712 sshd[2285]: Accepted publickey for core from 147.75.109.163 port 59814 ssh2: RSA SHA256:RMubiTTBDsRj6wMnaDgb94uFku34NWf7aFjJruQworw Nov 5 23:43:25.569483 sshd-session[2285]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 23:43:25.577062 systemd-logind[1881]: New session 3 of user core. Nov 5 23:43:25.585777 systemd[1]: Started session-3.scope - Session 3 of User core. 
Nov 5 23:43:25.703860 sshd[2295]: Connection closed by 147.75.109.163 port 59814 Nov 5 23:43:25.704708 sshd-session[2285]: pam_unix(sshd:session): session closed for user core Nov 5 23:43:25.712702 systemd[1]: sshd@2-172.31.26.188:22-147.75.109.163:59814.service: Deactivated successfully. Nov 5 23:43:25.716969 systemd[1]: session-3.scope: Deactivated successfully. Nov 5 23:43:25.718508 systemd-logind[1881]: Session 3 logged out. Waiting for processes to exit. Nov 5 23:43:25.721342 systemd-logind[1881]: Removed session 3. Nov 5 23:43:25.754665 systemd[1]: Started sshd@3-172.31.26.188:22-147.75.109.163:59828.service - OpenSSH per-connection server daemon (147.75.109.163:59828). Nov 5 23:43:25.950718 sshd[2301]: Accepted publickey for core from 147.75.109.163 port 59828 ssh2: RSA SHA256:RMubiTTBDsRj6wMnaDgb94uFku34NWf7aFjJruQworw Nov 5 23:43:25.953441 sshd-session[2301]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 23:43:25.961426 systemd-logind[1881]: New session 4 of user core. Nov 5 23:43:25.966784 systemd[1]: Started session-4.scope - Session 4 of User core. Nov 5 23:43:26.092991 sshd[2304]: Connection closed by 147.75.109.163 port 59828 Nov 5 23:43:26.092128 sshd-session[2301]: pam_unix(sshd:session): session closed for user core Nov 5 23:43:26.099197 systemd[1]: sshd@3-172.31.26.188:22-147.75.109.163:59828.service: Deactivated successfully. Nov 5 23:43:26.104934 systemd[1]: session-4.scope: Deactivated successfully. Nov 5 23:43:26.108923 systemd-logind[1881]: Session 4 logged out. Waiting for processes to exit. Nov 5 23:43:26.111851 systemd-logind[1881]: Removed session 4. Nov 5 23:43:26.129640 systemd[1]: Started sshd@4-172.31.26.188:22-147.75.109.163:59844.service - OpenSSH per-connection server daemon (147.75.109.163:59844). Nov 5 23:43:26.320439 sshd[2310]: Accepted publickey for core from 147.75.109.163 port 59844 ssh2: RSA SHA256:RMubiTTBDsRj6wMnaDgb94uFku34NWf7aFjJruQworw Nov 5 23:43:26.322738 sshd-session[2310]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 23:43:26.330571 systemd-logind[1881]: New session 5 of user core. Nov 5 23:43:26.343773 systemd[1]: Started session-5.scope - Session 5 of User core. Nov 5 23:43:26.458236 sudo[2314]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Nov 5 23:43:26.458875 sudo[2314]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 5 23:43:26.477890 sudo[2314]: pam_unix(sudo:session): session closed for user root Nov 5 23:43:26.501175 sshd[2313]: Connection closed by 147.75.109.163 port 59844 Nov 5 23:43:26.502238 sshd-session[2310]: pam_unix(sshd:session): session closed for user core Nov 5 23:43:26.509830 systemd[1]: sshd@4-172.31.26.188:22-147.75.109.163:59844.service: Deactivated successfully. Nov 5 23:43:26.513454 systemd[1]: session-5.scope: Deactivated successfully. Nov 5 23:43:26.516868 systemd-logind[1881]: Session 5 logged out. Waiting for processes to exit. Nov 5 23:43:26.520191 systemd-logind[1881]: Removed session 5. Nov 5 23:43:26.534489 systemd[1]: Started sshd@5-172.31.26.188:22-147.75.109.163:59860.service - OpenSSH per-connection server daemon (147.75.109.163:59860). 
Nov 5 23:43:26.734209 sshd[2320]: Accepted publickey for core from 147.75.109.163 port 59860 ssh2: RSA SHA256:RMubiTTBDsRj6wMnaDgb94uFku34NWf7aFjJruQworw Nov 5 23:43:26.737071 sshd-session[2320]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 23:43:26.744431 systemd-logind[1881]: New session 6 of user core. Nov 5 23:43:26.753798 systemd[1]: Started session-6.scope - Session 6 of User core. Nov 5 23:43:26.856154 sudo[2325]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Nov 5 23:43:26.856852 sudo[2325]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 5 23:43:26.866204 sudo[2325]: pam_unix(sudo:session): session closed for user root Nov 5 23:43:26.875946 sudo[2324]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Nov 5 23:43:26.876568 sudo[2324]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 5 23:43:26.893635 systemd[1]: Starting audit-rules.service - Load Audit Rules... Nov 5 23:43:26.957046 augenrules[2347]: No rules Nov 5 23:43:26.959481 systemd[1]: audit-rules.service: Deactivated successfully. Nov 5 23:43:26.961648 systemd[1]: Finished audit-rules.service - Load Audit Rules. Nov 5 23:43:26.963802 sudo[2324]: pam_unix(sudo:session): session closed for user root Nov 5 23:43:26.986978 sshd[2323]: Connection closed by 147.75.109.163 port 59860 Nov 5 23:43:26.987744 sshd-session[2320]: pam_unix(sshd:session): session closed for user core Nov 5 23:43:26.995178 systemd[1]: sshd@5-172.31.26.188:22-147.75.109.163:59860.service: Deactivated successfully. Nov 5 23:43:26.998366 systemd[1]: session-6.scope: Deactivated successfully. Nov 5 23:43:27.001093 systemd-logind[1881]: Session 6 logged out. Waiting for processes to exit. Nov 5 23:43:27.004424 systemd-logind[1881]: Removed session 6. Nov 5 23:43:27.020369 systemd[1]: Started sshd@6-172.31.26.188:22-147.75.109.163:59876.service - OpenSSH per-connection server daemon (147.75.109.163:59876). Nov 5 23:43:27.211929 sshd[2356]: Accepted publickey for core from 147.75.109.163 port 59876 ssh2: RSA SHA256:RMubiTTBDsRj6wMnaDgb94uFku34NWf7aFjJruQworw Nov 5 23:43:27.214346 sshd-session[2356]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 23:43:27.223960 systemd-logind[1881]: New session 7 of user core. Nov 5 23:43:27.233812 systemd[1]: Started session-7.scope - Session 7 of User core. Nov 5 23:43:27.334649 sudo[2360]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Nov 5 23:43:27.335258 sudo[2360]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 5 23:43:27.876112 systemd[1]: Starting docker.service - Docker Application Container Engine... 
Nov 5 23:43:27.895059 (dockerd)[2377]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Nov 5 23:43:28.276989 dockerd[2377]: time="2025-11-05T23:43:28.276884670Z" level=info msg="Starting up" Nov 5 23:43:28.281711 dockerd[2377]: time="2025-11-05T23:43:28.281643126Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Nov 5 23:43:28.302735 dockerd[2377]: time="2025-11-05T23:43:28.302665782Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Nov 5 23:43:28.346444 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport3882144854-merged.mount: Deactivated successfully. Nov 5 23:43:28.441634 dockerd[2377]: time="2025-11-05T23:43:28.441577446Z" level=info msg="Loading containers: start." Nov 5 23:43:28.456614 kernel: Initializing XFRM netlink socket Nov 5 23:43:28.792882 (udev-worker)[2401]: Network interface NamePolicy= disabled on kernel command line. Nov 5 23:43:28.861068 systemd-networkd[1825]: docker0: Link UP Nov 5 23:43:28.874733 dockerd[2377]: time="2025-11-05T23:43:28.874505673Z" level=info msg="Loading containers: done." Nov 5 23:43:28.900357 dockerd[2377]: time="2025-11-05T23:43:28.900279465Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Nov 5 23:43:28.900620 dockerd[2377]: time="2025-11-05T23:43:28.900399117Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Nov 5 23:43:28.902715 dockerd[2377]: time="2025-11-05T23:43:28.902655729Z" level=info msg="Initializing buildkit" Nov 5 23:43:28.942589 dockerd[2377]: time="2025-11-05T23:43:28.942495141Z" level=info msg="Completed buildkit initialization" Nov 5 23:43:28.959705 dockerd[2377]: time="2025-11-05T23:43:28.959625657Z" level=info msg="Daemon has completed initialization" Nov 5 23:43:28.960131 systemd[1]: Started docker.service - Docker Application Container Engine. Nov 5 23:43:28.961144 dockerd[2377]: time="2025-11-05T23:43:28.960018477Z" level=info msg="API listen on /run/docker.sock" Nov 5 23:43:29.335465 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1688771705-merged.mount: Deactivated successfully. Nov 5 23:43:29.892421 systemd-resolved[1769]: Clock change detected. Flushing caches. Nov 5 23:43:30.424045 containerd[1978]: time="2025-11-05T23:43:30.423974462Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\"" Nov 5 23:43:31.049116 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1854208830.mount: Deactivated successfully. 
Nov 5 23:43:32.666834 containerd[1978]: time="2025-11-05T23:43:32.666749741Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 23:43:32.669677 containerd[1978]: time="2025-11-05T23:43:32.668773517Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.5: active requests=0, bytes read=27390228" Nov 5 23:43:32.670802 containerd[1978]: time="2025-11-05T23:43:32.670744745Z" level=info msg="ImageCreate event name:\"sha256:6a7fd297b49102b08dc3d8d4fd7f1538bcf21d3131eae8bf62ba26ce3283237f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 23:43:32.679313 containerd[1978]: time="2025-11-05T23:43:32.679252997Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 23:43:32.681313 containerd[1978]: time="2025-11-05T23:43:32.681234977Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.5\" with image id \"sha256:6a7fd297b49102b08dc3d8d4fd7f1538bcf21d3131eae8bf62ba26ce3283237f\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\", size \"27386827\" in 2.257177115s" Nov 5 23:43:32.681313 containerd[1978]: time="2025-11-05T23:43:32.681310601Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\" returns image reference \"sha256:6a7fd297b49102b08dc3d8d4fd7f1538bcf21d3131eae8bf62ba26ce3283237f\"" Nov 5 23:43:32.684457 containerd[1978]: time="2025-11-05T23:43:32.684386213Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\"" Nov 5 23:43:34.385110 containerd[1978]: time="2025-11-05T23:43:34.384994926Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 23:43:34.387345 containerd[1978]: time="2025-11-05T23:43:34.387288966Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.5: active requests=0, bytes read=23547917" Nov 5 23:43:34.388405 containerd[1978]: time="2025-11-05T23:43:34.388308354Z" level=info msg="ImageCreate event name:\"sha256:2dd4c25a937008b7b8a6cdca70d816403b5078b51550926721b7a7762139cd23\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 23:43:34.395527 containerd[1978]: time="2025-11-05T23:43:34.395414898Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 23:43:34.398474 containerd[1978]: time="2025-11-05T23:43:34.397683318Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.5\" with image id \"sha256:2dd4c25a937008b7b8a6cdca70d816403b5078b51550926721b7a7762139cd23\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\", size \"25135832\" in 1.713231177s" Nov 5 23:43:34.398474 containerd[1978]: time="2025-11-05T23:43:34.397763394Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\" returns image reference \"sha256:2dd4c25a937008b7b8a6cdca70d816403b5078b51550926721b7a7762139cd23\"" Nov 5 23:43:34.398927 containerd[1978]: 
time="2025-11-05T23:43:34.398853462Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\"" Nov 5 23:43:35.823330 containerd[1978]: time="2025-11-05T23:43:35.823270977Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 23:43:35.826459 containerd[1978]: time="2025-11-05T23:43:35.826395957Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.5: active requests=0, bytes read=18295977" Nov 5 23:43:35.827961 containerd[1978]: time="2025-11-05T23:43:35.827834961Z" level=info msg="ImageCreate event name:\"sha256:5e600beaed8620718e0650dd2721266869ce1d737488c004a869333273e6ec15\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 23:43:35.833638 containerd[1978]: time="2025-11-05T23:43:35.832184769Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 23:43:35.834767 containerd[1978]: time="2025-11-05T23:43:35.834711417Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.5\" with image id \"sha256:5e600beaed8620718e0650dd2721266869ce1d737488c004a869333273e6ec15\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\", size \"19883910\" in 1.435780579s" Nov 5 23:43:35.834937 containerd[1978]: time="2025-11-05T23:43:35.834906921Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\" returns image reference \"sha256:5e600beaed8620718e0650dd2721266869ce1d737488c004a869333273e6ec15\"" Nov 5 23:43:35.836570 containerd[1978]: time="2025-11-05T23:43:35.836526345Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\"" Nov 5 23:43:36.038207 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Nov 5 23:43:36.040978 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 5 23:43:36.449038 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 5 23:43:36.466513 (kubelet)[2667]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 5 23:43:36.596361 kubelet[2667]: E1105 23:43:36.596228 2667 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 5 23:43:36.607897 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 5 23:43:36.608272 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 5 23:43:36.609024 systemd[1]: kubelet.service: Consumed 349ms CPU time, 105.5M memory peak. Nov 5 23:43:37.266187 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2849466353.mount: Deactivated successfully. 
Nov 5 23:43:37.902659 containerd[1978]: time="2025-11-05T23:43:37.902544419Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 23:43:37.904819 containerd[1978]: time="2025-11-05T23:43:37.904755011Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.5: active requests=0, bytes read=28240106" Nov 5 23:43:37.906141 containerd[1978]: time="2025-11-05T23:43:37.906046475Z" level=info msg="ImageCreate event name:\"sha256:021a8d45ab0c346664e47d95595ff5180ce90a22a681ea27904c65ae90788e70\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 23:43:37.909392 containerd[1978]: time="2025-11-05T23:43:37.909273047Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 23:43:37.912173 containerd[1978]: time="2025-11-05T23:43:37.910810583Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.5\" with image id \"sha256:021a8d45ab0c346664e47d95595ff5180ce90a22a681ea27904c65ae90788e70\", repo tag \"registry.k8s.io/kube-proxy:v1.33.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\", size \"28239125\" in 2.074018918s" Nov 5 23:43:37.912173 containerd[1978]: time="2025-11-05T23:43:37.910885547Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\" returns image reference \"sha256:021a8d45ab0c346664e47d95595ff5180ce90a22a681ea27904c65ae90788e70\"" Nov 5 23:43:37.912567 containerd[1978]: time="2025-11-05T23:43:37.912373271Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Nov 5 23:43:38.532768 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3436713887.mount: Deactivated successfully. 
Nov 5 23:43:39.902744 containerd[1978]: time="2025-11-05T23:43:39.902651749Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 23:43:39.904818 containerd[1978]: time="2025-11-05T23:43:39.904729777Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=19152117" Nov 5 23:43:39.907281 containerd[1978]: time="2025-11-05T23:43:39.907178749Z" level=info msg="ImageCreate event name:\"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 23:43:39.913660 containerd[1978]: time="2025-11-05T23:43:39.913475413Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 23:43:39.916402 containerd[1978]: time="2025-11-05T23:43:39.915974845Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"19148915\" in 2.003540662s" Nov 5 23:43:39.916402 containerd[1978]: time="2025-11-05T23:43:39.916055221Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\"" Nov 5 23:43:39.917087 containerd[1978]: time="2025-11-05T23:43:39.916994785Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Nov 5 23:43:40.408978 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2017441027.mount: Deactivated successfully. 
Nov 5 23:43:40.423256 containerd[1978]: time="2025-11-05T23:43:40.423145068Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 5 23:43:40.425232 containerd[1978]: time="2025-11-05T23:43:40.425130852Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703" Nov 5 23:43:40.428373 containerd[1978]: time="2025-11-05T23:43:40.428241960Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 5 23:43:40.436102 containerd[1978]: time="2025-11-05T23:43:40.434996568Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 5 23:43:40.436675 containerd[1978]: time="2025-11-05T23:43:40.436628292Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 519.542055ms" Nov 5 23:43:40.436803 containerd[1978]: time="2025-11-05T23:43:40.436776108Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Nov 5 23:43:40.437822 containerd[1978]: time="2025-11-05T23:43:40.437705796Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Nov 5 23:43:40.969723 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2015549194.mount: Deactivated successfully. 
Nov 5 23:43:43.245034 containerd[1978]: time="2025-11-05T23:43:43.244927562Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 23:43:43.247191 containerd[1978]: time="2025-11-05T23:43:43.247097318Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=69465857" Nov 5 23:43:43.249981 containerd[1978]: time="2025-11-05T23:43:43.249863270Z" level=info msg="ImageCreate event name:\"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 23:43:43.258824 containerd[1978]: time="2025-11-05T23:43:43.258740954Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 23:43:43.261654 containerd[1978]: time="2025-11-05T23:43:43.261354998Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"70026017\" in 2.823358334s" Nov 5 23:43:43.261654 containerd[1978]: time="2025-11-05T23:43:43.261428774Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\"" Nov 5 23:43:46.788335 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Nov 5 23:43:46.792980 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 5 23:43:47.178861 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 5 23:43:47.195389 (kubelet)[2823]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 5 23:43:47.268415 kubelet[2823]: E1105 23:43:47.268334 2823 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 5 23:43:47.272542 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 5 23:43:47.273690 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 5 23:43:47.275750 systemd[1]: kubelet.service: Consumed 325ms CPU time, 107M memory peak. Nov 5 23:43:52.816366 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 5 23:43:52.817005 systemd[1]: kubelet.service: Consumed 325ms CPU time, 107M memory peak. Nov 5 23:43:52.822344 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 5 23:43:52.879526 systemd[1]: Reload requested from client PID 2837 ('systemctl') (unit session-7.scope)... Nov 5 23:43:52.879561 systemd[1]: Reloading... Nov 5 23:43:53.143624 zram_generator::config[2888]: No configuration found. Nov 5 23:43:53.586192 systemd[1]: Reloading finished in 705 ms. Nov 5 23:43:53.640931 systemd[1]: systemd-hostnamed.service: Deactivated successfully. 
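The kubelet failure at 23:43:47 is the expected pre-bootstrap state: kubelet.service keeps restarting (the restart counter is already at 2) because /var/lib/kubelet/config.yaml does not exist yet; once that file is written, the next start can proceed. Below is a minimal Go sketch of that gate, using the path and exit status shown in the log; it is an illustration, not the kubelet's actual config-loading code.

package main

import (
	"fmt"
	"os"
)

const kubeletConfigPath = "/var/lib/kubelet/config.yaml" // path from the log

func loadKubeletConfig() ([]byte, error) {
	data, err := os.ReadFile(kubeletConfigPath)
	if err != nil {
		// Mirrors the "failed to load Kubelet config file ... no such file or
		// directory" error recorded at 23:43:47.268334.
		return nil, fmt.Errorf("failed to load kubelet config file %q: %w", kubeletConfigPath, err)
	}
	return data, nil
}

func main() {
	if _, err := loadKubeletConfig(); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1) // systemd records this as status=1/FAILURE and schedules a restart
	}
	fmt.Println("config present; kubelet startup can proceed")
}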
Nov 5 23:43:53.718261 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Nov 5 23:43:53.718673 systemd[1]: kubelet.service: Failed with result 'signal'. Nov 5 23:43:53.719328 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 5 23:43:53.719551 systemd[1]: kubelet.service: Consumed 227ms CPU time, 95M memory peak. Nov 5 23:43:53.724012 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 5 23:43:54.548072 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 5 23:43:54.563157 (kubelet)[2949]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 5 23:43:54.635714 kubelet[2949]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 5 23:43:54.635714 kubelet[2949]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 5 23:43:54.635714 kubelet[2949]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 5 23:43:54.636238 kubelet[2949]: I1105 23:43:54.635803 2949 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 5 23:43:57.738645 kubelet[2949]: I1105 23:43:57.737125 2949 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Nov 5 23:43:57.738645 kubelet[2949]: I1105 23:43:57.737171 2949 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 5 23:43:57.738645 kubelet[2949]: I1105 23:43:57.737858 2949 server.go:956] "Client rotation is on, will bootstrap in background" Nov 5 23:43:57.789192 kubelet[2949]: E1105 23:43:57.789117 2949 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.31.26.188:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.26.188:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Nov 5 23:43:57.790097 kubelet[2949]: I1105 23:43:57.790066 2949 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 5 23:43:57.803563 kubelet[2949]: I1105 23:43:57.803503 2949 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Nov 5 23:43:57.809523 kubelet[2949]: I1105 23:43:57.809470 2949 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Nov 5 23:43:57.810267 kubelet[2949]: I1105 23:43:57.810204 2949 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 5 23:43:57.810529 kubelet[2949]: I1105 23:43:57.810255 2949 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-26-188","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 5 23:43:57.810742 kubelet[2949]: I1105 23:43:57.810681 2949 topology_manager.go:138] "Creating topology manager with none policy" Nov 5 23:43:57.810742 kubelet[2949]: I1105 23:43:57.810704 2949 container_manager_linux.go:303] "Creating device plugin manager" Nov 5 23:43:57.812339 kubelet[2949]: I1105 23:43:57.812287 2949 state_mem.go:36] "Initialized new in-memory state store" Nov 5 23:43:57.817872 kubelet[2949]: I1105 23:43:57.817828 2949 kubelet.go:480] "Attempting to sync node with API server" Nov 5 23:43:57.817872 kubelet[2949]: I1105 23:43:57.817874 2949 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 5 23:43:57.818053 kubelet[2949]: I1105 23:43:57.817921 2949 kubelet.go:386] "Adding apiserver pod source" Nov 5 23:43:57.820398 kubelet[2949]: I1105 23:43:57.820026 2949 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 5 23:43:57.823882 kubelet[2949]: E1105 23:43:57.823826 2949 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.26.188:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-26-188&limit=500&resourceVersion=0\": dial tcp 172.31.26.188:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Nov 5 23:43:57.824541 kubelet[2949]: I1105 23:43:57.824511 2949 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Nov 5 23:43:57.825922 kubelet[2949]: I1105 23:43:57.825885 2949 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Nov 5 
23:43:57.826288 kubelet[2949]: W1105 23:43:57.826268 2949 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Nov 5 23:43:57.833272 kubelet[2949]: I1105 23:43:57.832907 2949 watchdog_linux.go:99] "Systemd watchdog is not enabled" Nov 5 23:43:57.833272 kubelet[2949]: I1105 23:43:57.832971 2949 server.go:1289] "Started kubelet" Nov 5 23:43:57.839039 kubelet[2949]: E1105 23:43:57.838962 2949 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.26.188:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.26.188:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Nov 5 23:43:57.842489 kubelet[2949]: I1105 23:43:57.842452 2949 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 5 23:43:57.854806 kubelet[2949]: I1105 23:43:57.854739 2949 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Nov 5 23:43:57.857781 kubelet[2949]: E1105 23:43:57.849179 2949 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.26.188:6443/api/v1/namespaces/default/events\": dial tcp 172.31.26.188:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-26-188.187540f21f11aba2 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-26-188,UID:ip-172-31-26-188,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-26-188,},FirstTimestamp:2025-11-05 23:43:57.832932258 +0000 UTC m=+3.263293385,LastTimestamp:2025-11-05 23:43:57.832932258 +0000 UTC m=+3.263293385,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-26-188,}" Nov 5 23:43:57.861661 kubelet[2949]: I1105 23:43:57.861532 2949 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 5 23:43:57.865147 kubelet[2949]: I1105 23:43:57.865112 2949 volume_manager.go:297] "Starting Kubelet Volume Manager" Nov 5 23:43:57.866986 kubelet[2949]: E1105 23:43:57.865886 2949 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-26-188\" not found" Nov 5 23:43:57.867392 kubelet[2949]: I1105 23:43:57.867364 2949 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Nov 5 23:43:57.867569 kubelet[2949]: I1105 23:43:57.867550 2949 reconciler.go:26] "Reconciler: start to sync state" Nov 5 23:43:57.871089 kubelet[2949]: E1105 23:43:57.871021 2949 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.26.188:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.26.188:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Nov 5 23:43:57.871231 kubelet[2949]: E1105 23:43:57.871178 2949 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.26.188:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-26-188?timeout=10s\": dial tcp 172.31.26.188:6443: connect: connection refused" interval="200ms" Nov 5 23:43:57.871563 kubelet[2949]: I1105 23:43:57.871514 2949 
factory.go:223] Registration of the systemd container factory successfully Nov 5 23:43:57.871748 kubelet[2949]: I1105 23:43:57.871706 2949 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 5 23:43:57.873941 kubelet[2949]: I1105 23:43:57.873821 2949 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 5 23:43:57.875046 kubelet[2949]: I1105 23:43:57.874762 2949 server.go:317] "Adding debug handlers to kubelet server" Nov 5 23:43:57.875634 kubelet[2949]: I1105 23:43:57.875542 2949 factory.go:223] Registration of the containerd container factory successfully Nov 5 23:43:57.882099 kubelet[2949]: I1105 23:43:57.875942 2949 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 5 23:43:57.882392 kubelet[2949]: E1105 23:43:57.876068 2949 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 5 23:43:57.909450 kubelet[2949]: I1105 23:43:57.909419 2949 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 5 23:43:57.909735 kubelet[2949]: I1105 23:43:57.909714 2949 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 5 23:43:57.909881 kubelet[2949]: I1105 23:43:57.909853 2949 state_mem.go:36] "Initialized new in-memory state store" Nov 5 23:43:57.910132 kubelet[2949]: I1105 23:43:57.909775 2949 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Nov 5 23:43:57.913449 kubelet[2949]: I1105 23:43:57.913079 2949 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Nov 5 23:43:57.913449 kubelet[2949]: I1105 23:43:57.913445 2949 status_manager.go:230] "Starting to sync pod status with apiserver" Nov 5 23:43:57.915390 kubelet[2949]: I1105 23:43:57.913481 2949 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Nov 5 23:43:57.915390 kubelet[2949]: I1105 23:43:57.913804 2949 kubelet.go:2436] "Starting kubelet main sync loop" Nov 5 23:43:57.915390 kubelet[2949]: E1105 23:43:57.913921 2949 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 5 23:43:57.918822 kubelet[2949]: I1105 23:43:57.918770 2949 policy_none.go:49] "None policy: Start" Nov 5 23:43:57.918822 kubelet[2949]: I1105 23:43:57.918816 2949 memory_manager.go:186] "Starting memorymanager" policy="None" Nov 5 23:43:57.919023 kubelet[2949]: I1105 23:43:57.918841 2949 state_mem.go:35] "Initializing new in-memory state store" Nov 5 23:43:57.920752 kubelet[2949]: E1105 23:43:57.920435 2949 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.26.188:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.26.188:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Nov 5 23:43:57.933416 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Nov 5 23:43:57.948991 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
Nov 5 23:43:57.957096 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Nov 5 23:43:57.966319 kubelet[2949]: E1105 23:43:57.966267 2949 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-26-188\" not found" Nov 5 23:43:57.968650 kubelet[2949]: E1105 23:43:57.968608 2949 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Nov 5 23:43:57.968932 kubelet[2949]: I1105 23:43:57.968903 2949 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 5 23:43:57.969014 kubelet[2949]: I1105 23:43:57.968934 2949 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 5 23:43:57.969397 kubelet[2949]: I1105 23:43:57.969367 2949 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 5 23:43:57.973568 kubelet[2949]: E1105 23:43:57.973499 2949 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Nov 5 23:43:57.973738 kubelet[2949]: E1105 23:43:57.973603 2949 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-26-188\" not found" Nov 5 23:43:58.042572 systemd[1]: Created slice kubepods-burstable-podcfda9020b61ffcadc5a95a24f8e29d42.slice - libcontainer container kubepods-burstable-podcfda9020b61ffcadc5a95a24f8e29d42.slice. Nov 5 23:43:58.055240 kubelet[2949]: E1105 23:43:58.054653 2949 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-26-188\" not found" node="ip-172-31-26-188" Nov 5 23:43:58.059637 systemd[1]: Created slice kubepods-burstable-poddeb2a269803dc6eccc49255969f2eef1.slice - libcontainer container kubepods-burstable-poddeb2a269803dc6eccc49255969f2eef1.slice. Nov 5 23:43:58.065508 kubelet[2949]: E1105 23:43:58.065124 2949 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-26-188\" not found" node="ip-172-31-26-188" Nov 5 23:43:58.070621 systemd[1]: Created slice kubepods-burstable-podfe412c00665911fab76f28933a16dd07.slice - libcontainer container kubepods-burstable-podfe412c00665911fab76f28933a16dd07.slice. 
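The repeated "connection refused" errors against 172.31.26.188:6443 are transient: the kube-apiserver being dialed is itself a static pod that this kubelet has not started yet, so every client inside the process (CSR bootstrap, informers, lease controller, node registration) simply retries. The Go sketch below shows the same wait-until-reachable pattern; the address comes from the log, while the 5-second interval is an assumption for illustration.

package main

import (
	"fmt"
	"net"
	"time"
)

// waitForAPIServer blocks until a TCP connection to addr succeeds, retrying on
// every failure, much like the kubelet's clients keep retrying above.
func waitForAPIServer(addr string, interval time.Duration) {
	for {
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err == nil {
			conn.Close()
			fmt.Println("apiserver reachable:", addr)
			return
		}
		fmt.Println("still waiting:", err) // e.g. "connect: connection refused"
		time.Sleep(interval)
	}
}

func main() {
	waitForAPIServer("172.31.26.188:6443", 5*time.Second)
}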
Nov 5 23:43:58.073359 kubelet[2949]: E1105 23:43:58.073027 2949 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.26.188:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-26-188?timeout=10s\": dial tcp 172.31.26.188:6443: connect: connection refused" interval="400ms" Nov 5 23:43:58.073566 kubelet[2949]: I1105 23:43:58.073508 2949 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-26-188" Nov 5 23:43:58.076285 kubelet[2949]: E1105 23:43:58.075368 2949 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.26.188:6443/api/v1/nodes\": dial tcp 172.31.26.188:6443: connect: connection refused" node="ip-172-31-26-188" Nov 5 23:43:58.076285 kubelet[2949]: E1105 23:43:58.075958 2949 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-26-188\" not found" node="ip-172-31-26-188" Nov 5 23:43:58.168221 kubelet[2949]: I1105 23:43:58.168159 2949 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/cfda9020b61ffcadc5a95a24f8e29d42-ca-certs\") pod \"kube-controller-manager-ip-172-31-26-188\" (UID: \"cfda9020b61ffcadc5a95a24f8e29d42\") " pod="kube-system/kube-controller-manager-ip-172-31-26-188" Nov 5 23:43:58.168352 kubelet[2949]: I1105 23:43:58.168230 2949 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/cfda9020b61ffcadc5a95a24f8e29d42-k8s-certs\") pod \"kube-controller-manager-ip-172-31-26-188\" (UID: \"cfda9020b61ffcadc5a95a24f8e29d42\") " pod="kube-system/kube-controller-manager-ip-172-31-26-188" Nov 5 23:43:58.168352 kubelet[2949]: I1105 23:43:58.168275 2949 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/cfda9020b61ffcadc5a95a24f8e29d42-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-26-188\" (UID: \"cfda9020b61ffcadc5a95a24f8e29d42\") " pod="kube-system/kube-controller-manager-ip-172-31-26-188" Nov 5 23:43:58.168352 kubelet[2949]: I1105 23:43:58.168315 2949 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/deb2a269803dc6eccc49255969f2eef1-kubeconfig\") pod \"kube-scheduler-ip-172-31-26-188\" (UID: \"deb2a269803dc6eccc49255969f2eef1\") " pod="kube-system/kube-scheduler-ip-172-31-26-188" Nov 5 23:43:58.168540 kubelet[2949]: I1105 23:43:58.168350 2949 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fe412c00665911fab76f28933a16dd07-ca-certs\") pod \"kube-apiserver-ip-172-31-26-188\" (UID: \"fe412c00665911fab76f28933a16dd07\") " pod="kube-system/kube-apiserver-ip-172-31-26-188" Nov 5 23:43:58.168540 kubelet[2949]: I1105 23:43:58.168388 2949 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fe412c00665911fab76f28933a16dd07-k8s-certs\") pod \"kube-apiserver-ip-172-31-26-188\" (UID: \"fe412c00665911fab76f28933a16dd07\") " pod="kube-system/kube-apiserver-ip-172-31-26-188" Nov 5 23:43:58.168540 kubelet[2949]: I1105 23:43:58.168422 2949 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fe412c00665911fab76f28933a16dd07-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-26-188\" (UID: \"fe412c00665911fab76f28933a16dd07\") " pod="kube-system/kube-apiserver-ip-172-31-26-188" Nov 5 23:43:58.168540 kubelet[2949]: I1105 23:43:58.168456 2949 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/cfda9020b61ffcadc5a95a24f8e29d42-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-26-188\" (UID: \"cfda9020b61ffcadc5a95a24f8e29d42\") " pod="kube-system/kube-controller-manager-ip-172-31-26-188" Nov 5 23:43:58.168540 kubelet[2949]: I1105 23:43:58.168493 2949 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/cfda9020b61ffcadc5a95a24f8e29d42-kubeconfig\") pod \"kube-controller-manager-ip-172-31-26-188\" (UID: \"cfda9020b61ffcadc5a95a24f8e29d42\") " pod="kube-system/kube-controller-manager-ip-172-31-26-188" Nov 5 23:43:58.278284 kubelet[2949]: I1105 23:43:58.278237 2949 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-26-188" Nov 5 23:43:58.278809 kubelet[2949]: E1105 23:43:58.278759 2949 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.26.188:6443/api/v1/nodes\": dial tcp 172.31.26.188:6443: connect: connection refused" node="ip-172-31-26-188" Nov 5 23:43:58.356580 containerd[1978]: time="2025-11-05T23:43:58.356151233Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-26-188,Uid:cfda9020b61ffcadc5a95a24f8e29d42,Namespace:kube-system,Attempt:0,}" Nov 5 23:43:58.367169 containerd[1978]: time="2025-11-05T23:43:58.366817817Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-26-188,Uid:deb2a269803dc6eccc49255969f2eef1,Namespace:kube-system,Attempt:0,}" Nov 5 23:43:58.378325 containerd[1978]: time="2025-11-05T23:43:58.378268625Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-26-188,Uid:fe412c00665911fab76f28933a16dd07,Namespace:kube-system,Attempt:0,}" Nov 5 23:43:58.412493 containerd[1978]: time="2025-11-05T23:43:58.412418585Z" level=info msg="connecting to shim 8f6c0de6f43c33410c6b4e32e94d3ce1edacbd255be09bc65cfba77cc44e2827" address="unix:///run/containerd/s/2794aa968ae1ea4af4756187f55550354f7e3f2b95a14633328ab41f766e53db" namespace=k8s.io protocol=ttrpc version=3 Nov 5 23:43:58.458892 containerd[1978]: time="2025-11-05T23:43:58.458646857Z" level=info msg="connecting to shim baca83dbf60e53536df44f4980f13576f024b6fa0fe8d6056006bcea2130d52b" address="unix:///run/containerd/s/3b301bedd5cf6d03bdf3aa88ebf5e9276e0c9fe5dc4c9839cf5cfe4337edb652" namespace=k8s.io protocol=ttrpc version=3 Nov 5 23:43:58.476028 containerd[1978]: time="2025-11-05T23:43:58.475807289Z" level=info msg="connecting to shim 6a81e6e4cd3697b795e8319f55635fe8ba136cec4603f8a71ff7fbea25b5735d" address="unix:///run/containerd/s/6c1691e6ad071b40df87a100722e5f589fa8efe08506a64fc25ae397a26f7d30" namespace=k8s.io protocol=ttrpc version=3 Nov 5 23:43:58.476498 kubelet[2949]: E1105 23:43:58.476403 2949 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.26.188:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-26-188?timeout=10s\": dial tcp 
172.31.26.188:6443: connect: connection refused" interval="800ms" Nov 5 23:43:58.518912 systemd[1]: Started cri-containerd-8f6c0de6f43c33410c6b4e32e94d3ce1edacbd255be09bc65cfba77cc44e2827.scope - libcontainer container 8f6c0de6f43c33410c6b4e32e94d3ce1edacbd255be09bc65cfba77cc44e2827. Nov 5 23:43:58.559955 systemd[1]: Started cri-containerd-baca83dbf60e53536df44f4980f13576f024b6fa0fe8d6056006bcea2130d52b.scope - libcontainer container baca83dbf60e53536df44f4980f13576f024b6fa0fe8d6056006bcea2130d52b. Nov 5 23:43:58.593901 systemd[1]: Started cri-containerd-6a81e6e4cd3697b795e8319f55635fe8ba136cec4603f8a71ff7fbea25b5735d.scope - libcontainer container 6a81e6e4cd3697b795e8319f55635fe8ba136cec4603f8a71ff7fbea25b5735d. Nov 5 23:43:58.668359 containerd[1978]: time="2025-11-05T23:43:58.668179770Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-26-188,Uid:cfda9020b61ffcadc5a95a24f8e29d42,Namespace:kube-system,Attempt:0,} returns sandbox id \"8f6c0de6f43c33410c6b4e32e94d3ce1edacbd255be09bc65cfba77cc44e2827\"" Nov 5 23:43:58.685521 kubelet[2949]: I1105 23:43:58.685460 2949 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-26-188" Nov 5 23:43:58.687537 kubelet[2949]: E1105 23:43:58.687445 2949 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.26.188:6443/api/v1/nodes\": dial tcp 172.31.26.188:6443: connect: connection refused" node="ip-172-31-26-188" Nov 5 23:43:58.692096 containerd[1978]: time="2025-11-05T23:43:58.691870915Z" level=info msg="CreateContainer within sandbox \"8f6c0de6f43c33410c6b4e32e94d3ce1edacbd255be09bc65cfba77cc44e2827\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Nov 5 23:43:58.721728 containerd[1978]: time="2025-11-05T23:43:58.721306591Z" level=info msg="Container 4cc8b51e551dd63a354aa7ec2a946ddd28202b4700b894935648f63b3aec6b1c: CDI devices from CRI Config.CDIDevices: []" Nov 5 23:43:58.757360 containerd[1978]: time="2025-11-05T23:43:58.757305139Z" level=info msg="CreateContainer within sandbox \"8f6c0de6f43c33410c6b4e32e94d3ce1edacbd255be09bc65cfba77cc44e2827\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"4cc8b51e551dd63a354aa7ec2a946ddd28202b4700b894935648f63b3aec6b1c\"" Nov 5 23:43:58.758889 kubelet[2949]: E1105 23:43:58.758802 2949 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.26.188:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.26.188:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Nov 5 23:43:58.761128 containerd[1978]: time="2025-11-05T23:43:58.760334851Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-26-188,Uid:deb2a269803dc6eccc49255969f2eef1,Namespace:kube-system,Attempt:0,} returns sandbox id \"baca83dbf60e53536df44f4980f13576f024b6fa0fe8d6056006bcea2130d52b\"" Nov 5 23:43:58.761564 containerd[1978]: time="2025-11-05T23:43:58.761474635Z" level=info msg="StartContainer for \"4cc8b51e551dd63a354aa7ec2a946ddd28202b4700b894935648f63b3aec6b1c\"" Nov 5 23:43:58.765996 containerd[1978]: time="2025-11-05T23:43:58.765886111Z" level=info msg="connecting to shim 4cc8b51e551dd63a354aa7ec2a946ddd28202b4700b894935648f63b3aec6b1c" address="unix:///run/containerd/s/2794aa968ae1ea4af4756187f55550354f7e3f2b95a14633328ab41f766e53db" protocol=ttrpc version=3 Nov 5 23:43:58.777394 containerd[1978]: 
time="2025-11-05T23:43:58.777306727Z" level=info msg="CreateContainer within sandbox \"baca83dbf60e53536df44f4980f13576f024b6fa0fe8d6056006bcea2130d52b\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Nov 5 23:43:58.796793 containerd[1978]: time="2025-11-05T23:43:58.796719259Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-26-188,Uid:fe412c00665911fab76f28933a16dd07,Namespace:kube-system,Attempt:0,} returns sandbox id \"6a81e6e4cd3697b795e8319f55635fe8ba136cec4603f8a71ff7fbea25b5735d\"" Nov 5 23:43:58.807864 containerd[1978]: time="2025-11-05T23:43:58.807802111Z" level=info msg="Container ecbea0e4a6281d403c1e11d130f6d8a5555697cdfa919db17e19d458200287ab: CDI devices from CRI Config.CDIDevices: []" Nov 5 23:43:58.810922 containerd[1978]: time="2025-11-05T23:43:58.810820939Z" level=info msg="CreateContainer within sandbox \"6a81e6e4cd3697b795e8319f55635fe8ba136cec4603f8a71ff7fbea25b5735d\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Nov 5 23:43:58.824244 systemd[1]: Started cri-containerd-4cc8b51e551dd63a354aa7ec2a946ddd28202b4700b894935648f63b3aec6b1c.scope - libcontainer container 4cc8b51e551dd63a354aa7ec2a946ddd28202b4700b894935648f63b3aec6b1c. Nov 5 23:43:58.834764 containerd[1978]: time="2025-11-05T23:43:58.834166603Z" level=info msg="CreateContainer within sandbox \"baca83dbf60e53536df44f4980f13576f024b6fa0fe8d6056006bcea2130d52b\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"ecbea0e4a6281d403c1e11d130f6d8a5555697cdfa919db17e19d458200287ab\"" Nov 5 23:43:58.838340 containerd[1978]: time="2025-11-05T23:43:58.838205515Z" level=info msg="StartContainer for \"ecbea0e4a6281d403c1e11d130f6d8a5555697cdfa919db17e19d458200287ab\"" Nov 5 23:43:58.842202 containerd[1978]: time="2025-11-05T23:43:58.842134195Z" level=info msg="connecting to shim ecbea0e4a6281d403c1e11d130f6d8a5555697cdfa919db17e19d458200287ab" address="unix:///run/containerd/s/3b301bedd5cf6d03bdf3aa88ebf5e9276e0c9fe5dc4c9839cf5cfe4337edb652" protocol=ttrpc version=3 Nov 5 23:43:58.847863 containerd[1978]: time="2025-11-05T23:43:58.847790071Z" level=info msg="Container 3031441de6ba748699a1a29a6ad7854d72ea4c549c58be48bf0801538d48760c: CDI devices from CRI Config.CDIDevices: []" Nov 5 23:43:58.872500 containerd[1978]: time="2025-11-05T23:43:58.872095027Z" level=info msg="CreateContainer within sandbox \"6a81e6e4cd3697b795e8319f55635fe8ba136cec4603f8a71ff7fbea25b5735d\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"3031441de6ba748699a1a29a6ad7854d72ea4c549c58be48bf0801538d48760c\"" Nov 5 23:43:58.874470 containerd[1978]: time="2025-11-05T23:43:58.874397239Z" level=info msg="StartContainer for \"3031441de6ba748699a1a29a6ad7854d72ea4c549c58be48bf0801538d48760c\"" Nov 5 23:43:58.883151 containerd[1978]: time="2025-11-05T23:43:58.883081267Z" level=info msg="connecting to shim 3031441de6ba748699a1a29a6ad7854d72ea4c549c58be48bf0801538d48760c" address="unix:///run/containerd/s/6c1691e6ad071b40df87a100722e5f589fa8efe08506a64fc25ae397a26f7d30" protocol=ttrpc version=3 Nov 5 23:43:58.892910 systemd[1]: Started cri-containerd-ecbea0e4a6281d403c1e11d130f6d8a5555697cdfa919db17e19d458200287ab.scope - libcontainer container ecbea0e4a6281d403c1e11d130f6d8a5555697cdfa919db17e19d458200287ab. Nov 5 23:43:58.947204 systemd[1]: Started cri-containerd-3031441de6ba748699a1a29a6ad7854d72ea4c549c58be48bf0801538d48760c.scope - libcontainer container 3031441de6ba748699a1a29a6ad7854d72ea4c549c58be48bf0801538d48760c. 
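The sandbox and container entries above follow the CRI call order the kubelet drives for each static pod: RunPodSandbox, then CreateContainer inside the returned sandbox, then StartContainer, with containerd printing the shim address it serves each sandbox on. The sketch below mirrors that ordering with a simplified stand-in interface and a fake runtime; it is not the real k8s.io/cri-api client.

package main

import "fmt"

// criRuntime is a simplified stand-in for the three CRI verbs visible in the log.
type criRuntime interface {
	RunPodSandbox(pod string) (sandboxID string, err error)
	CreateContainer(sandboxID, name string) (containerID string, err error)
	StartContainer(containerID string) error
}

// fakeRuntime hands out deterministic ids so the sketch runs end to end.
type fakeRuntime struct{ n int }

func (f *fakeRuntime) RunPodSandbox(pod string) (string, error) {
	f.n++
	return fmt.Sprintf("sandbox-%d", f.n), nil
}

func (f *fakeRuntime) CreateContainer(sandboxID, name string) (string, error) {
	return sandboxID + "/" + name, nil
}

func (f *fakeRuntime) StartContainer(containerID string) error { return nil }

// startStaticPod reproduces the ordering: sandbox first, then container, then start.
func startStaticPod(rt criRuntime, pod, container string) (string, error) {
	sandboxID, err := rt.RunPodSandbox(pod)
	if err != nil {
		return "", err
	}
	cid, err := rt.CreateContainer(sandboxID, container)
	if err != nil {
		return "", err
	}
	return cid, rt.StartContainer(cid) // the log then reports "StartContainer ... returns successfully"
}

func main() {
	rt := &fakeRuntime{}
	cid, err := startStaticPod(rt, "kube-apiserver-ip-172-31-26-188", "kube-apiserver")
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Println("started:", cid)
}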
Nov 5 23:43:58.991953 kubelet[2949]: E1105 23:43:58.991732 2949 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.26.188:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.26.188:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Nov 5 23:43:59.012392 containerd[1978]: time="2025-11-05T23:43:59.012320008Z" level=info msg="StartContainer for \"4cc8b51e551dd63a354aa7ec2a946ddd28202b4700b894935648f63b3aec6b1c\" returns successfully" Nov 5 23:43:59.108459 containerd[1978]: time="2025-11-05T23:43:59.108386885Z" level=info msg="StartContainer for \"ecbea0e4a6281d403c1e11d130f6d8a5555697cdfa919db17e19d458200287ab\" returns successfully" Nov 5 23:43:59.117158 containerd[1978]: time="2025-11-05T23:43:59.117034229Z" level=info msg="StartContainer for \"3031441de6ba748699a1a29a6ad7854d72ea4c549c58be48bf0801538d48760c\" returns successfully" Nov 5 23:43:59.161343 kubelet[2949]: E1105 23:43:59.161061 2949 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.26.188:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-26-188&limit=500&resourceVersion=0\": dial tcp 172.31.26.188:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Nov 5 23:43:59.278874 kubelet[2949]: E1105 23:43:59.278699 2949 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.26.188:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-26-188?timeout=10s\": dial tcp 172.31.26.188:6443: connect: connection refused" interval="1.6s" Nov 5 23:43:59.406920 kubelet[2949]: E1105 23:43:59.405193 2949 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.26.188:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.26.188:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Nov 5 23:43:59.490993 kubelet[2949]: I1105 23:43:59.490862 2949 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-26-188" Nov 5 23:43:59.994011 kubelet[2949]: E1105 23:43:59.993938 2949 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-26-188\" not found" node="ip-172-31-26-188" Nov 5 23:44:00.004880 kubelet[2949]: E1105 23:44:00.004801 2949 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-26-188\" not found" node="ip-172-31-26-188" Nov 5 23:44:00.014368 kubelet[2949]: E1105 23:44:00.014052 2949 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-26-188\" not found" node="ip-172-31-26-188" Nov 5 23:44:01.016175 kubelet[2949]: E1105 23:44:01.016081 2949 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-26-188\" not found" node="ip-172-31-26-188" Nov 5 23:44:01.017075 kubelet[2949]: E1105 23:44:01.016723 2949 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-26-188\" not found" node="ip-172-31-26-188" Nov 5 23:44:01.018958 kubelet[2949]: E1105 23:44:01.018734 2949 kubelet.go:3305] "No need to create a mirror pod, 
since failed to get node info from the cluster" err="node \"ip-172-31-26-188\" not found" node="ip-172-31-26-188" Nov 5 23:44:02.021715 kubelet[2949]: E1105 23:44:02.020330 2949 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-26-188\" not found" node="ip-172-31-26-188" Nov 5 23:44:02.024056 kubelet[2949]: E1105 23:44:02.023781 2949 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-26-188\" not found" node="ip-172-31-26-188" Nov 5 23:44:04.243624 kubelet[2949]: E1105 23:44:04.243540 2949 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-26-188\" not found" node="ip-172-31-26-188" Nov 5 23:44:04.315203 kubelet[2949]: I1105 23:44:04.315000 2949 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-26-188" Nov 5 23:44:04.315203 kubelet[2949]: E1105 23:44:04.315062 2949 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ip-172-31-26-188\": node \"ip-172-31-26-188\" not found" Nov 5 23:44:04.367432 kubelet[2949]: I1105 23:44:04.367367 2949 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-26-188" Nov 5 23:44:04.418218 kubelet[2949]: E1105 23:44:04.418155 2949 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-26-188\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ip-172-31-26-188" Nov 5 23:44:04.418218 kubelet[2949]: I1105 23:44:04.418207 2949 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-26-188" Nov 5 23:44:04.427894 kubelet[2949]: E1105 23:44:04.427805 2949 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-26-188\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ip-172-31-26-188" Nov 5 23:44:04.427894 kubelet[2949]: I1105 23:44:04.427852 2949 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-26-188" Nov 5 23:44:04.434502 kubelet[2949]: E1105 23:44:04.434441 2949 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-26-188\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ip-172-31-26-188" Nov 5 23:44:04.836659 kubelet[2949]: I1105 23:44:04.836602 2949 apiserver.go:52] "Watching apiserver" Nov 5 23:44:04.867879 kubelet[2949]: I1105 23:44:04.867813 2949 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Nov 5 23:44:05.121232 update_engine[1882]: I20251105 23:44:05.120522 1882 update_attempter.cc:509] Updating boot flags... Nov 5 23:44:05.186476 kubelet[2949]: I1105 23:44:05.186391 2949 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-26-188" Nov 5 23:44:07.057443 systemd[1]: Reload requested from client PID 3412 ('systemctl') (unit session-7.scope)... Nov 5 23:44:07.058016 systemd[1]: Reloading... 
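The "Creating a mirror pod for static pod" entries refer to pods defined as manifest files under the static pod path logged earlier (/etc/kubernetes/manifests); the mirror pod is only their read-only reflection in the API server. A minimal sketch that lists those manifests follows; the kube-apiserver.yaml name in the comment is the conventional kubeadm file name, assumed rather than taken from this log.

package main

import (
	"fmt"
	"path/filepath"
)

func main() {
	// Static pod path as logged by the kubelet ("Adding static pod path").
	manifests, err := filepath.Glob("/etc/kubernetes/manifests/*.yaml")
	if err != nil {
		panic(err)
	}
	for _, m := range manifests {
		fmt.Println("static pod manifest:", m) // e.g. kube-apiserver.yaml on a control-plane node
	}
}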
Nov 5 23:44:07.146663 kubelet[2949]: I1105 23:44:07.145864 2949 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-26-188" Nov 5 23:44:07.201874 kubelet[2949]: I1105 23:44:07.200395 2949 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-26-188" Nov 5 23:44:07.321644 zram_generator::config[3462]: No configuration found. Nov 5 23:44:07.907439 systemd[1]: Reloading finished in 848 ms. Nov 5 23:44:07.971905 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Nov 5 23:44:07.996290 systemd[1]: kubelet.service: Deactivated successfully. Nov 5 23:44:07.996887 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 5 23:44:07.996990 systemd[1]: kubelet.service: Consumed 4.186s CPU time, 126.8M memory peak. Nov 5 23:44:08.007052 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 5 23:44:08.425761 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 5 23:44:08.443212 (kubelet)[3516]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 5 23:44:08.536359 kubelet[3516]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 5 23:44:08.536359 kubelet[3516]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 5 23:44:08.536359 kubelet[3516]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 5 23:44:08.536980 kubelet[3516]: I1105 23:44:08.536493 3516 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 5 23:44:08.553034 kubelet[3516]: I1105 23:44:08.552970 3516 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Nov 5 23:44:08.553034 kubelet[3516]: I1105 23:44:08.553019 3516 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 5 23:44:08.553459 kubelet[3516]: I1105 23:44:08.553409 3516 server.go:956] "Client rotation is on, will bootstrap in background" Nov 5 23:44:08.557692 kubelet[3516]: I1105 23:44:08.557634 3516 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Nov 5 23:44:08.567636 kubelet[3516]: I1105 23:44:08.567485 3516 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 5 23:44:08.586398 kubelet[3516]: I1105 23:44:08.584388 3516 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Nov 5 23:44:08.611423 kubelet[3516]: I1105 23:44:08.609790 3516 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Nov 5 23:44:08.612066 kubelet[3516]: I1105 23:44:08.611969 3516 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 5 23:44:08.612503 kubelet[3516]: I1105 23:44:08.612043 3516 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-26-188","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 5 23:44:08.612747 kubelet[3516]: I1105 23:44:08.612518 3516 topology_manager.go:138] "Creating topology manager with none policy" Nov 5 23:44:08.612747 kubelet[3516]: I1105 23:44:08.612549 3516 container_manager_linux.go:303] "Creating device plugin manager" Nov 5 23:44:08.614656 kubelet[3516]: I1105 23:44:08.613492 3516 state_mem.go:36] "Initialized new in-memory state store" Nov 5 23:44:08.614656 kubelet[3516]: I1105 23:44:08.613849 3516 kubelet.go:480] "Attempting to sync node with API server" Nov 5 23:44:08.614656 kubelet[3516]: I1105 23:44:08.613878 3516 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 5 23:44:08.614656 kubelet[3516]: I1105 23:44:08.613926 3516 kubelet.go:386] "Adding apiserver pod source" Nov 5 23:44:08.614656 kubelet[3516]: I1105 23:44:08.613957 3516 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 5 23:44:08.616338 kubelet[3516]: I1105 23:44:08.616267 3516 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Nov 5 23:44:08.618269 kubelet[3516]: I1105 23:44:08.618221 3516 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Nov 5 23:44:08.624145 kubelet[3516]: I1105 23:44:08.624089 3516 watchdog_linux.go:99] "Systemd watchdog is not enabled" Nov 5 23:44:08.624399 kubelet[3516]: I1105 23:44:08.624375 3516 server.go:1289] "Started kubelet" Nov 5 23:44:08.635614 kubelet[3516]: I1105 23:44:08.632998 3516 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 5 23:44:08.644630 kubelet[3516]: I1105 23:44:08.644536 3516 
server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Nov 5 23:44:08.665010 kubelet[3516]: I1105 23:44:08.664924 3516 volume_manager.go:297] "Starting Kubelet Volume Manager" Nov 5 23:44:08.667470 kubelet[3516]: E1105 23:44:08.667291 3516 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-26-188\" not found" Nov 5 23:44:08.669400 kubelet[3516]: I1105 23:44:08.669341 3516 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Nov 5 23:44:08.669400 kubelet[3516]: I1105 23:44:08.660566 3516 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 5 23:44:08.670324 kubelet[3516]: I1105 23:44:08.669816 3516 reconciler.go:26] "Reconciler: start to sync state" Nov 5 23:44:08.676819 kubelet[3516]: I1105 23:44:08.645880 3516 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 5 23:44:08.677106 kubelet[3516]: I1105 23:44:08.677049 3516 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 5 23:44:08.709031 kubelet[3516]: I1105 23:44:08.708873 3516 server.go:317] "Adding debug handlers to kubelet server" Nov 5 23:44:08.756797 kubelet[3516]: I1105 23:44:08.756730 3516 factory.go:223] Registration of the systemd container factory successfully Nov 5 23:44:08.757489 kubelet[3516]: I1105 23:44:08.757310 3516 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 5 23:44:08.771195 kubelet[3516]: E1105 23:44:08.771115 3516 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 5 23:44:08.774122 kubelet[3516]: I1105 23:44:08.774053 3516 factory.go:223] Registration of the containerd container factory successfully Nov 5 23:44:08.823991 kubelet[3516]: I1105 23:44:08.822825 3516 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Nov 5 23:44:08.827920 kubelet[3516]: I1105 23:44:08.827662 3516 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Nov 5 23:44:08.830169 kubelet[3516]: I1105 23:44:08.827871 3516 status_manager.go:230] "Starting to sync pod status with apiserver" Nov 5 23:44:08.830860 kubelet[3516]: I1105 23:44:08.830569 3516 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
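The HardEvictionThresholds list in the nodeConfig dump above encodes the defaults memory.available<100Mi, nodefs.available<10%, nodefs.inodesFree<5%, imagefs.available<15%, and imagefs.inodesFree<5%. The Go sketch below shows how such a LessThan threshold is evaluated against a signal; field names mirror the log, but the logic is a simplified illustration, not kubelet code.

package main

import "fmt"

// threshold is a cut-down version of the HardEvictionThresholds entries above:
// either an absolute quantity (bytes) or a percentage of capacity is set.
type threshold struct {
	signal     string
	quantity   int64   // absolute bytes, 0 if unset
	percentage float64 // fraction of capacity, 0 if unset
}

// crossed reports whether the signal's available amount is below the threshold
// ("Operator":"LessThan" in the log).
func crossed(t threshold, available, capacity int64) bool {
	limit := t.quantity
	if limit == 0 {
		limit = int64(t.percentage * float64(capacity))
	}
	return available < limit
}

func main() {
	mem := threshold{signal: "memory.available", quantity: 100 << 20} // 100Mi
	nodefs := threshold{signal: "nodefs.available", percentage: 0.10} // 10%
	fmt.Println(mem.signal, crossed(mem, 80<<20, 8<<30))         // true: only 80Mi free
	fmt.Println(nodefs.signal, crossed(nodefs, 50<<30, 100<<30)) // false: 50% of the disk is free
}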
Nov 5 23:44:08.830860 kubelet[3516]: I1105 23:44:08.830659 3516 kubelet.go:2436] "Starting kubelet main sync loop" Nov 5 23:44:08.833989 kubelet[3516]: E1105 23:44:08.833832 3516 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 5 23:44:08.934156 kubelet[3516]: E1105 23:44:08.934003 3516 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Nov 5 23:44:08.947074 kubelet[3516]: I1105 23:44:08.946898 3516 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 5 23:44:08.947612 kubelet[3516]: I1105 23:44:08.947521 3516 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 5 23:44:08.947612 kubelet[3516]: I1105 23:44:08.947574 3516 state_mem.go:36] "Initialized new in-memory state store" Nov 5 23:44:08.948087 kubelet[3516]: I1105 23:44:08.948052 3516 state_mem.go:88] "Updated default CPUSet" cpuSet="" Nov 5 23:44:08.948254 kubelet[3516]: I1105 23:44:08.948211 3516 state_mem.go:96] "Updated CPUSet assignments" assignments={} Nov 5 23:44:08.948369 kubelet[3516]: I1105 23:44:08.948351 3516 policy_none.go:49] "None policy: Start" Nov 5 23:44:08.948479 kubelet[3516]: I1105 23:44:08.948460 3516 memory_manager.go:186] "Starting memorymanager" policy="None" Nov 5 23:44:08.948625 kubelet[3516]: I1105 23:44:08.948582 3516 state_mem.go:35] "Initializing new in-memory state store" Nov 5 23:44:08.948952 kubelet[3516]: I1105 23:44:08.948928 3516 state_mem.go:75] "Updated machine memory state" Nov 5 23:44:08.961468 kubelet[3516]: E1105 23:44:08.961373 3516 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Nov 5 23:44:08.967382 kubelet[3516]: I1105 23:44:08.967343 3516 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 5 23:44:08.967675 kubelet[3516]: I1105 23:44:08.967580 3516 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 5 23:44:08.968551 kubelet[3516]: I1105 23:44:08.968282 3516 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 5 23:44:08.977008 kubelet[3516]: E1105 23:44:08.976930 3516 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Nov 5 23:44:09.101222 kubelet[3516]: I1105 23:44:09.101151 3516 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-26-188" Nov 5 23:44:09.120555 kubelet[3516]: I1105 23:44:09.120492 3516 kubelet_node_status.go:124] "Node was previously registered" node="ip-172-31-26-188" Nov 5 23:44:09.120788 kubelet[3516]: I1105 23:44:09.120686 3516 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-26-188" Nov 5 23:44:09.137677 kubelet[3516]: I1105 23:44:09.136521 3516 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-26-188" Nov 5 23:44:09.140023 kubelet[3516]: I1105 23:44:09.137398 3516 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-26-188" Nov 5 23:44:09.140911 kubelet[3516]: I1105 23:44:09.140761 3516 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-26-188" Nov 5 23:44:09.167434 kubelet[3516]: E1105 23:44:09.167227 3516 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-26-188\" already exists" pod="kube-system/kube-scheduler-ip-172-31-26-188" Nov 5 23:44:09.169057 kubelet[3516]: E1105 23:44:09.167836 3516 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-26-188\" already exists" pod="kube-system/kube-controller-manager-ip-172-31-26-188" Nov 5 23:44:09.170893 kubelet[3516]: E1105 23:44:09.170821 3516 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-26-188\" already exists" pod="kube-system/kube-apiserver-ip-172-31-26-188" Nov 5 23:44:09.175006 kubelet[3516]: I1105 23:44:09.174858 3516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fe412c00665911fab76f28933a16dd07-k8s-certs\") pod \"kube-apiserver-ip-172-31-26-188\" (UID: \"fe412c00665911fab76f28933a16dd07\") " pod="kube-system/kube-apiserver-ip-172-31-26-188" Nov 5 23:44:09.175418 kubelet[3516]: I1105 23:44:09.175334 3516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/cfda9020b61ffcadc5a95a24f8e29d42-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-26-188\" (UID: \"cfda9020b61ffcadc5a95a24f8e29d42\") " pod="kube-system/kube-controller-manager-ip-172-31-26-188" Nov 5 23:44:09.175944 kubelet[3516]: I1105 23:44:09.175835 3516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/cfda9020b61ffcadc5a95a24f8e29d42-k8s-certs\") pod \"kube-controller-manager-ip-172-31-26-188\" (UID: \"cfda9020b61ffcadc5a95a24f8e29d42\") " pod="kube-system/kube-controller-manager-ip-172-31-26-188" Nov 5 23:44:09.176616 kubelet[3516]: I1105 23:44:09.176468 3516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/cfda9020b61ffcadc5a95a24f8e29d42-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-26-188\" (UID: \"cfda9020b61ffcadc5a95a24f8e29d42\") " pod="kube-system/kube-controller-manager-ip-172-31-26-188" Nov 5 23:44:09.176961 kubelet[3516]: I1105 23:44:09.176816 3516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/deb2a269803dc6eccc49255969f2eef1-kubeconfig\") pod \"kube-scheduler-ip-172-31-26-188\" (UID: \"deb2a269803dc6eccc49255969f2eef1\") " pod="kube-system/kube-scheduler-ip-172-31-26-188" Nov 5 23:44:09.177348 kubelet[3516]: I1105 23:44:09.177197 3516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fe412c00665911fab76f28933a16dd07-ca-certs\") pod \"kube-apiserver-ip-172-31-26-188\" (UID: \"fe412c00665911fab76f28933a16dd07\") " pod="kube-system/kube-apiserver-ip-172-31-26-188" Nov 5 23:44:09.178032 kubelet[3516]: I1105 23:44:09.177688 3516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fe412c00665911fab76f28933a16dd07-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-26-188\" (UID: \"fe412c00665911fab76f28933a16dd07\") " pod="kube-system/kube-apiserver-ip-172-31-26-188" Nov 5 23:44:09.179134 kubelet[3516]: I1105 23:44:09.178288 3516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/cfda9020b61ffcadc5a95a24f8e29d42-ca-certs\") pod \"kube-controller-manager-ip-172-31-26-188\" (UID: \"cfda9020b61ffcadc5a95a24f8e29d42\") " pod="kube-system/kube-controller-manager-ip-172-31-26-188" Nov 5 23:44:09.179459 kubelet[3516]: I1105 23:44:09.179336 3516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/cfda9020b61ffcadc5a95a24f8e29d42-kubeconfig\") pod \"kube-controller-manager-ip-172-31-26-188\" (UID: \"cfda9020b61ffcadc5a95a24f8e29d42\") " pod="kube-system/kube-controller-manager-ip-172-31-26-188" Nov 5 23:44:09.634721 kubelet[3516]: I1105 23:44:09.634653 3516 apiserver.go:52] "Watching apiserver" Nov 5 23:44:09.669903 kubelet[3516]: I1105 23:44:09.669832 3516 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Nov 5 23:44:09.885141 kubelet[3516]: I1105 23:44:09.884774 3516 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-26-188" podStartSLOduration=2.884722938 podStartE2EDuration="2.884722938s" podCreationTimestamp="2025-11-05 23:44:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-05 23:44:09.848531874 +0000 UTC m=+1.395080564" watchObservedRunningTime="2025-11-05 23:44:09.884722938 +0000 UTC m=+1.431271616" Nov 5 23:44:09.898475 kubelet[3516]: I1105 23:44:09.897960 3516 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-26-188" Nov 5 23:44:09.918158 kubelet[3516]: E1105 23:44:09.917796 3516 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-26-188\" already exists" pod="kube-system/kube-controller-manager-ip-172-31-26-188" Nov 5 23:44:09.926700 kubelet[3516]: I1105 23:44:09.926342 3516 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-26-188" podStartSLOduration=4.926200326 podStartE2EDuration="4.926200326s" podCreationTimestamp="2025-11-05 23:44:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-05 
23:44:09.889282218 +0000 UTC m=+1.435830896" watchObservedRunningTime="2025-11-05 23:44:09.926200326 +0000 UTC m=+1.472749016" Nov 5 23:44:09.951858 kubelet[3516]: I1105 23:44:09.950817 3516 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-26-188" podStartSLOduration=2.950775894 podStartE2EDuration="2.950775894s" podCreationTimestamp="2025-11-05 23:44:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-05 23:44:09.92829009 +0000 UTC m=+1.474838780" watchObservedRunningTime="2025-11-05 23:44:09.950775894 +0000 UTC m=+1.497324560" Nov 5 23:44:12.123930 kubelet[3516]: I1105 23:44:12.123825 3516 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Nov 5 23:44:12.125317 kubelet[3516]: I1105 23:44:12.125068 3516 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Nov 5 23:44:12.125379 containerd[1978]: time="2025-11-05T23:44:12.124664969Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Nov 5 23:44:13.169310 systemd[1]: Created slice kubepods-besteffort-pod0b0f11be_364d_4d99_823a_019db2111dfd.slice - libcontainer container kubepods-besteffort-pod0b0f11be_364d_4d99_823a_019db2111dfd.slice. Nov 5 23:44:13.210859 kubelet[3516]: I1105 23:44:13.210810 3516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0b0f11be-364d-4d99-823a-019db2111dfd-xtables-lock\") pod \"kube-proxy-245zw\" (UID: \"0b0f11be-364d-4d99-823a-019db2111dfd\") " pod="kube-system/kube-proxy-245zw" Nov 5 23:44:13.211781 kubelet[3516]: I1105 23:44:13.211512 3516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2htv5\" (UniqueName: \"kubernetes.io/projected/0b0f11be-364d-4d99-823a-019db2111dfd-kube-api-access-2htv5\") pod \"kube-proxy-245zw\" (UID: \"0b0f11be-364d-4d99-823a-019db2111dfd\") " pod="kube-system/kube-proxy-245zw" Nov 5 23:44:13.211781 kubelet[3516]: I1105 23:44:13.211627 3516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/0b0f11be-364d-4d99-823a-019db2111dfd-kube-proxy\") pod \"kube-proxy-245zw\" (UID: \"0b0f11be-364d-4d99-823a-019db2111dfd\") " pod="kube-system/kube-proxy-245zw" Nov 5 23:44:13.211781 kubelet[3516]: I1105 23:44:13.211671 3516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0b0f11be-364d-4d99-823a-019db2111dfd-lib-modules\") pod \"kube-proxy-245zw\" (UID: \"0b0f11be-364d-4d99-823a-019db2111dfd\") " pod="kube-system/kube-proxy-245zw" Nov 5 23:44:13.376396 systemd[1]: Created slice kubepods-besteffort-pod8e5b9088_c572_4869_b570_6a91c2cc66b3.slice - libcontainer container kubepods-besteffort-pod8e5b9088_c572_4869_b570_6a91c2cc66b3.slice. 
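[Editor's note] The pod_startup_latency_tracker entries above record several timestamps and durations for the same pod. The figures are internally consistent with podStartE2EDuration being watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration additionally excluding the image-pull window (lastFinishedPulling minus firstStartedPulling); that reading is inferred from the logged values themselves, not from kubelet internals. A minimal Go check against the kube-controller-manager entry:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Timestamps copied from the kube-controller-manager entry above
	// (monotonic "m=+..." suffix dropped). The layout is Go's default
	// time.Time String() format, which is what these fields look like.
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

	created, _ := time.Parse(layout, "2025-11-05 23:44:07 +0000 UTC")
	watched, _ := time.Parse(layout, "2025-11-05 23:44:09.884722938 +0000 UTC")

	fmt.Println(watched.Sub(created)) // 2.884722938s == podStartE2EDuration in the log
}
```

For the static pods here the pull timestamps are the zero time, so the SLO and E2E figures coincide; the tigera-operator entry later in the log differs by exactly its pull window (24.512369539s − 2.402536116s = 22.109833423s).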
Nov 5 23:44:13.414341 kubelet[3516]: I1105 23:44:13.414286 3516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/8e5b9088-c572-4869-b570-6a91c2cc66b3-var-lib-calico\") pod \"tigera-operator-7dcd859c48-mlbcn\" (UID: \"8e5b9088-c572-4869-b570-6a91c2cc66b3\") " pod="tigera-operator/tigera-operator-7dcd859c48-mlbcn" Nov 5 23:44:13.414714 kubelet[3516]: I1105 23:44:13.414671 3516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7p4tb\" (UniqueName: \"kubernetes.io/projected/8e5b9088-c572-4869-b570-6a91c2cc66b3-kube-api-access-7p4tb\") pod \"tigera-operator-7dcd859c48-mlbcn\" (UID: \"8e5b9088-c572-4869-b570-6a91c2cc66b3\") " pod="tigera-operator/tigera-operator-7dcd859c48-mlbcn" Nov 5 23:44:13.489506 containerd[1978]: time="2025-11-05T23:44:13.488974172Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-245zw,Uid:0b0f11be-364d-4d99-823a-019db2111dfd,Namespace:kube-system,Attempt:0,}" Nov 5 23:44:13.540364 containerd[1978]: time="2025-11-05T23:44:13.539849624Z" level=info msg="connecting to shim 8c134cd91522acce1889760c80d6714eca566c807fd9a72782c3009d61c69c38" address="unix:///run/containerd/s/8caa7450b0317b0d7dd9062c8e094c864caa21c39d4fe333236af8b181407935" namespace=k8s.io protocol=ttrpc version=3 Nov 5 23:44:13.598894 systemd[1]: Started cri-containerd-8c134cd91522acce1889760c80d6714eca566c807fd9a72782c3009d61c69c38.scope - libcontainer container 8c134cd91522acce1889760c80d6714eca566c807fd9a72782c3009d61c69c38. Nov 5 23:44:13.658378 containerd[1978]: time="2025-11-05T23:44:13.658305441Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-245zw,Uid:0b0f11be-364d-4d99-823a-019db2111dfd,Namespace:kube-system,Attempt:0,} returns sandbox id \"8c134cd91522acce1889760c80d6714eca566c807fd9a72782c3009d61c69c38\"" Nov 5 23:44:13.672635 containerd[1978]: time="2025-11-05T23:44:13.671102241Z" level=info msg="CreateContainer within sandbox \"8c134cd91522acce1889760c80d6714eca566c807fd9a72782c3009d61c69c38\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Nov 5 23:44:13.693773 containerd[1978]: time="2025-11-05T23:44:13.693716409Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-mlbcn,Uid:8e5b9088-c572-4869-b570-6a91c2cc66b3,Namespace:tigera-operator,Attempt:0,}" Nov 5 23:44:13.697655 containerd[1978]: time="2025-11-05T23:44:13.693951309Z" level=info msg="Container 5d7293ce92e6dc6ba4ce1cd1df657e9d8e8add0a337885989af5c653d51dc96d: CDI devices from CRI Config.CDIDevices: []" Nov 5 23:44:13.723323 containerd[1978]: time="2025-11-05T23:44:13.723245925Z" level=info msg="CreateContainer within sandbox \"8c134cd91522acce1889760c80d6714eca566c807fd9a72782c3009d61c69c38\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"5d7293ce92e6dc6ba4ce1cd1df657e9d8e8add0a337885989af5c653d51dc96d\"" Nov 5 23:44:13.726545 containerd[1978]: time="2025-11-05T23:44:13.724857897Z" level=info msg="StartContainer for \"5d7293ce92e6dc6ba4ce1cd1df657e9d8e8add0a337885989af5c653d51dc96d\"" Nov 5 23:44:13.730982 containerd[1978]: time="2025-11-05T23:44:13.730868349Z" level=info msg="connecting to shim 5d7293ce92e6dc6ba4ce1cd1df657e9d8e8add0a337885989af5c653d51dc96d" address="unix:///run/containerd/s/8caa7450b0317b0d7dd9062c8e094c864caa21c39d4fe333236af8b181407935" protocol=ttrpc version=3 Nov 5 23:44:13.758752 containerd[1978]: 
time="2025-11-05T23:44:13.758186877Z" level=info msg="connecting to shim fb10f2b5243e1e68f5f155a31121c1a5e07a173241da44f38148418517afac70" address="unix:///run/containerd/s/049c36ff97d11ef39bba8df1e098a4335da0e2a9597e20d440dd0ad13a01e9fc" namespace=k8s.io protocol=ttrpc version=3 Nov 5 23:44:13.779062 systemd[1]: Started cri-containerd-5d7293ce92e6dc6ba4ce1cd1df657e9d8e8add0a337885989af5c653d51dc96d.scope - libcontainer container 5d7293ce92e6dc6ba4ce1cd1df657e9d8e8add0a337885989af5c653d51dc96d. Nov 5 23:44:13.829915 systemd[1]: Started cri-containerd-fb10f2b5243e1e68f5f155a31121c1a5e07a173241da44f38148418517afac70.scope - libcontainer container fb10f2b5243e1e68f5f155a31121c1a5e07a173241da44f38148418517afac70. Nov 5 23:44:13.961639 containerd[1978]: time="2025-11-05T23:44:13.961461970Z" level=info msg="StartContainer for \"5d7293ce92e6dc6ba4ce1cd1df657e9d8e8add0a337885989af5c653d51dc96d\" returns successfully" Nov 5 23:44:13.965817 containerd[1978]: time="2025-11-05T23:44:13.965737678Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-mlbcn,Uid:8e5b9088-c572-4869-b570-6a91c2cc66b3,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"fb10f2b5243e1e68f5f155a31121c1a5e07a173241da44f38148418517afac70\"" Nov 5 23:44:13.973880 containerd[1978]: time="2025-11-05T23:44:13.973829350Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Nov 5 23:44:15.422232 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount880979846.mount: Deactivated successfully. Nov 5 23:44:16.359639 containerd[1978]: time="2025-11-05T23:44:16.359016226Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 23:44:16.361077 containerd[1978]: time="2025-11-05T23:44:16.361031638Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=22152004" Nov 5 23:44:16.362527 containerd[1978]: time="2025-11-05T23:44:16.362442082Z" level=info msg="ImageCreate event name:\"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 23:44:16.368649 containerd[1978]: time="2025-11-05T23:44:16.367953406Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 23:44:16.370858 containerd[1978]: time="2025-11-05T23:44:16.370671850Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"22147999\" in 2.396412372s" Nov 5 23:44:16.370858 containerd[1978]: time="2025-11-05T23:44:16.370735558Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\"" Nov 5 23:44:16.381041 containerd[1978]: time="2025-11-05T23:44:16.380911654Z" level=info msg="CreateContainer within sandbox \"fb10f2b5243e1e68f5f155a31121c1a5e07a173241da44f38148418517afac70\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Nov 5 23:44:16.399644 containerd[1978]: time="2025-11-05T23:44:16.397825306Z" level=info msg="Container 
16e860a763b6357ecbea6010d8db8f2271150bf08b466c6a6121c8dbcdd04e29: CDI devices from CRI Config.CDIDevices: []" Nov 5 23:44:16.406912 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount253180794.mount: Deactivated successfully. Nov 5 23:44:16.421635 containerd[1978]: time="2025-11-05T23:44:16.421559351Z" level=info msg="CreateContainer within sandbox \"fb10f2b5243e1e68f5f155a31121c1a5e07a173241da44f38148418517afac70\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"16e860a763b6357ecbea6010d8db8f2271150bf08b466c6a6121c8dbcdd04e29\"" Nov 5 23:44:16.422811 containerd[1978]: time="2025-11-05T23:44:16.422765567Z" level=info msg="StartContainer for \"16e860a763b6357ecbea6010d8db8f2271150bf08b466c6a6121c8dbcdd04e29\"" Nov 5 23:44:16.424848 containerd[1978]: time="2025-11-05T23:44:16.424745735Z" level=info msg="connecting to shim 16e860a763b6357ecbea6010d8db8f2271150bf08b466c6a6121c8dbcdd04e29" address="unix:///run/containerd/s/049c36ff97d11ef39bba8df1e098a4335da0e2a9597e20d440dd0ad13a01e9fc" protocol=ttrpc version=3 Nov 5 23:44:16.467888 systemd[1]: Started cri-containerd-16e860a763b6357ecbea6010d8db8f2271150bf08b466c6a6121c8dbcdd04e29.scope - libcontainer container 16e860a763b6357ecbea6010d8db8f2271150bf08b466c6a6121c8dbcdd04e29. Nov 5 23:44:16.529783 containerd[1978]: time="2025-11-05T23:44:16.529541675Z" level=info msg="StartContainer for \"16e860a763b6357ecbea6010d8db8f2271150bf08b466c6a6121c8dbcdd04e29\" returns successfully" Nov 5 23:44:16.960648 kubelet[3516]: I1105 23:44:16.959692 3516 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-245zw" podStartSLOduration=3.959666473 podStartE2EDuration="3.959666473s" podCreationTimestamp="2025-11-05 23:44:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-05 23:44:14.953721791 +0000 UTC m=+6.500270469" watchObservedRunningTime="2025-11-05 23:44:16.959666473 +0000 UTC m=+8.506215163" Nov 5 23:44:23.595408 sudo[2360]: pam_unix(sudo:session): session closed for user root Nov 5 23:44:23.619870 sshd[2359]: Connection closed by 147.75.109.163 port 59876 Nov 5 23:44:23.620972 sshd-session[2356]: pam_unix(sshd:session): session closed for user core Nov 5 23:44:23.628556 systemd[1]: sshd@6-172.31.26.188:22-147.75.109.163:59876.service: Deactivated successfully. Nov 5 23:44:23.629197 systemd-logind[1881]: Session 7 logged out. Waiting for processes to exit. Nov 5 23:44:23.634646 systemd[1]: session-7.scope: Deactivated successfully. Nov 5 23:44:23.635780 systemd[1]: session-7.scope: Consumed 13.266s CPU time, 224.3M memory peak. Nov 5 23:44:23.643083 systemd-logind[1881]: Removed session 7. 
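[Editor's note] The containerd lines in this stretch are logfmt-style key=value records (time, level, msg, plus occasional address/protocol/namespace fields). An illustrative Go sketch for pulling those fields out of one entry; the regexp split and field handling are assumptions about the layout, not containerd's own parsing code:

```go
package main

import (
	"fmt"
	"regexp"
	"strconv"
	"time"
)

// Matches key=value pairs where the value is either a quoted string
// (with \" escapes, as in the msg fields above) or a bare token.
var field = regexp.MustCompile(`(\w+)=("(?:[^"\\]|\\.)*"|\S+)`)

func parse(entry string) map[string]string {
	out := map[string]string{}
	for _, m := range field.FindAllStringSubmatch(entry, -1) {
		key, val := m[1], m[2]
		if unq, err := strconv.Unquote(val); err == nil {
			val = unq // strip surrounding quotes and \" escapes
		}
		out[key] = val
	}
	return out
}

func main() {
	// One of the containerd entries above, verbatim.
	entry := `time="2025-11-05T23:44:13.961461970Z" level=info msg="StartContainer for \"5d7293ce92e6dc6ba4ce1cd1df657e9d8e8add0a337885989af5c653d51dc96d\" returns successfully"`
	f := parse(entry)
	ts, _ := time.Parse(time.RFC3339Nano, f["time"])
	fmt.Println(ts.UTC(), f["level"], f["msg"])
}
```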
Nov 5 23:44:37.512656 kubelet[3516]: I1105 23:44:37.512391 3516 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-mlbcn" podStartSLOduration=22.109833423 podStartE2EDuration="24.512369539s" podCreationTimestamp="2025-11-05 23:44:13 +0000 UTC" firstStartedPulling="2025-11-05 23:44:13.970892806 +0000 UTC m=+5.517441472" lastFinishedPulling="2025-11-05 23:44:16.373428922 +0000 UTC m=+7.919977588" observedRunningTime="2025-11-05 23:44:16.961220485 +0000 UTC m=+8.507769343" watchObservedRunningTime="2025-11-05 23:44:37.512369539 +0000 UTC m=+29.058918241" Nov 5 23:44:37.535525 systemd[1]: Created slice kubepods-besteffort-pod8c5e7a8b_7510_426a_afc1_0c8bf0179695.slice - libcontainer container kubepods-besteffort-pod8c5e7a8b_7510_426a_afc1_0c8bf0179695.slice. Nov 5 23:44:37.579204 kubelet[3516]: I1105 23:44:37.579153 3516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/8c5e7a8b-7510-426a-afc1-0c8bf0179695-typha-certs\") pod \"calico-typha-d78957cbc-vzlfb\" (UID: \"8c5e7a8b-7510-426a-afc1-0c8bf0179695\") " pod="calico-system/calico-typha-d78957cbc-vzlfb" Nov 5 23:44:37.579562 kubelet[3516]: I1105 23:44:37.579517 3516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8c5e7a8b-7510-426a-afc1-0c8bf0179695-tigera-ca-bundle\") pod \"calico-typha-d78957cbc-vzlfb\" (UID: \"8c5e7a8b-7510-426a-afc1-0c8bf0179695\") " pod="calico-system/calico-typha-d78957cbc-vzlfb" Nov 5 23:44:37.579850 kubelet[3516]: I1105 23:44:37.579823 3516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bpsmz\" (UniqueName: \"kubernetes.io/projected/8c5e7a8b-7510-426a-afc1-0c8bf0179695-kube-api-access-bpsmz\") pod \"calico-typha-d78957cbc-vzlfb\" (UID: \"8c5e7a8b-7510-426a-afc1-0c8bf0179695\") " pod="calico-system/calico-typha-d78957cbc-vzlfb" Nov 5 23:44:37.729576 systemd[1]: Created slice kubepods-besteffort-pod19c96451_f5b2_4383_bf37_3383c5ef85af.slice - libcontainer container kubepods-besteffort-pod19c96451_f5b2_4383_bf37_3383c5ef85af.slice. 
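[Editor's note] The long run of driver-call.go / plugins.go errors that follows is the kubelet's FlexVolume prober invoking an "init" command on a driver named nodeagent~uds whose binary is not installed yet: with no executable there is no output, and an empty string is not valid JSON, hence "unexpected end of JSON input". The flexvol-driver-host host path mounted into calico-node above is typically where Calico installs that driver. A minimal Go sketch, not kubelet's actual code, reproducing the two error strings:

```go
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// Simplified stand-in for the status the prober expects back; the real
// FlexVolume contract has the driver print JSON such as {"status":"Success"}.
type driverStatus struct {
	Status  string `json:"status"`
	Message string `json:"message,omitempty"`
}

func main() {
	// Looking up a driver binary that is not on $PATH yields the first message.
	if _, err := exec.LookPath("uds"); err != nil {
		fmt.Println("driver call failed:", err) // exec: "uds": executable file not found in $PATH
	}

	// A missing binary produces no output, and "" is not valid JSON,
	// which yields the second message.
	var st driverStatus
	if err := json.Unmarshal([]byte(""), &st); err != nil {
		fmt.Println("failed to unmarshal output for command init:", err) // unexpected end of JSON input
	}
}
```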
Nov 5 23:44:37.781929 kubelet[3516]: I1105 23:44:37.781764 3516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/19c96451-f5b2-4383-bf37-3383c5ef85af-flexvol-driver-host\") pod \"calico-node-927ws\" (UID: \"19c96451-f5b2-4383-bf37-3383c5ef85af\") " pod="calico-system/calico-node-927ws" Nov 5 23:44:37.783933 kubelet[3516]: I1105 23:44:37.782732 3516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/19c96451-f5b2-4383-bf37-3383c5ef85af-node-certs\") pod \"calico-node-927ws\" (UID: \"19c96451-f5b2-4383-bf37-3383c5ef85af\") " pod="calico-system/calico-node-927ws" Nov 5 23:44:37.784917 kubelet[3516]: I1105 23:44:37.784835 3516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/19c96451-f5b2-4383-bf37-3383c5ef85af-cni-log-dir\") pod \"calico-node-927ws\" (UID: \"19c96451-f5b2-4383-bf37-3383c5ef85af\") " pod="calico-system/calico-node-927ws" Nov 5 23:44:37.785443 kubelet[3516]: I1105 23:44:37.785256 3516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/19c96451-f5b2-4383-bf37-3383c5ef85af-tigera-ca-bundle\") pod \"calico-node-927ws\" (UID: \"19c96451-f5b2-4383-bf37-3383c5ef85af\") " pod="calico-system/calico-node-927ws" Nov 5 23:44:37.786726 kubelet[3516]: I1105 23:44:37.786656 3516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/19c96451-f5b2-4383-bf37-3383c5ef85af-xtables-lock\") pod \"calico-node-927ws\" (UID: \"19c96451-f5b2-4383-bf37-3383c5ef85af\") " pod="calico-system/calico-node-927ws" Nov 5 23:44:37.787106 kubelet[3516]: I1105 23:44:37.786923 3516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mgzq7\" (UniqueName: \"kubernetes.io/projected/19c96451-f5b2-4383-bf37-3383c5ef85af-kube-api-access-mgzq7\") pod \"calico-node-927ws\" (UID: \"19c96451-f5b2-4383-bf37-3383c5ef85af\") " pod="calico-system/calico-node-927ws" Nov 5 23:44:37.787732 kubelet[3516]: I1105 23:44:37.787582 3516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/19c96451-f5b2-4383-bf37-3383c5ef85af-var-lib-calico\") pod \"calico-node-927ws\" (UID: \"19c96451-f5b2-4383-bf37-3383c5ef85af\") " pod="calico-system/calico-node-927ws" Nov 5 23:44:37.788616 kubelet[3516]: I1105 23:44:37.788161 3516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/19c96451-f5b2-4383-bf37-3383c5ef85af-var-run-calico\") pod \"calico-node-927ws\" (UID: \"19c96451-f5b2-4383-bf37-3383c5ef85af\") " pod="calico-system/calico-node-927ws" Nov 5 23:44:37.790615 kubelet[3516]: I1105 23:44:37.789045 3516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/19c96451-f5b2-4383-bf37-3383c5ef85af-cni-net-dir\") pod \"calico-node-927ws\" (UID: \"19c96451-f5b2-4383-bf37-3383c5ef85af\") " pod="calico-system/calico-node-927ws" Nov 5 23:44:37.791075 kubelet[3516]: I1105 23:44:37.790921 3516 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/19c96451-f5b2-4383-bf37-3383c5ef85af-lib-modules\") pod \"calico-node-927ws\" (UID: \"19c96451-f5b2-4383-bf37-3383c5ef85af\") " pod="calico-system/calico-node-927ws" Nov 5 23:44:37.791614 kubelet[3516]: I1105 23:44:37.791355 3516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/19c96451-f5b2-4383-bf37-3383c5ef85af-policysync\") pod \"calico-node-927ws\" (UID: \"19c96451-f5b2-4383-bf37-3383c5ef85af\") " pod="calico-system/calico-node-927ws" Nov 5 23:44:37.791614 kubelet[3516]: I1105 23:44:37.791461 3516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/19c96451-f5b2-4383-bf37-3383c5ef85af-cni-bin-dir\") pod \"calico-node-927ws\" (UID: \"19c96451-f5b2-4383-bf37-3383c5ef85af\") " pod="calico-system/calico-node-927ws" Nov 5 23:44:37.845910 containerd[1978]: time="2025-11-05T23:44:37.845449521Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-d78957cbc-vzlfb,Uid:8c5e7a8b-7510-426a-afc1-0c8bf0179695,Namespace:calico-system,Attempt:0,}" Nov 5 23:44:37.903281 kubelet[3516]: E1105 23:44:37.903205 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:44:37.906800 kubelet[3516]: W1105 23:44:37.906751 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:44:37.907188 kubelet[3516]: E1105 23:44:37.907056 3516 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 23:44:37.909845 kubelet[3516]: E1105 23:44:37.909283 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:44:37.909845 kubelet[3516]: W1105 23:44:37.909401 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:44:37.909845 kubelet[3516]: E1105 23:44:37.909457 3516 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 23:44:37.912789 kubelet[3516]: E1105 23:44:37.912749 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:44:37.913001 kubelet[3516]: W1105 23:44:37.912973 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:44:37.918348 kubelet[3516]: E1105 23:44:37.917851 3516 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 23:44:37.927675 kubelet[3516]: E1105 23:44:37.927638 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:44:37.928090 kubelet[3516]: W1105 23:44:37.927906 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:44:37.928370 kubelet[3516]: E1105 23:44:37.928206 3516 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 23:44:37.933524 kubelet[3516]: E1105 23:44:37.933450 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:44:37.934095 kubelet[3516]: W1105 23:44:37.933488 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:44:37.934451 kubelet[3516]: E1105 23:44:37.934376 3516 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 23:44:37.936265 kubelet[3516]: E1105 23:44:37.936184 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:44:37.936644 kubelet[3516]: W1105 23:44:37.936231 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:44:37.936644 kubelet[3516]: E1105 23:44:37.936570 3516 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 23:44:37.938847 kubelet[3516]: E1105 23:44:37.938812 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:44:37.939233 kubelet[3516]: W1105 23:44:37.938992 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:44:37.939233 kubelet[3516]: E1105 23:44:37.939031 3516 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 23:44:37.940857 kubelet[3516]: E1105 23:44:37.940812 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:44:37.941288 kubelet[3516]: W1105 23:44:37.941014 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:44:37.941288 kubelet[3516]: E1105 23:44:37.941058 3516 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 23:44:37.942062 kubelet[3516]: E1105 23:44:37.942026 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:44:37.942948 kubelet[3516]: W1105 23:44:37.942215 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:44:37.942948 kubelet[3516]: E1105 23:44:37.942258 3516 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 23:44:37.943528 containerd[1978]: time="2025-11-05T23:44:37.943194130Z" level=info msg="connecting to shim 4748073c55d5e365752ac574620f2d31646099b65839daddefee8b4f96a897c8" address="unix:///run/containerd/s/126594706601ed2b50cba1491d73fe970b2f2aea33d98a504254e841fb63dffb" namespace=k8s.io protocol=ttrpc version=3 Nov 5 23:44:37.948036 kubelet[3516]: E1105 23:44:37.947997 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:44:37.948358 kubelet[3516]: W1105 23:44:37.948308 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:44:37.948800 kubelet[3516]: E1105 23:44:37.948771 3516 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 23:44:37.950919 kubelet[3516]: E1105 23:44:37.950840 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:44:37.951394 kubelet[3516]: W1105 23:44:37.951233 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:44:37.951650 kubelet[3516]: E1105 23:44:37.951502 3516 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 23:44:37.953492 kubelet[3516]: E1105 23:44:37.953402 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:44:37.953916 kubelet[3516]: W1105 23:44:37.953754 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:44:37.954555 kubelet[3516]: E1105 23:44:37.954479 3516 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 23:44:37.955873 kubelet[3516]: E1105 23:44:37.955837 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:44:37.956390 kubelet[3516]: W1105 23:44:37.956108 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:44:37.956390 kubelet[3516]: E1105 23:44:37.956161 3516 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 23:44:37.964161 kubelet[3516]: E1105 23:44:37.963998 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:44:37.964509 kubelet[3516]: W1105 23:44:37.964406 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:44:37.965025 kubelet[3516]: E1105 23:44:37.964791 3516 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 23:44:37.967674 kubelet[3516]: E1105 23:44:37.967620 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:44:37.969237 kubelet[3516]: W1105 23:44:37.969050 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:44:37.971821 kubelet[3516]: E1105 23:44:37.971335 3516 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 23:44:37.972730 kubelet[3516]: E1105 23:44:37.972577 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:44:37.975788 kubelet[3516]: W1105 23:44:37.973526 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:44:37.976255 kubelet[3516]: E1105 23:44:37.976208 3516 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 23:44:37.981805 kubelet[3516]: E1105 23:44:37.981734 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:44:37.982163 kubelet[3516]: W1105 23:44:37.981777 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:44:37.982163 kubelet[3516]: E1105 23:44:37.982123 3516 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 23:44:37.985352 kubelet[3516]: E1105 23:44:37.985269 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:44:37.985352 kubelet[3516]: W1105 23:44:37.985301 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:44:37.985872 kubelet[3516]: E1105 23:44:37.985796 3516 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 23:44:37.992884 kubelet[3516]: E1105 23:44:37.992728 3516 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7km8q" podUID="89a32bf2-ec2a-4f35-b294-b2467c662fb4" Nov 5 23:44:38.045527 kubelet[3516]: E1105 23:44:38.045444 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:44:38.045922 kubelet[3516]: W1105 23:44:38.045494 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:44:38.045922 kubelet[3516]: E1105 23:44:38.045755 3516 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 23:44:38.063909 systemd[1]: Started cri-containerd-4748073c55d5e365752ac574620f2d31646099b65839daddefee8b4f96a897c8.scope - libcontainer container 4748073c55d5e365752ac574620f2d31646099b65839daddefee8b4f96a897c8. Nov 5 23:44:38.070534 kubelet[3516]: E1105 23:44:38.070485 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:44:38.071414 kubelet[3516]: W1105 23:44:38.071351 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:44:38.072283 kubelet[3516]: E1105 23:44:38.072179 3516 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 23:44:38.076039 kubelet[3516]: E1105 23:44:38.075950 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:44:38.076817 kubelet[3516]: W1105 23:44:38.075990 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:44:38.077852 kubelet[3516]: E1105 23:44:38.077046 3516 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 23:44:38.082350 kubelet[3516]: E1105 23:44:38.082274 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:44:38.082952 kubelet[3516]: W1105 23:44:38.082696 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:44:38.082952 kubelet[3516]: E1105 23:44:38.082738 3516 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 23:44:38.084944 kubelet[3516]: E1105 23:44:38.084888 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:44:38.085167 kubelet[3516]: W1105 23:44:38.085084 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:44:38.085167 kubelet[3516]: E1105 23:44:38.085124 3516 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 23:44:38.086570 kubelet[3516]: E1105 23:44:38.086152 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:44:38.086570 kubelet[3516]: W1105 23:44:38.086184 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:44:38.086570 kubelet[3516]: E1105 23:44:38.086214 3516 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 23:44:38.087643 kubelet[3516]: E1105 23:44:38.087519 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:44:38.087643 kubelet[3516]: W1105 23:44:38.087552 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:44:38.088019 kubelet[3516]: E1105 23:44:38.087777 3516 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 23:44:38.089676 kubelet[3516]: E1105 23:44:38.089458 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:44:38.089676 kubelet[3516]: W1105 23:44:38.089497 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:44:38.089676 kubelet[3516]: E1105 23:44:38.089529 3516 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 23:44:38.091336 kubelet[3516]: E1105 23:44:38.091124 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:44:38.091336 kubelet[3516]: W1105 23:44:38.091162 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:44:38.091336 kubelet[3516]: E1105 23:44:38.091195 3516 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 23:44:38.093790 kubelet[3516]: E1105 23:44:38.093737 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:44:38.094102 kubelet[3516]: W1105 23:44:38.093965 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:44:38.094102 kubelet[3516]: E1105 23:44:38.094009 3516 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 23:44:38.094765 kubelet[3516]: E1105 23:44:38.094623 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:44:38.094765 kubelet[3516]: W1105 23:44:38.094652 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:44:38.094765 kubelet[3516]: E1105 23:44:38.094677 3516 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 23:44:38.096832 kubelet[3516]: E1105 23:44:38.095327 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:44:38.096832 kubelet[3516]: W1105 23:44:38.096666 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:44:38.096832 kubelet[3516]: E1105 23:44:38.096714 3516 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 23:44:38.097538 kubelet[3516]: E1105 23:44:38.097433 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:44:38.097538 kubelet[3516]: W1105 23:44:38.097466 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:44:38.097538 kubelet[3516]: E1105 23:44:38.097494 3516 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 23:44:38.098792 kubelet[3516]: E1105 23:44:38.098642 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:44:38.098792 kubelet[3516]: W1105 23:44:38.098678 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:44:38.098792 kubelet[3516]: E1105 23:44:38.098708 3516 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 23:44:38.099718 kubelet[3516]: E1105 23:44:38.099480 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:44:38.099718 kubelet[3516]: W1105 23:44:38.099510 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:44:38.099718 kubelet[3516]: E1105 23:44:38.099542 3516 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 23:44:38.102655 kubelet[3516]: E1105 23:44:38.101971 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:44:38.103110 kubelet[3516]: W1105 23:44:38.102810 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:44:38.103110 kubelet[3516]: E1105 23:44:38.102867 3516 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 23:44:38.104565 kubelet[3516]: E1105 23:44:38.103934 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:44:38.104565 kubelet[3516]: W1105 23:44:38.103964 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:44:38.104565 kubelet[3516]: E1105 23:44:38.103996 3516 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 23:44:38.107773 kubelet[3516]: E1105 23:44:38.107459 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:44:38.107773 kubelet[3516]: W1105 23:44:38.107488 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:44:38.107773 kubelet[3516]: E1105 23:44:38.107519 3516 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 23:44:38.111955 kubelet[3516]: E1105 23:44:38.111308 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:44:38.111955 kubelet[3516]: W1105 23:44:38.111559 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:44:38.111955 kubelet[3516]: E1105 23:44:38.111616 3516 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 23:44:38.113334 kubelet[3516]: E1105 23:44:38.113289 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:44:38.113926 kubelet[3516]: W1105 23:44:38.113518 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:44:38.113926 kubelet[3516]: E1105 23:44:38.113562 3516 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 23:44:38.114925 kubelet[3516]: E1105 23:44:38.114896 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:44:38.115118 kubelet[3516]: W1105 23:44:38.115038 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:44:38.115118 kubelet[3516]: E1105 23:44:38.115077 3516 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 23:44:38.116272 kubelet[3516]: E1105 23:44:38.116223 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:44:38.116731 kubelet[3516]: W1105 23:44:38.116571 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:44:38.116927 kubelet[3516]: E1105 23:44:38.116829 3516 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 23:44:38.117203 kubelet[3516]: I1105 23:44:38.117141 3516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/89a32bf2-ec2a-4f35-b294-b2467c662fb4-registration-dir\") pod \"csi-node-driver-7km8q\" (UID: \"89a32bf2-ec2a-4f35-b294-b2467c662fb4\") " pod="calico-system/csi-node-driver-7km8q" Nov 5 23:44:38.118104 kubelet[3516]: E1105 23:44:38.118037 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:44:38.118396 kubelet[3516]: W1105 23:44:38.118181 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:44:38.118396 kubelet[3516]: E1105 23:44:38.118215 3516 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 23:44:38.119401 kubelet[3516]: E1105 23:44:38.119278 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:44:38.119401 kubelet[3516]: W1105 23:44:38.119342 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:44:38.119811 kubelet[3516]: E1105 23:44:38.119657 3516 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 23:44:38.120779 kubelet[3516]: E1105 23:44:38.120652 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:44:38.120779 kubelet[3516]: W1105 23:44:38.120686 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:44:38.120779 kubelet[3516]: E1105 23:44:38.120717 3516 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 23:44:38.121392 kubelet[3516]: I1105 23:44:38.121052 3516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/89a32bf2-ec2a-4f35-b294-b2467c662fb4-socket-dir\") pod \"csi-node-driver-7km8q\" (UID: \"89a32bf2-ec2a-4f35-b294-b2467c662fb4\") " pod="calico-system/csi-node-driver-7km8q" Nov 5 23:44:38.123013 kubelet[3516]: E1105 23:44:38.122952 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:44:38.123514 kubelet[3516]: W1105 23:44:38.123107 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:44:38.123514 kubelet[3516]: E1105 23:44:38.123143 3516 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 23:44:38.124258 kubelet[3516]: I1105 23:44:38.124091 3516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/89a32bf2-ec2a-4f35-b294-b2467c662fb4-varrun\") pod \"csi-node-driver-7km8q\" (UID: \"89a32bf2-ec2a-4f35-b294-b2467c662fb4\") " pod="calico-system/csi-node-driver-7km8q" Nov 5 23:44:38.124787 kubelet[3516]: E1105 23:44:38.124746 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:44:38.125244 kubelet[3516]: W1105 23:44:38.124958 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:44:38.125244 kubelet[3516]: E1105 23:44:38.125001 3516 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 23:44:38.126154 kubelet[3516]: E1105 23:44:38.126011 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:44:38.126154 kubelet[3516]: W1105 23:44:38.126042 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:44:38.126154 kubelet[3516]: E1105 23:44:38.126073 3516 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 23:44:38.127612 kubelet[3516]: E1105 23:44:38.127479 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:44:38.127612 kubelet[3516]: W1105 23:44:38.127513 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:44:38.127612 kubelet[3516]: E1105 23:44:38.127547 3516 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 23:44:38.128564 kubelet[3516]: I1105 23:44:38.128187 3516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9gfk2\" (UniqueName: \"kubernetes.io/projected/89a32bf2-ec2a-4f35-b294-b2467c662fb4-kube-api-access-9gfk2\") pod \"csi-node-driver-7km8q\" (UID: \"89a32bf2-ec2a-4f35-b294-b2467c662fb4\") " pod="calico-system/csi-node-driver-7km8q" Nov 5 23:44:38.129222 kubelet[3516]: E1105 23:44:38.129191 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:44:38.129533 kubelet[3516]: W1105 23:44:38.129348 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:44:38.129892 kubelet[3516]: E1105 23:44:38.129388 3516 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 23:44:38.130490 kubelet[3516]: E1105 23:44:38.130446 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:44:38.131027 kubelet[3516]: W1105 23:44:38.130869 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:44:38.131027 kubelet[3516]: E1105 23:44:38.130918 3516 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 23:44:38.132165 kubelet[3516]: E1105 23:44:38.132068 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:44:38.132165 kubelet[3516]: W1105 23:44:38.132100 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:44:38.132165 kubelet[3516]: E1105 23:44:38.132131 3516 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 23:44:38.132606 kubelet[3516]: I1105 23:44:38.132526 3516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/89a32bf2-ec2a-4f35-b294-b2467c662fb4-kubelet-dir\") pod \"csi-node-driver-7km8q\" (UID: \"89a32bf2-ec2a-4f35-b294-b2467c662fb4\") " pod="calico-system/csi-node-driver-7km8q" Nov 5 23:44:38.133901 kubelet[3516]: E1105 23:44:38.133799 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:44:38.133901 kubelet[3516]: W1105 23:44:38.133834 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:44:38.133901 kubelet[3516]: E1105 23:44:38.133865 3516 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 23:44:38.135676 kubelet[3516]: E1105 23:44:38.135639 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:44:38.135939 kubelet[3516]: W1105 23:44:38.135854 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:44:38.135939 kubelet[3516]: E1105 23:44:38.135895 3516 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 23:44:38.138051 kubelet[3516]: E1105 23:44:38.137873 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:44:38.138051 kubelet[3516]: W1105 23:44:38.137914 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:44:38.138051 kubelet[3516]: E1105 23:44:38.137945 3516 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 23:44:38.138710 kubelet[3516]: E1105 23:44:38.138678 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:44:38.138945 kubelet[3516]: W1105 23:44:38.138860 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:44:38.138945 kubelet[3516]: E1105 23:44:38.138899 3516 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 23:44:38.235337 kubelet[3516]: E1105 23:44:38.235146 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:44:38.236708 kubelet[3516]: W1105 23:44:38.235678 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:44:38.236708 kubelet[3516]: E1105 23:44:38.236554 3516 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 23:44:38.239375 kubelet[3516]: E1105 23:44:38.239338 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:44:38.241655 kubelet[3516]: W1105 23:44:38.239649 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:44:38.241655 kubelet[3516]: E1105 23:44:38.239776 3516 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 23:44:38.242931 kubelet[3516]: E1105 23:44:38.242787 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:44:38.242931 kubelet[3516]: W1105 23:44:38.242852 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:44:38.242931 kubelet[3516]: E1105 23:44:38.242889 3516 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 23:44:38.244254 kubelet[3516]: E1105 23:44:38.244087 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:44:38.245696 kubelet[3516]: W1105 23:44:38.244700 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:44:38.246034 kubelet[3516]: E1105 23:44:38.245856 3516 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 23:44:38.248882 kubelet[3516]: E1105 23:44:38.248841 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:44:38.250677 kubelet[3516]: W1105 23:44:38.249045 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:44:38.250677 kubelet[3516]: E1105 23:44:38.249358 3516 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 23:44:38.251867 kubelet[3516]: E1105 23:44:38.251756 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:44:38.251867 kubelet[3516]: W1105 23:44:38.251793 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:44:38.251867 kubelet[3516]: E1105 23:44:38.251832 3516 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 23:44:38.254052 kubelet[3516]: E1105 23:44:38.253021 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:44:38.254052 kubelet[3516]: W1105 23:44:38.253061 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:44:38.254052 kubelet[3516]: E1105 23:44:38.253092 3516 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 23:44:38.255201 kubelet[3516]: E1105 23:44:38.255095 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:44:38.255201 kubelet[3516]: W1105 23:44:38.255133 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:44:38.255201 kubelet[3516]: E1105 23:44:38.255166 3516 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 23:44:38.255450 containerd[1978]: time="2025-11-05T23:44:38.255192019Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-d78957cbc-vzlfb,Uid:8c5e7a8b-7510-426a-afc1-0c8bf0179695,Namespace:calico-system,Attempt:0,} returns sandbox id \"4748073c55d5e365752ac574620f2d31646099b65839daddefee8b4f96a897c8\"" Nov 5 23:44:38.257947 kubelet[3516]: E1105 23:44:38.257912 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:44:38.258630 kubelet[3516]: W1105 23:44:38.258548 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:44:38.259175 kubelet[3516]: E1105 23:44:38.258792 3516 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 23:44:38.259790 kubelet[3516]: E1105 23:44:38.259753 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:44:38.262626 containerd[1978]: time="2025-11-05T23:44:38.261381595Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Nov 5 23:44:38.262900 kubelet[3516]: W1105 23:44:38.262856 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:44:38.263070 kubelet[3516]: E1105 23:44:38.263041 3516 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 23:44:38.264521 kubelet[3516]: E1105 23:44:38.264439 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:44:38.265047 kubelet[3516]: W1105 23:44:38.265003 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:44:38.265699 kubelet[3516]: E1105 23:44:38.265657 3516 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 23:44:38.269156 kubelet[3516]: E1105 23:44:38.269109 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:44:38.271043 kubelet[3516]: W1105 23:44:38.270662 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:44:38.271043 kubelet[3516]: E1105 23:44:38.270719 3516 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 23:44:38.271986 kubelet[3516]: E1105 23:44:38.271947 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:44:38.272199 kubelet[3516]: W1105 23:44:38.272139 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:44:38.272529 kubelet[3516]: E1105 23:44:38.272400 3516 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 23:44:38.275624 kubelet[3516]: E1105 23:44:38.274508 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:44:38.275940 kubelet[3516]: W1105 23:44:38.275855 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:44:38.275940 kubelet[3516]: E1105 23:44:38.275909 3516 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 23:44:38.277254 kubelet[3516]: E1105 23:44:38.277054 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:44:38.277254 kubelet[3516]: W1105 23:44:38.277094 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:44:38.277254 kubelet[3516]: E1105 23:44:38.277218 3516 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 23:44:38.279489 kubelet[3516]: E1105 23:44:38.279448 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:44:38.279883 kubelet[3516]: W1105 23:44:38.279840 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:44:38.280042 kubelet[3516]: E1105 23:44:38.280016 3516 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 23:44:38.280759 kubelet[3516]: E1105 23:44:38.280714 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:44:38.281073 kubelet[3516]: W1105 23:44:38.280911 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:44:38.281073 kubelet[3516]: E1105 23:44:38.280956 3516 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 23:44:38.282206 kubelet[3516]: E1105 23:44:38.281876 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:44:38.282206 kubelet[3516]: W1105 23:44:38.281907 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:44:38.282552 kubelet[3516]: E1105 23:44:38.281938 3516 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 23:44:38.283311 kubelet[3516]: E1105 23:44:38.283099 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:44:38.283311 kubelet[3516]: W1105 23:44:38.283131 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:44:38.283311 kubelet[3516]: E1105 23:44:38.283162 3516 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 23:44:38.284044 kubelet[3516]: E1105 23:44:38.284005 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:44:38.284339 kubelet[3516]: W1105 23:44:38.284195 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:44:38.284339 kubelet[3516]: E1105 23:44:38.284234 3516 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 23:44:38.284956 kubelet[3516]: E1105 23:44:38.284919 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:44:38.285236 kubelet[3516]: W1105 23:44:38.285087 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:44:38.285236 kubelet[3516]: E1105 23:44:38.285126 3516 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 23:44:38.285937 kubelet[3516]: E1105 23:44:38.285842 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:44:38.285937 kubelet[3516]: W1105 23:44:38.285875 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:44:38.285937 kubelet[3516]: E1105 23:44:38.285904 3516 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 23:44:38.286695 kubelet[3516]: E1105 23:44:38.286659 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:44:38.286974 kubelet[3516]: W1105 23:44:38.286826 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:44:38.286974 kubelet[3516]: E1105 23:44:38.286868 3516 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 23:44:38.287544 kubelet[3516]: E1105 23:44:38.287509 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:44:38.288274 kubelet[3516]: W1105 23:44:38.287725 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:44:38.288274 kubelet[3516]: E1105 23:44:38.287765 3516 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 23:44:38.289109 kubelet[3516]: E1105 23:44:38.289074 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:44:38.289388 kubelet[3516]: W1105 23:44:38.289359 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:44:38.289656 kubelet[3516]: E1105 23:44:38.289548 3516 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 23:44:38.317714 kubelet[3516]: E1105 23:44:38.316660 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:44:38.317714 kubelet[3516]: W1105 23:44:38.316799 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:44:38.317714 kubelet[3516]: E1105 23:44:38.316834 3516 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 23:44:38.338572 containerd[1978]: time="2025-11-05T23:44:38.337997431Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-927ws,Uid:19c96451-f5b2-4383-bf37-3383c5ef85af,Namespace:calico-system,Attempt:0,}" Nov 5 23:44:38.390621 containerd[1978]: time="2025-11-05T23:44:38.390513584Z" level=info msg="connecting to shim fd8df971a52d0d9d6ec648a7f881093aa7ab5869308fa068ceb6fdef271af924" address="unix:///run/containerd/s/2365d078e3effa4f62d2806b766bc49e83acb0ebf114c1ae0cb9c065286589c3" namespace=k8s.io protocol=ttrpc version=3 Nov 5 23:44:38.451158 systemd[1]: Started cri-containerd-fd8df971a52d0d9d6ec648a7f881093aa7ab5869308fa068ceb6fdef271af924.scope - libcontainer container fd8df971a52d0d9d6ec648a7f881093aa7ab5869308fa068ceb6fdef271af924. 
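The repeated kubelet triplets above (driver-call.go:262, driver-call.go:149, plugins.go:703) all trace back to one probe: kubelet runs /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds with the argument [init] and tries to parse its stdout as a FlexVolume status reply; the uds executable is missing, so the call returns empty output, and unmarshalling an empty string is exactly what produces "unexpected end of JSON input". A minimal Go sketch of both halves, the failure kubelet hits and the kind of reply a working driver would print for init (the driverStatus shape follows the FlexVolume convention and is illustrative, not kubelet's exact type):

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // driverStatus mirrors the minimal JSON a FlexVolume driver prints for "init".
    // Field names follow the FlexVolume convention; treat the struct as illustrative.
    type driverStatus struct {
        Status       string          `json:"status"`
        Capabilities map[string]bool `json:"capabilities,omitempty"`
    }

    func main() {
        // Empty stdout (the missing uds executable) is what kubelet saw.
        var st driverStatus
        err := json.Unmarshal([]byte(""), &st)
        fmt.Println(err) // unexpected end of JSON input

        // What a present, working driver would print in response to "init".
        out, _ := json.Marshal(driverStatus{
            Status:       "Success",
            Capabilities: map[string]bool{"attach": false},
        })
        fmt.Println(string(out)) // {"status":"Success","capabilities":{"attach":false}}
    }

Presumably the repeated probing stops once a uds binary exists at that path, or once the stale nodeagent~uds plugin directory is removed.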
Nov 5 23:44:38.511913 containerd[1978]: time="2025-11-05T23:44:38.511839344Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-927ws,Uid:19c96451-f5b2-4383-bf37-3383c5ef85af,Namespace:calico-system,Attempt:0,} returns sandbox id \"fd8df971a52d0d9d6ec648a7f881093aa7ab5869308fa068ceb6fdef271af924\"" Nov 5 23:44:39.446531 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1018613167.mount: Deactivated successfully. Nov 5 23:44:39.832510 kubelet[3516]: E1105 23:44:39.832174 3516 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7km8q" podUID="89a32bf2-ec2a-4f35-b294-b2467c662fb4" Nov 5 23:44:40.287388 containerd[1978]: time="2025-11-05T23:44:40.287223417Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 23:44:40.289913 containerd[1978]: time="2025-11-05T23:44:40.289837065Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=33090687" Nov 5 23:44:40.291705 containerd[1978]: time="2025-11-05T23:44:40.291644073Z" level=info msg="ImageCreate event name:\"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 23:44:40.297424 containerd[1978]: time="2025-11-05T23:44:40.297330057Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 23:44:40.300087 containerd[1978]: time="2025-11-05T23:44:40.300020097Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"33090541\" in 2.03855359s" Nov 5 23:44:40.300087 containerd[1978]: time="2025-11-05T23:44:40.300081525Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\"" Nov 5 23:44:40.302187 containerd[1978]: time="2025-11-05T23:44:40.302104725Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Nov 5 23:44:40.332312 containerd[1978]: time="2025-11-05T23:44:40.332249181Z" level=info msg="CreateContainer within sandbox \"4748073c55d5e365752ac574620f2d31646099b65839daddefee8b4f96a897c8\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Nov 5 23:44:40.349194 containerd[1978]: time="2025-11-05T23:44:40.349091865Z" level=info msg="Container 2ddcd0718bab42f6b90b47518a03d86c5593a872a2c250051d7eff0d3e10a09b: CDI devices from CRI Config.CDIDevices: []" Nov 5 23:44:40.373370 containerd[1978]: time="2025-11-05T23:44:40.373193830Z" level=info msg="CreateContainer within sandbox \"4748073c55d5e365752ac574620f2d31646099b65839daddefee8b4f96a897c8\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"2ddcd0718bab42f6b90b47518a03d86c5593a872a2c250051d7eff0d3e10a09b\"" Nov 5 23:44:40.374277 containerd[1978]: time="2025-11-05T23:44:40.374143474Z" level=info msg="StartContainer for 
\"2ddcd0718bab42f6b90b47518a03d86c5593a872a2c250051d7eff0d3e10a09b\"" Nov 5 23:44:40.376696 containerd[1978]: time="2025-11-05T23:44:40.376623874Z" level=info msg="connecting to shim 2ddcd0718bab42f6b90b47518a03d86c5593a872a2c250051d7eff0d3e10a09b" address="unix:///run/containerd/s/126594706601ed2b50cba1491d73fe970b2f2aea33d98a504254e841fb63dffb" protocol=ttrpc version=3 Nov 5 23:44:40.416927 systemd[1]: Started cri-containerd-2ddcd0718bab42f6b90b47518a03d86c5593a872a2c250051d7eff0d3e10a09b.scope - libcontainer container 2ddcd0718bab42f6b90b47518a03d86c5593a872a2c250051d7eff0d3e10a09b. Nov 5 23:44:40.506478 containerd[1978]: time="2025-11-05T23:44:40.506430898Z" level=info msg="StartContainer for \"2ddcd0718bab42f6b90b47518a03d86c5593a872a2c250051d7eff0d3e10a09b\" returns successfully" Nov 5 23:44:41.142279 kubelet[3516]: E1105 23:44:41.142106 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:44:41.142279 kubelet[3516]: W1105 23:44:41.142145 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:44:41.142279 kubelet[3516]: E1105 23:44:41.142178 3516 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 23:44:41.143674 kubelet[3516]: E1105 23:44:41.143470 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:44:41.143674 kubelet[3516]: W1105 23:44:41.143504 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:44:41.144802 kubelet[3516]: E1105 23:44:41.143915 3516 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 23:44:41.145246 kubelet[3516]: E1105 23:44:41.145066 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:44:41.145246 kubelet[3516]: W1105 23:44:41.145100 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:44:41.145246 kubelet[3516]: E1105 23:44:41.145133 3516 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 23:44:41.150255 kubelet[3516]: E1105 23:44:41.149265 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:44:41.150255 kubelet[3516]: W1105 23:44:41.149299 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:44:41.150255 kubelet[3516]: E1105 23:44:41.149332 3516 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 23:44:41.153687 kubelet[3516]: E1105 23:44:41.152894 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:44:41.153974 kubelet[3516]: W1105 23:44:41.153931 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:44:41.154195 kubelet[3516]: E1105 23:44:41.154163 3516 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 23:44:41.155334 kubelet[3516]: E1105 23:44:41.154921 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:44:41.156144 kubelet[3516]: W1105 23:44:41.155847 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:44:41.156144 kubelet[3516]: E1105 23:44:41.155916 3516 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 23:44:41.158226 kubelet[3516]: E1105 23:44:41.158188 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:44:41.159627 kubelet[3516]: W1105 23:44:41.158419 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:44:41.159627 kubelet[3516]: E1105 23:44:41.158648 3516 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 23:44:41.160652 kubelet[3516]: E1105 23:44:41.160379 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:44:41.160652 kubelet[3516]: W1105 23:44:41.160410 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:44:41.160652 kubelet[3516]: E1105 23:44:41.160441 3516 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 23:44:41.162247 kubelet[3516]: E1105 23:44:41.162087 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:44:41.162247 kubelet[3516]: W1105 23:44:41.162121 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:44:41.162247 kubelet[3516]: E1105 23:44:41.162152 3516 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 23:44:41.163688 kubelet[3516]: E1105 23:44:41.163413 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:44:41.163688 kubelet[3516]: W1105 23:44:41.163523 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:44:41.163688 kubelet[3516]: E1105 23:44:41.163557 3516 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 23:44:41.165162 kubelet[3516]: E1105 23:44:41.165113 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:44:41.165162 kubelet[3516]: W1105 23:44:41.165151 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:44:41.165370 kubelet[3516]: E1105 23:44:41.165183 3516 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 23:44:41.165579 kubelet[3516]: E1105 23:44:41.165544 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:44:41.165579 kubelet[3516]: W1105 23:44:41.165572 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:44:41.165767 kubelet[3516]: E1105 23:44:41.165626 3516 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 23:44:41.169034 kubelet[3516]: E1105 23:44:41.168977 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:44:41.169034 kubelet[3516]: W1105 23:44:41.169017 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:44:41.169348 kubelet[3516]: E1105 23:44:41.169050 3516 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 23:44:41.169819 kubelet[3516]: E1105 23:44:41.169649 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:44:41.169819 kubelet[3516]: W1105 23:44:41.169681 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:44:41.169819 kubelet[3516]: E1105 23:44:41.169711 3516 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 23:44:41.170529 kubelet[3516]: E1105 23:44:41.170472 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:44:41.170654 kubelet[3516]: W1105 23:44:41.170625 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:44:41.170739 kubelet[3516]: E1105 23:44:41.170657 3516 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 23:44:41.189854 kubelet[3516]: E1105 23:44:41.189794 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:44:41.189854 kubelet[3516]: W1105 23:44:41.189838 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:44:41.190172 kubelet[3516]: E1105 23:44:41.189873 3516 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 23:44:41.190373 kubelet[3516]: E1105 23:44:41.190335 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:44:41.190373 kubelet[3516]: W1105 23:44:41.190365 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:44:41.190492 kubelet[3516]: E1105 23:44:41.190389 3516 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 23:44:41.190846 kubelet[3516]: E1105 23:44:41.190812 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:44:41.190846 kubelet[3516]: W1105 23:44:41.190838 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:44:41.191058 kubelet[3516]: E1105 23:44:41.190861 3516 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 23:44:41.193112 kubelet[3516]: E1105 23:44:41.193057 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:44:41.193112 kubelet[3516]: W1105 23:44:41.193098 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:44:41.193316 kubelet[3516]: E1105 23:44:41.193131 3516 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 23:44:41.194534 kubelet[3516]: E1105 23:44:41.194483 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:44:41.194534 kubelet[3516]: W1105 23:44:41.194522 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:44:41.194824 kubelet[3516]: E1105 23:44:41.194555 3516 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 23:44:41.195828 kubelet[3516]: E1105 23:44:41.195742 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:44:41.195828 kubelet[3516]: W1105 23:44:41.195819 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:44:41.196047 kubelet[3516]: E1105 23:44:41.195851 3516 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 23:44:41.197019 kubelet[3516]: E1105 23:44:41.196961 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:44:41.197019 kubelet[3516]: W1105 23:44:41.197003 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:44:41.197238 kubelet[3516]: E1105 23:44:41.197036 3516 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 23:44:41.198115 kubelet[3516]: E1105 23:44:41.197691 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:44:41.198115 kubelet[3516]: W1105 23:44:41.197728 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:44:41.198115 kubelet[3516]: E1105 23:44:41.197757 3516 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 23:44:41.199110 kubelet[3516]: E1105 23:44:41.199063 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:44:41.199110 kubelet[3516]: W1105 23:44:41.199101 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:44:41.199298 kubelet[3516]: E1105 23:44:41.199133 3516 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 23:44:41.199727 kubelet[3516]: E1105 23:44:41.199491 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:44:41.199727 kubelet[3516]: W1105 23:44:41.199518 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:44:41.199727 kubelet[3516]: E1105 23:44:41.199541 3516 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 23:44:41.201856 kubelet[3516]: E1105 23:44:41.201809 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:44:41.201856 kubelet[3516]: W1105 23:44:41.201845 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:44:41.202138 kubelet[3516]: E1105 23:44:41.201877 3516 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 23:44:41.203794 kubelet[3516]: E1105 23:44:41.203707 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:44:41.203794 kubelet[3516]: W1105 23:44:41.203782 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:44:41.203987 kubelet[3516]: E1105 23:44:41.203815 3516 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 23:44:41.206935 kubelet[3516]: E1105 23:44:41.206872 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:44:41.206935 kubelet[3516]: W1105 23:44:41.206917 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:44:41.208379 kubelet[3516]: E1105 23:44:41.206950 3516 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 23:44:41.208379 kubelet[3516]: E1105 23:44:41.207684 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:44:41.208379 kubelet[3516]: W1105 23:44:41.207711 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:44:41.209578 kubelet[3516]: E1105 23:44:41.209527 3516 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 23:44:41.210669 kubelet[3516]: E1105 23:44:41.210570 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:44:41.210669 kubelet[3516]: W1105 23:44:41.210658 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:44:41.210917 kubelet[3516]: E1105 23:44:41.210708 3516 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 23:44:41.211895 kubelet[3516]: E1105 23:44:41.211835 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:44:41.211895 kubelet[3516]: W1105 23:44:41.211872 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:44:41.212099 kubelet[3516]: E1105 23:44:41.211904 3516 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 23:44:41.213815 kubelet[3516]: E1105 23:44:41.213751 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:44:41.213815 kubelet[3516]: W1105 23:44:41.213792 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:44:41.214031 kubelet[3516]: E1105 23:44:41.213825 3516 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 23:44:41.215506 kubelet[3516]: E1105 23:44:41.215458 3516 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 23:44:41.215506 kubelet[3516]: W1105 23:44:41.215494 3516 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 23:44:41.217218 kubelet[3516]: E1105 23:44:41.215528 3516 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 23:44:41.504864 containerd[1978]: time="2025-11-05T23:44:41.504691187Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 23:44:41.507747 containerd[1978]: time="2025-11-05T23:44:41.507684899Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4266741" Nov 5 23:44:41.508982 containerd[1978]: time="2025-11-05T23:44:41.508930343Z" level=info msg="ImageCreate event name:\"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 23:44:41.514781 containerd[1978]: time="2025-11-05T23:44:41.514349735Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 23:44:41.517917 containerd[1978]: time="2025-11-05T23:44:41.517860311Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5636392\" in 1.215690918s" Nov 5 23:44:41.518161 containerd[1978]: time="2025-11-05T23:44:41.518130695Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\"" Nov 5 23:44:41.528620 containerd[1978]: time="2025-11-05T23:44:41.528384971Z" level=info msg="CreateContainer within sandbox \"fd8df971a52d0d9d6ec648a7f881093aa7ab5869308fa068ceb6fdef271af924\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Nov 5 23:44:41.547628 containerd[1978]: time="2025-11-05T23:44:41.546715151Z" level=info msg="Container b6b8930c8c5acb463d07b68efb9a8354e8201e0bb85dede88168acf4a26e9071: CDI devices from CRI Config.CDIDevices: []" Nov 5 23:44:41.570495 containerd[1978]: time="2025-11-05T23:44:41.570435192Z" level=info msg="CreateContainer within sandbox \"fd8df971a52d0d9d6ec648a7f881093aa7ab5869308fa068ceb6fdef271af924\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"b6b8930c8c5acb463d07b68efb9a8354e8201e0bb85dede88168acf4a26e9071\"" Nov 5 23:44:41.571980 containerd[1978]: time="2025-11-05T23:44:41.571920492Z" level=info msg="StartContainer for \"b6b8930c8c5acb463d07b68efb9a8354e8201e0bb85dede88168acf4a26e9071\"" Nov 5 23:44:41.575309 containerd[1978]: time="2025-11-05T23:44:41.575197896Z" level=info msg="connecting to shim b6b8930c8c5acb463d07b68efb9a8354e8201e0bb85dede88168acf4a26e9071" address="unix:///run/containerd/s/2365d078e3effa4f62d2806b766bc49e83acb0ebf114c1ae0cb9c065286589c3" protocol=ttrpc version=3 Nov 5 23:44:41.617889 systemd[1]: Started cri-containerd-b6b8930c8c5acb463d07b68efb9a8354e8201e0bb85dede88168acf4a26e9071.scope - libcontainer container b6b8930c8c5acb463d07b68efb9a8354e8201e0bb85dede88168acf4a26e9071. 
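Containerd's self-reported pull time for the pod2daemon-flexvol image can be sanity-checked against the log's own timestamps: the PullImage request is logged at 23:44:40.302104725Z and the "Pulled image ... in 1.215690918s" line at 23:44:41.517860311Z, roughly 1.2158s apart. A small sketch of that cross-check, with the two timestamps copied from the log (parse errors ignored for brevity):

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Timestamps taken from the containerd log lines above.
        start, _ := time.Parse(time.RFC3339Nano, "2025-11-05T23:44:40.302104725Z") // PullImage request
        done, _ := time.Parse(time.RFC3339Nano, "2025-11-05T23:44:41.517860311Z")  // Pulled image line
        fmt.Println(done.Sub(start)) // 1.215755586s, close to containerd's reported 1.215690918s
    }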
Nov 5 23:44:41.695041 containerd[1978]: time="2025-11-05T23:44:41.694985616Z" level=info msg="StartContainer for \"b6b8930c8c5acb463d07b68efb9a8354e8201e0bb85dede88168acf4a26e9071\" returns successfully" Nov 5 23:44:41.722142 systemd[1]: cri-containerd-b6b8930c8c5acb463d07b68efb9a8354e8201e0bb85dede88168acf4a26e9071.scope: Deactivated successfully. Nov 5 23:44:41.730989 containerd[1978]: time="2025-11-05T23:44:41.730748904Z" level=info msg="received exit event container_id:\"b6b8930c8c5acb463d07b68efb9a8354e8201e0bb85dede88168acf4a26e9071\" id:\"b6b8930c8c5acb463d07b68efb9a8354e8201e0bb85dede88168acf4a26e9071\" pid:4213 exited_at:{seconds:1762386281 nanos:730200264}" Nov 5 23:44:41.731848 containerd[1978]: time="2025-11-05T23:44:41.731784528Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b6b8930c8c5acb463d07b68efb9a8354e8201e0bb85dede88168acf4a26e9071\" id:\"b6b8930c8c5acb463d07b68efb9a8354e8201e0bb85dede88168acf4a26e9071\" pid:4213 exited_at:{seconds:1762386281 nanos:730200264}" Nov 5 23:44:41.782365 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b6b8930c8c5acb463d07b68efb9a8354e8201e0bb85dede88168acf4a26e9071-rootfs.mount: Deactivated successfully. Nov 5 23:44:41.831560 kubelet[3516]: E1105 23:44:41.831500 3516 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7km8q" podUID="89a32bf2-ec2a-4f35-b294-b2467c662fb4" Nov 5 23:44:42.059259 kubelet[3516]: I1105 23:44:42.059170 3516 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 5 23:44:42.103901 kubelet[3516]: I1105 23:44:42.103719 3516 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-d78957cbc-vzlfb" podStartSLOduration=3.063059216 podStartE2EDuration="5.10369573s" podCreationTimestamp="2025-11-05 23:44:37 +0000 UTC" firstStartedPulling="2025-11-05 23:44:38.260752663 +0000 UTC m=+29.807301329" lastFinishedPulling="2025-11-05 23:44:40.301389189 +0000 UTC m=+31.847937843" observedRunningTime="2025-11-05 23:44:41.151461945 +0000 UTC m=+32.698010611" watchObservedRunningTime="2025-11-05 23:44:42.10369573 +0000 UTC m=+33.650244420" Nov 5 23:44:43.070781 containerd[1978]: time="2025-11-05T23:44:43.070697111Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Nov 5 23:44:43.831853 kubelet[3516]: E1105 23:44:43.831779 3516 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7km8q" podUID="89a32bf2-ec2a-4f35-b294-b2467c662fb4" Nov 5 23:44:45.831218 kubelet[3516]: E1105 23:44:45.831126 3516 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7km8q" podUID="89a32bf2-ec2a-4f35-b294-b2467c662fb4" Nov 5 23:44:46.004025 containerd[1978]: time="2025-11-05T23:44:46.001950590Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 23:44:46.004025 containerd[1978]: time="2025-11-05T23:44:46.003205346Z" level=info 
msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=65925816" Nov 5 23:44:46.005622 containerd[1978]: time="2025-11-05T23:44:46.004848410Z" level=info msg="ImageCreate event name:\"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 23:44:46.010953 containerd[1978]: time="2025-11-05T23:44:46.010884770Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 23:44:46.013545 containerd[1978]: time="2025-11-05T23:44:46.013489694Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"67295507\" in 2.942731131s" Nov 5 23:44:46.013761 containerd[1978]: time="2025-11-05T23:44:46.013732970Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\"" Nov 5 23:44:46.021540 containerd[1978]: time="2025-11-05T23:44:46.021465650Z" level=info msg="CreateContainer within sandbox \"fd8df971a52d0d9d6ec648a7f881093aa7ab5869308fa068ceb6fdef271af924\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Nov 5 23:44:46.035094 containerd[1978]: time="2025-11-05T23:44:46.035011934Z" level=info msg="Container 2537a3c406dd41dec8e5fc83c240b60308230a9185df8a0cec3bf56dd7eb7cea: CDI devices from CRI Config.CDIDevices: []" Nov 5 23:44:46.048628 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount508429959.mount: Deactivated successfully. Nov 5 23:44:46.060386 containerd[1978]: time="2025-11-05T23:44:46.060280898Z" level=info msg="CreateContainer within sandbox \"fd8df971a52d0d9d6ec648a7f881093aa7ab5869308fa068ceb6fdef271af924\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"2537a3c406dd41dec8e5fc83c240b60308230a9185df8a0cec3bf56dd7eb7cea\"" Nov 5 23:44:46.064045 containerd[1978]: time="2025-11-05T23:44:46.061920158Z" level=info msg="StartContainer for \"2537a3c406dd41dec8e5fc83c240b60308230a9185df8a0cec3bf56dd7eb7cea\"" Nov 5 23:44:46.067313 containerd[1978]: time="2025-11-05T23:44:46.067258214Z" level=info msg="connecting to shim 2537a3c406dd41dec8e5fc83c240b60308230a9185df8a0cec3bf56dd7eb7cea" address="unix:///run/containerd/s/2365d078e3effa4f62d2806b766bc49e83acb0ebf114c1ae0cb9c065286589c3" protocol=ttrpc version=3 Nov 5 23:44:46.113912 systemd[1]: Started cri-containerd-2537a3c406dd41dec8e5fc83c240b60308230a9185df8a0cec3bf56dd7eb7cea.scope - libcontainer container 2537a3c406dd41dec8e5fc83c240b60308230a9185df8a0cec3bf56dd7eb7cea. 
Nov 5 23:44:46.195416 containerd[1978]: time="2025-11-05T23:44:46.195368343Z" level=info msg="StartContainer for \"2537a3c406dd41dec8e5fc83c240b60308230a9185df8a0cec3bf56dd7eb7cea\" returns successfully" Nov 5 23:44:46.644240 kubelet[3516]: I1105 23:44:46.643686 3516 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 5 23:44:47.229440 containerd[1978]: time="2025-11-05T23:44:47.229150072Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 5 23:44:47.234929 systemd[1]: cri-containerd-2537a3c406dd41dec8e5fc83c240b60308230a9185df8a0cec3bf56dd7eb7cea.scope: Deactivated successfully. Nov 5 23:44:47.235487 systemd[1]: cri-containerd-2537a3c406dd41dec8e5fc83c240b60308230a9185df8a0cec3bf56dd7eb7cea.scope: Consumed 954ms CPU time, 185M memory peak, 165.9M written to disk. Nov 5 23:44:47.242898 containerd[1978]: time="2025-11-05T23:44:47.242828908Z" level=info msg="received exit event container_id:\"2537a3c406dd41dec8e5fc83c240b60308230a9185df8a0cec3bf56dd7eb7cea\" id:\"2537a3c406dd41dec8e5fc83c240b60308230a9185df8a0cec3bf56dd7eb7cea\" pid:4276 exited_at:{seconds:1762386287 nanos:242094976}" Nov 5 23:44:47.243495 containerd[1978]: time="2025-11-05T23:44:47.243377176Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2537a3c406dd41dec8e5fc83c240b60308230a9185df8a0cec3bf56dd7eb7cea\" id:\"2537a3c406dd41dec8e5fc83c240b60308230a9185df8a0cec3bf56dd7eb7cea\" pid:4276 exited_at:{seconds:1762386287 nanos:242094976}" Nov 5 23:44:47.281108 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2537a3c406dd41dec8e5fc83c240b60308230a9185df8a0cec3bf56dd7eb7cea-rootfs.mount: Deactivated successfully. Nov 5 23:44:47.329010 kubelet[3516]: I1105 23:44:47.328937 3516 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Nov 5 23:44:47.443518 kubelet[3516]: I1105 23:44:47.443351 3516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ff8dd25a-67c7-46ac-bb94-2c7271ca4123-config-volume\") pod \"coredns-674b8bbfcf-48gh4\" (UID: \"ff8dd25a-67c7-46ac-bb94-2c7271ca4123\") " pod="kube-system/coredns-674b8bbfcf-48gh4" Nov 5 23:44:47.444667 kubelet[3516]: I1105 23:44:47.444245 3516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v57jl\" (UniqueName: \"kubernetes.io/projected/ff8dd25a-67c7-46ac-bb94-2c7271ca4123-kube-api-access-v57jl\") pod \"coredns-674b8bbfcf-48gh4\" (UID: \"ff8dd25a-67c7-46ac-bb94-2c7271ca4123\") " pod="kube-system/coredns-674b8bbfcf-48gh4" Nov 5 23:44:47.470064 systemd[1]: Created slice kubepods-burstable-podff8dd25a_67c7_46ac_bb94_2c7271ca4123.slice - libcontainer container kubepods-burstable-podff8dd25a_67c7_46ac_bb94_2c7271ca4123.slice. Nov 5 23:44:47.490356 systemd[1]: Created slice kubepods-burstable-pod8f56a730_0864_408e_a1fd_84792cfa18c7.slice - libcontainer container kubepods-burstable-pod8f56a730_0864_408e_a1fd_84792cfa18c7.slice. Nov 5 23:44:47.535497 systemd[1]: Created slice kubepods-besteffort-poda7fd47be_5341_4035_917c_acf91009ebea.slice - libcontainer container kubepods-besteffort-poda7fd47be_5341_4035_917c_acf91009ebea.slice. 
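The TaskExit events embed the container's end time as a protobuf-style timestamp (exited_at:{seconds:... nanos:...}). Converting the install-cni exit event's value back to wall-clock time lands at 23:44:47.242094976 UTC, a fraction of a millisecond before containerd logged the "received exit event" line, which is a quick way to line exit events up with the rest of the log. A one-line sketch of the conversion:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // exited_at from the install-cni TaskExit event above.
        t := time.Unix(1762386287, 242094976).UTC()
        fmt.Println(t.Format(time.RFC3339Nano)) // 2025-11-05T23:44:47.242094976Z
    }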
Nov 5 23:44:47.545955 kubelet[3516]: I1105 23:44:47.545875 3516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6pm9l\" (UniqueName: \"kubernetes.io/projected/8f56a730-0864-408e-a1fd-84792cfa18c7-kube-api-access-6pm9l\") pod \"coredns-674b8bbfcf-s8282\" (UID: \"8f56a730-0864-408e-a1fd-84792cfa18c7\") " pod="kube-system/coredns-674b8bbfcf-s8282" Nov 5 23:44:47.545955 kubelet[3516]: I1105 23:44:47.545956 3516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qb8qb\" (UniqueName: \"kubernetes.io/projected/a7fd47be-5341-4035-917c-acf91009ebea-kube-api-access-qb8qb\") pod \"calico-kube-controllers-877d4847d-6rhkp\" (UID: \"a7fd47be-5341-4035-917c-acf91009ebea\") " pod="calico-system/calico-kube-controllers-877d4847d-6rhkp" Nov 5 23:44:47.567922 kubelet[3516]: I1105 23:44:47.545999 3516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8f56a730-0864-408e-a1fd-84792cfa18c7-config-volume\") pod \"coredns-674b8bbfcf-s8282\" (UID: \"8f56a730-0864-408e-a1fd-84792cfa18c7\") " pod="kube-system/coredns-674b8bbfcf-s8282" Nov 5 23:44:47.567922 kubelet[3516]: I1105 23:44:47.546112 3516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a7fd47be-5341-4035-917c-acf91009ebea-tigera-ca-bundle\") pod \"calico-kube-controllers-877d4847d-6rhkp\" (UID: \"a7fd47be-5341-4035-917c-acf91009ebea\") " pod="calico-system/calico-kube-controllers-877d4847d-6rhkp" Nov 5 23:44:47.617181 systemd[1]: Created slice kubepods-besteffort-pode8560696_892d_49eb_9fc3_5b381971f81d.slice - libcontainer container kubepods-besteffort-pode8560696_892d_49eb_9fc3_5b381971f81d.slice. Nov 5 23:44:47.647176 kubelet[3516]: I1105 23:44:47.646828 3516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/e8560696-892d-49eb-9fc3-5b381971f81d-whisker-backend-key-pair\") pod \"whisker-7d7fbd4898-kcwg6\" (UID: \"e8560696-892d-49eb-9fc3-5b381971f81d\") " pod="calico-system/whisker-7d7fbd4898-kcwg6" Nov 5 23:44:47.647407 kubelet[3516]: I1105 23:44:47.647363 3516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-trjkb\" (UniqueName: \"kubernetes.io/projected/e8560696-892d-49eb-9fc3-5b381971f81d-kube-api-access-trjkb\") pod \"whisker-7d7fbd4898-kcwg6\" (UID: \"e8560696-892d-49eb-9fc3-5b381971f81d\") " pod="calico-system/whisker-7d7fbd4898-kcwg6" Nov 5 23:44:47.648501 kubelet[3516]: I1105 23:44:47.647575 3516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e8560696-892d-49eb-9fc3-5b381971f81d-whisker-ca-bundle\") pod \"whisker-7d7fbd4898-kcwg6\" (UID: \"e8560696-892d-49eb-9fc3-5b381971f81d\") " pod="calico-system/whisker-7d7fbd4898-kcwg6" Nov 5 23:44:47.721424 systemd[1]: Created slice kubepods-besteffort-pod07a15442_dee2_4408_9286_ad45a221772c.slice - libcontainer container kubepods-besteffort-pod07a15442_dee2_4408_9286_ad45a221772c.slice. 
Nov 5 23:44:47.750237 kubelet[3516]: I1105 23:44:47.749138 3516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fmztm\" (UniqueName: \"kubernetes.io/projected/07a15442-dee2-4408-9286-ad45a221772c-kube-api-access-fmztm\") pod \"calico-apiserver-759f658d45-5cwjh\" (UID: \"07a15442-dee2-4408-9286-ad45a221772c\") " pod="calico-apiserver/calico-apiserver-759f658d45-5cwjh" Nov 5 23:44:47.750237 kubelet[3516]: I1105 23:44:47.749274 3516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/07a15442-dee2-4408-9286-ad45a221772c-calico-apiserver-certs\") pod \"calico-apiserver-759f658d45-5cwjh\" (UID: \"07a15442-dee2-4408-9286-ad45a221772c\") " pod="calico-apiserver/calico-apiserver-759f658d45-5cwjh" Nov 5 23:44:47.806965 containerd[1978]: time="2025-11-05T23:44:47.806849515Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-48gh4,Uid:ff8dd25a-67c7-46ac-bb94-2c7271ca4123,Namespace:kube-system,Attempt:0,}" Nov 5 23:44:47.807461 containerd[1978]: time="2025-11-05T23:44:47.807424567Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-s8282,Uid:8f56a730-0864-408e-a1fd-84792cfa18c7,Namespace:kube-system,Attempt:0,}" Nov 5 23:44:47.848557 systemd[1]: Created slice kubepods-besteffort-pod76aeec5c_7e05_490c_a0a0_b95d9945b382.slice - libcontainer container kubepods-besteffort-pod76aeec5c_7e05_490c_a0a0_b95d9945b382.slice. Nov 5 23:44:47.853897 containerd[1978]: time="2025-11-05T23:44:47.853764463Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-877d4847d-6rhkp,Uid:a7fd47be-5341-4035-917c-acf91009ebea,Namespace:calico-system,Attempt:0,}" Nov 5 23:44:47.858339 kubelet[3516]: I1105 23:44:47.858270 3516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/03f18444-9872-4d73-bb60-c66c73cdfaff-goldmane-ca-bundle\") pod \"goldmane-666569f655-v2m4t\" (UID: \"03f18444-9872-4d73-bb60-c66c73cdfaff\") " pod="calico-system/goldmane-666569f655-v2m4t" Nov 5 23:44:47.858462 kubelet[3516]: I1105 23:44:47.858362 3516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/03f18444-9872-4d73-bb60-c66c73cdfaff-config\") pod \"goldmane-666569f655-v2m4t\" (UID: \"03f18444-9872-4d73-bb60-c66c73cdfaff\") " pod="calico-system/goldmane-666569f655-v2m4t" Nov 5 23:44:47.858462 kubelet[3516]: I1105 23:44:47.858405 3516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nx6pr\" (UniqueName: \"kubernetes.io/projected/03f18444-9872-4d73-bb60-c66c73cdfaff-kube-api-access-nx6pr\") pod \"goldmane-666569f655-v2m4t\" (UID: \"03f18444-9872-4d73-bb60-c66c73cdfaff\") " pod="calico-system/goldmane-666569f655-v2m4t" Nov 5 23:44:47.858462 kubelet[3516]: I1105 23:44:47.858447 3516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/03f18444-9872-4d73-bb60-c66c73cdfaff-goldmane-key-pair\") pod \"goldmane-666569f655-v2m4t\" (UID: \"03f18444-9872-4d73-bb60-c66c73cdfaff\") " pod="calico-system/goldmane-666569f655-v2m4t" Nov 5 23:44:47.858672 kubelet[3516]: I1105 23:44:47.858489 3516 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l2gwh\" (UniqueName: \"kubernetes.io/projected/76aeec5c-7e05-490c-a0a0-b95d9945b382-kube-api-access-l2gwh\") pod \"calico-apiserver-759f658d45-zzvbq\" (UID: \"76aeec5c-7e05-490c-a0a0-b95d9945b382\") " pod="calico-apiserver/calico-apiserver-759f658d45-zzvbq" Nov 5 23:44:47.858672 kubelet[3516]: I1105 23:44:47.858566 3516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/76aeec5c-7e05-490c-a0a0-b95d9945b382-calico-apiserver-certs\") pod \"calico-apiserver-759f658d45-zzvbq\" (UID: \"76aeec5c-7e05-490c-a0a0-b95d9945b382\") " pod="calico-apiserver/calico-apiserver-759f658d45-zzvbq" Nov 5 23:44:47.888765 systemd[1]: Created slice kubepods-besteffort-pod03f18444_9872_4d73_bb60_c66c73cdfaff.slice - libcontainer container kubepods-besteffort-pod03f18444_9872_4d73_bb60_c66c73cdfaff.slice. Nov 5 23:44:47.941683 systemd[1]: Created slice kubepods-besteffort-pod89a32bf2_ec2a_4f35_b294_b2467c662fb4.slice - libcontainer container kubepods-besteffort-pod89a32bf2_ec2a_4f35_b294_b2467c662fb4.slice. Nov 5 23:44:47.949004 containerd[1978]: time="2025-11-05T23:44:47.948829099Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7km8q,Uid:89a32bf2-ec2a-4f35-b294-b2467c662fb4,Namespace:calico-system,Attempt:0,}" Nov 5 23:44:47.949653 containerd[1978]: time="2025-11-05T23:44:47.949566031Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7d7fbd4898-kcwg6,Uid:e8560696-892d-49eb-9fc3-5b381971f81d,Namespace:calico-system,Attempt:0,}" Nov 5 23:44:48.034153 containerd[1978]: time="2025-11-05T23:44:48.033949372Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-759f658d45-5cwjh,Uid:07a15442-dee2-4408-9286-ad45a221772c,Namespace:calico-apiserver,Attempt:0,}" Nov 5 23:44:48.134867 containerd[1978]: time="2025-11-05T23:44:48.134799544Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Nov 5 23:44:48.193288 containerd[1978]: time="2025-11-05T23:44:48.192836572Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-759f658d45-zzvbq,Uid:76aeec5c-7e05-490c-a0a0-b95d9945b382,Namespace:calico-apiserver,Attempt:0,}" Nov 5 23:44:48.226679 containerd[1978]: time="2025-11-05T23:44:48.224144057Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-v2m4t,Uid:03f18444-9872-4d73-bb60-c66c73cdfaff,Namespace:calico-system,Attempt:0,}" Nov 5 23:44:48.385145 containerd[1978]: time="2025-11-05T23:44:48.385065785Z" level=error msg="Failed to destroy network for sandbox \"70fc9fd945228740644d1dd86e9066ddbc1b4f51e12067aff58d5476ee3c47f0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 23:44:48.387804 containerd[1978]: time="2025-11-05T23:44:48.387695729Z" level=error msg="Failed to destroy network for sandbox \"e1824daf79b2903677903ab211e78fa844b3324da07fbc857961bcad057dd68a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 23:44:48.391726 systemd[1]: run-netns-cni\x2d443c2350\x2d6545\x2d907e\x2db747\x2de82023150b9f.mount: Deactivated successfully. 
Nov 5 23:44:48.397982 containerd[1978]: time="2025-11-05T23:44:48.395890277Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-s8282,Uid:8f56a730-0864-408e-a1fd-84792cfa18c7,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"70fc9fd945228740644d1dd86e9066ddbc1b4f51e12067aff58d5476ee3c47f0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 23:44:48.400892 kubelet[3516]: E1105 23:44:48.399475 3516 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"70fc9fd945228740644d1dd86e9066ddbc1b4f51e12067aff58d5476ee3c47f0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 23:44:48.400892 kubelet[3516]: E1105 23:44:48.399571 3516 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"70fc9fd945228740644d1dd86e9066ddbc1b4f51e12067aff58d5476ee3c47f0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-s8282" Nov 5 23:44:48.400892 kubelet[3516]: E1105 23:44:48.399634 3516 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"70fc9fd945228740644d1dd86e9066ddbc1b4f51e12067aff58d5476ee3c47f0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-s8282" Nov 5 23:44:48.403181 kubelet[3516]: E1105 23:44:48.399728 3516 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-s8282_kube-system(8f56a730-0864-408e-a1fd-84792cfa18c7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-s8282_kube-system(8f56a730-0864-408e-a1fd-84792cfa18c7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"70fc9fd945228740644d1dd86e9066ddbc1b4f51e12067aff58d5476ee3c47f0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-s8282" podUID="8f56a730-0864-408e-a1fd-84792cfa18c7" Nov 5 23:44:48.406838 kubelet[3516]: E1105 23:44:48.406520 3516 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e1824daf79b2903677903ab211e78fa844b3324da07fbc857961bcad057dd68a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 23:44:48.406838 kubelet[3516]: E1105 23:44:48.406633 3516 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e1824daf79b2903677903ab211e78fa844b3324da07fbc857961bcad057dd68a\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-48gh4" Nov 5 23:44:48.406838 kubelet[3516]: E1105 23:44:48.406670 3516 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e1824daf79b2903677903ab211e78fa844b3324da07fbc857961bcad057dd68a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-48gh4" Nov 5 23:44:48.409502 containerd[1978]: time="2025-11-05T23:44:48.400020809Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-48gh4,Uid:ff8dd25a-67c7-46ac-bb94-2c7271ca4123,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e1824daf79b2903677903ab211e78fa844b3324da07fbc857961bcad057dd68a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 23:44:48.403780 systemd[1]: run-netns-cni\x2d2738d9d1\x2d3afa\x2d684c\x2d1669\x2dd5999dc59ee3.mount: Deactivated successfully. Nov 5 23:44:48.410021 kubelet[3516]: E1105 23:44:48.407153 3516 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-48gh4_kube-system(ff8dd25a-67c7-46ac-bb94-2c7271ca4123)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-48gh4_kube-system(ff8dd25a-67c7-46ac-bb94-2c7271ca4123)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e1824daf79b2903677903ab211e78fa844b3324da07fbc857961bcad057dd68a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-48gh4" podUID="ff8dd25a-67c7-46ac-bb94-2c7271ca4123" Nov 5 23:44:48.430812 containerd[1978]: time="2025-11-05T23:44:48.430729374Z" level=error msg="Failed to destroy network for sandbox \"82e385366ac5a0cc1bbe04c5e44f7499bf00f5615699e23236537194e424ba27\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 23:44:48.438821 systemd[1]: run-netns-cni\x2d65081e10\x2d7d1c\x2d2bef\x2dc9a8\x2dc42372401737.mount: Deactivated successfully. 
Nov 5 23:44:48.443462 containerd[1978]: time="2025-11-05T23:44:48.443236770Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-877d4847d-6rhkp,Uid:a7fd47be-5341-4035-917c-acf91009ebea,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"82e385366ac5a0cc1bbe04c5e44f7499bf00f5615699e23236537194e424ba27\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 23:44:48.444036 kubelet[3516]: E1105 23:44:48.443975 3516 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"82e385366ac5a0cc1bbe04c5e44f7499bf00f5615699e23236537194e424ba27\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 23:44:48.444291 kubelet[3516]: E1105 23:44:48.444060 3516 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"82e385366ac5a0cc1bbe04c5e44f7499bf00f5615699e23236537194e424ba27\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-877d4847d-6rhkp" Nov 5 23:44:48.444291 kubelet[3516]: E1105 23:44:48.444097 3516 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"82e385366ac5a0cc1bbe04c5e44f7499bf00f5615699e23236537194e424ba27\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-877d4847d-6rhkp" Nov 5 23:44:48.444291 kubelet[3516]: E1105 23:44:48.444186 3516 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-877d4847d-6rhkp_calico-system(a7fd47be-5341-4035-917c-acf91009ebea)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-877d4847d-6rhkp_calico-system(a7fd47be-5341-4035-917c-acf91009ebea)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"82e385366ac5a0cc1bbe04c5e44f7499bf00f5615699e23236537194e424ba27\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-877d4847d-6rhkp" podUID="a7fd47be-5341-4035-917c-acf91009ebea" Nov 5 23:44:48.507861 containerd[1978]: time="2025-11-05T23:44:48.507762210Z" level=error msg="Failed to destroy network for sandbox \"cca4d58b1860ab40fd2b4cd20e60da0f316fe4241b7524fba4b12f82ee0ace3e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 23:44:48.514253 systemd[1]: run-netns-cni\x2d29ff9723\x2d7233\x2d5a19\x2dd1d6\x2d1ac885f4d51e.mount: Deactivated successfully. 
Nov 5 23:44:48.518581 containerd[1978]: time="2025-11-05T23:44:48.518194878Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-759f658d45-5cwjh,Uid:07a15442-dee2-4408-9286-ad45a221772c,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"cca4d58b1860ab40fd2b4cd20e60da0f316fe4241b7524fba4b12f82ee0ace3e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 23:44:48.518831 kubelet[3516]: E1105 23:44:48.518553 3516 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cca4d58b1860ab40fd2b4cd20e60da0f316fe4241b7524fba4b12f82ee0ace3e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 23:44:48.518831 kubelet[3516]: E1105 23:44:48.518650 3516 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cca4d58b1860ab40fd2b4cd20e60da0f316fe4241b7524fba4b12f82ee0ace3e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-759f658d45-5cwjh" Nov 5 23:44:48.518831 kubelet[3516]: E1105 23:44:48.518691 3516 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cca4d58b1860ab40fd2b4cd20e60da0f316fe4241b7524fba4b12f82ee0ace3e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-759f658d45-5cwjh" Nov 5 23:44:48.520404 kubelet[3516]: E1105 23:44:48.518783 3516 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-759f658d45-5cwjh_calico-apiserver(07a15442-dee2-4408-9286-ad45a221772c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-759f658d45-5cwjh_calico-apiserver(07a15442-dee2-4408-9286-ad45a221772c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cca4d58b1860ab40fd2b4cd20e60da0f316fe4241b7524fba4b12f82ee0ace3e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-759f658d45-5cwjh" podUID="07a15442-dee2-4408-9286-ad45a221772c" Nov 5 23:44:48.532725 containerd[1978]: time="2025-11-05T23:44:48.532619082Z" level=error msg="Failed to destroy network for sandbox \"fbe57d5bf82c3fff57f818b3beeeb21651d2582d62aff30c57b3ad11e7fe4d82\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 23:44:48.533699 containerd[1978]: time="2025-11-05T23:44:48.533450118Z" level=error msg="Failed to destroy network for sandbox \"a14353cd7baacd39dd573c62e8fe719bc4b1569c2240c5b8403644f89f925a6e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no 
such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 23:44:48.536537 containerd[1978]: time="2025-11-05T23:44:48.536473710Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7d7fbd4898-kcwg6,Uid:e8560696-892d-49eb-9fc3-5b381971f81d,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"fbe57d5bf82c3fff57f818b3beeeb21651d2582d62aff30c57b3ad11e7fe4d82\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 23:44:48.537216 kubelet[3516]: E1105 23:44:48.537145 3516 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fbe57d5bf82c3fff57f818b3beeeb21651d2582d62aff30c57b3ad11e7fe4d82\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 23:44:48.537464 kubelet[3516]: E1105 23:44:48.537233 3516 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fbe57d5bf82c3fff57f818b3beeeb21651d2582d62aff30c57b3ad11e7fe4d82\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7d7fbd4898-kcwg6" Nov 5 23:44:48.537464 kubelet[3516]: E1105 23:44:48.537269 3516 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fbe57d5bf82c3fff57f818b3beeeb21651d2582d62aff30c57b3ad11e7fe4d82\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7d7fbd4898-kcwg6" Nov 5 23:44:48.537464 kubelet[3516]: E1105 23:44:48.537341 3516 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-7d7fbd4898-kcwg6_calico-system(e8560696-892d-49eb-9fc3-5b381971f81d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-7d7fbd4898-kcwg6_calico-system(e8560696-892d-49eb-9fc3-5b381971f81d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fbe57d5bf82c3fff57f818b3beeeb21651d2582d62aff30c57b3ad11e7fe4d82\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-7d7fbd4898-kcwg6" podUID="e8560696-892d-49eb-9fc3-5b381971f81d" Nov 5 23:44:48.539470 containerd[1978]: time="2025-11-05T23:44:48.538879566Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7km8q,Uid:89a32bf2-ec2a-4f35-b294-b2467c662fb4,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a14353cd7baacd39dd573c62e8fe719bc4b1569c2240c5b8403644f89f925a6e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 23:44:48.540070 kubelet[3516]: E1105 23:44:48.539482 3516 
log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a14353cd7baacd39dd573c62e8fe719bc4b1569c2240c5b8403644f89f925a6e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 23:44:48.540070 kubelet[3516]: E1105 23:44:48.539549 3516 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a14353cd7baacd39dd573c62e8fe719bc4b1569c2240c5b8403644f89f925a6e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-7km8q" Nov 5 23:44:48.540070 kubelet[3516]: E1105 23:44:48.539581 3516 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a14353cd7baacd39dd573c62e8fe719bc4b1569c2240c5b8403644f89f925a6e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-7km8q" Nov 5 23:44:48.540817 kubelet[3516]: E1105 23:44:48.540737 3516 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-7km8q_calico-system(89a32bf2-ec2a-4f35-b294-b2467c662fb4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-7km8q_calico-system(89a32bf2-ec2a-4f35-b294-b2467c662fb4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a14353cd7baacd39dd573c62e8fe719bc4b1569c2240c5b8403644f89f925a6e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-7km8q" podUID="89a32bf2-ec2a-4f35-b294-b2467c662fb4" Nov 5 23:44:48.573042 containerd[1978]: time="2025-11-05T23:44:48.572971110Z" level=error msg="Failed to destroy network for sandbox \"1369b27c23ffcb0f0311182b0a4d489177acf5688a1ec598d1a26bd51750c10a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 23:44:48.576245 containerd[1978]: time="2025-11-05T23:44:48.576148014Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-759f658d45-zzvbq,Uid:76aeec5c-7e05-490c-a0a0-b95d9945b382,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"1369b27c23ffcb0f0311182b0a4d489177acf5688a1ec598d1a26bd51750c10a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 23:44:48.577783 kubelet[3516]: E1105 23:44:48.576811 3516 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1369b27c23ffcb0f0311182b0a4d489177acf5688a1ec598d1a26bd51750c10a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Nov 5 23:44:48.577783 kubelet[3516]: E1105 23:44:48.577506 3516 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1369b27c23ffcb0f0311182b0a4d489177acf5688a1ec598d1a26bd51750c10a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-759f658d45-zzvbq" Nov 5 23:44:48.577783 kubelet[3516]: E1105 23:44:48.577607 3516 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1369b27c23ffcb0f0311182b0a4d489177acf5688a1ec598d1a26bd51750c10a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-759f658d45-zzvbq" Nov 5 23:44:48.578400 kubelet[3516]: E1105 23:44:48.577747 3516 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-759f658d45-zzvbq_calico-apiserver(76aeec5c-7e05-490c-a0a0-b95d9945b382)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-759f658d45-zzvbq_calico-apiserver(76aeec5c-7e05-490c-a0a0-b95d9945b382)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1369b27c23ffcb0f0311182b0a4d489177acf5688a1ec598d1a26bd51750c10a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-759f658d45-zzvbq" podUID="76aeec5c-7e05-490c-a0a0-b95d9945b382" Nov 5 23:44:48.584611 containerd[1978]: time="2025-11-05T23:44:48.584523690Z" level=error msg="Failed to destroy network for sandbox \"2e7f47b982d8b10e4e5cb23f96e43cf797305178414695a2b17654a40e99b1f4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 23:44:48.587200 containerd[1978]: time="2025-11-05T23:44:48.587120442Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-v2m4t,Uid:03f18444-9872-4d73-bb60-c66c73cdfaff,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"2e7f47b982d8b10e4e5cb23f96e43cf797305178414695a2b17654a40e99b1f4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 23:44:48.587802 kubelet[3516]: E1105 23:44:48.587753 3516 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2e7f47b982d8b10e4e5cb23f96e43cf797305178414695a2b17654a40e99b1f4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 23:44:48.588014 kubelet[3516]: E1105 23:44:48.587981 3516 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2e7f47b982d8b10e4e5cb23f96e43cf797305178414695a2b17654a40e99b1f4\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-v2m4t" Nov 5 23:44:48.588157 kubelet[3516]: E1105 23:44:48.588127 3516 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2e7f47b982d8b10e4e5cb23f96e43cf797305178414695a2b17654a40e99b1f4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-v2m4t" Nov 5 23:44:48.588337 kubelet[3516]: E1105 23:44:48.588299 3516 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-v2m4t_calico-system(03f18444-9872-4d73-bb60-c66c73cdfaff)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-v2m4t_calico-system(03f18444-9872-4d73-bb60-c66c73cdfaff)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2e7f47b982d8b10e4e5cb23f96e43cf797305178414695a2b17654a40e99b1f4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-v2m4t" podUID="03f18444-9872-4d73-bb60-c66c73cdfaff" Nov 5 23:44:49.281551 systemd[1]: run-netns-cni\x2dbf48be95\x2d4503\x2d742a\x2dc441\x2d57a0ea9864a8.mount: Deactivated successfully. Nov 5 23:44:49.281946 systemd[1]: run-netns-cni\x2d1b9bb27c\x2d242e\x2d77fe\x2d5a07\x2dca824254f2bd.mount: Deactivated successfully. Nov 5 23:44:49.282232 systemd[1]: run-netns-cni\x2d1f5b9642\x2dc198\x2d6911\x2d1e8b\x2da7b702ebd82b.mount: Deactivated successfully. Nov 5 23:44:49.282455 systemd[1]: run-netns-cni\x2d6c4cb1d8\x2de4a1\x2da895\x2dabc8\x2d0ac40b0de8d8.mount: Deactivated successfully. Nov 5 23:44:54.717306 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount963742341.mount: Deactivated successfully. 
Nov 5 23:44:54.774930 containerd[1978]: time="2025-11-05T23:44:54.774767041Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 23:44:54.777286 containerd[1978]: time="2025-11-05T23:44:54.777218737Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=150934562" Nov 5 23:44:54.779523 containerd[1978]: time="2025-11-05T23:44:54.779464297Z" level=info msg="ImageCreate event name:\"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 23:44:54.785543 containerd[1978]: time="2025-11-05T23:44:54.785469205Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 23:44:54.787911 containerd[1978]: time="2025-11-05T23:44:54.787852357Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"150934424\" in 6.652983369s" Nov 5 23:44:54.787911 containerd[1978]: time="2025-11-05T23:44:54.787907797Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\"" Nov 5 23:44:54.826095 containerd[1978]: time="2025-11-05T23:44:54.826017433Z" level=info msg="CreateContainer within sandbox \"fd8df971a52d0d9d6ec648a7f881093aa7ab5869308fa068ceb6fdef271af924\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Nov 5 23:44:54.854347 containerd[1978]: time="2025-11-05T23:44:54.852889514Z" level=info msg="Container d4b78ad3eff05f768f593314a5f33b64ca344a4b4df67abab3d7bd1446b118c3: CDI devices from CRI Config.CDIDevices: []" Nov 5 23:44:54.892280 containerd[1978]: time="2025-11-05T23:44:54.892229534Z" level=info msg="CreateContainer within sandbox \"fd8df971a52d0d9d6ec648a7f881093aa7ab5869308fa068ceb6fdef271af924\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"d4b78ad3eff05f768f593314a5f33b64ca344a4b4df67abab3d7bd1446b118c3\"" Nov 5 23:44:54.893928 containerd[1978]: time="2025-11-05T23:44:54.893882042Z" level=info msg="StartContainer for \"d4b78ad3eff05f768f593314a5f33b64ca344a4b4df67abab3d7bd1446b118c3\"" Nov 5 23:44:54.899132 containerd[1978]: time="2025-11-05T23:44:54.899069786Z" level=info msg="connecting to shim d4b78ad3eff05f768f593314a5f33b64ca344a4b4df67abab3d7bd1446b118c3" address="unix:///run/containerd/s/2365d078e3effa4f62d2806b766bc49e83acb0ebf114c1ae0cb9c065286589c3" protocol=ttrpc version=3 Nov 5 23:44:54.945038 systemd[1]: Started cri-containerd-d4b78ad3eff05f768f593314a5f33b64ca344a4b4df67abab3d7bd1446b118c3.scope - libcontainer container d4b78ad3eff05f768f593314a5f33b64ca344a4b4df67abab3d7bd1446b118c3. 
Nov 5 23:44:55.043269 containerd[1978]: time="2025-11-05T23:44:55.043222282Z" level=info msg="StartContainer for \"d4b78ad3eff05f768f593314a5f33b64ca344a4b4df67abab3d7bd1446b118c3\" returns successfully" Nov 5 23:44:55.230618 kubelet[3516]: I1105 23:44:55.230271 3516 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-927ws" podStartSLOduration=1.957138226 podStartE2EDuration="18.230244287s" podCreationTimestamp="2025-11-05 23:44:37 +0000 UTC" firstStartedPulling="2025-11-05 23:44:38.515982536 +0000 UTC m=+30.062531202" lastFinishedPulling="2025-11-05 23:44:54.789088609 +0000 UTC m=+46.335637263" observedRunningTime="2025-11-05 23:44:55.229241891 +0000 UTC m=+46.775790569" watchObservedRunningTime="2025-11-05 23:44:55.230244287 +0000 UTC m=+46.776792965" Nov 5 23:44:55.334388 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Nov 5 23:44:55.334531 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. Nov 5 23:44:55.618897 kubelet[3516]: I1105 23:44:55.618816 3516 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/e8560696-892d-49eb-9fc3-5b381971f81d-whisker-backend-key-pair\") pod \"e8560696-892d-49eb-9fc3-5b381971f81d\" (UID: \"e8560696-892d-49eb-9fc3-5b381971f81d\") " Nov 5 23:44:55.619272 kubelet[3516]: I1105 23:44:55.619116 3516 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-trjkb\" (UniqueName: \"kubernetes.io/projected/e8560696-892d-49eb-9fc3-5b381971f81d-kube-api-access-trjkb\") pod \"e8560696-892d-49eb-9fc3-5b381971f81d\" (UID: \"e8560696-892d-49eb-9fc3-5b381971f81d\") " Nov 5 23:44:55.619272 kubelet[3516]: I1105 23:44:55.619218 3516 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e8560696-892d-49eb-9fc3-5b381971f81d-whisker-ca-bundle\") pod \"e8560696-892d-49eb-9fc3-5b381971f81d\" (UID: \"e8560696-892d-49eb-9fc3-5b381971f81d\") " Nov 5 23:44:55.620459 kubelet[3516]: I1105 23:44:55.620408 3516 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e8560696-892d-49eb-9fc3-5b381971f81d-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "e8560696-892d-49eb-9fc3-5b381971f81d" (UID: "e8560696-892d-49eb-9fc3-5b381971f81d"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Nov 5 23:44:55.629644 kubelet[3516]: I1105 23:44:55.629548 3516 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e8560696-892d-49eb-9fc3-5b381971f81d-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "e8560696-892d-49eb-9fc3-5b381971f81d" (UID: "e8560696-892d-49eb-9fc3-5b381971f81d"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Nov 5 23:44:55.634900 kubelet[3516]: I1105 23:44:55.634842 3516 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e8560696-892d-49eb-9fc3-5b381971f81d-kube-api-access-trjkb" (OuterVolumeSpecName: "kube-api-access-trjkb") pod "e8560696-892d-49eb-9fc3-5b381971f81d" (UID: "e8560696-892d-49eb-9fc3-5b381971f81d"). InnerVolumeSpecName "kube-api-access-trjkb". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 5 23:44:55.720388 kubelet[3516]: I1105 23:44:55.720278 3516 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/e8560696-892d-49eb-9fc3-5b381971f81d-whisker-backend-key-pair\") on node \"ip-172-31-26-188\" DevicePath \"\"" Nov 5 23:44:55.720388 kubelet[3516]: I1105 23:44:55.720330 3516 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-trjkb\" (UniqueName: \"kubernetes.io/projected/e8560696-892d-49eb-9fc3-5b381971f81d-kube-api-access-trjkb\") on node \"ip-172-31-26-188\" DevicePath \"\"" Nov 5 23:44:55.720388 kubelet[3516]: I1105 23:44:55.720353 3516 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e8560696-892d-49eb-9fc3-5b381971f81d-whisker-ca-bundle\") on node \"ip-172-31-26-188\" DevicePath \"\"" Nov 5 23:44:55.723582 systemd[1]: var-lib-kubelet-pods-e8560696\x2d892d\x2d49eb\x2d9fc3\x2d5b381971f81d-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dtrjkb.mount: Deactivated successfully. Nov 5 23:44:55.724010 systemd[1]: var-lib-kubelet-pods-e8560696\x2d892d\x2d49eb\x2d9fc3\x2d5b381971f81d-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Nov 5 23:44:56.180014 kubelet[3516]: I1105 23:44:56.179927 3516 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 5 23:44:56.192553 systemd[1]: Removed slice kubepods-besteffort-pode8560696_892d_49eb_9fc3_5b381971f81d.slice - libcontainer container kubepods-besteffort-pode8560696_892d_49eb_9fc3_5b381971f81d.slice. Nov 5 23:44:56.315506 systemd[1]: Created slice kubepods-besteffort-pod20efcd41_054c_4821_9d54_ac97d532abc5.slice - libcontainer container kubepods-besteffort-pod20efcd41_054c_4821_9d54_ac97d532abc5.slice. 
Nov 5 23:44:56.325678 kubelet[3516]: I1105 23:44:56.325127 3516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/20efcd41-054c-4821-9d54-ac97d532abc5-whisker-ca-bundle\") pod \"whisker-84c996d876-qnn6p\" (UID: \"20efcd41-054c-4821-9d54-ac97d532abc5\") " pod="calico-system/whisker-84c996d876-qnn6p" Nov 5 23:44:56.327835 kubelet[3516]: I1105 23:44:56.326730 3516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xqpnk\" (UniqueName: \"kubernetes.io/projected/20efcd41-054c-4821-9d54-ac97d532abc5-kube-api-access-xqpnk\") pod \"whisker-84c996d876-qnn6p\" (UID: \"20efcd41-054c-4821-9d54-ac97d532abc5\") " pod="calico-system/whisker-84c996d876-qnn6p" Nov 5 23:44:56.327835 kubelet[3516]: I1105 23:44:56.327743 3516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/20efcd41-054c-4821-9d54-ac97d532abc5-whisker-backend-key-pair\") pod \"whisker-84c996d876-qnn6p\" (UID: \"20efcd41-054c-4821-9d54-ac97d532abc5\") " pod="calico-system/whisker-84c996d876-qnn6p" Nov 5 23:44:56.623464 containerd[1978]: time="2025-11-05T23:44:56.623387210Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-84c996d876-qnn6p,Uid:20efcd41-054c-4821-9d54-ac97d532abc5,Namespace:calico-system,Attempt:0,}" Nov 5 23:44:56.838335 kubelet[3516]: I1105 23:44:56.838027 3516 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e8560696-892d-49eb-9fc3-5b381971f81d" path="/var/lib/kubelet/pods/e8560696-892d-49eb-9fc3-5b381971f81d/volumes" Nov 5 23:44:56.930709 (udev-worker)[4572]: Network interface NamePolicy= disabled on kernel command line. 
Nov 5 23:44:56.934009 systemd-networkd[1825]: calia7a022cecef: Link UP Nov 5 23:44:56.936449 systemd-networkd[1825]: calia7a022cecef: Gained carrier Nov 5 23:44:56.975417 containerd[1978]: 2025-11-05 23:44:56.673 [INFO][4601] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 5 23:44:56.975417 containerd[1978]: 2025-11-05 23:44:56.746 [INFO][4601] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--26--188-k8s-whisker--84c996d876--qnn6p-eth0 whisker-84c996d876- calico-system 20efcd41-054c-4821-9d54-ac97d532abc5 939 0 2025-11-05 23:44:56 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:84c996d876 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ip-172-31-26-188 whisker-84c996d876-qnn6p eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calia7a022cecef [] [] }} ContainerID="d9acb48204cac5ea8d43ef4bf6b702a9c861583f44d96c49f4d0819a69e831ee" Namespace="calico-system" Pod="whisker-84c996d876-qnn6p" WorkloadEndpoint="ip--172--31--26--188-k8s-whisker--84c996d876--qnn6p-" Nov 5 23:44:56.975417 containerd[1978]: 2025-11-05 23:44:56.747 [INFO][4601] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d9acb48204cac5ea8d43ef4bf6b702a9c861583f44d96c49f4d0819a69e831ee" Namespace="calico-system" Pod="whisker-84c996d876-qnn6p" WorkloadEndpoint="ip--172--31--26--188-k8s-whisker--84c996d876--qnn6p-eth0" Nov 5 23:44:56.975417 containerd[1978]: 2025-11-05 23:44:56.842 [INFO][4612] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d9acb48204cac5ea8d43ef4bf6b702a9c861583f44d96c49f4d0819a69e831ee" HandleID="k8s-pod-network.d9acb48204cac5ea8d43ef4bf6b702a9c861583f44d96c49f4d0819a69e831ee" Workload="ip--172--31--26--188-k8s-whisker--84c996d876--qnn6p-eth0" Nov 5 23:44:56.976265 containerd[1978]: 2025-11-05 23:44:56.843 [INFO][4612] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="d9acb48204cac5ea8d43ef4bf6b702a9c861583f44d96c49f4d0819a69e831ee" HandleID="k8s-pod-network.d9acb48204cac5ea8d43ef4bf6b702a9c861583f44d96c49f4d0819a69e831ee" Workload="ip--172--31--26--188-k8s-whisker--84c996d876--qnn6p-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003304f0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-26-188", "pod":"whisker-84c996d876-qnn6p", "timestamp":"2025-11-05 23:44:56.842570931 +0000 UTC"}, Hostname:"ip-172-31-26-188", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 5 23:44:56.976265 containerd[1978]: 2025-11-05 23:44:56.843 [INFO][4612] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 5 23:44:56.976265 containerd[1978]: 2025-11-05 23:44:56.843 [INFO][4612] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 5 23:44:56.976265 containerd[1978]: 2025-11-05 23:44:56.843 [INFO][4612] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-26-188' Nov 5 23:44:56.976265 containerd[1978]: 2025-11-05 23:44:56.859 [INFO][4612] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d9acb48204cac5ea8d43ef4bf6b702a9c861583f44d96c49f4d0819a69e831ee" host="ip-172-31-26-188" Nov 5 23:44:56.976265 containerd[1978]: 2025-11-05 23:44:56.868 [INFO][4612] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-26-188" Nov 5 23:44:56.976265 containerd[1978]: 2025-11-05 23:44:56.876 [INFO][4612] ipam/ipam.go 511: Trying affinity for 192.168.34.64/26 host="ip-172-31-26-188" Nov 5 23:44:56.976265 containerd[1978]: 2025-11-05 23:44:56.880 [INFO][4612] ipam/ipam.go 158: Attempting to load block cidr=192.168.34.64/26 host="ip-172-31-26-188" Nov 5 23:44:56.976265 containerd[1978]: 2025-11-05 23:44:56.884 [INFO][4612] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.34.64/26 host="ip-172-31-26-188" Nov 5 23:44:56.977132 containerd[1978]: 2025-11-05 23:44:56.884 [INFO][4612] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.34.64/26 handle="k8s-pod-network.d9acb48204cac5ea8d43ef4bf6b702a9c861583f44d96c49f4d0819a69e831ee" host="ip-172-31-26-188" Nov 5 23:44:56.977132 containerd[1978]: 2025-11-05 23:44:56.889 [INFO][4612] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.d9acb48204cac5ea8d43ef4bf6b702a9c861583f44d96c49f4d0819a69e831ee Nov 5 23:44:56.977132 containerd[1978]: 2025-11-05 23:44:56.896 [INFO][4612] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.34.64/26 handle="k8s-pod-network.d9acb48204cac5ea8d43ef4bf6b702a9c861583f44d96c49f4d0819a69e831ee" host="ip-172-31-26-188" Nov 5 23:44:56.977132 containerd[1978]: 2025-11-05 23:44:56.905 [INFO][4612] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.34.65/26] block=192.168.34.64/26 handle="k8s-pod-network.d9acb48204cac5ea8d43ef4bf6b702a9c861583f44d96c49f4d0819a69e831ee" host="ip-172-31-26-188" Nov 5 23:44:56.977132 containerd[1978]: 2025-11-05 23:44:56.906 [INFO][4612] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.34.65/26] handle="k8s-pod-network.d9acb48204cac5ea8d43ef4bf6b702a9c861583f44d96c49f4d0819a69e831ee" host="ip-172-31-26-188" Nov 5 23:44:56.977132 containerd[1978]: 2025-11-05 23:44:56.906 [INFO][4612] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 5 23:44:56.977132 containerd[1978]: 2025-11-05 23:44:56.906 [INFO][4612] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.34.65/26] IPv6=[] ContainerID="d9acb48204cac5ea8d43ef4bf6b702a9c861583f44d96c49f4d0819a69e831ee" HandleID="k8s-pod-network.d9acb48204cac5ea8d43ef4bf6b702a9c861583f44d96c49f4d0819a69e831ee" Workload="ip--172--31--26--188-k8s-whisker--84c996d876--qnn6p-eth0" Nov 5 23:44:56.977467 containerd[1978]: 2025-11-05 23:44:56.914 [INFO][4601] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d9acb48204cac5ea8d43ef4bf6b702a9c861583f44d96c49f4d0819a69e831ee" Namespace="calico-system" Pod="whisker-84c996d876-qnn6p" WorkloadEndpoint="ip--172--31--26--188-k8s-whisker--84c996d876--qnn6p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--188-k8s-whisker--84c996d876--qnn6p-eth0", GenerateName:"whisker-84c996d876-", Namespace:"calico-system", SelfLink:"", UID:"20efcd41-054c-4821-9d54-ac97d532abc5", ResourceVersion:"939", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 23, 44, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"84c996d876", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-188", ContainerID:"", Pod:"whisker-84c996d876-qnn6p", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.34.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calia7a022cecef", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 23:44:56.977467 containerd[1978]: 2025-11-05 23:44:56.915 [INFO][4601] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.34.65/32] ContainerID="d9acb48204cac5ea8d43ef4bf6b702a9c861583f44d96c49f4d0819a69e831ee" Namespace="calico-system" Pod="whisker-84c996d876-qnn6p" WorkloadEndpoint="ip--172--31--26--188-k8s-whisker--84c996d876--qnn6p-eth0" Nov 5 23:44:56.981755 containerd[1978]: 2025-11-05 23:44:56.915 [INFO][4601] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia7a022cecef ContainerID="d9acb48204cac5ea8d43ef4bf6b702a9c861583f44d96c49f4d0819a69e831ee" Namespace="calico-system" Pod="whisker-84c996d876-qnn6p" WorkloadEndpoint="ip--172--31--26--188-k8s-whisker--84c996d876--qnn6p-eth0" Nov 5 23:44:56.981755 containerd[1978]: 2025-11-05 23:44:56.937 [INFO][4601] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d9acb48204cac5ea8d43ef4bf6b702a9c861583f44d96c49f4d0819a69e831ee" Namespace="calico-system" Pod="whisker-84c996d876-qnn6p" WorkloadEndpoint="ip--172--31--26--188-k8s-whisker--84c996d876--qnn6p-eth0" Nov 5 23:44:56.982357 containerd[1978]: 2025-11-05 23:44:56.939 [INFO][4601] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d9acb48204cac5ea8d43ef4bf6b702a9c861583f44d96c49f4d0819a69e831ee" Namespace="calico-system" Pod="whisker-84c996d876-qnn6p" 
WorkloadEndpoint="ip--172--31--26--188-k8s-whisker--84c996d876--qnn6p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--188-k8s-whisker--84c996d876--qnn6p-eth0", GenerateName:"whisker-84c996d876-", Namespace:"calico-system", SelfLink:"", UID:"20efcd41-054c-4821-9d54-ac97d532abc5", ResourceVersion:"939", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 23, 44, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"84c996d876", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-188", ContainerID:"d9acb48204cac5ea8d43ef4bf6b702a9c861583f44d96c49f4d0819a69e831ee", Pod:"whisker-84c996d876-qnn6p", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.34.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calia7a022cecef", MAC:"16:e1:59:44:af:cc", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 23:44:56.983210 containerd[1978]: 2025-11-05 23:44:56.969 [INFO][4601] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d9acb48204cac5ea8d43ef4bf6b702a9c861583f44d96c49f4d0819a69e831ee" Namespace="calico-system" Pod="whisker-84c996d876-qnn6p" WorkloadEndpoint="ip--172--31--26--188-k8s-whisker--84c996d876--qnn6p-eth0" Nov 5 23:44:57.040791 containerd[1978]: time="2025-11-05T23:44:57.040719696Z" level=info msg="connecting to shim d9acb48204cac5ea8d43ef4bf6b702a9c861583f44d96c49f4d0819a69e831ee" address="unix:///run/containerd/s/e0e0a86d81e76b0810bdb89f8029610b5d0cd01f178bbb23753f8c88c806e4ec" namespace=k8s.io protocol=ttrpc version=3 Nov 5 23:44:57.150532 systemd[1]: Started cri-containerd-d9acb48204cac5ea8d43ef4bf6b702a9c861583f44d96c49f4d0819a69e831ee.scope - libcontainer container d9acb48204cac5ea8d43ef4bf6b702a9c861583f44d96c49f4d0819a69e831ee. 
Nov 5 23:44:57.443093 containerd[1978]: time="2025-11-05T23:44:57.442986398Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-84c996d876-qnn6p,Uid:20efcd41-054c-4821-9d54-ac97d532abc5,Namespace:calico-system,Attempt:0,} returns sandbox id \"d9acb48204cac5ea8d43ef4bf6b702a9c861583f44d96c49f4d0819a69e831ee\"" Nov 5 23:44:57.450920 containerd[1978]: time="2025-11-05T23:44:57.450839402Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 5 23:44:57.751802 containerd[1978]: time="2025-11-05T23:44:57.751502296Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 23:44:57.756714 containerd[1978]: time="2025-11-05T23:44:57.756478948Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 5 23:44:57.756714 containerd[1978]: time="2025-11-05T23:44:57.756481816Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 5 23:44:57.757249 kubelet[3516]: E1105 23:44:57.757186 3516 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 5 23:44:57.759167 kubelet[3516]: E1105 23:44:57.757437 3516 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 5 23:44:57.761924 kubelet[3516]: E1105 23:44:57.761795 3516 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:07d3aeb01bc54febada3000b29641fd0,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xqpnk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-84c996d876-qnn6p_calico-system(20efcd41-054c-4821-9d54-ac97d532abc5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 5 23:44:57.766122 containerd[1978]: time="2025-11-05T23:44:57.766047040Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 5 23:44:58.052908 containerd[1978]: time="2025-11-05T23:44:58.052815385Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 23:44:58.055234 containerd[1978]: time="2025-11-05T23:44:58.055146829Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 5 23:44:58.055381 containerd[1978]: time="2025-11-05T23:44:58.055288537Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 5 23:44:58.056667 kubelet[3516]: E1105 23:44:58.055641 3516 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 5 23:44:58.056861 kubelet[3516]: E1105 23:44:58.056725 3516 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 5 23:44:58.057451 kubelet[3516]: E1105 23:44:58.057009 3516 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xqpnk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-84c996d876-qnn6p_calico-system(20efcd41-054c-4821-9d54-ac97d532abc5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 5 23:44:58.059073 kubelet[3516]: E1105 23:44:58.058943 3516 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-84c996d876-qnn6p" podUID="20efcd41-054c-4821-9d54-ac97d532abc5" Nov 5 23:44:58.138779 systemd-networkd[1825]: calia7a022cecef: Gained IPv6LL Nov 5 23:44:58.197817 kubelet[3516]: E1105 23:44:58.197448 3516 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-84c996d876-qnn6p" podUID="20efcd41-054c-4821-9d54-ac97d532abc5" Nov 5 23:44:58.499294 systemd-networkd[1825]: vxlan.calico: Link UP Nov 5 23:44:58.499335 systemd-networkd[1825]: vxlan.calico: Gained carrier Nov 5 23:44:58.541426 (udev-worker)[4574]: Network interface NamePolicy= disabled on kernel command line. Nov 5 23:44:58.626837 kubelet[3516]: I1105 23:44:58.626772 3516 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 5 23:44:58.944756 containerd[1978]: time="2025-11-05T23:44:58.944580102Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d4b78ad3eff05f768f593314a5f33b64ca344a4b4df67abab3d7bd1446b118c3\" id:\"79af16d18d0bb5189711b552fb55d9f0928aa4653443e7d50e289753996826be\" pid:4845 exit_status:1 exited_at:{seconds:1762386298 nanos:944104122}" Nov 5 23:44:59.207828 kubelet[3516]: E1105 23:44:59.207296 3516 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-84c996d876-qnn6p" podUID="20efcd41-054c-4821-9d54-ac97d532abc5" Nov 5 23:44:59.233389 containerd[1978]: time="2025-11-05T23:44:59.233279691Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d4b78ad3eff05f768f593314a5f33b64ca344a4b4df67abab3d7bd1446b118c3\" id:\"7728b8acccc62f6f6dffed80719394a83c4e7a10511501324bf08fc816edd89a\" pid:4869 exit_status:1 exited_at:{seconds:1762386299 nanos:231239091}" Nov 5 23:45:00.315836 systemd-networkd[1825]: vxlan.calico: Gained IPv6LL Nov 5 23:45:00.833385 containerd[1978]: time="2025-11-05T23:45:00.833283463Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7km8q,Uid:89a32bf2-ec2a-4f35-b294-b2467c662fb4,Namespace:calico-system,Attempt:0,}" Nov 5 23:45:01.048277 (udev-worker)[4831]: Network interface NamePolicy= disabled on kernel command line. 
Nov 5 23:45:01.050520 systemd-networkd[1825]: cali0168852603c: Link UP Nov 5 23:45:01.054052 systemd-networkd[1825]: cali0168852603c: Gained carrier Nov 5 23:45:01.088126 containerd[1978]: 2025-11-05 23:45:00.911 [INFO][4917] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--26--188-k8s-csi--node--driver--7km8q-eth0 csi-node-driver- calico-system 89a32bf2-ec2a-4f35-b294-b2467c662fb4 770 0 2025-11-05 23:44:37 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ip-172-31-26-188 csi-node-driver-7km8q eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali0168852603c [] [] }} ContainerID="cc8db7686cd03b1eb291ab9e12efb2beccdea5fa968c11aaec05729d3919d732" Namespace="calico-system" Pod="csi-node-driver-7km8q" WorkloadEndpoint="ip--172--31--26--188-k8s-csi--node--driver--7km8q-" Nov 5 23:45:01.088126 containerd[1978]: 2025-11-05 23:45:00.911 [INFO][4917] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="cc8db7686cd03b1eb291ab9e12efb2beccdea5fa968c11aaec05729d3919d732" Namespace="calico-system" Pod="csi-node-driver-7km8q" WorkloadEndpoint="ip--172--31--26--188-k8s-csi--node--driver--7km8q-eth0" Nov 5 23:45:01.088126 containerd[1978]: 2025-11-05 23:45:00.970 [INFO][4928] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="cc8db7686cd03b1eb291ab9e12efb2beccdea5fa968c11aaec05729d3919d732" HandleID="k8s-pod-network.cc8db7686cd03b1eb291ab9e12efb2beccdea5fa968c11aaec05729d3919d732" Workload="ip--172--31--26--188-k8s-csi--node--driver--7km8q-eth0" Nov 5 23:45:01.088556 containerd[1978]: 2025-11-05 23:45:00.971 [INFO][4928] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="cc8db7686cd03b1eb291ab9e12efb2beccdea5fa968c11aaec05729d3919d732" HandleID="k8s-pod-network.cc8db7686cd03b1eb291ab9e12efb2beccdea5fa968c11aaec05729d3919d732" Workload="ip--172--31--26--188-k8s-csi--node--driver--7km8q-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002c95e0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-26-188", "pod":"csi-node-driver-7km8q", "timestamp":"2025-11-05 23:45:00.97084418 +0000 UTC"}, Hostname:"ip-172-31-26-188", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 5 23:45:01.088556 containerd[1978]: 2025-11-05 23:45:00.971 [INFO][4928] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 5 23:45:01.088556 containerd[1978]: 2025-11-05 23:45:00.973 [INFO][4928] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 5 23:45:01.088556 containerd[1978]: 2025-11-05 23:45:00.973 [INFO][4928] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-26-188' Nov 5 23:45:01.088556 containerd[1978]: 2025-11-05 23:45:00.990 [INFO][4928] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.cc8db7686cd03b1eb291ab9e12efb2beccdea5fa968c11aaec05729d3919d732" host="ip-172-31-26-188" Nov 5 23:45:01.088556 containerd[1978]: 2025-11-05 23:45:01.003 [INFO][4928] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-26-188" Nov 5 23:45:01.088556 containerd[1978]: 2025-11-05 23:45:01.011 [INFO][4928] ipam/ipam.go 511: Trying affinity for 192.168.34.64/26 host="ip-172-31-26-188" Nov 5 23:45:01.088556 containerd[1978]: 2025-11-05 23:45:01.015 [INFO][4928] ipam/ipam.go 158: Attempting to load block cidr=192.168.34.64/26 host="ip-172-31-26-188" Nov 5 23:45:01.088556 containerd[1978]: 2025-11-05 23:45:01.019 [INFO][4928] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.34.64/26 host="ip-172-31-26-188" Nov 5 23:45:01.090326 containerd[1978]: 2025-11-05 23:45:01.019 [INFO][4928] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.34.64/26 handle="k8s-pod-network.cc8db7686cd03b1eb291ab9e12efb2beccdea5fa968c11aaec05729d3919d732" host="ip-172-31-26-188" Nov 5 23:45:01.090326 containerd[1978]: 2025-11-05 23:45:01.021 [INFO][4928] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.cc8db7686cd03b1eb291ab9e12efb2beccdea5fa968c11aaec05729d3919d732 Nov 5 23:45:01.090326 containerd[1978]: 2025-11-05 23:45:01.030 [INFO][4928] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.34.64/26 handle="k8s-pod-network.cc8db7686cd03b1eb291ab9e12efb2beccdea5fa968c11aaec05729d3919d732" host="ip-172-31-26-188" Nov 5 23:45:01.090326 containerd[1978]: 2025-11-05 23:45:01.039 [INFO][4928] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.34.66/26] block=192.168.34.64/26 handle="k8s-pod-network.cc8db7686cd03b1eb291ab9e12efb2beccdea5fa968c11aaec05729d3919d732" host="ip-172-31-26-188" Nov 5 23:45:01.090326 containerd[1978]: 2025-11-05 23:45:01.039 [INFO][4928] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.34.66/26] handle="k8s-pod-network.cc8db7686cd03b1eb291ab9e12efb2beccdea5fa968c11aaec05729d3919d732" host="ip-172-31-26-188" Nov 5 23:45:01.090326 containerd[1978]: 2025-11-05 23:45:01.039 [INFO][4928] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
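The IPAM cycle above (take the host-wide lock, confirm affinity for 192.168.34.64/26, claim one address, release the lock) ends up assigning 192.168.34.66 to csi-node-driver-7km8q. A toy stdlib model of the "assign one address from the affine block" step, not Calico's actual allocator:

import ipaddress

block = ipaddress.ip_network("192.168.34.64/26")
already_assigned = {ipaddress.ip_address("192.168.34.65")}   # whisker, from the earlier entries

def assign_one(block, used):
    for host in block.hosts():        # hosts() skips the network and broadcast addresses
        if host not in used:
            used.add(host)
            return host
    raise RuntimeError("block exhausted")

print(assign_one(block, already_assigned))   # 192.168.34.66, matching the log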
Nov 5 23:45:01.090326 containerd[1978]: 2025-11-05 23:45:01.039 [INFO][4928] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.34.66/26] IPv6=[] ContainerID="cc8db7686cd03b1eb291ab9e12efb2beccdea5fa968c11aaec05729d3919d732" HandleID="k8s-pod-network.cc8db7686cd03b1eb291ab9e12efb2beccdea5fa968c11aaec05729d3919d732" Workload="ip--172--31--26--188-k8s-csi--node--driver--7km8q-eth0" Nov 5 23:45:01.091669 containerd[1978]: 2025-11-05 23:45:01.045 [INFO][4917] cni-plugin/k8s.go 418: Populated endpoint ContainerID="cc8db7686cd03b1eb291ab9e12efb2beccdea5fa968c11aaec05729d3919d732" Namespace="calico-system" Pod="csi-node-driver-7km8q" WorkloadEndpoint="ip--172--31--26--188-k8s-csi--node--driver--7km8q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--188-k8s-csi--node--driver--7km8q-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"89a32bf2-ec2a-4f35-b294-b2467c662fb4", ResourceVersion:"770", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 23, 44, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-188", ContainerID:"", Pod:"csi-node-driver-7km8q", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.34.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali0168852603c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 23:45:01.091895 containerd[1978]: 2025-11-05 23:45:01.045 [INFO][4917] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.34.66/32] ContainerID="cc8db7686cd03b1eb291ab9e12efb2beccdea5fa968c11aaec05729d3919d732" Namespace="calico-system" Pod="csi-node-driver-7km8q" WorkloadEndpoint="ip--172--31--26--188-k8s-csi--node--driver--7km8q-eth0" Nov 5 23:45:01.091895 containerd[1978]: 2025-11-05 23:45:01.045 [INFO][4917] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0168852603c ContainerID="cc8db7686cd03b1eb291ab9e12efb2beccdea5fa968c11aaec05729d3919d732" Namespace="calico-system" Pod="csi-node-driver-7km8q" WorkloadEndpoint="ip--172--31--26--188-k8s-csi--node--driver--7km8q-eth0" Nov 5 23:45:01.091895 containerd[1978]: 2025-11-05 23:45:01.055 [INFO][4917] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="cc8db7686cd03b1eb291ab9e12efb2beccdea5fa968c11aaec05729d3919d732" Namespace="calico-system" Pod="csi-node-driver-7km8q" WorkloadEndpoint="ip--172--31--26--188-k8s-csi--node--driver--7km8q-eth0" Nov 5 23:45:01.092371 containerd[1978]: 2025-11-05 23:45:01.057 [INFO][4917] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="cc8db7686cd03b1eb291ab9e12efb2beccdea5fa968c11aaec05729d3919d732" 
Namespace="calico-system" Pod="csi-node-driver-7km8q" WorkloadEndpoint="ip--172--31--26--188-k8s-csi--node--driver--7km8q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--188-k8s-csi--node--driver--7km8q-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"89a32bf2-ec2a-4f35-b294-b2467c662fb4", ResourceVersion:"770", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 23, 44, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-188", ContainerID:"cc8db7686cd03b1eb291ab9e12efb2beccdea5fa968c11aaec05729d3919d732", Pod:"csi-node-driver-7km8q", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.34.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali0168852603c", MAC:"ba:2a:5a:d4:f4:17", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 23:45:01.092540 containerd[1978]: 2025-11-05 23:45:01.082 [INFO][4917] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="cc8db7686cd03b1eb291ab9e12efb2beccdea5fa968c11aaec05729d3919d732" Namespace="calico-system" Pod="csi-node-driver-7km8q" WorkloadEndpoint="ip--172--31--26--188-k8s-csi--node--driver--7km8q-eth0" Nov 5 23:45:01.145913 containerd[1978]: time="2025-11-05T23:45:01.145836965Z" level=info msg="connecting to shim cc8db7686cd03b1eb291ab9e12efb2beccdea5fa968c11aaec05729d3919d732" address="unix:///run/containerd/s/9155922ecde00b837b529c472ec7bfb73b63cf4017aedc235bca8b65c4f20171" namespace=k8s.io protocol=ttrpc version=3 Nov 5 23:45:01.197927 systemd[1]: Started cri-containerd-cc8db7686cd03b1eb291ab9e12efb2beccdea5fa968c11aaec05729d3919d732.scope - libcontainer container cc8db7686cd03b1eb291ab9e12efb2beccdea5fa968c11aaec05729d3919d732. 
Nov 5 23:45:01.251620 containerd[1978]: time="2025-11-05T23:45:01.251533001Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7km8q,Uid:89a32bf2-ec2a-4f35-b294-b2467c662fb4,Namespace:calico-system,Attempt:0,} returns sandbox id \"cc8db7686cd03b1eb291ab9e12efb2beccdea5fa968c11aaec05729d3919d732\"" Nov 5 23:45:01.254891 containerd[1978]: time="2025-11-05T23:45:01.254779277Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 5 23:45:01.601448 containerd[1978]: time="2025-11-05T23:45:01.601378855Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 23:45:01.603965 containerd[1978]: time="2025-11-05T23:45:01.603902023Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 5 23:45:01.604231 containerd[1978]: time="2025-11-05T23:45:01.603948823Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 5 23:45:01.605404 kubelet[3516]: E1105 23:45:01.604520 3516 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 5 23:45:01.605404 kubelet[3516]: E1105 23:45:01.604581 3516 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 5 23:45:01.605404 kubelet[3516]: E1105 23:45:01.604804 3516 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9gfk2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-7km8q_calico-system(89a32bf2-ec2a-4f35-b294-b2467c662fb4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 5 23:45:01.608257 containerd[1978]: time="2025-11-05T23:45:01.608170255Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 5 23:45:01.833641 containerd[1978]: time="2025-11-05T23:45:01.833543804Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-s8282,Uid:8f56a730-0864-408e-a1fd-84792cfa18c7,Namespace:kube-system,Attempt:0,}" Nov 5 23:45:01.834998 containerd[1978]: time="2025-11-05T23:45:01.834269852Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-877d4847d-6rhkp,Uid:a7fd47be-5341-4035-917c-acf91009ebea,Namespace:calico-system,Attempt:0,}" Nov 5 23:45:01.834998 containerd[1978]: time="2025-11-05T23:45:01.834504344Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-48gh4,Uid:ff8dd25a-67c7-46ac-bb94-2c7271ca4123,Namespace:kube-system,Attempt:0,}" Nov 5 23:45:01.870439 containerd[1978]: time="2025-11-05T23:45:01.870261032Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 23:45:01.879508 containerd[1978]: time="2025-11-05T23:45:01.879185540Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 5 
23:45:01.879508 containerd[1978]: time="2025-11-05T23:45:01.879326708Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 5 23:45:01.879797 kubelet[3516]: E1105 23:45:01.879521 3516 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 5 23:45:01.879797 kubelet[3516]: E1105 23:45:01.879611 3516 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 5 23:45:01.880080 kubelet[3516]: E1105 23:45:01.879813 3516 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9gfk2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-7km8q_calico-system(89a32bf2-ec2a-4f35-b294-b2467c662fb4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 5 23:45:01.882554 kubelet[3516]: E1105 23:45:01.882455 3516 
pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-7km8q" podUID="89a32bf2-ec2a-4f35-b294-b2467c662fb4" Nov 5 23:45:02.220489 kubelet[3516]: E1105 23:45:02.220305 3516 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-7km8q" podUID="89a32bf2-ec2a-4f35-b294-b2467c662fb4" Nov 5 23:45:02.234845 systemd-networkd[1825]: cali0168852603c: Gained IPv6LL Nov 5 23:45:02.320656 systemd-networkd[1825]: cali21a9f3dd1f4: Link UP Nov 5 23:45:02.323989 systemd-networkd[1825]: cali21a9f3dd1f4: Gained carrier Nov 5 23:45:02.397537 containerd[1978]: 2025-11-05 23:45:01.993 [INFO][4989] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--26--188-k8s-calico--kube--controllers--877d4847d--6rhkp-eth0 calico-kube-controllers-877d4847d- calico-system a7fd47be-5341-4035-917c-acf91009ebea 872 0 2025-11-05 23:44:38 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:877d4847d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ip-172-31-26-188 calico-kube-controllers-877d4847d-6rhkp eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali21a9f3dd1f4 [] [] }} ContainerID="db764a3951e4950b2bc6dd0af1726f1058c2c0e433c557bb03a160116143fcbf" Namespace="calico-system" Pod="calico-kube-controllers-877d4847d-6rhkp" WorkloadEndpoint="ip--172--31--26--188-k8s-calico--kube--controllers--877d4847d--6rhkp-" Nov 5 23:45:02.397537 containerd[1978]: 2025-11-05 23:45:01.994 [INFO][4989] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="db764a3951e4950b2bc6dd0af1726f1058c2c0e433c557bb03a160116143fcbf" Namespace="calico-system" Pod="calico-kube-controllers-877d4847d-6rhkp" WorkloadEndpoint="ip--172--31--26--188-k8s-calico--kube--controllers--877d4847d--6rhkp-eth0" Nov 
5 23:45:02.397537 containerd[1978]: 2025-11-05 23:45:02.143 [INFO][5025] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="db764a3951e4950b2bc6dd0af1726f1058c2c0e433c557bb03a160116143fcbf" HandleID="k8s-pod-network.db764a3951e4950b2bc6dd0af1726f1058c2c0e433c557bb03a160116143fcbf" Workload="ip--172--31--26--188-k8s-calico--kube--controllers--877d4847d--6rhkp-eth0" Nov 5 23:45:02.398821 containerd[1978]: 2025-11-05 23:45:02.143 [INFO][5025] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="db764a3951e4950b2bc6dd0af1726f1058c2c0e433c557bb03a160116143fcbf" HandleID="k8s-pod-network.db764a3951e4950b2bc6dd0af1726f1058c2c0e433c557bb03a160116143fcbf" Workload="ip--172--31--26--188-k8s-calico--kube--controllers--877d4847d--6rhkp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002c11e0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-26-188", "pod":"calico-kube-controllers-877d4847d-6rhkp", "timestamp":"2025-11-05 23:45:02.14367945 +0000 UTC"}, Hostname:"ip-172-31-26-188", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 5 23:45:02.398821 containerd[1978]: 2025-11-05 23:45:02.144 [INFO][5025] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 5 23:45:02.398821 containerd[1978]: 2025-11-05 23:45:02.144 [INFO][5025] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 5 23:45:02.398821 containerd[1978]: 2025-11-05 23:45:02.144 [INFO][5025] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-26-188' Nov 5 23:45:02.398821 containerd[1978]: 2025-11-05 23:45:02.192 [INFO][5025] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.db764a3951e4950b2bc6dd0af1726f1058c2c0e433c557bb03a160116143fcbf" host="ip-172-31-26-188" Nov 5 23:45:02.398821 containerd[1978]: 2025-11-05 23:45:02.223 [INFO][5025] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-26-188" Nov 5 23:45:02.398821 containerd[1978]: 2025-11-05 23:45:02.251 [INFO][5025] ipam/ipam.go 511: Trying affinity for 192.168.34.64/26 host="ip-172-31-26-188" Nov 5 23:45:02.398821 containerd[1978]: 2025-11-05 23:45:02.258 [INFO][5025] ipam/ipam.go 158: Attempting to load block cidr=192.168.34.64/26 host="ip-172-31-26-188" Nov 5 23:45:02.398821 containerd[1978]: 2025-11-05 23:45:02.271 [INFO][5025] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.34.64/26 host="ip-172-31-26-188" Nov 5 23:45:02.399302 containerd[1978]: 2025-11-05 23:45:02.272 [INFO][5025] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.34.64/26 handle="k8s-pod-network.db764a3951e4950b2bc6dd0af1726f1058c2c0e433c557bb03a160116143fcbf" host="ip-172-31-26-188" Nov 5 23:45:02.399302 containerd[1978]: 2025-11-05 23:45:02.275 [INFO][5025] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.db764a3951e4950b2bc6dd0af1726f1058c2c0e433c557bb03a160116143fcbf Nov 5 23:45:02.399302 containerd[1978]: 2025-11-05 23:45:02.284 [INFO][5025] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.34.64/26 handle="k8s-pod-network.db764a3951e4950b2bc6dd0af1726f1058c2c0e433c557bb03a160116143fcbf" host="ip-172-31-26-188" Nov 5 23:45:02.399302 containerd[1978]: 2025-11-05 23:45:02.297 [INFO][5025] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.34.67/26] block=192.168.34.64/26 
handle="k8s-pod-network.db764a3951e4950b2bc6dd0af1726f1058c2c0e433c557bb03a160116143fcbf" host="ip-172-31-26-188" Nov 5 23:45:02.399302 containerd[1978]: 2025-11-05 23:45:02.297 [INFO][5025] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.34.67/26] handle="k8s-pod-network.db764a3951e4950b2bc6dd0af1726f1058c2c0e433c557bb03a160116143fcbf" host="ip-172-31-26-188" Nov 5 23:45:02.399302 containerd[1978]: 2025-11-05 23:45:02.297 [INFO][5025] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 5 23:45:02.399302 containerd[1978]: 2025-11-05 23:45:02.298 [INFO][5025] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.34.67/26] IPv6=[] ContainerID="db764a3951e4950b2bc6dd0af1726f1058c2c0e433c557bb03a160116143fcbf" HandleID="k8s-pod-network.db764a3951e4950b2bc6dd0af1726f1058c2c0e433c557bb03a160116143fcbf" Workload="ip--172--31--26--188-k8s-calico--kube--controllers--877d4847d--6rhkp-eth0" Nov 5 23:45:02.399675 containerd[1978]: 2025-11-05 23:45:02.306 [INFO][4989] cni-plugin/k8s.go 418: Populated endpoint ContainerID="db764a3951e4950b2bc6dd0af1726f1058c2c0e433c557bb03a160116143fcbf" Namespace="calico-system" Pod="calico-kube-controllers-877d4847d-6rhkp" WorkloadEndpoint="ip--172--31--26--188-k8s-calico--kube--controllers--877d4847d--6rhkp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--188-k8s-calico--kube--controllers--877d4847d--6rhkp-eth0", GenerateName:"calico-kube-controllers-877d4847d-", Namespace:"calico-system", SelfLink:"", UID:"a7fd47be-5341-4035-917c-acf91009ebea", ResourceVersion:"872", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 23, 44, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"877d4847d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-188", ContainerID:"", Pod:"calico-kube-controllers-877d4847d-6rhkp", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.34.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali21a9f3dd1f4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 23:45:02.399821 containerd[1978]: 2025-11-05 23:45:02.307 [INFO][4989] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.34.67/32] ContainerID="db764a3951e4950b2bc6dd0af1726f1058c2c0e433c557bb03a160116143fcbf" Namespace="calico-system" Pod="calico-kube-controllers-877d4847d-6rhkp" WorkloadEndpoint="ip--172--31--26--188-k8s-calico--kube--controllers--877d4847d--6rhkp-eth0" Nov 5 23:45:02.399821 containerd[1978]: 2025-11-05 23:45:02.307 [INFO][4989] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali21a9f3dd1f4 ContainerID="db764a3951e4950b2bc6dd0af1726f1058c2c0e433c557bb03a160116143fcbf" Namespace="calico-system" Pod="calico-kube-controllers-877d4847d-6rhkp" 
WorkloadEndpoint="ip--172--31--26--188-k8s-calico--kube--controllers--877d4847d--6rhkp-eth0" Nov 5 23:45:02.399821 containerd[1978]: 2025-11-05 23:45:02.325 [INFO][4989] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="db764a3951e4950b2bc6dd0af1726f1058c2c0e433c557bb03a160116143fcbf" Namespace="calico-system" Pod="calico-kube-controllers-877d4847d-6rhkp" WorkloadEndpoint="ip--172--31--26--188-k8s-calico--kube--controllers--877d4847d--6rhkp-eth0" Nov 5 23:45:02.399979 containerd[1978]: 2025-11-05 23:45:02.327 [INFO][4989] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="db764a3951e4950b2bc6dd0af1726f1058c2c0e433c557bb03a160116143fcbf" Namespace="calico-system" Pod="calico-kube-controllers-877d4847d-6rhkp" WorkloadEndpoint="ip--172--31--26--188-k8s-calico--kube--controllers--877d4847d--6rhkp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--188-k8s-calico--kube--controllers--877d4847d--6rhkp-eth0", GenerateName:"calico-kube-controllers-877d4847d-", Namespace:"calico-system", SelfLink:"", UID:"a7fd47be-5341-4035-917c-acf91009ebea", ResourceVersion:"872", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 23, 44, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"877d4847d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-188", ContainerID:"db764a3951e4950b2bc6dd0af1726f1058c2c0e433c557bb03a160116143fcbf", Pod:"calico-kube-controllers-877d4847d-6rhkp", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.34.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali21a9f3dd1f4", MAC:"4e:98:6e:e7:b5:33", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 23:45:02.400102 containerd[1978]: 2025-11-05 23:45:02.393 [INFO][4989] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="db764a3951e4950b2bc6dd0af1726f1058c2c0e433c557bb03a160116143fcbf" Namespace="calico-system" Pod="calico-kube-controllers-877d4847d-6rhkp" WorkloadEndpoint="ip--172--31--26--188-k8s-calico--kube--controllers--877d4847d--6rhkp-eth0" Nov 5 23:45:02.466007 containerd[1978]: time="2025-11-05T23:45:02.465875995Z" level=info msg="connecting to shim db764a3951e4950b2bc6dd0af1726f1058c2c0e433c557bb03a160116143fcbf" address="unix:///run/containerd/s/68e6e793c8b3f99ac712c16d66f957c53decc23a0c5d63449ffb4f4d5e3fa8ea" namespace=k8s.io protocol=ttrpc version=3 Nov 5 23:45:02.549517 systemd[1]: Started sshd@7-172.31.26.188:22-147.75.109.163:37118.service - OpenSSH per-connection server daemon (147.75.109.163:37118). 
Nov 5 23:45:02.606029 systemd[1]: Started cri-containerd-db764a3951e4950b2bc6dd0af1726f1058c2c0e433c557bb03a160116143fcbf.scope - libcontainer container db764a3951e4950b2bc6dd0af1726f1058c2c0e433c557bb03a160116143fcbf. Nov 5 23:45:02.695136 systemd-networkd[1825]: cali2a092cc5fd4: Link UP Nov 5 23:45:02.696320 systemd-networkd[1825]: cali2a092cc5fd4: Gained carrier Nov 5 23:45:02.752090 containerd[1978]: 2025-11-05 23:45:02.018 [INFO][5001] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--26--188-k8s-coredns--674b8bbfcf--48gh4-eth0 coredns-674b8bbfcf- kube-system ff8dd25a-67c7-46ac-bb94-2c7271ca4123 870 0 2025-11-05 23:44:13 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-26-188 coredns-674b8bbfcf-48gh4 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali2a092cc5fd4 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="e9bac6b05dbb6324465e5e167c0789917b6dccc011a6521c7fc05872cf9d2649" Namespace="kube-system" Pod="coredns-674b8bbfcf-48gh4" WorkloadEndpoint="ip--172--31--26--188-k8s-coredns--674b8bbfcf--48gh4-" Nov 5 23:45:02.752090 containerd[1978]: 2025-11-05 23:45:02.019 [INFO][5001] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e9bac6b05dbb6324465e5e167c0789917b6dccc011a6521c7fc05872cf9d2649" Namespace="kube-system" Pod="coredns-674b8bbfcf-48gh4" WorkloadEndpoint="ip--172--31--26--188-k8s-coredns--674b8bbfcf--48gh4-eth0" Nov 5 23:45:02.752090 containerd[1978]: 2025-11-05 23:45:02.149 [INFO][5030] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e9bac6b05dbb6324465e5e167c0789917b6dccc011a6521c7fc05872cf9d2649" HandleID="k8s-pod-network.e9bac6b05dbb6324465e5e167c0789917b6dccc011a6521c7fc05872cf9d2649" Workload="ip--172--31--26--188-k8s-coredns--674b8bbfcf--48gh4-eth0" Nov 5 23:45:02.752783 containerd[1978]: 2025-11-05 23:45:02.149 [INFO][5030] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="e9bac6b05dbb6324465e5e167c0789917b6dccc011a6521c7fc05872cf9d2649" HandleID="k8s-pod-network.e9bac6b05dbb6324465e5e167c0789917b6dccc011a6521c7fc05872cf9d2649" Workload="ip--172--31--26--188-k8s-coredns--674b8bbfcf--48gh4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000367b50), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-26-188", "pod":"coredns-674b8bbfcf-48gh4", "timestamp":"2025-11-05 23:45:02.149019006 +0000 UTC"}, Hostname:"ip-172-31-26-188", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 5 23:45:02.752783 containerd[1978]: 2025-11-05 23:45:02.149 [INFO][5030] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 5 23:45:02.752783 containerd[1978]: 2025-11-05 23:45:02.297 [INFO][5030] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 5 23:45:02.752783 containerd[1978]: 2025-11-05 23:45:02.299 [INFO][5030] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-26-188' Nov 5 23:45:02.752783 containerd[1978]: 2025-11-05 23:45:02.389 [INFO][5030] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e9bac6b05dbb6324465e5e167c0789917b6dccc011a6521c7fc05872cf9d2649" host="ip-172-31-26-188" Nov 5 23:45:02.752783 containerd[1978]: 2025-11-05 23:45:02.475 [INFO][5030] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-26-188" Nov 5 23:45:02.752783 containerd[1978]: 2025-11-05 23:45:02.511 [INFO][5030] ipam/ipam.go 511: Trying affinity for 192.168.34.64/26 host="ip-172-31-26-188" Nov 5 23:45:02.752783 containerd[1978]: 2025-11-05 23:45:02.520 [INFO][5030] ipam/ipam.go 158: Attempting to load block cidr=192.168.34.64/26 host="ip-172-31-26-188" Nov 5 23:45:02.752783 containerd[1978]: 2025-11-05 23:45:02.535 [INFO][5030] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.34.64/26 host="ip-172-31-26-188" Nov 5 23:45:02.755152 containerd[1978]: 2025-11-05 23:45:02.535 [INFO][5030] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.34.64/26 handle="k8s-pod-network.e9bac6b05dbb6324465e5e167c0789917b6dccc011a6521c7fc05872cf9d2649" host="ip-172-31-26-188" Nov 5 23:45:02.755152 containerd[1978]: 2025-11-05 23:45:02.558 [INFO][5030] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.e9bac6b05dbb6324465e5e167c0789917b6dccc011a6521c7fc05872cf9d2649 Nov 5 23:45:02.755152 containerd[1978]: 2025-11-05 23:45:02.575 [INFO][5030] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.34.64/26 handle="k8s-pod-network.e9bac6b05dbb6324465e5e167c0789917b6dccc011a6521c7fc05872cf9d2649" host="ip-172-31-26-188" Nov 5 23:45:02.755152 containerd[1978]: 2025-11-05 23:45:02.605 [INFO][5030] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.34.68/26] block=192.168.34.64/26 handle="k8s-pod-network.e9bac6b05dbb6324465e5e167c0789917b6dccc011a6521c7fc05872cf9d2649" host="ip-172-31-26-188" Nov 5 23:45:02.755152 containerd[1978]: 2025-11-05 23:45:02.610 [INFO][5030] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.34.68/26] handle="k8s-pod-network.e9bac6b05dbb6324465e5e167c0789917b6dccc011a6521c7fc05872cf9d2649" host="ip-172-31-26-188" Nov 5 23:45:02.755152 containerd[1978]: 2025-11-05 23:45:02.613 [INFO][5030] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
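With this assignment the node's block 192.168.34.64/26 has handed out .65 through .68 (whisker, csi-node-driver, calico-kube-controllers, and this coredns pod). A quick stdlib tally of block usage, purely illustrative:

import ipaddress

block = ipaddress.ip_network("192.168.34.64/26")
assigned = [ipaddress.ip_address(f"192.168.34.{last}") for last in (65, 66, 67, 68)]

usable = list(block.hosts())   # 62 usable host addresses in a /26
print(f"{len(assigned)}/{len(usable)} used, {len(usable) - len(assigned)} free")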
Nov 5 23:45:02.755152 containerd[1978]: 2025-11-05 23:45:02.616 [INFO][5030] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.34.68/26] IPv6=[] ContainerID="e9bac6b05dbb6324465e5e167c0789917b6dccc011a6521c7fc05872cf9d2649" HandleID="k8s-pod-network.e9bac6b05dbb6324465e5e167c0789917b6dccc011a6521c7fc05872cf9d2649" Workload="ip--172--31--26--188-k8s-coredns--674b8bbfcf--48gh4-eth0" Nov 5 23:45:02.756785 containerd[1978]: 2025-11-05 23:45:02.630 [INFO][5001] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e9bac6b05dbb6324465e5e167c0789917b6dccc011a6521c7fc05872cf9d2649" Namespace="kube-system" Pod="coredns-674b8bbfcf-48gh4" WorkloadEndpoint="ip--172--31--26--188-k8s-coredns--674b8bbfcf--48gh4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--188-k8s-coredns--674b8bbfcf--48gh4-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"ff8dd25a-67c7-46ac-bb94-2c7271ca4123", ResourceVersion:"870", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 23, 44, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-188", ContainerID:"", Pod:"coredns-674b8bbfcf-48gh4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.34.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2a092cc5fd4", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 23:45:02.756785 containerd[1978]: 2025-11-05 23:45:02.632 [INFO][5001] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.34.68/32] ContainerID="e9bac6b05dbb6324465e5e167c0789917b6dccc011a6521c7fc05872cf9d2649" Namespace="kube-system" Pod="coredns-674b8bbfcf-48gh4" WorkloadEndpoint="ip--172--31--26--188-k8s-coredns--674b8bbfcf--48gh4-eth0" Nov 5 23:45:02.756785 containerd[1978]: 2025-11-05 23:45:02.637 [INFO][5001] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2a092cc5fd4 ContainerID="e9bac6b05dbb6324465e5e167c0789917b6dccc011a6521c7fc05872cf9d2649" Namespace="kube-system" Pod="coredns-674b8bbfcf-48gh4" WorkloadEndpoint="ip--172--31--26--188-k8s-coredns--674b8bbfcf--48gh4-eth0" Nov 5 23:45:02.756785 containerd[1978]: 2025-11-05 23:45:02.706 [INFO][5001] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e9bac6b05dbb6324465e5e167c0789917b6dccc011a6521c7fc05872cf9d2649" Namespace="kube-system" Pod="coredns-674b8bbfcf-48gh4" 
WorkloadEndpoint="ip--172--31--26--188-k8s-coredns--674b8bbfcf--48gh4-eth0" Nov 5 23:45:02.756785 containerd[1978]: 2025-11-05 23:45:02.707 [INFO][5001] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e9bac6b05dbb6324465e5e167c0789917b6dccc011a6521c7fc05872cf9d2649" Namespace="kube-system" Pod="coredns-674b8bbfcf-48gh4" WorkloadEndpoint="ip--172--31--26--188-k8s-coredns--674b8bbfcf--48gh4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--188-k8s-coredns--674b8bbfcf--48gh4-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"ff8dd25a-67c7-46ac-bb94-2c7271ca4123", ResourceVersion:"870", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 23, 44, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-188", ContainerID:"e9bac6b05dbb6324465e5e167c0789917b6dccc011a6521c7fc05872cf9d2649", Pod:"coredns-674b8bbfcf-48gh4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.34.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2a092cc5fd4", MAC:"e6:0f:fa:fb:0a:dd", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 23:45:02.756785 containerd[1978]: 2025-11-05 23:45:02.747 [INFO][5001] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e9bac6b05dbb6324465e5e167c0789917b6dccc011a6521c7fc05872cf9d2649" Namespace="kube-system" Pod="coredns-674b8bbfcf-48gh4" WorkloadEndpoint="ip--172--31--26--188-k8s-coredns--674b8bbfcf--48gh4-eth0" Nov 5 23:45:02.847684 containerd[1978]: time="2025-11-05T23:45:02.847133157Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-759f658d45-5cwjh,Uid:07a15442-dee2-4408-9286-ad45a221772c,Namespace:calico-apiserver,Attempt:0,}" Nov 5 23:45:02.850559 sshd[5085]: Accepted publickey for core from 147.75.109.163 port 37118 ssh2: RSA SHA256:RMubiTTBDsRj6wMnaDgb94uFku34NWf7aFjJruQworw Nov 5 23:45:02.856344 containerd[1978]: time="2025-11-05T23:45:02.855935325Z" level=info msg="connecting to shim e9bac6b05dbb6324465e5e167c0789917b6dccc011a6521c7fc05872cf9d2649" address="unix:///run/containerd/s/4927b233f3a565d2188f77c060762f8682291db3eafd37d168f878e95239770a" namespace=k8s.io protocol=ttrpc version=3 Nov 5 23:45:02.862049 sshd-session[5085]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 23:45:02.899025 systemd-logind[1881]: New 
session 8 of user core. Nov 5 23:45:02.903883 systemd[1]: Started session-8.scope - Session 8 of User core. Nov 5 23:45:02.916186 systemd-networkd[1825]: cali77a332e4ccb: Link UP Nov 5 23:45:02.923815 systemd-networkd[1825]: cali77a332e4ccb: Gained carrier Nov 5 23:45:03.013896 containerd[1978]: 2025-11-05 23:45:02.027 [INFO][5000] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--26--188-k8s-coredns--674b8bbfcf--s8282-eth0 coredns-674b8bbfcf- kube-system 8f56a730-0864-408e-a1fd-84792cfa18c7 871 0 2025-11-05 23:44:13 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-26-188 coredns-674b8bbfcf-s8282 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali77a332e4ccb [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="3e2f7a77ac118ca5c906ce54c6c6562850c59159503b0d9ae0051e638acaab13" Namespace="kube-system" Pod="coredns-674b8bbfcf-s8282" WorkloadEndpoint="ip--172--31--26--188-k8s-coredns--674b8bbfcf--s8282-" Nov 5 23:45:03.013896 containerd[1978]: 2025-11-05 23:45:02.029 [INFO][5000] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3e2f7a77ac118ca5c906ce54c6c6562850c59159503b0d9ae0051e638acaab13" Namespace="kube-system" Pod="coredns-674b8bbfcf-s8282" WorkloadEndpoint="ip--172--31--26--188-k8s-coredns--674b8bbfcf--s8282-eth0" Nov 5 23:45:03.013896 containerd[1978]: 2025-11-05 23:45:02.177 [INFO][5036] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3e2f7a77ac118ca5c906ce54c6c6562850c59159503b0d9ae0051e638acaab13" HandleID="k8s-pod-network.3e2f7a77ac118ca5c906ce54c6c6562850c59159503b0d9ae0051e638acaab13" Workload="ip--172--31--26--188-k8s-coredns--674b8bbfcf--s8282-eth0" Nov 5 23:45:03.013896 containerd[1978]: 2025-11-05 23:45:02.179 [INFO][5036] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="3e2f7a77ac118ca5c906ce54c6c6562850c59159503b0d9ae0051e638acaab13" HandleID="k8s-pod-network.3e2f7a77ac118ca5c906ce54c6c6562850c59159503b0d9ae0051e638acaab13" Workload="ip--172--31--26--188-k8s-coredns--674b8bbfcf--s8282-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000331400), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-26-188", "pod":"coredns-674b8bbfcf-s8282", "timestamp":"2025-11-05 23:45:02.177611814 +0000 UTC"}, Hostname:"ip-172-31-26-188", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 5 23:45:03.013896 containerd[1978]: 2025-11-05 23:45:02.180 [INFO][5036] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 5 23:45:03.013896 containerd[1978]: 2025-11-05 23:45:02.616 [INFO][5036] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 5 23:45:03.013896 containerd[1978]: 2025-11-05 23:45:02.619 [INFO][5036] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-26-188' Nov 5 23:45:03.013896 containerd[1978]: 2025-11-05 23:45:02.688 [INFO][5036] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3e2f7a77ac118ca5c906ce54c6c6562850c59159503b0d9ae0051e638acaab13" host="ip-172-31-26-188" Nov 5 23:45:03.013896 containerd[1978]: 2025-11-05 23:45:02.727 [INFO][5036] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-26-188" Nov 5 23:45:03.013896 containerd[1978]: 2025-11-05 23:45:02.762 [INFO][5036] ipam/ipam.go 511: Trying affinity for 192.168.34.64/26 host="ip-172-31-26-188" Nov 5 23:45:03.013896 containerd[1978]: 2025-11-05 23:45:02.770 [INFO][5036] ipam/ipam.go 158: Attempting to load block cidr=192.168.34.64/26 host="ip-172-31-26-188" Nov 5 23:45:03.013896 containerd[1978]: 2025-11-05 23:45:02.781 [INFO][5036] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.34.64/26 host="ip-172-31-26-188" Nov 5 23:45:03.013896 containerd[1978]: 2025-11-05 23:45:02.781 [INFO][5036] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.34.64/26 handle="k8s-pod-network.3e2f7a77ac118ca5c906ce54c6c6562850c59159503b0d9ae0051e638acaab13" host="ip-172-31-26-188" Nov 5 23:45:03.013896 containerd[1978]: 2025-11-05 23:45:02.785 [INFO][5036] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.3e2f7a77ac118ca5c906ce54c6c6562850c59159503b0d9ae0051e638acaab13 Nov 5 23:45:03.013896 containerd[1978]: 2025-11-05 23:45:02.807 [INFO][5036] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.34.64/26 handle="k8s-pod-network.3e2f7a77ac118ca5c906ce54c6c6562850c59159503b0d9ae0051e638acaab13" host="ip-172-31-26-188" Nov 5 23:45:03.013896 containerd[1978]: 2025-11-05 23:45:02.832 [INFO][5036] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.34.69/26] block=192.168.34.64/26 handle="k8s-pod-network.3e2f7a77ac118ca5c906ce54c6c6562850c59159503b0d9ae0051e638acaab13" host="ip-172-31-26-188" Nov 5 23:45:03.013896 containerd[1978]: 2025-11-05 23:45:02.834 [INFO][5036] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.34.69/26] handle="k8s-pod-network.3e2f7a77ac118ca5c906ce54c6c6562850c59159503b0d9ae0051e638acaab13" host="ip-172-31-26-188" Nov 5 23:45:03.013896 containerd[1978]: 2025-11-05 23:45:02.835 [INFO][5036] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 5 23:45:03.013896 containerd[1978]: 2025-11-05 23:45:02.835 [INFO][5036] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.34.69/26] IPv6=[] ContainerID="3e2f7a77ac118ca5c906ce54c6c6562850c59159503b0d9ae0051e638acaab13" HandleID="k8s-pod-network.3e2f7a77ac118ca5c906ce54c6c6562850c59159503b0d9ae0051e638acaab13" Workload="ip--172--31--26--188-k8s-coredns--674b8bbfcf--s8282-eth0" Nov 5 23:45:03.014985 containerd[1978]: 2025-11-05 23:45:02.856 [INFO][5000] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3e2f7a77ac118ca5c906ce54c6c6562850c59159503b0d9ae0051e638acaab13" Namespace="kube-system" Pod="coredns-674b8bbfcf-s8282" WorkloadEndpoint="ip--172--31--26--188-k8s-coredns--674b8bbfcf--s8282-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--188-k8s-coredns--674b8bbfcf--s8282-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"8f56a730-0864-408e-a1fd-84792cfa18c7", ResourceVersion:"871", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 23, 44, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-188", ContainerID:"", Pod:"coredns-674b8bbfcf-s8282", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.34.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali77a332e4ccb", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 23:45:03.014985 containerd[1978]: 2025-11-05 23:45:02.863 [INFO][5000] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.34.69/32] ContainerID="3e2f7a77ac118ca5c906ce54c6c6562850c59159503b0d9ae0051e638acaab13" Namespace="kube-system" Pod="coredns-674b8bbfcf-s8282" WorkloadEndpoint="ip--172--31--26--188-k8s-coredns--674b8bbfcf--s8282-eth0" Nov 5 23:45:03.014985 containerd[1978]: 2025-11-05 23:45:02.866 [INFO][5000] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali77a332e4ccb ContainerID="3e2f7a77ac118ca5c906ce54c6c6562850c59159503b0d9ae0051e638acaab13" Namespace="kube-system" Pod="coredns-674b8bbfcf-s8282" WorkloadEndpoint="ip--172--31--26--188-k8s-coredns--674b8bbfcf--s8282-eth0" Nov 5 23:45:03.014985 containerd[1978]: 2025-11-05 23:45:02.930 [INFO][5000] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3e2f7a77ac118ca5c906ce54c6c6562850c59159503b0d9ae0051e638acaab13" Namespace="kube-system" Pod="coredns-674b8bbfcf-s8282" 
WorkloadEndpoint="ip--172--31--26--188-k8s-coredns--674b8bbfcf--s8282-eth0" Nov 5 23:45:03.014985 containerd[1978]: 2025-11-05 23:45:02.940 [INFO][5000] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3e2f7a77ac118ca5c906ce54c6c6562850c59159503b0d9ae0051e638acaab13" Namespace="kube-system" Pod="coredns-674b8bbfcf-s8282" WorkloadEndpoint="ip--172--31--26--188-k8s-coredns--674b8bbfcf--s8282-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--188-k8s-coredns--674b8bbfcf--s8282-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"8f56a730-0864-408e-a1fd-84792cfa18c7", ResourceVersion:"871", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 23, 44, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-188", ContainerID:"3e2f7a77ac118ca5c906ce54c6c6562850c59159503b0d9ae0051e638acaab13", Pod:"coredns-674b8bbfcf-s8282", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.34.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali77a332e4ccb", MAC:"86:18:e5:8b:d9:8b", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 23:45:03.014985 containerd[1978]: 2025-11-05 23:45:02.993 [INFO][5000] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3e2f7a77ac118ca5c906ce54c6c6562850c59159503b0d9ae0051e638acaab13" Namespace="kube-system" Pod="coredns-674b8bbfcf-s8282" WorkloadEndpoint="ip--172--31--26--188-k8s-coredns--674b8bbfcf--s8282-eth0" Nov 5 23:45:03.042925 systemd[1]: Started cri-containerd-e9bac6b05dbb6324465e5e167c0789917b6dccc011a6521c7fc05872cf9d2649.scope - libcontainer container e9bac6b05dbb6324465e5e167c0789917b6dccc011a6521c7fc05872cf9d2649. 
Nov 5 23:45:03.217822 containerd[1978]: time="2025-11-05T23:45:03.216963739Z" level=info msg="connecting to shim 3e2f7a77ac118ca5c906ce54c6c6562850c59159503b0d9ae0051e638acaab13" address="unix:///run/containerd/s/97529ba7e8de0e71cf4ae95af4db2516ce6b23de9916cb4141a67d3e71d2c049" namespace=k8s.io protocol=ttrpc version=3 Nov 5 23:45:03.273654 kubelet[3516]: E1105 23:45:03.272151 3516 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-7km8q" podUID="89a32bf2-ec2a-4f35-b294-b2467c662fb4" Nov 5 23:45:03.409371 systemd[1]: Started cri-containerd-3e2f7a77ac118ca5c906ce54c6c6562850c59159503b0d9ae0051e638acaab13.scope - libcontainer container 3e2f7a77ac118ca5c906ce54c6c6562850c59159503b0d9ae0051e638acaab13. Nov 5 23:45:03.545110 sshd[5133]: Connection closed by 147.75.109.163 port 37118 Nov 5 23:45:03.546874 sshd-session[5085]: pam_unix(sshd:session): session closed for user core Nov 5 23:45:03.565070 systemd[1]: sshd@7-172.31.26.188:22-147.75.109.163:37118.service: Deactivated successfully. Nov 5 23:45:03.577039 systemd[1]: session-8.scope: Deactivated successfully. Nov 5 23:45:03.583347 systemd-logind[1881]: Session 8 logged out. Waiting for processes to exit. Nov 5 23:45:03.587543 systemd-logind[1881]: Removed session 8. 
Nov 5 23:45:03.627009 containerd[1978]: time="2025-11-05T23:45:03.626838729Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-877d4847d-6rhkp,Uid:a7fd47be-5341-4035-917c-acf91009ebea,Namespace:calico-system,Attempt:0,} returns sandbox id \"db764a3951e4950b2bc6dd0af1726f1058c2c0e433c557bb03a160116143fcbf\"" Nov 5 23:45:03.644233 containerd[1978]: time="2025-11-05T23:45:03.644160777Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 5 23:45:03.646335 containerd[1978]: time="2025-11-05T23:45:03.645950397Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-48gh4,Uid:ff8dd25a-67c7-46ac-bb94-2c7271ca4123,Namespace:kube-system,Attempt:0,} returns sandbox id \"e9bac6b05dbb6324465e5e167c0789917b6dccc011a6521c7fc05872cf9d2649\"" Nov 5 23:45:03.673287 containerd[1978]: time="2025-11-05T23:45:03.673233069Z" level=info msg="CreateContainer within sandbox \"e9bac6b05dbb6324465e5e167c0789917b6dccc011a6521c7fc05872cf9d2649\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 5 23:45:03.728168 containerd[1978]: time="2025-11-05T23:45:03.727951642Z" level=info msg="Container 0b819f896d3705eb9def51fd993cdd0d70eabbd1952cbfd83a5d83ba3b730ca7: CDI devices from CRI Config.CDIDevices: []" Nov 5 23:45:03.757457 containerd[1978]: time="2025-11-05T23:45:03.757384954Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-s8282,Uid:8f56a730-0864-408e-a1fd-84792cfa18c7,Namespace:kube-system,Attempt:0,} returns sandbox id \"3e2f7a77ac118ca5c906ce54c6c6562850c59159503b0d9ae0051e638acaab13\"" Nov 5 23:45:03.760541 containerd[1978]: time="2025-11-05T23:45:03.760470166Z" level=info msg="CreateContainer within sandbox \"e9bac6b05dbb6324465e5e167c0789917b6dccc011a6521c7fc05872cf9d2649\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"0b819f896d3705eb9def51fd993cdd0d70eabbd1952cbfd83a5d83ba3b730ca7\"" Nov 5 23:45:03.764896 containerd[1978]: time="2025-11-05T23:45:03.764828590Z" level=info msg="StartContainer for \"0b819f896d3705eb9def51fd993cdd0d70eabbd1952cbfd83a5d83ba3b730ca7\"" Nov 5 23:45:03.781158 containerd[1978]: time="2025-11-05T23:45:03.781020358Z" level=info msg="connecting to shim 0b819f896d3705eb9def51fd993cdd0d70eabbd1952cbfd83a5d83ba3b730ca7" address="unix:///run/containerd/s/4927b233f3a565d2188f77c060762f8682291db3eafd37d168f878e95239770a" protocol=ttrpc version=3 Nov 5 23:45:03.785913 containerd[1978]: time="2025-11-05T23:45:03.785829982Z" level=info msg="CreateContainer within sandbox \"3e2f7a77ac118ca5c906ce54c6c6562850c59159503b0d9ae0051e638acaab13\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 5 23:45:03.832264 containerd[1978]: time="2025-11-05T23:45:03.831490318Z" level=info msg="Container 2bafab48070bd42fa761780380960a0ca460da80407be0f3f90dbf357704bee4: CDI devices from CRI Config.CDIDevices: []" Nov 5 23:45:03.834189 containerd[1978]: time="2025-11-05T23:45:03.834053026Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-v2m4t,Uid:03f18444-9872-4d73-bb60-c66c73cdfaff,Namespace:calico-system,Attempt:0,}" Nov 5 23:45:03.839424 containerd[1978]: time="2025-11-05T23:45:03.839059798Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-759f658d45-zzvbq,Uid:76aeec5c-7e05-490c-a0a0-b95d9945b382,Namespace:calico-apiserver,Attempt:0,}" Nov 5 23:45:03.869053 systemd[1]: Started cri-containerd-0b819f896d3705eb9def51fd993cdd0d70eabbd1952cbfd83a5d83ba3b730ca7.scope - libcontainer container 
0b819f896d3705eb9def51fd993cdd0d70eabbd1952cbfd83a5d83ba3b730ca7. Nov 5 23:45:03.882849 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount639381352.mount: Deactivated successfully. Nov 5 23:45:03.906619 containerd[1978]: time="2025-11-05T23:45:03.905992054Z" level=info msg="CreateContainer within sandbox \"3e2f7a77ac118ca5c906ce54c6c6562850c59159503b0d9ae0051e638acaab13\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"2bafab48070bd42fa761780380960a0ca460da80407be0f3f90dbf357704bee4\"" Nov 5 23:45:03.916616 containerd[1978]: time="2025-11-05T23:45:03.915205019Z" level=info msg="StartContainer for \"2bafab48070bd42fa761780380960a0ca460da80407be0f3f90dbf357704bee4\"" Nov 5 23:45:03.924657 containerd[1978]: time="2025-11-05T23:45:03.924528731Z" level=info msg="connecting to shim 2bafab48070bd42fa761780380960a0ca460da80407be0f3f90dbf357704bee4" address="unix:///run/containerd/s/97529ba7e8de0e71cf4ae95af4db2516ce6b23de9916cb4141a67d3e71d2c049" protocol=ttrpc version=3 Nov 5 23:45:03.970642 containerd[1978]: time="2025-11-05T23:45:03.970550051Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 23:45:03.973274 containerd[1978]: time="2025-11-05T23:45:03.973206935Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 5 23:45:03.974181 kubelet[3516]: E1105 23:45:03.973977 3516 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 5 23:45:03.974540 kubelet[3516]: E1105 23:45:03.974405 3516 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 5 23:45:03.975959 containerd[1978]: time="2025-11-05T23:45:03.974786579Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 5 23:45:03.976092 kubelet[3516]: E1105 23:45:03.975153 3516 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qb8qb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-877d4847d-6rhkp_calico-system(a7fd47be-5341-4035-917c-acf91009ebea): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 5 23:45:03.977616 kubelet[3516]: E1105 23:45:03.977492 3516 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-877d4847d-6rhkp" podUID="a7fd47be-5341-4035-917c-acf91009ebea" Nov 5 23:45:04.031709 systemd-networkd[1825]: calib6fed92f1ee: Link UP Nov 5 23:45:04.038199 
systemd-networkd[1825]: calib6fed92f1ee: Gained carrier Nov 5 23:45:04.092715 systemd-networkd[1825]: cali21a9f3dd1f4: Gained IPv6LL Nov 5 23:45:04.158532 containerd[1978]: 2025-11-05 23:45:03.374 [INFO][5120] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--26--188-k8s-calico--apiserver--759f658d45--5cwjh-eth0 calico-apiserver-759f658d45- calico-apiserver 07a15442-dee2-4408-9286-ad45a221772c 874 0 2025-11-05 23:44:25 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:759f658d45 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-26-188 calico-apiserver-759f658d45-5cwjh eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calib6fed92f1ee [] [] }} ContainerID="305b5fe4a120b9c7b650ca2725f2885c08912724fd1c78cfa4adad9ab05b63c3" Namespace="calico-apiserver" Pod="calico-apiserver-759f658d45-5cwjh" WorkloadEndpoint="ip--172--31--26--188-k8s-calico--apiserver--759f658d45--5cwjh-" Nov 5 23:45:04.158532 containerd[1978]: 2025-11-05 23:45:03.375 [INFO][5120] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="305b5fe4a120b9c7b650ca2725f2885c08912724fd1c78cfa4adad9ab05b63c3" Namespace="calico-apiserver" Pod="calico-apiserver-759f658d45-5cwjh" WorkloadEndpoint="ip--172--31--26--188-k8s-calico--apiserver--759f658d45--5cwjh-eth0" Nov 5 23:45:04.158532 containerd[1978]: 2025-11-05 23:45:03.704 [INFO][5214] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="305b5fe4a120b9c7b650ca2725f2885c08912724fd1c78cfa4adad9ab05b63c3" HandleID="k8s-pod-network.305b5fe4a120b9c7b650ca2725f2885c08912724fd1c78cfa4adad9ab05b63c3" Workload="ip--172--31--26--188-k8s-calico--apiserver--759f658d45--5cwjh-eth0" Nov 5 23:45:04.158532 containerd[1978]: 2025-11-05 23:45:03.705 [INFO][5214] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="305b5fe4a120b9c7b650ca2725f2885c08912724fd1c78cfa4adad9ab05b63c3" HandleID="k8s-pod-network.305b5fe4a120b9c7b650ca2725f2885c08912724fd1c78cfa4adad9ab05b63c3" Workload="ip--172--31--26--188-k8s-calico--apiserver--759f658d45--5cwjh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40001215b0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-26-188", "pod":"calico-apiserver-759f658d45-5cwjh", "timestamp":"2025-11-05 23:45:03.704189193 +0000 UTC"}, Hostname:"ip-172-31-26-188", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 5 23:45:04.158532 containerd[1978]: 2025-11-05 23:45:03.705 [INFO][5214] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 5 23:45:04.158532 containerd[1978]: 2025-11-05 23:45:03.705 [INFO][5214] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 5 23:45:04.158532 containerd[1978]: 2025-11-05 23:45:03.705 [INFO][5214] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-26-188' Nov 5 23:45:04.158532 containerd[1978]: 2025-11-05 23:45:03.753 [INFO][5214] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.305b5fe4a120b9c7b650ca2725f2885c08912724fd1c78cfa4adad9ab05b63c3" host="ip-172-31-26-188" Nov 5 23:45:04.158532 containerd[1978]: 2025-11-05 23:45:03.787 [INFO][5214] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-26-188" Nov 5 23:45:04.158532 containerd[1978]: 2025-11-05 23:45:03.810 [INFO][5214] ipam/ipam.go 511: Trying affinity for 192.168.34.64/26 host="ip-172-31-26-188" Nov 5 23:45:04.158532 containerd[1978]: 2025-11-05 23:45:03.822 [INFO][5214] ipam/ipam.go 158: Attempting to load block cidr=192.168.34.64/26 host="ip-172-31-26-188" Nov 5 23:45:04.158532 containerd[1978]: 2025-11-05 23:45:03.827 [INFO][5214] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.34.64/26 host="ip-172-31-26-188" Nov 5 23:45:04.158532 containerd[1978]: 2025-11-05 23:45:03.828 [INFO][5214] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.34.64/26 handle="k8s-pod-network.305b5fe4a120b9c7b650ca2725f2885c08912724fd1c78cfa4adad9ab05b63c3" host="ip-172-31-26-188" Nov 5 23:45:04.158532 containerd[1978]: 2025-11-05 23:45:03.836 [INFO][5214] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.305b5fe4a120b9c7b650ca2725f2885c08912724fd1c78cfa4adad9ab05b63c3 Nov 5 23:45:04.158532 containerd[1978]: 2025-11-05 23:45:03.882 [INFO][5214] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.34.64/26 handle="k8s-pod-network.305b5fe4a120b9c7b650ca2725f2885c08912724fd1c78cfa4adad9ab05b63c3" host="ip-172-31-26-188" Nov 5 23:45:04.158532 containerd[1978]: 2025-11-05 23:45:03.945 [INFO][5214] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.34.70/26] block=192.168.34.64/26 handle="k8s-pod-network.305b5fe4a120b9c7b650ca2725f2885c08912724fd1c78cfa4adad9ab05b63c3" host="ip-172-31-26-188" Nov 5 23:45:04.158532 containerd[1978]: 2025-11-05 23:45:03.948 [INFO][5214] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.34.70/26] handle="k8s-pod-network.305b5fe4a120b9c7b650ca2725f2885c08912724fd1c78cfa4adad9ab05b63c3" host="ip-172-31-26-188" Nov 5 23:45:04.158532 containerd[1978]: 2025-11-05 23:45:03.952 [INFO][5214] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 5 23:45:04.158532 containerd[1978]: 2025-11-05 23:45:03.952 [INFO][5214] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.34.70/26] IPv6=[] ContainerID="305b5fe4a120b9c7b650ca2725f2885c08912724fd1c78cfa4adad9ab05b63c3" HandleID="k8s-pod-network.305b5fe4a120b9c7b650ca2725f2885c08912724fd1c78cfa4adad9ab05b63c3" Workload="ip--172--31--26--188-k8s-calico--apiserver--759f658d45--5cwjh-eth0" Nov 5 23:45:04.160679 containerd[1978]: 2025-11-05 23:45:03.999 [INFO][5120] cni-plugin/k8s.go 418: Populated endpoint ContainerID="305b5fe4a120b9c7b650ca2725f2885c08912724fd1c78cfa4adad9ab05b63c3" Namespace="calico-apiserver" Pod="calico-apiserver-759f658d45-5cwjh" WorkloadEndpoint="ip--172--31--26--188-k8s-calico--apiserver--759f658d45--5cwjh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--188-k8s-calico--apiserver--759f658d45--5cwjh-eth0", GenerateName:"calico-apiserver-759f658d45-", Namespace:"calico-apiserver", SelfLink:"", UID:"07a15442-dee2-4408-9286-ad45a221772c", ResourceVersion:"874", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 23, 44, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"759f658d45", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-188", ContainerID:"", Pod:"calico-apiserver-759f658d45-5cwjh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.34.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib6fed92f1ee", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 23:45:04.160679 containerd[1978]: 2025-11-05 23:45:04.003 [INFO][5120] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.34.70/32] ContainerID="305b5fe4a120b9c7b650ca2725f2885c08912724fd1c78cfa4adad9ab05b63c3" Namespace="calico-apiserver" Pod="calico-apiserver-759f658d45-5cwjh" WorkloadEndpoint="ip--172--31--26--188-k8s-calico--apiserver--759f658d45--5cwjh-eth0" Nov 5 23:45:04.160679 containerd[1978]: 2025-11-05 23:45:04.004 [INFO][5120] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib6fed92f1ee ContainerID="305b5fe4a120b9c7b650ca2725f2885c08912724fd1c78cfa4adad9ab05b63c3" Namespace="calico-apiserver" Pod="calico-apiserver-759f658d45-5cwjh" WorkloadEndpoint="ip--172--31--26--188-k8s-calico--apiserver--759f658d45--5cwjh-eth0" Nov 5 23:45:04.160679 containerd[1978]: 2025-11-05 23:45:04.044 [INFO][5120] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="305b5fe4a120b9c7b650ca2725f2885c08912724fd1c78cfa4adad9ab05b63c3" Namespace="calico-apiserver" Pod="calico-apiserver-759f658d45-5cwjh" WorkloadEndpoint="ip--172--31--26--188-k8s-calico--apiserver--759f658d45--5cwjh-eth0" Nov 5 23:45:04.160679 containerd[1978]: 2025-11-05 23:45:04.060 [INFO][5120] cni-plugin/k8s.go 446: Added Mac, interface name, 
and active container ID to endpoint ContainerID="305b5fe4a120b9c7b650ca2725f2885c08912724fd1c78cfa4adad9ab05b63c3" Namespace="calico-apiserver" Pod="calico-apiserver-759f658d45-5cwjh" WorkloadEndpoint="ip--172--31--26--188-k8s-calico--apiserver--759f658d45--5cwjh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--188-k8s-calico--apiserver--759f658d45--5cwjh-eth0", GenerateName:"calico-apiserver-759f658d45-", Namespace:"calico-apiserver", SelfLink:"", UID:"07a15442-dee2-4408-9286-ad45a221772c", ResourceVersion:"874", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 23, 44, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"759f658d45", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-188", ContainerID:"305b5fe4a120b9c7b650ca2725f2885c08912724fd1c78cfa4adad9ab05b63c3", Pod:"calico-apiserver-759f658d45-5cwjh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.34.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib6fed92f1ee", MAC:"1a:36:68:a8:f1:76", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 23:45:04.160679 containerd[1978]: 2025-11-05 23:45:04.118 [INFO][5120] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="305b5fe4a120b9c7b650ca2725f2885c08912724fd1c78cfa4adad9ab05b63c3" Namespace="calico-apiserver" Pod="calico-apiserver-759f658d45-5cwjh" WorkloadEndpoint="ip--172--31--26--188-k8s-calico--apiserver--759f658d45--5cwjh-eth0" Nov 5 23:45:04.168173 containerd[1978]: time="2025-11-05T23:45:04.167877008Z" level=info msg="StartContainer for \"0b819f896d3705eb9def51fd993cdd0d70eabbd1952cbfd83a5d83ba3b730ca7\" returns successfully" Nov 5 23:45:04.190947 systemd[1]: Started cri-containerd-2bafab48070bd42fa761780380960a0ca460da80407be0f3f90dbf357704bee4.scope - libcontainer container 2bafab48070bd42fa761780380960a0ca460da80407be0f3f90dbf357704bee4. 
Nov 5 23:45:04.218933 systemd-networkd[1825]: cali2a092cc5fd4: Gained IPv6LL Nov 5 23:45:04.272981 kubelet[3516]: E1105 23:45:04.272893 3516 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-877d4847d-6rhkp" podUID="a7fd47be-5341-4035-917c-acf91009ebea" Nov 5 23:45:04.300848 containerd[1978]: time="2025-11-05T23:45:04.300546344Z" level=info msg="connecting to shim 305b5fe4a120b9c7b650ca2725f2885c08912724fd1c78cfa4adad9ab05b63c3" address="unix:///run/containerd/s/459698e0f53ed5d91ab05f299c796522180de260ad55f76f351e5cdd7468fe85" namespace=k8s.io protocol=ttrpc version=3 Nov 5 23:45:04.447468 kubelet[3516]: I1105 23:45:04.447270 3516 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-48gh4" podStartSLOduration=51.447247677 podStartE2EDuration="51.447247677s" podCreationTimestamp="2025-11-05 23:44:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-05 23:45:04.402883401 +0000 UTC m=+55.949432091" watchObservedRunningTime="2025-11-05 23:45:04.447247677 +0000 UTC m=+55.993796367" Nov 5 23:45:04.498953 systemd[1]: Started cri-containerd-305b5fe4a120b9c7b650ca2725f2885c08912724fd1c78cfa4adad9ab05b63c3.scope - libcontainer container 305b5fe4a120b9c7b650ca2725f2885c08912724fd1c78cfa4adad9ab05b63c3. 
Nov 5 23:45:04.522176 containerd[1978]: time="2025-11-05T23:45:04.522106930Z" level=info msg="StartContainer for \"2bafab48070bd42fa761780380960a0ca460da80407be0f3f90dbf357704bee4\" returns successfully" Nov 5 23:45:04.782471 systemd-networkd[1825]: cali8a73b8648de: Link UP Nov 5 23:45:04.783924 systemd-networkd[1825]: cali8a73b8648de: Gained carrier Nov 5 23:45:04.796165 systemd-networkd[1825]: cali77a332e4ccb: Gained IPv6LL Nov 5 23:45:04.843436 containerd[1978]: 2025-11-05 23:45:04.360 [INFO][5261] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--26--188-k8s-goldmane--666569f655--v2m4t-eth0 goldmane-666569f655- calico-system 03f18444-9872-4d73-bb60-c66c73cdfaff 876 0 2025-11-05 23:44:33 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ip-172-31-26-188 goldmane-666569f655-v2m4t eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali8a73b8648de [] [] }} ContainerID="e0c3a1c36d0980e07e4143b627e4d19ea11a33e6d5044eab50fa308088d1394b" Namespace="calico-system" Pod="goldmane-666569f655-v2m4t" WorkloadEndpoint="ip--172--31--26--188-k8s-goldmane--666569f655--v2m4t-" Nov 5 23:45:04.843436 containerd[1978]: 2025-11-05 23:45:04.360 [INFO][5261] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e0c3a1c36d0980e07e4143b627e4d19ea11a33e6d5044eab50fa308088d1394b" Namespace="calico-system" Pod="goldmane-666569f655-v2m4t" WorkloadEndpoint="ip--172--31--26--188-k8s-goldmane--666569f655--v2m4t-eth0" Nov 5 23:45:04.843436 containerd[1978]: 2025-11-05 23:45:04.608 [INFO][5366] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e0c3a1c36d0980e07e4143b627e4d19ea11a33e6d5044eab50fa308088d1394b" HandleID="k8s-pod-network.e0c3a1c36d0980e07e4143b627e4d19ea11a33e6d5044eab50fa308088d1394b" Workload="ip--172--31--26--188-k8s-goldmane--666569f655--v2m4t-eth0" Nov 5 23:45:04.843436 containerd[1978]: 2025-11-05 23:45:04.609 [INFO][5366] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="e0c3a1c36d0980e07e4143b627e4d19ea11a33e6d5044eab50fa308088d1394b" HandleID="k8s-pod-network.e0c3a1c36d0980e07e4143b627e4d19ea11a33e6d5044eab50fa308088d1394b" Workload="ip--172--31--26--188-k8s-goldmane--666569f655--v2m4t-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000103910), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-26-188", "pod":"goldmane-666569f655-v2m4t", "timestamp":"2025-11-05 23:45:04.608211814 +0000 UTC"}, Hostname:"ip-172-31-26-188", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 5 23:45:04.843436 containerd[1978]: 2025-11-05 23:45:04.609 [INFO][5366] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 5 23:45:04.843436 containerd[1978]: 2025-11-05 23:45:04.610 [INFO][5366] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 5 23:45:04.843436 containerd[1978]: 2025-11-05 23:45:04.610 [INFO][5366] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-26-188' Nov 5 23:45:04.843436 containerd[1978]: 2025-11-05 23:45:04.635 [INFO][5366] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e0c3a1c36d0980e07e4143b627e4d19ea11a33e6d5044eab50fa308088d1394b" host="ip-172-31-26-188" Nov 5 23:45:04.843436 containerd[1978]: 2025-11-05 23:45:04.646 [INFO][5366] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-26-188" Nov 5 23:45:04.843436 containerd[1978]: 2025-11-05 23:45:04.679 [INFO][5366] ipam/ipam.go 511: Trying affinity for 192.168.34.64/26 host="ip-172-31-26-188" Nov 5 23:45:04.843436 containerd[1978]: 2025-11-05 23:45:04.685 [INFO][5366] ipam/ipam.go 158: Attempting to load block cidr=192.168.34.64/26 host="ip-172-31-26-188" Nov 5 23:45:04.843436 containerd[1978]: 2025-11-05 23:45:04.696 [INFO][5366] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.34.64/26 host="ip-172-31-26-188" Nov 5 23:45:04.843436 containerd[1978]: 2025-11-05 23:45:04.696 [INFO][5366] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.34.64/26 handle="k8s-pod-network.e0c3a1c36d0980e07e4143b627e4d19ea11a33e6d5044eab50fa308088d1394b" host="ip-172-31-26-188" Nov 5 23:45:04.843436 containerd[1978]: 2025-11-05 23:45:04.733 [INFO][5366] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.e0c3a1c36d0980e07e4143b627e4d19ea11a33e6d5044eab50fa308088d1394b Nov 5 23:45:04.843436 containerd[1978]: 2025-11-05 23:45:04.751 [INFO][5366] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.34.64/26 handle="k8s-pod-network.e0c3a1c36d0980e07e4143b627e4d19ea11a33e6d5044eab50fa308088d1394b" host="ip-172-31-26-188" Nov 5 23:45:04.843436 containerd[1978]: 2025-11-05 23:45:04.768 [INFO][5366] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.34.71/26] block=192.168.34.64/26 handle="k8s-pod-network.e0c3a1c36d0980e07e4143b627e4d19ea11a33e6d5044eab50fa308088d1394b" host="ip-172-31-26-188" Nov 5 23:45:04.843436 containerd[1978]: 2025-11-05 23:45:04.768 [INFO][5366] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.34.71/26] handle="k8s-pod-network.e0c3a1c36d0980e07e4143b627e4d19ea11a33e6d5044eab50fa308088d1394b" host="ip-172-31-26-188" Nov 5 23:45:04.843436 containerd[1978]: 2025-11-05 23:45:04.768 [INFO][5366] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 5 23:45:04.843436 containerd[1978]: 2025-11-05 23:45:04.769 [INFO][5366] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.34.71/26] IPv6=[] ContainerID="e0c3a1c36d0980e07e4143b627e4d19ea11a33e6d5044eab50fa308088d1394b" HandleID="k8s-pod-network.e0c3a1c36d0980e07e4143b627e4d19ea11a33e6d5044eab50fa308088d1394b" Workload="ip--172--31--26--188-k8s-goldmane--666569f655--v2m4t-eth0" Nov 5 23:45:04.847317 containerd[1978]: 2025-11-05 23:45:04.776 [INFO][5261] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e0c3a1c36d0980e07e4143b627e4d19ea11a33e6d5044eab50fa308088d1394b" Namespace="calico-system" Pod="goldmane-666569f655-v2m4t" WorkloadEndpoint="ip--172--31--26--188-k8s-goldmane--666569f655--v2m4t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--188-k8s-goldmane--666569f655--v2m4t-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"03f18444-9872-4d73-bb60-c66c73cdfaff", ResourceVersion:"876", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 23, 44, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-188", ContainerID:"", Pod:"goldmane-666569f655-v2m4t", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.34.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali8a73b8648de", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 23:45:04.847317 containerd[1978]: 2025-11-05 23:45:04.776 [INFO][5261] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.34.71/32] ContainerID="e0c3a1c36d0980e07e4143b627e4d19ea11a33e6d5044eab50fa308088d1394b" Namespace="calico-system" Pod="goldmane-666569f655-v2m4t" WorkloadEndpoint="ip--172--31--26--188-k8s-goldmane--666569f655--v2m4t-eth0" Nov 5 23:45:04.847317 containerd[1978]: 2025-11-05 23:45:04.776 [INFO][5261] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8a73b8648de ContainerID="e0c3a1c36d0980e07e4143b627e4d19ea11a33e6d5044eab50fa308088d1394b" Namespace="calico-system" Pod="goldmane-666569f655-v2m4t" WorkloadEndpoint="ip--172--31--26--188-k8s-goldmane--666569f655--v2m4t-eth0" Nov 5 23:45:04.847317 containerd[1978]: 2025-11-05 23:45:04.782 [INFO][5261] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e0c3a1c36d0980e07e4143b627e4d19ea11a33e6d5044eab50fa308088d1394b" Namespace="calico-system" Pod="goldmane-666569f655-v2m4t" WorkloadEndpoint="ip--172--31--26--188-k8s-goldmane--666569f655--v2m4t-eth0" Nov 5 23:45:04.847317 containerd[1978]: 2025-11-05 23:45:04.784 [INFO][5261] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e0c3a1c36d0980e07e4143b627e4d19ea11a33e6d5044eab50fa308088d1394b" Namespace="calico-system" Pod="goldmane-666569f655-v2m4t" 
WorkloadEndpoint="ip--172--31--26--188-k8s-goldmane--666569f655--v2m4t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--188-k8s-goldmane--666569f655--v2m4t-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"03f18444-9872-4d73-bb60-c66c73cdfaff", ResourceVersion:"876", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 23, 44, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-188", ContainerID:"e0c3a1c36d0980e07e4143b627e4d19ea11a33e6d5044eab50fa308088d1394b", Pod:"goldmane-666569f655-v2m4t", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.34.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali8a73b8648de", MAC:"ae:18:90:a0:da:35", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 23:45:04.847317 containerd[1978]: 2025-11-05 23:45:04.834 [INFO][5261] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e0c3a1c36d0980e07e4143b627e4d19ea11a33e6d5044eab50fa308088d1394b" Namespace="calico-system" Pod="goldmane-666569f655-v2m4t" WorkloadEndpoint="ip--172--31--26--188-k8s-goldmane--666569f655--v2m4t-eth0" Nov 5 23:45:04.948168 containerd[1978]: time="2025-11-05T23:45:04.948061728Z" level=info msg="connecting to shim e0c3a1c36d0980e07e4143b627e4d19ea11a33e6d5044eab50fa308088d1394b" address="unix:///run/containerd/s/d3973fe079811d4ba7be3884515540b498d19e123beb07e72b6e9f9589dd3cfa" namespace=k8s.io protocol=ttrpc version=3 Nov 5 23:45:05.000797 systemd-networkd[1825]: calie26bc76b9b5: Link UP Nov 5 23:45:05.006941 systemd-networkd[1825]: calie26bc76b9b5: Gained carrier Nov 5 23:45:05.056020 containerd[1978]: 2025-11-05 23:45:04.415 [INFO][5281] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--26--188-k8s-calico--apiserver--759f658d45--zzvbq-eth0 calico-apiserver-759f658d45- calico-apiserver 76aeec5c-7e05-490c-a0a0-b95d9945b382 875 0 2025-11-05 23:44:25 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:759f658d45 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-26-188 calico-apiserver-759f658d45-zzvbq eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calie26bc76b9b5 [] [] }} ContainerID="e87cc608acedcd8043dfff0c72e4750457c2f57bb6464442ba41712cdc9e6988" Namespace="calico-apiserver" Pod="calico-apiserver-759f658d45-zzvbq" WorkloadEndpoint="ip--172--31--26--188-k8s-calico--apiserver--759f658d45--zzvbq-" Nov 5 23:45:05.056020 containerd[1978]: 2025-11-05 23:45:04.423 [INFO][5281] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="e87cc608acedcd8043dfff0c72e4750457c2f57bb6464442ba41712cdc9e6988" Namespace="calico-apiserver" Pod="calico-apiserver-759f658d45-zzvbq" WorkloadEndpoint="ip--172--31--26--188-k8s-calico--apiserver--759f658d45--zzvbq-eth0" Nov 5 23:45:05.056020 containerd[1978]: 2025-11-05 23:45:04.609 [INFO][5380] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e87cc608acedcd8043dfff0c72e4750457c2f57bb6464442ba41712cdc9e6988" HandleID="k8s-pod-network.e87cc608acedcd8043dfff0c72e4750457c2f57bb6464442ba41712cdc9e6988" Workload="ip--172--31--26--188-k8s-calico--apiserver--759f658d45--zzvbq-eth0" Nov 5 23:45:05.056020 containerd[1978]: 2025-11-05 23:45:04.610 [INFO][5380] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="e87cc608acedcd8043dfff0c72e4750457c2f57bb6464442ba41712cdc9e6988" HandleID="k8s-pod-network.e87cc608acedcd8043dfff0c72e4750457c2f57bb6464442ba41712cdc9e6988" Workload="ip--172--31--26--188-k8s-calico--apiserver--759f658d45--zzvbq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004d930), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-26-188", "pod":"calico-apiserver-759f658d45-zzvbq", "timestamp":"2025-11-05 23:45:04.609270814 +0000 UTC"}, Hostname:"ip-172-31-26-188", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 5 23:45:05.056020 containerd[1978]: 2025-11-05 23:45:04.610 [INFO][5380] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 5 23:45:05.056020 containerd[1978]: 2025-11-05 23:45:04.769 [INFO][5380] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 5 23:45:05.056020 containerd[1978]: 2025-11-05 23:45:04.769 [INFO][5380] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-26-188' Nov 5 23:45:05.056020 containerd[1978]: 2025-11-05 23:45:04.811 [INFO][5380] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e87cc608acedcd8043dfff0c72e4750457c2f57bb6464442ba41712cdc9e6988" host="ip-172-31-26-188" Nov 5 23:45:05.056020 containerd[1978]: 2025-11-05 23:45:04.846 [INFO][5380] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-26-188" Nov 5 23:45:05.056020 containerd[1978]: 2025-11-05 23:45:04.868 [INFO][5380] ipam/ipam.go 511: Trying affinity for 192.168.34.64/26 host="ip-172-31-26-188" Nov 5 23:45:05.056020 containerd[1978]: 2025-11-05 23:45:04.877 [INFO][5380] ipam/ipam.go 158: Attempting to load block cidr=192.168.34.64/26 host="ip-172-31-26-188" Nov 5 23:45:05.056020 containerd[1978]: 2025-11-05 23:45:04.888 [INFO][5380] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.34.64/26 host="ip-172-31-26-188" Nov 5 23:45:05.056020 containerd[1978]: 2025-11-05 23:45:04.889 [INFO][5380] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.34.64/26 handle="k8s-pod-network.e87cc608acedcd8043dfff0c72e4750457c2f57bb6464442ba41712cdc9e6988" host="ip-172-31-26-188" Nov 5 23:45:05.056020 containerd[1978]: 2025-11-05 23:45:04.895 [INFO][5380] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.e87cc608acedcd8043dfff0c72e4750457c2f57bb6464442ba41712cdc9e6988 Nov 5 23:45:05.056020 containerd[1978]: 2025-11-05 23:45:04.928 [INFO][5380] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.34.64/26 
handle="k8s-pod-network.e87cc608acedcd8043dfff0c72e4750457c2f57bb6464442ba41712cdc9e6988" host="ip-172-31-26-188" Nov 5 23:45:05.056020 containerd[1978]: 2025-11-05 23:45:04.962 [INFO][5380] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.34.72/26] block=192.168.34.64/26 handle="k8s-pod-network.e87cc608acedcd8043dfff0c72e4750457c2f57bb6464442ba41712cdc9e6988" host="ip-172-31-26-188" Nov 5 23:45:05.056020 containerd[1978]: 2025-11-05 23:45:04.962 [INFO][5380] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.34.72/26] handle="k8s-pod-network.e87cc608acedcd8043dfff0c72e4750457c2f57bb6464442ba41712cdc9e6988" host="ip-172-31-26-188" Nov 5 23:45:05.056020 containerd[1978]: 2025-11-05 23:45:04.964 [INFO][5380] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 5 23:45:05.056020 containerd[1978]: 2025-11-05 23:45:04.964 [INFO][5380] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.34.72/26] IPv6=[] ContainerID="e87cc608acedcd8043dfff0c72e4750457c2f57bb6464442ba41712cdc9e6988" HandleID="k8s-pod-network.e87cc608acedcd8043dfff0c72e4750457c2f57bb6464442ba41712cdc9e6988" Workload="ip--172--31--26--188-k8s-calico--apiserver--759f658d45--zzvbq-eth0" Nov 5 23:45:05.058688 containerd[1978]: 2025-11-05 23:45:04.990 [INFO][5281] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e87cc608acedcd8043dfff0c72e4750457c2f57bb6464442ba41712cdc9e6988" Namespace="calico-apiserver" Pod="calico-apiserver-759f658d45-zzvbq" WorkloadEndpoint="ip--172--31--26--188-k8s-calico--apiserver--759f658d45--zzvbq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--188-k8s-calico--apiserver--759f658d45--zzvbq-eth0", GenerateName:"calico-apiserver-759f658d45-", Namespace:"calico-apiserver", SelfLink:"", UID:"76aeec5c-7e05-490c-a0a0-b95d9945b382", ResourceVersion:"875", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 23, 44, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"759f658d45", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-188", ContainerID:"", Pod:"calico-apiserver-759f658d45-zzvbq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.34.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie26bc76b9b5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 23:45:05.058688 containerd[1978]: 2025-11-05 23:45:04.991 [INFO][5281] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.34.72/32] ContainerID="e87cc608acedcd8043dfff0c72e4750457c2f57bb6464442ba41712cdc9e6988" Namespace="calico-apiserver" Pod="calico-apiserver-759f658d45-zzvbq" WorkloadEndpoint="ip--172--31--26--188-k8s-calico--apiserver--759f658d45--zzvbq-eth0" Nov 5 23:45:05.058688 containerd[1978]: 2025-11-05 23:45:04.992 [INFO][5281] cni-plugin/dataplane_linux.go 69: 
Setting the host side veth name to calie26bc76b9b5 ContainerID="e87cc608acedcd8043dfff0c72e4750457c2f57bb6464442ba41712cdc9e6988" Namespace="calico-apiserver" Pod="calico-apiserver-759f658d45-zzvbq" WorkloadEndpoint="ip--172--31--26--188-k8s-calico--apiserver--759f658d45--zzvbq-eth0" Nov 5 23:45:05.058688 containerd[1978]: 2025-11-05 23:45:04.999 [INFO][5281] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e87cc608acedcd8043dfff0c72e4750457c2f57bb6464442ba41712cdc9e6988" Namespace="calico-apiserver" Pod="calico-apiserver-759f658d45-zzvbq" WorkloadEndpoint="ip--172--31--26--188-k8s-calico--apiserver--759f658d45--zzvbq-eth0" Nov 5 23:45:05.058688 containerd[1978]: 2025-11-05 23:45:05.003 [INFO][5281] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e87cc608acedcd8043dfff0c72e4750457c2f57bb6464442ba41712cdc9e6988" Namespace="calico-apiserver" Pod="calico-apiserver-759f658d45-zzvbq" WorkloadEndpoint="ip--172--31--26--188-k8s-calico--apiserver--759f658d45--zzvbq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--188-k8s-calico--apiserver--759f658d45--zzvbq-eth0", GenerateName:"calico-apiserver-759f658d45-", Namespace:"calico-apiserver", SelfLink:"", UID:"76aeec5c-7e05-490c-a0a0-b95d9945b382", ResourceVersion:"875", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 23, 44, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"759f658d45", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-188", ContainerID:"e87cc608acedcd8043dfff0c72e4750457c2f57bb6464442ba41712cdc9e6988", Pod:"calico-apiserver-759f658d45-zzvbq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.34.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie26bc76b9b5", MAC:"8e:81:c8:ab:f9:ad", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 23:45:05.058688 containerd[1978]: 2025-11-05 23:45:05.048 [INFO][5281] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e87cc608acedcd8043dfff0c72e4750457c2f57bb6464442ba41712cdc9e6988" Namespace="calico-apiserver" Pod="calico-apiserver-759f658d45-zzvbq" WorkloadEndpoint="ip--172--31--26--188-k8s-calico--apiserver--759f658d45--zzvbq-eth0" Nov 5 23:45:05.096191 systemd[1]: Started cri-containerd-e0c3a1c36d0980e07e4143b627e4d19ea11a33e6d5044eab50fa308088d1394b.scope - libcontainer container e0c3a1c36d0980e07e4143b627e4d19ea11a33e6d5044eab50fa308088d1394b. 
Nov 5 23:45:05.148152 containerd[1978]: time="2025-11-05T23:45:05.147762621Z" level=info msg="connecting to shim e87cc608acedcd8043dfff0c72e4750457c2f57bb6464442ba41712cdc9e6988" address="unix:///run/containerd/s/2aea63e228f84d5b7bd4dbe8cd61ea516f74af5a0ebb204e929f62488ee44a74" namespace=k8s.io protocol=ttrpc version=3 Nov 5 23:45:05.226029 systemd[1]: Started cri-containerd-e87cc608acedcd8043dfff0c72e4750457c2f57bb6464442ba41712cdc9e6988.scope - libcontainer container e87cc608acedcd8043dfff0c72e4750457c2f57bb6464442ba41712cdc9e6988. Nov 5 23:45:05.301346 kubelet[3516]: E1105 23:45:05.301252 3516 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-877d4847d-6rhkp" podUID="a7fd47be-5341-4035-917c-acf91009ebea" Nov 5 23:45:05.364644 containerd[1978]: time="2025-11-05T23:45:05.364300858Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-759f658d45-5cwjh,Uid:07a15442-dee2-4408-9286-ad45a221772c,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"305b5fe4a120b9c7b650ca2725f2885c08912724fd1c78cfa4adad9ab05b63c3\"" Nov 5 23:45:05.372072 containerd[1978]: time="2025-11-05T23:45:05.371994118Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 5 23:45:05.441613 kubelet[3516]: I1105 23:45:05.441395 3516 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-s8282" podStartSLOduration=52.441370846 podStartE2EDuration="52.441370846s" podCreationTimestamp="2025-11-05 23:44:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-05 23:45:05.396525178 +0000 UTC m=+56.943073856" watchObservedRunningTime="2025-11-05 23:45:05.441370846 +0000 UTC m=+56.987919536" Nov 5 23:45:05.606618 containerd[1978]: time="2025-11-05T23:45:05.605385647Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-v2m4t,Uid:03f18444-9872-4d73-bb60-c66c73cdfaff,Namespace:calico-system,Attempt:0,} returns sandbox id \"e0c3a1c36d0980e07e4143b627e4d19ea11a33e6d5044eab50fa308088d1394b\"" Nov 5 23:45:05.652317 containerd[1978]: time="2025-11-05T23:45:05.651622355Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 23:45:05.654265 containerd[1978]: time="2025-11-05T23:45:05.654188819Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 5 23:45:05.654460 containerd[1978]: time="2025-11-05T23:45:05.654329363Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 5 23:45:05.655531 kubelet[3516]: E1105 23:45:05.655040 3516 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 23:45:05.655531 kubelet[3516]: E1105 23:45:05.655113 3516 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 23:45:05.656495 kubelet[3516]: E1105 23:45:05.656087 3516 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fmztm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-759f658d45-5cwjh_calico-apiserver(07a15442-dee2-4408-9286-ad45a221772c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 5 23:45:05.657838 kubelet[3516]: E1105 23:45:05.657792 3516 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-759f658d45-5cwjh" podUID="07a15442-dee2-4408-9286-ad45a221772c" Nov 5 23:45:05.657978 containerd[1978]: time="2025-11-05T23:45:05.657546299Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 5 23:45:05.676800 containerd[1978]: time="2025-11-05T23:45:05.676713611Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-759f658d45-zzvbq,Uid:76aeec5c-7e05-490c-a0a0-b95d9945b382,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"e87cc608acedcd8043dfff0c72e4750457c2f57bb6464442ba41712cdc9e6988\"" Nov 5 23:45:05.755038 systemd-networkd[1825]: calib6fed92f1ee: Gained IPv6LL Nov 5 23:45:05.935369 containerd[1978]: time="2025-11-05T23:45:05.935044861Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 23:45:05.937483 containerd[1978]: time="2025-11-05T23:45:05.937297561Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 5 23:45:05.937483 containerd[1978]: time="2025-11-05T23:45:05.937429645Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 5 23:45:05.939834 kubelet[3516]: E1105 23:45:05.938415 3516 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 5 23:45:05.939834 kubelet[3516]: E1105 23:45:05.938476 3516 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 5 23:45:05.939834 kubelet[3516]: E1105 23:45:05.939120 3516 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nx6pr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-v2m4t_calico-system(03f18444-9872-4d73-bb60-c66c73cdfaff): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 5 23:45:05.940541 containerd[1978]: time="2025-11-05T23:45:05.938923933Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 5 23:45:05.941779 kubelet[3516]: E1105 23:45:05.940722 3516 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not 
found\"" pod="calico-system/goldmane-666569f655-v2m4t" podUID="03f18444-9872-4d73-bb60-c66c73cdfaff" Nov 5 23:45:06.289997 containerd[1978]: time="2025-11-05T23:45:06.289801930Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 23:45:06.292247 containerd[1978]: time="2025-11-05T23:45:06.292099714Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 5 23:45:06.292247 containerd[1978]: time="2025-11-05T23:45:06.292168918Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 5 23:45:06.292464 kubelet[3516]: E1105 23:45:06.292383 3516 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 23:45:06.292464 kubelet[3516]: E1105 23:45:06.292439 3516 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 23:45:06.293186 kubelet[3516]: E1105 23:45:06.292724 3516 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-l2gwh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-759f658d45-zzvbq_calico-apiserver(76aeec5c-7e05-490c-a0a0-b95d9945b382): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 5 23:45:06.294273 kubelet[3516]: E1105 23:45:06.294001 3516 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-759f658d45-zzvbq" podUID="76aeec5c-7e05-490c-a0a0-b95d9945b382" Nov 5 23:45:06.307488 kubelet[3516]: E1105 23:45:06.307200 3516 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-v2m4t" podUID="03f18444-9872-4d73-bb60-c66c73cdfaff" Nov 5 23:45:06.314130 kubelet[3516]: E1105 23:45:06.313770 3516 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-759f658d45-5cwjh" podUID="07a15442-dee2-4408-9286-ad45a221772c" Nov 5 23:45:06.322287 kubelet[3516]: E1105 23:45:06.322178 3516 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-759f658d45-zzvbq" podUID="76aeec5c-7e05-490c-a0a0-b95d9945b382" Nov 5 23:45:06.522914 systemd-networkd[1825]: cali8a73b8648de: Gained IPv6LL Nov 5 23:45:06.780047 systemd-networkd[1825]: calie26bc76b9b5: Gained IPv6LL Nov 5 23:45:07.325634 kubelet[3516]: E1105 23:45:07.322662 3516 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-v2m4t" podUID="03f18444-9872-4d73-bb60-c66c73cdfaff" Nov 5 23:45:07.325634 kubelet[3516]: E1105 23:45:07.323703 3516 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-759f658d45-zzvbq" podUID="76aeec5c-7e05-490c-a0a0-b95d9945b382" Nov 5 23:45:07.325634 kubelet[3516]: E1105 23:45:07.324857 3516 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-759f658d45-5cwjh" podUID="07a15442-dee2-4408-9286-ad45a221772c" Nov 5 23:45:08.584914 systemd[1]: Started sshd@8-172.31.26.188:22-147.75.109.163:37120.service - OpenSSH per-connection server daemon (147.75.109.163:37120). Nov 5 23:45:08.799085 sshd[5544]: Accepted publickey for core from 147.75.109.163 port 37120 ssh2: RSA SHA256:RMubiTTBDsRj6wMnaDgb94uFku34NWf7aFjJruQworw Nov 5 23:45:08.805247 sshd-session[5544]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 23:45:08.819858 systemd-logind[1881]: New session 9 of user core. Nov 5 23:45:08.829910 systemd[1]: Started session-9.scope - Session 9 of User core. 
Nov 5 23:45:08.891499 ntpd[2103]: Listen normally on 6 vxlan.calico 192.168.34.64:123 Nov 5 23:45:08.892270 ntpd[2103]: 5 Nov 23:45:08 ntpd[2103]: Listen normally on 6 vxlan.calico 192.168.34.64:123 Nov 5 23:45:08.892270 ntpd[2103]: 5 Nov 23:45:08 ntpd[2103]: Listen normally on 7 calia7a022cecef [fe80::ecee:eeff:feee:eeee%4]:123 Nov 5 23:45:08.892270 ntpd[2103]: 5 Nov 23:45:08 ntpd[2103]: Listen normally on 8 vxlan.calico [fe80::641c:57ff:fe89:b0dc%5]:123 Nov 5 23:45:08.892270 ntpd[2103]: 5 Nov 23:45:08 ntpd[2103]: Listen normally on 9 cali0168852603c [fe80::ecee:eeff:feee:eeee%8]:123 Nov 5 23:45:08.892270 ntpd[2103]: 5 Nov 23:45:08 ntpd[2103]: Listen normally on 10 cali21a9f3dd1f4 [fe80::ecee:eeff:feee:eeee%9]:123 Nov 5 23:45:08.892270 ntpd[2103]: 5 Nov 23:45:08 ntpd[2103]: Listen normally on 11 cali2a092cc5fd4 [fe80::ecee:eeff:feee:eeee%10]:123 Nov 5 23:45:08.892270 ntpd[2103]: 5 Nov 23:45:08 ntpd[2103]: Listen normally on 12 cali77a332e4ccb [fe80::ecee:eeff:feee:eeee%11]:123 Nov 5 23:45:08.891662 ntpd[2103]: Listen normally on 7 calia7a022cecef [fe80::ecee:eeff:feee:eeee%4]:123 Nov 5 23:45:08.893001 ntpd[2103]: 5 Nov 23:45:08 ntpd[2103]: Listen normally on 13 calib6fed92f1ee [fe80::ecee:eeff:feee:eeee%12]:123 Nov 5 23:45:08.893001 ntpd[2103]: 5 Nov 23:45:08 ntpd[2103]: Listen normally on 14 cali8a73b8648de [fe80::ecee:eeff:feee:eeee%13]:123 Nov 5 23:45:08.893001 ntpd[2103]: 5 Nov 23:45:08 ntpd[2103]: Listen normally on 15 calie26bc76b9b5 [fe80::ecee:eeff:feee:eeee%14]:123 Nov 5 23:45:08.891720 ntpd[2103]: Listen normally on 8 vxlan.calico [fe80::641c:57ff:fe89:b0dc%5]:123 Nov 5 23:45:08.891769 ntpd[2103]: Listen normally on 9 cali0168852603c [fe80::ecee:eeff:feee:eeee%8]:123 Nov 5 23:45:08.891838 ntpd[2103]: Listen normally on 10 cali21a9f3dd1f4 [fe80::ecee:eeff:feee:eeee%9]:123 Nov 5 23:45:08.892185 ntpd[2103]: Listen normally on 11 cali2a092cc5fd4 [fe80::ecee:eeff:feee:eeee%10]:123 Nov 5 23:45:08.892256 ntpd[2103]: Listen normally on 12 cali77a332e4ccb [fe80::ecee:eeff:feee:eeee%11]:123 Nov 5 23:45:08.892309 ntpd[2103]: Listen normally on 13 calib6fed92f1ee [fe80::ecee:eeff:feee:eeee%12]:123 Nov 5 23:45:08.892354 ntpd[2103]: Listen normally on 14 cali8a73b8648de [fe80::ecee:eeff:feee:eeee%13]:123 Nov 5 23:45:08.892400 ntpd[2103]: Listen normally on 15 calie26bc76b9b5 [fe80::ecee:eeff:feee:eeee%14]:123 Nov 5 23:45:09.168803 sshd[5547]: Connection closed by 147.75.109.163 port 37120 Nov 5 23:45:09.171129 sshd-session[5544]: pam_unix(sshd:session): session closed for user core Nov 5 23:45:09.185541 systemd[1]: sshd@8-172.31.26.188:22-147.75.109.163:37120.service: Deactivated successfully. Nov 5 23:45:09.193848 systemd[1]: session-9.scope: Deactivated successfully. Nov 5 23:45:09.198836 systemd-logind[1881]: Session 9 logged out. Waiting for processes to exit. Nov 5 23:45:09.204304 systemd-logind[1881]: Removed session 9. 
Nov 5 23:45:09.835582 containerd[1978]: time="2025-11-05T23:45:09.834824260Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 5 23:45:10.134784 containerd[1978]: time="2025-11-05T23:45:10.131423449Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 23:45:10.135916 containerd[1978]: time="2025-11-05T23:45:10.135779113Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 5 23:45:10.136325 containerd[1978]: time="2025-11-05T23:45:10.135893497Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 5 23:45:10.136796 kubelet[3516]: E1105 23:45:10.136607 3516 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 5 23:45:10.136796 kubelet[3516]: E1105 23:45:10.136696 3516 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 5 23:45:10.137883 kubelet[3516]: E1105 23:45:10.136910 3516 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:07d3aeb01bc54febada3000b29641fd0,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xqpnk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-84c996d876-qnn6p_calico-system(20efcd41-054c-4821-9d54-ac97d532abc5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": 
ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 5 23:45:10.141998 containerd[1978]: time="2025-11-05T23:45:10.141948829Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 5 23:45:10.426678 containerd[1978]: time="2025-11-05T23:45:10.426319335Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 23:45:10.428764 containerd[1978]: time="2025-11-05T23:45:10.428579667Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 5 23:45:10.428764 containerd[1978]: time="2025-11-05T23:45:10.428641851Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 5 23:45:10.429060 kubelet[3516]: E1105 23:45:10.428979 3516 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 5 23:45:10.429199 kubelet[3516]: E1105 23:45:10.429070 3516 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 5 23:45:10.429651 kubelet[3516]: E1105 23:45:10.429360 3516 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xqpnk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-84c996d876-qnn6p_calico-system(20efcd41-054c-4821-9d54-ac97d532abc5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 5 23:45:10.431116 kubelet[3516]: E1105 23:45:10.431053 3516 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-84c996d876-qnn6p" podUID="20efcd41-054c-4821-9d54-ac97d532abc5" Nov 5 23:45:14.213752 systemd[1]: Started sshd@9-172.31.26.188:22-147.75.109.163:42106.service - OpenSSH per-connection server daemon (147.75.109.163:42106). 
Nov 5 23:45:14.414374 sshd[5570]: Accepted publickey for core from 147.75.109.163 port 42106 ssh2: RSA SHA256:RMubiTTBDsRj6wMnaDgb94uFku34NWf7aFjJruQworw Nov 5 23:45:14.416929 sshd-session[5570]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 23:45:14.425741 systemd-logind[1881]: New session 10 of user core. Nov 5 23:45:14.431867 systemd[1]: Started session-10.scope - Session 10 of User core. Nov 5 23:45:14.689862 sshd[5575]: Connection closed by 147.75.109.163 port 42106 Nov 5 23:45:14.690721 sshd-session[5570]: pam_unix(sshd:session): session closed for user core Nov 5 23:45:14.699030 systemd[1]: sshd@9-172.31.26.188:22-147.75.109.163:42106.service: Deactivated successfully. Nov 5 23:45:14.703581 systemd[1]: session-10.scope: Deactivated successfully. Nov 5 23:45:14.708993 systemd-logind[1881]: Session 10 logged out. Waiting for processes to exit. Nov 5 23:45:14.727051 systemd[1]: Started sshd@10-172.31.26.188:22-147.75.109.163:42116.service - OpenSSH per-connection server daemon (147.75.109.163:42116). Nov 5 23:45:14.730127 systemd-logind[1881]: Removed session 10. Nov 5 23:45:14.923902 sshd[5588]: Accepted publickey for core from 147.75.109.163 port 42116 ssh2: RSA SHA256:RMubiTTBDsRj6wMnaDgb94uFku34NWf7aFjJruQworw Nov 5 23:45:14.926414 sshd-session[5588]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 23:45:14.935551 systemd-logind[1881]: New session 11 of user core. Nov 5 23:45:14.943892 systemd[1]: Started session-11.scope - Session 11 of User core. Nov 5 23:45:15.279247 sshd[5591]: Connection closed by 147.75.109.163 port 42116 Nov 5 23:45:15.283000 sshd-session[5588]: pam_unix(sshd:session): session closed for user core Nov 5 23:45:15.292209 systemd[1]: sshd@10-172.31.26.188:22-147.75.109.163:42116.service: Deactivated successfully. Nov 5 23:45:15.299361 systemd[1]: session-11.scope: Deactivated successfully. Nov 5 23:45:15.304342 systemd-logind[1881]: Session 11 logged out. Waiting for processes to exit. Nov 5 23:45:15.329306 systemd[1]: Started sshd@11-172.31.26.188:22-147.75.109.163:42130.service - OpenSSH per-connection server daemon (147.75.109.163:42130). Nov 5 23:45:15.340877 systemd-logind[1881]: Removed session 11. Nov 5 23:45:15.533708 sshd[5602]: Accepted publickey for core from 147.75.109.163 port 42130 ssh2: RSA SHA256:RMubiTTBDsRj6wMnaDgb94uFku34NWf7aFjJruQworw Nov 5 23:45:15.536225 sshd-session[5602]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 23:45:15.547203 systemd-logind[1881]: New session 12 of user core. Nov 5 23:45:15.555910 systemd[1]: Started session-12.scope - Session 12 of User core. Nov 5 23:45:15.834027 sshd[5605]: Connection closed by 147.75.109.163 port 42130 Nov 5 23:45:15.834634 sshd-session[5602]: pam_unix(sshd:session): session closed for user core Nov 5 23:45:15.844475 systemd[1]: sshd@11-172.31.26.188:22-147.75.109.163:42130.service: Deactivated successfully. Nov 5 23:45:15.845521 systemd-logind[1881]: Session 12 logged out. Waiting for processes to exit. Nov 5 23:45:15.848988 systemd[1]: session-12.scope: Deactivated successfully. Nov 5 23:45:15.857663 systemd-logind[1881]: Removed session 12. 
Nov 5 23:45:16.839666 containerd[1978]: time="2025-11-05T23:45:16.839318927Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 5 23:45:17.144148 containerd[1978]: time="2025-11-05T23:45:17.143965784Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 23:45:17.146366 containerd[1978]: time="2025-11-05T23:45:17.146253320Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 5 23:45:17.146366 containerd[1978]: time="2025-11-05T23:45:17.146325020Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 5 23:45:17.147189 kubelet[3516]: E1105 23:45:17.146539 3516 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 5 23:45:17.147189 kubelet[3516]: E1105 23:45:17.146630 3516 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 5 23:45:17.147189 kubelet[3516]: E1105 23:45:17.146846 3516 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qb8qb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-877d4847d-6rhkp_calico-system(a7fd47be-5341-4035-917c-acf91009ebea): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 5 23:45:17.148315 kubelet[3516]: E1105 23:45:17.148088 3516 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-877d4847d-6rhkp" podUID="a7fd47be-5341-4035-917c-acf91009ebea" Nov 5 23:45:17.835961 containerd[1978]: time="2025-11-05T23:45:17.835880712Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 5 23:45:18.097537 containerd[1978]: time="2025-11-05T23:45:18.097359813Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 23:45:18.099763 containerd[1978]: time="2025-11-05T23:45:18.099675237Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 5 23:45:18.099971 containerd[1978]: time="2025-11-05T23:45:18.099827265Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 5 23:45:18.100522 kubelet[3516]: E1105 23:45:18.100379 3516 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 5 23:45:18.100522 kubelet[3516]: E1105 23:45:18.100472 3516 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 5 23:45:18.100932 kubelet[3516]: E1105 23:45:18.100794 3516 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nx6pr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-v2m4t_calico-system(03f18444-9872-4d73-bb60-c66c73cdfaff): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 5 23:45:18.102135 kubelet[3516]: E1105 23:45:18.102046 3516 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull 
and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-v2m4t" podUID="03f18444-9872-4d73-bb60-c66c73cdfaff" Nov 5 23:45:18.834387 containerd[1978]: time="2025-11-05T23:45:18.834114313Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 5 23:45:19.116707 containerd[1978]: time="2025-11-05T23:45:19.116491306Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 23:45:19.119477 containerd[1978]: time="2025-11-05T23:45:19.119294638Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 5 23:45:19.119477 containerd[1978]: time="2025-11-05T23:45:19.119422522Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 5 23:45:19.119887 kubelet[3516]: E1105 23:45:19.119794 3516 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 5 23:45:19.120414 kubelet[3516]: E1105 23:45:19.119886 3516 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 5 23:45:19.120414 kubelet[3516]: E1105 23:45:19.120147 3516 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9gfk2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-7km8q_calico-system(89a32bf2-ec2a-4f35-b294-b2467c662fb4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 5 23:45:19.125334 containerd[1978]: time="2025-11-05T23:45:19.125206366Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 5 23:45:19.388656 containerd[1978]: time="2025-11-05T23:45:19.388426151Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 23:45:19.390783 containerd[1978]: time="2025-11-05T23:45:19.390716615Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 5 23:45:19.390945 containerd[1978]: time="2025-11-05T23:45:19.390859031Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 5 23:45:19.391222 kubelet[3516]: E1105 23:45:19.391167 3516 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 5 23:45:19.391313 kubelet[3516]: E1105 23:45:19.391234 3516 
kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 5 23:45:19.391622 kubelet[3516]: E1105 23:45:19.391434 3516 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9gfk2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-7km8q_calico-system(89a32bf2-ec2a-4f35-b294-b2467c662fb4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 5 23:45:19.393093 kubelet[3516]: E1105 23:45:19.393006 3516 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not 
found\"]" pod="calico-system/csi-node-driver-7km8q" podUID="89a32bf2-ec2a-4f35-b294-b2467c662fb4" Nov 5 23:45:20.873783 systemd[1]: Started sshd@12-172.31.26.188:22-147.75.109.163:38102.service - OpenSSH per-connection server daemon (147.75.109.163:38102). Nov 5 23:45:21.083267 sshd[5624]: Accepted publickey for core from 147.75.109.163 port 38102 ssh2: RSA SHA256:RMubiTTBDsRj6wMnaDgb94uFku34NWf7aFjJruQworw Nov 5 23:45:21.085628 sshd-session[5624]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 23:45:21.093806 systemd-logind[1881]: New session 13 of user core. Nov 5 23:45:21.101956 systemd[1]: Started session-13.scope - Session 13 of User core. Nov 5 23:45:21.352012 sshd[5627]: Connection closed by 147.75.109.163 port 38102 Nov 5 23:45:21.353365 sshd-session[5624]: pam_unix(sshd:session): session closed for user core Nov 5 23:45:21.360211 systemd[1]: sshd@12-172.31.26.188:22-147.75.109.163:38102.service: Deactivated successfully. Nov 5 23:45:21.365926 systemd[1]: session-13.scope: Deactivated successfully. Nov 5 23:45:21.374468 systemd-logind[1881]: Session 13 logged out. Waiting for processes to exit. Nov 5 23:45:21.376966 systemd-logind[1881]: Removed session 13. Nov 5 23:45:22.837034 containerd[1978]: time="2025-11-05T23:45:22.836511305Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 5 23:45:23.151378 containerd[1978]: time="2025-11-05T23:45:23.151175966Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 23:45:23.153822 containerd[1978]: time="2025-11-05T23:45:23.153729458Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 5 23:45:23.153822 containerd[1978]: time="2025-11-05T23:45:23.153783818Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 5 23:45:23.154547 kubelet[3516]: E1105 23:45:23.154444 3516 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 23:45:23.154547 kubelet[3516]: E1105 23:45:23.154522 3516 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 23:45:23.157542 containerd[1978]: time="2025-11-05T23:45:23.157476902Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 5 23:45:23.157730 kubelet[3516]: E1105 23:45:23.157138 3516 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-l2gwh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-759f658d45-zzvbq_calico-apiserver(76aeec5c-7e05-490c-a0a0-b95d9945b382): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 5 23:45:23.158901 kubelet[3516]: E1105 23:45:23.158818 3516 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-759f658d45-zzvbq" podUID="76aeec5c-7e05-490c-a0a0-b95d9945b382" Nov 5 23:45:23.414423 containerd[1978]: time="2025-11-05T23:45:23.414133431Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 23:45:23.416439 containerd[1978]: time="2025-11-05T23:45:23.416288979Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 5 23:45:23.416439 containerd[1978]: time="2025-11-05T23:45:23.416356227Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 5 23:45:23.416927 kubelet[3516]: 
E1105 23:45:23.416872 3516 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 23:45:23.417063 kubelet[3516]: E1105 23:45:23.416938 3516 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 23:45:23.417770 kubelet[3516]: E1105 23:45:23.417150 3516 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fmztm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-759f658d45-5cwjh_calico-apiserver(07a15442-dee2-4408-9286-ad45a221772c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 5 23:45:23.418528 kubelet[3516]: E1105 23:45:23.418386 3516 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack 
image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-759f658d45-5cwjh" podUID="07a15442-dee2-4408-9286-ad45a221772c" Nov 5 23:45:24.836617 kubelet[3516]: E1105 23:45:24.835150 3516 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-84c996d876-qnn6p" podUID="20efcd41-054c-4821-9d54-ac97d532abc5" Nov 5 23:45:26.391924 systemd[1]: Started sshd@13-172.31.26.188:22-147.75.109.163:38104.service - OpenSSH per-connection server daemon (147.75.109.163:38104). Nov 5 23:45:26.585771 sshd[5645]: Accepted publickey for core from 147.75.109.163 port 38104 ssh2: RSA SHA256:RMubiTTBDsRj6wMnaDgb94uFku34NWf7aFjJruQworw Nov 5 23:45:26.588309 sshd-session[5645]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 23:45:26.597694 systemd-logind[1881]: New session 14 of user core. Nov 5 23:45:26.604910 systemd[1]: Started session-14.scope - Session 14 of User core. Nov 5 23:45:26.877148 sshd[5648]: Connection closed by 147.75.109.163 port 38104 Nov 5 23:45:26.877522 sshd-session[5645]: pam_unix(sshd:session): session closed for user core Nov 5 23:45:26.886143 systemd[1]: sshd@13-172.31.26.188:22-147.75.109.163:38104.service: Deactivated successfully. Nov 5 23:45:26.892946 systemd[1]: session-14.scope: Deactivated successfully. Nov 5 23:45:26.895568 systemd-logind[1881]: Session 14 logged out. Waiting for processes to exit. Nov 5 23:45:26.899556 systemd-logind[1881]: Removed session 14. 
Nov 5 23:45:27.834097 kubelet[3516]: E1105 23:45:27.833711 3516 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-877d4847d-6rhkp" podUID="a7fd47be-5341-4035-917c-acf91009ebea" Nov 5 23:45:29.243514 containerd[1978]: time="2025-11-05T23:45:29.243318464Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d4b78ad3eff05f768f593314a5f33b64ca344a4b4df67abab3d7bd1446b118c3\" id:\"f6f50cf3249bf9de374eef312cc1d8a22ebbcc11b387ff7abf6000d75eed7692\" pid:5673 exited_at:{seconds:1762386329 nanos:241816052}" Nov 5 23:45:29.837833 kubelet[3516]: E1105 23:45:29.837459 3516 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-7km8q" podUID="89a32bf2-ec2a-4f35-b294-b2467c662fb4" Nov 5 23:45:31.924976 systemd[1]: Started sshd@14-172.31.26.188:22-147.75.109.163:52348.service - OpenSSH per-connection server daemon (147.75.109.163:52348). Nov 5 23:45:32.124724 sshd[5688]: Accepted publickey for core from 147.75.109.163 port 52348 ssh2: RSA SHA256:RMubiTTBDsRj6wMnaDgb94uFku34NWf7aFjJruQworw Nov 5 23:45:32.127505 sshd-session[5688]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 23:45:32.136398 systemd-logind[1881]: New session 15 of user core. Nov 5 23:45:32.144924 systemd[1]: Started session-15.scope - Session 15 of User core. Nov 5 23:45:32.401582 sshd[5691]: Connection closed by 147.75.109.163 port 52348 Nov 5 23:45:32.402122 sshd-session[5688]: pam_unix(sshd:session): session closed for user core Nov 5 23:45:32.411301 systemd[1]: sshd@14-172.31.26.188:22-147.75.109.163:52348.service: Deactivated successfully. Nov 5 23:45:32.416986 systemd[1]: session-15.scope: Deactivated successfully. Nov 5 23:45:32.420207 systemd-logind[1881]: Session 15 logged out. Waiting for processes to exit. Nov 5 23:45:32.423138 systemd-logind[1881]: Removed session 15. 
Nov 5 23:45:32.836276 kubelet[3516]: E1105 23:45:32.835554 3516 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-v2m4t" podUID="03f18444-9872-4d73-bb60-c66c73cdfaff" Nov 5 23:45:34.119621 update_engine[1882]: I20251105 23:45:34.116705 1882 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Nov 5 23:45:34.119621 update_engine[1882]: I20251105 23:45:34.116778 1882 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Nov 5 23:45:34.119621 update_engine[1882]: I20251105 23:45:34.117231 1882 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Nov 5 23:45:34.119621 update_engine[1882]: I20251105 23:45:34.118073 1882 omaha_request_params.cc:62] Current group set to beta Nov 5 23:45:34.119621 update_engine[1882]: I20251105 23:45:34.118245 1882 update_attempter.cc:499] Already updated boot flags. Skipping. Nov 5 23:45:34.119621 update_engine[1882]: I20251105 23:45:34.118269 1882 update_attempter.cc:643] Scheduling an action processor start. Nov 5 23:45:34.119621 update_engine[1882]: I20251105 23:45:34.118312 1882 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Nov 5 23:45:34.119621 update_engine[1882]: I20251105 23:45:34.118378 1882 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Nov 5 23:45:34.119621 update_engine[1882]: I20251105 23:45:34.118496 1882 omaha_request_action.cc:271] Posting an Omaha request to disabled Nov 5 23:45:34.119621 update_engine[1882]: I20251105 23:45:34.118516 1882 omaha_request_action.cc:272] Request: Nov 5 23:45:34.119621 update_engine[1882]: [Omaha request XML not preserved in this capture] Nov 5 23:45:34.119621 update_engine[1882]: I20251105 23:45:34.118532 1882 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Nov 5 23:45:34.126918 locksmithd[1941]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Nov 5 23:45:34.131896 update_engine[1882]: I20251105 23:45:34.129735 1882 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Nov 5 23:45:34.133270 update_engine[1882]: I20251105 23:45:34.132740 1882 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
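The update_engine block above shows an Omaha update check being posted to a server named "disabled", which on Flatcar usually means update checks were turned off by setting SERVER=disabled in the update configuration; libcurl therefore cannot resolve the host, logs "No HTTP response, retry 1", and schedules another attempt (a second attempt appears roughly ten seconds later in this log). A rough Python sketch of that fetch-and-retry shape, not the actual update_engine C++ code; the attempt count, 1-second timeout, and 10-second delay are read off the timestamps here and are otherwise assumptions:

    import socket
    import time
    import urllib.request
    from urllib.error import URLError

    def check_for_update(url, attempts=3, retry_delay=10.0, timeout=1.0):
        """Attempt an HTTP fetch; on failure, wait a fixed delay and retry."""
        for attempt in range(1, attempts + 1):
            try:
                with urllib.request.urlopen(url, timeout=timeout) as resp:
                    return resp.status
            except (URLError, socket.timeout) as err:
                print(f"no HTTP response, retry {attempt}: {err}")
                if attempt < attempts:
                    time.sleep(retry_delay)
        return None

    # The host "disabled" mirrors the unresolvable server name in the log,
    # so every attempt fails at name resolution and ends in a retry.
    check_for_update("https://disabled/")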
Nov 5 23:45:34.141635 update_engine[1882]: E20251105 23:45:34.141140 1882 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Nov 5 23:45:34.141975 update_engine[1882]: I20251105 23:45:34.141923 1882 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Nov 5 23:45:35.837027 kubelet[3516]: E1105 23:45:35.836407 3516 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-759f658d45-zzvbq" podUID="76aeec5c-7e05-490c-a0a0-b95d9945b382" Nov 5 23:45:36.841301 containerd[1978]: time="2025-11-05T23:45:36.839854854Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 5 23:45:37.108935 containerd[1978]: time="2025-11-05T23:45:37.105560271Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 23:45:37.109767 containerd[1978]: time="2025-11-05T23:45:37.109634247Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 5 23:45:37.109767 containerd[1978]: time="2025-11-05T23:45:37.109718295Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 5 23:45:37.110230 kubelet[3516]: E1105 23:45:37.110161 3516 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 5 23:45:37.111128 kubelet[3516]: E1105 23:45:37.110228 3516 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 5 23:45:37.111128 kubelet[3516]: E1105 23:45:37.110417 3516 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:07d3aeb01bc54febada3000b29641fd0,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xqpnk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-84c996d876-qnn6p_calico-system(20efcd41-054c-4821-9d54-ac97d532abc5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 5 23:45:37.114977 containerd[1978]: time="2025-11-05T23:45:37.114844863Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 5 23:45:37.376117 containerd[1978]: time="2025-11-05T23:45:37.375155477Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 23:45:37.379302 containerd[1978]: time="2025-11-05T23:45:37.379082165Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 5 23:45:37.379302 containerd[1978]: time="2025-11-05T23:45:37.379237709Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 5 23:45:37.379608 kubelet[3516]: E1105 23:45:37.379517 3516 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 5 23:45:37.379714 kubelet[3516]: E1105 23:45:37.379621 3516 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 5 23:45:37.380628 kubelet[3516]: E1105 23:45:37.379878 3516 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xqpnk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-84c996d876-qnn6p_calico-system(20efcd41-054c-4821-9d54-ac97d532abc5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 5 23:45:37.381278 kubelet[3516]: E1105 23:45:37.381161 3516 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-84c996d876-qnn6p" podUID="20efcd41-054c-4821-9d54-ac97d532abc5" Nov 5 23:45:37.449043 systemd[1]: Started sshd@15-172.31.26.188:22-147.75.109.163:52350.service - OpenSSH per-connection server daemon (147.75.109.163:52350). 
Nov 5 23:45:37.658058 sshd[5705]: Accepted publickey for core from 147.75.109.163 port 52350 ssh2: RSA SHA256:RMubiTTBDsRj6wMnaDgb94uFku34NWf7aFjJruQworw Nov 5 23:45:37.660987 sshd-session[5705]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 23:45:37.671021 systemd-logind[1881]: New session 16 of user core. Nov 5 23:45:37.680881 systemd[1]: Started session-16.scope - Session 16 of User core. Nov 5 23:45:37.834170 kubelet[3516]: E1105 23:45:37.834103 3516 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-759f658d45-5cwjh" podUID="07a15442-dee2-4408-9286-ad45a221772c" Nov 5 23:45:38.089252 sshd[5708]: Connection closed by 147.75.109.163 port 52350 Nov 5 23:45:38.089871 sshd-session[5705]: pam_unix(sshd:session): session closed for user core Nov 5 23:45:38.102165 systemd[1]: session-16.scope: Deactivated successfully. Nov 5 23:45:38.104474 systemd[1]: sshd@15-172.31.26.188:22-147.75.109.163:52350.service: Deactivated successfully. Nov 5 23:45:38.116485 systemd-logind[1881]: Session 16 logged out. Waiting for processes to exit. Nov 5 23:45:38.140043 systemd[1]: Started sshd@16-172.31.26.188:22-147.75.109.163:52360.service - OpenSSH per-connection server daemon (147.75.109.163:52360). Nov 5 23:45:38.142859 systemd-logind[1881]: Removed session 16. Nov 5 23:45:38.347325 sshd[5720]: Accepted publickey for core from 147.75.109.163 port 52360 ssh2: RSA SHA256:RMubiTTBDsRj6wMnaDgb94uFku34NWf7aFjJruQworw Nov 5 23:45:38.350523 sshd-session[5720]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 23:45:38.363391 systemd-logind[1881]: New session 17 of user core. Nov 5 23:45:38.371089 systemd[1]: Started session-17.scope - Session 17 of User core. Nov 5 23:45:38.966894 sshd[5723]: Connection closed by 147.75.109.163 port 52360 Nov 5 23:45:38.967332 sshd-session[5720]: pam_unix(sshd:session): session closed for user core Nov 5 23:45:38.982652 systemd[1]: sshd@16-172.31.26.188:22-147.75.109.163:52360.service: Deactivated successfully. Nov 5 23:45:38.983700 systemd-logind[1881]: Session 17 logged out. Waiting for processes to exit. Nov 5 23:45:38.990791 systemd[1]: session-17.scope: Deactivated successfully. Nov 5 23:45:39.028741 systemd[1]: Started sshd@17-172.31.26.188:22-147.75.109.163:52374.service - OpenSSH per-connection server daemon (147.75.109.163:52374). Nov 5 23:45:39.032793 systemd-logind[1881]: Removed session 17. Nov 5 23:45:39.252129 sshd[5733]: Accepted publickey for core from 147.75.109.163 port 52374 ssh2: RSA SHA256:RMubiTTBDsRj6wMnaDgb94uFku34NWf7aFjJruQworw Nov 5 23:45:39.255901 sshd-session[5733]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 23:45:39.270663 systemd-logind[1881]: New session 18 of user core. Nov 5 23:45:39.278943 systemd[1]: Started session-18.scope - Session 18 of User core. 
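The pod_workers entries alternate between ErrImagePull (a pull was actually attempted and got a 404) and ImagePullBackOff ("Back-off pulling image ..."), which is kubelet holding the pod between attempts and growing the wait after each failure, commonly described as doubling from a short base delay up to a cap of about five minutes; that is why the same errors reappear at widening intervals in this log. A small sketch of such a doubling schedule, with the 10-second base and 300-second cap treated as assumed defaults rather than values read from this node's configuration:

    def backoff_schedule(base=10.0, cap=300.0, attempts=8):
        """Doubling back-off with a cap: 10, 20, 40, ... capped at 300 seconds."""
        delay = base
        schedule = []
        for _ in range(attempts):
            schedule.append(delay)
            delay = min(delay * 2, cap)
        return schedule

    print(backoff_schedule())  # [10.0, 20.0, 40.0, 80.0, 160.0, 300.0, 300.0, 300.0]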
Nov 5 23:45:40.628210 sshd[5742]: Connection closed by 147.75.109.163 port 52374 Nov 5 23:45:40.629834 sshd-session[5733]: pam_unix(sshd:session): session closed for user core Nov 5 23:45:40.643724 systemd[1]: sshd@17-172.31.26.188:22-147.75.109.163:52374.service: Deactivated successfully. Nov 5 23:45:40.648979 systemd[1]: session-18.scope: Deactivated successfully. Nov 5 23:45:40.653353 systemd-logind[1881]: Session 18 logged out. Waiting for processes to exit. Nov 5 23:45:40.682438 systemd[1]: Started sshd@18-172.31.26.188:22-147.75.109.163:58110.service - OpenSSH per-connection server daemon (147.75.109.163:58110). Nov 5 23:45:40.685266 systemd-logind[1881]: Removed session 18. Nov 5 23:45:40.838004 containerd[1978]: time="2025-11-05T23:45:40.837904078Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 5 23:45:40.920318 sshd[5758]: Accepted publickey for core from 147.75.109.163 port 58110 ssh2: RSA SHA256:RMubiTTBDsRj6wMnaDgb94uFku34NWf7aFjJruQworw Nov 5 23:45:40.925580 sshd-session[5758]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 23:45:40.945829 systemd-logind[1881]: New session 19 of user core. Nov 5 23:45:40.954322 systemd[1]: Started session-19.scope - Session 19 of User core. Nov 5 23:45:41.142612 containerd[1978]: time="2025-11-05T23:45:41.142064299Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 23:45:41.144133 containerd[1978]: time="2025-11-05T23:45:41.143944627Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 5 23:45:41.144133 containerd[1978]: time="2025-11-05T23:45:41.144087451Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 5 23:45:41.144710 kubelet[3516]: E1105 23:45:41.144553 3516 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 5 23:45:41.147629 kubelet[3516]: E1105 23:45:41.144667 3516 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 5 23:45:41.148834 kubelet[3516]: E1105 23:45:41.148719 3516 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qb8qb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-877d4847d-6rhkp_calico-system(a7fd47be-5341-4035-917c-acf91009ebea): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 5 23:45:41.151937 kubelet[3516]: E1105 23:45:41.151857 3516 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-877d4847d-6rhkp" podUID="a7fd47be-5341-4035-917c-acf91009ebea" Nov 5 23:45:41.694156 sshd[5764]: Connection closed by 147.75.109.163 port 58110 Nov 5 23:45:41.694923 
sshd-session[5758]: pam_unix(sshd:session): session closed for user core Nov 5 23:45:41.705940 systemd-logind[1881]: Session 19 logged out. Waiting for processes to exit. Nov 5 23:45:41.707703 systemd[1]: sshd@18-172.31.26.188:22-147.75.109.163:58110.service: Deactivated successfully. Nov 5 23:45:41.718651 systemd[1]: session-19.scope: Deactivated successfully. Nov 5 23:45:41.746159 systemd-logind[1881]: Removed session 19. Nov 5 23:45:41.751101 systemd[1]: Started sshd@19-172.31.26.188:22-147.75.109.163:58122.service - OpenSSH per-connection server daemon (147.75.109.163:58122). Nov 5 23:45:41.978451 sshd[5774]: Accepted publickey for core from 147.75.109.163 port 58122 ssh2: RSA SHA256:RMubiTTBDsRj6wMnaDgb94uFku34NWf7aFjJruQworw Nov 5 23:45:41.982047 sshd-session[5774]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 23:45:41.995191 systemd-logind[1881]: New session 20 of user core. Nov 5 23:45:42.001282 systemd[1]: Started session-20.scope - Session 20 of User core. Nov 5 23:45:42.304655 sshd[5777]: Connection closed by 147.75.109.163 port 58122 Nov 5 23:45:42.306919 sshd-session[5774]: pam_unix(sshd:session): session closed for user core Nov 5 23:45:42.316068 systemd[1]: sshd@19-172.31.26.188:22-147.75.109.163:58122.service: Deactivated successfully. Nov 5 23:45:42.321751 systemd[1]: session-20.scope: Deactivated successfully. Nov 5 23:45:42.324054 systemd-logind[1881]: Session 20 logged out. Waiting for processes to exit. Nov 5 23:45:42.327901 systemd-logind[1881]: Removed session 20. Nov 5 23:45:44.116428 update_engine[1882]: I20251105 23:45:44.115692 1882 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Nov 5 23:45:44.116428 update_engine[1882]: I20251105 23:45:44.115820 1882 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Nov 5 23:45:44.116428 update_engine[1882]: I20251105 23:45:44.116338 1882 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Nov 5 23:45:44.118980 update_engine[1882]: E20251105 23:45:44.118739 1882 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Nov 5 23:45:44.118980 update_engine[1882]: I20251105 23:45:44.118872 1882 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Nov 5 23:45:44.840333 containerd[1978]: time="2025-11-05T23:45:44.840275450Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 5 23:45:45.113470 containerd[1978]: time="2025-11-05T23:45:45.113315627Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 23:45:45.116428 containerd[1978]: time="2025-11-05T23:45:45.116223731Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 5 23:45:45.116428 containerd[1978]: time="2025-11-05T23:45:45.116372207Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 5 23:45:45.116924 kubelet[3516]: E1105 23:45:45.116847 3516 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 5 23:45:45.117563 kubelet[3516]: E1105 23:45:45.117039 3516 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 5 23:45:45.118527 kubelet[3516]: E1105 23:45:45.117963 3516 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9gfk2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-7km8q_calico-system(89a32bf2-ec2a-4f35-b294-b2467c662fb4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 5 23:45:45.119820 containerd[1978]: time="2025-11-05T23:45:45.119693267Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 5 23:45:45.384477 containerd[1978]: time="2025-11-05T23:45:45.384306852Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 23:45:45.387252 containerd[1978]: time="2025-11-05T23:45:45.387175717Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 5 23:45:45.388424 containerd[1978]: time="2025-11-05T23:45:45.387319921Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 5 23:45:45.388508 kubelet[3516]: E1105 23:45:45.388278 3516 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 5 23:45:45.388508 kubelet[3516]: E1105 23:45:45.388336 3516 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 5 23:45:45.389489 containerd[1978]: time="2025-11-05T23:45:45.389120221Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 5 23:45:45.391013 kubelet[3516]: E1105 23:45:45.390836 3516 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nx6pr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-v2m4t_calico-system(03f18444-9872-4d73-bb60-c66c73cdfaff): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 5 23:45:45.392782 kubelet[3516]: E1105 23:45:45.392709 3516 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-v2m4t" podUID="03f18444-9872-4d73-bb60-c66c73cdfaff" Nov 5 23:45:45.675470 containerd[1978]: time="2025-11-05T23:45:45.675300182Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 23:45:45.677711 containerd[1978]: time="2025-11-05T23:45:45.677623358Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 5 23:45:45.677919 containerd[1978]: time="2025-11-05T23:45:45.677761862Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 5 23:45:45.678189 kubelet[3516]: E1105 23:45:45.677942 3516 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 5 23:45:45.678189 kubelet[3516]: E1105 23:45:45.678003 3516 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 5 23:45:45.678451 kubelet[3516]: E1105 23:45:45.678186 3516 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9gfk2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-7km8q_calico-system(89a32bf2-ec2a-4f35-b294-b2467c662fb4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 5 23:45:45.679523 kubelet[3516]: E1105 23:45:45.679337 3516 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-7km8q" podUID="89a32bf2-ec2a-4f35-b294-b2467c662fb4" Nov 5 23:45:47.343531 systemd[1]: Started sshd@20-172.31.26.188:22-147.75.109.163:58128.service - OpenSSH per-connection server daemon (147.75.109.163:58128). Nov 5 23:45:47.555729 sshd[5793]: Accepted publickey for core from 147.75.109.163 port 58128 ssh2: RSA SHA256:RMubiTTBDsRj6wMnaDgb94uFku34NWf7aFjJruQworw Nov 5 23:45:47.560072 sshd-session[5793]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 23:45:47.572065 systemd-logind[1881]: New session 21 of user core. 
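Every failure in this section is the registry itself resolving the reference and answering 404, so the quickest way to narrow it down is to ask ghcr.io directly whether ghcr.io/flatcar/calico/<component>:v3.30.4 exists, or whether only the tag or repository path is wrong. A sketch using the standard OCI distribution API with ghcr.io's anonymous token endpoint; the endpoint shapes, scope string, and Accept header follow the public registry conventions and should be treated as assumptions to verify, not as anything taken from this host:

    import json
    import urllib.request
    from urllib.error import HTTPError

    def tag_exists(repository, tag):
        """Check a tag on ghcr.io via the OCI distribution API (anonymous pull scope)."""
        # 1. Fetch an anonymous bearer token with pull access to the repository.
        token_url = (
            "https://ghcr.io/token?service=ghcr.io"
            f"&scope=repository:{repository}:pull"
        )
        with urllib.request.urlopen(token_url) as resp:
            token = json.load(resp)["token"]

        # 2. HEAD the tag's manifest; 200 means the tag resolves, 404 matches
        #    the "not found" errors in the log above.
        manifest_url = f"https://ghcr.io/v2/{repository}/manifests/{tag}"
        req = urllib.request.Request(manifest_url, method="HEAD")
        req.add_header("Authorization", f"Bearer {token}")
        req.add_header("Accept", "application/vnd.oci.image.index.v1+json")
        try:
            with urllib.request.urlopen(req) as resp:
                return resp.status == 200
        except HTTPError as err:
            if err.code == 404:
                return False
            raise

    for component in ("apiserver", "csi", "node-driver-registrar"):
        print(component, tag_exists(f"flatcar/calico/{component}", "v3.30.4"))

A False result for every component would match this log and point at the tag or the ghcr.io/flatcar/calico repository path rather than at the node's pull configuration.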
Nov 5 23:45:47.577909 systemd[1]: Started session-21.scope - Session 21 of User core. Nov 5 23:45:47.837687 containerd[1978]: time="2025-11-05T23:45:47.837066425Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 5 23:45:47.840783 kubelet[3516]: E1105 23:45:47.839062 3516 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-84c996d876-qnn6p" podUID="20efcd41-054c-4821-9d54-ac97d532abc5" Nov 5 23:45:47.871659 sshd[5796]: Connection closed by 147.75.109.163 port 58128 Nov 5 23:45:47.870545 sshd-session[5793]: pam_unix(sshd:session): session closed for user core Nov 5 23:45:47.886778 systemd[1]: sshd@20-172.31.26.188:22-147.75.109.163:58128.service: Deactivated successfully. Nov 5 23:45:47.887290 systemd-logind[1881]: Session 21 logged out. Waiting for processes to exit. Nov 5 23:45:47.896931 systemd[1]: session-21.scope: Deactivated successfully. Nov 5 23:45:47.910238 systemd-logind[1881]: Removed session 21. 
Nov 5 23:45:48.092539 containerd[1978]: time="2025-11-05T23:45:48.092368154Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 23:45:48.094795 containerd[1978]: time="2025-11-05T23:45:48.094705358Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 5 23:45:48.095004 containerd[1978]: time="2025-11-05T23:45:48.094855514Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 5 23:45:48.096874 kubelet[3516]: E1105 23:45:48.096801 3516 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 23:45:48.097009 kubelet[3516]: E1105 23:45:48.096875 3516 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 23:45:48.097199 kubelet[3516]: E1105 23:45:48.097069 3516 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-l2gwh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-759f658d45-zzvbq_calico-apiserver(76aeec5c-7e05-490c-a0a0-b95d9945b382): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 5 23:45:48.099065 kubelet[3516]: E1105 23:45:48.098985 3516 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-759f658d45-zzvbq" podUID="76aeec5c-7e05-490c-a0a0-b95d9945b382" Nov 5 23:45:51.836369 containerd[1978]: time="2025-11-05T23:45:51.836182797Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 5 23:45:52.089065 containerd[1978]: time="2025-11-05T23:45:52.088826622Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 23:45:52.091867 containerd[1978]: time="2025-11-05T23:45:52.091761918Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 5 23:45:52.092148 containerd[1978]: time="2025-11-05T23:45:52.091802766Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 5 23:45:52.092440 kubelet[3516]: E1105 23:45:52.092378 3516 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 23:45:52.093685 kubelet[3516]: E1105 23:45:52.092450 3516 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 23:45:52.094244 kubelet[3516]: E1105 23:45:52.094088 3516 
kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fmztm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-759f658d45-5cwjh_calico-apiserver(07a15442-dee2-4408-9286-ad45a221772c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 5 23:45:52.095469 kubelet[3516]: E1105 23:45:52.095396 3516 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-759f658d45-5cwjh" podUID="07a15442-dee2-4408-9286-ad45a221772c" Nov 5 23:45:52.913544 systemd[1]: Started sshd@21-172.31.26.188:22-147.75.109.163:53686.service - OpenSSH per-connection server daemon (147.75.109.163:53686). Nov 5 23:45:53.121487 sshd[5809]: Accepted publickey for core from 147.75.109.163 port 53686 ssh2: RSA SHA256:RMubiTTBDsRj6wMnaDgb94uFku34NWf7aFjJruQworw Nov 5 23:45:53.124931 sshd-session[5809]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 23:45:53.134760 systemd-logind[1881]: New session 22 of user core. 
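The ErrImagePull entries above all resolve to the same 404 from ghcr.io for the v3.30.4 Calico tags. A minimal sketch of checking one of those tags from outside the node, assuming the repository is public and that ghcr.io hands out anonymous pull tokens via its standard OCI-distribution token endpoint (the endpoint path, media types, and repository name below are assumptions taken from the references quoted in the log, not verified against this cluster):

```python
# Hedged sketch: ask ghcr.io's OCI distribution API whether a tag resolves.
# Assumes anonymous pull tokens from https://ghcr.io/token and a public repo.
import json
import urllib.error
import urllib.request

REPO = "flatcar/calico/apiserver"   # repository seen failing in the log
TAG = "v3.30.4"

def tag_exists(repo: str, tag: str) -> bool:
    # 1. Fetch an anonymous pull token for the repository.
    token_url = f"https://ghcr.io/token?scope=repository:{repo}:pull"
    with urllib.request.urlopen(token_url) as resp:
        token = json.load(resp)["token"]

    # 2. Request the manifest; 200 means the tag resolves, 404 matches the log.
    manifest_url = f"https://ghcr.io/v2/{repo}/manifests/{tag}"
    req = urllib.request.Request(manifest_url, headers={
        "Authorization": f"Bearer {token}",
        "Accept": "application/vnd.oci.image.index.v1+json, "
                  "application/vnd.docker.distribution.manifest.list.v2+json",
    })
    try:
        with urllib.request.urlopen(req) as resp:
            return resp.status == 200
    except urllib.error.HTTPError as err:
        if err.code == 404:
            return False
        raise

if __name__ == "__main__":
    print(f"{REPO}:{TAG} resolvable:", tag_exists(REPO, TAG))
```

On the node itself, pulling the same reference with crictl or with `ctr -n k8s.io images pull` should reproduce the 404 that containerd and the kubelet report above.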
Nov 5 23:45:53.145883 systemd[1]: Started session-22.scope - Session 22 of User core. Nov 5 23:45:53.451689 sshd[5812]: Connection closed by 147.75.109.163 port 53686 Nov 5 23:45:53.452639 sshd-session[5809]: pam_unix(sshd:session): session closed for user core Nov 5 23:45:53.462729 systemd[1]: sshd@21-172.31.26.188:22-147.75.109.163:53686.service: Deactivated successfully. Nov 5 23:45:53.463698 systemd-logind[1881]: Session 22 logged out. Waiting for processes to exit. Nov 5 23:45:53.471295 systemd[1]: session-22.scope: Deactivated successfully. Nov 5 23:45:53.478786 systemd-logind[1881]: Removed session 22. Nov 5 23:45:53.833793 kubelet[3516]: E1105 23:45:53.833678 3516 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-877d4847d-6rhkp" podUID="a7fd47be-5341-4035-917c-acf91009ebea" Nov 5 23:45:54.115494 update_engine[1882]: I20251105 23:45:54.114672 1882 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Nov 5 23:45:54.115494 update_engine[1882]: I20251105 23:45:54.114787 1882 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Nov 5 23:45:54.115494 update_engine[1882]: I20251105 23:45:54.115345 1882 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Nov 5 23:45:54.116828 update_engine[1882]: E20251105 23:45:54.116760 1882 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Nov 5 23:45:54.116939 update_engine[1882]: I20251105 23:45:54.116902 1882 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Nov 5 23:45:57.834008 kubelet[3516]: E1105 23:45:57.833788 3516 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-v2m4t" podUID="03f18444-9872-4d73-bb60-c66c73cdfaff" Nov 5 23:45:58.504271 systemd[1]: Started sshd@22-172.31.26.188:22-147.75.109.163:53702.service - OpenSSH per-connection server daemon (147.75.109.163:53702). Nov 5 23:45:58.725050 sshd[5825]: Accepted publickey for core from 147.75.109.163 port 53702 ssh2: RSA SHA256:RMubiTTBDsRj6wMnaDgb94uFku34NWf7aFjJruQworw Nov 5 23:45:58.727722 sshd-session[5825]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 23:45:58.740171 systemd-logind[1881]: New session 23 of user core. Nov 5 23:45:58.747962 systemd[1]: Started session-23.scope - Session 23 of User core. 
Nov 5 23:45:58.842162 kubelet[3516]: E1105 23:45:58.841849 3516 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-7km8q" podUID="89a32bf2-ec2a-4f35-b294-b2467c662fb4" Nov 5 23:45:59.109907 sshd[5828]: Connection closed by 147.75.109.163 port 53702 Nov 5 23:45:59.110836 sshd-session[5825]: pam_unix(sshd:session): session closed for user core Nov 5 23:45:59.123881 systemd[1]: sshd@22-172.31.26.188:22-147.75.109.163:53702.service: Deactivated successfully. Nov 5 23:45:59.134199 systemd[1]: session-23.scope: Deactivated successfully. Nov 5 23:45:59.138615 systemd-logind[1881]: Session 23 logged out. Waiting for processes to exit. Nov 5 23:45:59.143372 systemd-logind[1881]: Removed session 23. Nov 5 23:45:59.372845 containerd[1978]: time="2025-11-05T23:45:59.371791214Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d4b78ad3eff05f768f593314a5f33b64ca344a4b4df67abab3d7bd1446b118c3\" id:\"107d9d86f5bedc9433bb95abfd738c32a37d32e410e06a1e6b1ba6f75d0dd1e3\" pid:5848 exited_at:{seconds:1762386359 nanos:371324858}" Nov 5 23:45:59.834253 kubelet[3516]: E1105 23:45:59.834178 3516 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-84c996d876-qnn6p" podUID="20efcd41-054c-4821-9d54-ac97d532abc5" Nov 5 23:46:02.835165 kubelet[3516]: E1105 23:46:02.834053 3516 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" 
pod="calico-apiserver/calico-apiserver-759f658d45-zzvbq" podUID="76aeec5c-7e05-490c-a0a0-b95d9945b382" Nov 5 23:46:04.119562 update_engine[1882]: I20251105 23:46:04.118652 1882 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Nov 5 23:46:04.119562 update_engine[1882]: I20251105 23:46:04.118774 1882 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Nov 5 23:46:04.119562 update_engine[1882]: I20251105 23:46:04.119400 1882 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Nov 5 23:46:04.127631 update_engine[1882]: E20251105 23:46:04.126940 1882 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Nov 5 23:46:04.127631 update_engine[1882]: I20251105 23:46:04.127089 1882 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Nov 5 23:46:04.127631 update_engine[1882]: I20251105 23:46:04.127107 1882 omaha_request_action.cc:617] Omaha request response: Nov 5 23:46:04.127631 update_engine[1882]: E20251105 23:46:04.127240 1882 omaha_request_action.cc:636] Omaha request network transfer failed. Nov 5 23:46:04.127631 update_engine[1882]: I20251105 23:46:04.127302 1882 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Nov 5 23:46:04.127631 update_engine[1882]: I20251105 23:46:04.127317 1882 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Nov 5 23:46:04.127631 update_engine[1882]: I20251105 23:46:04.127331 1882 update_attempter.cc:306] Processing Done. Nov 5 23:46:04.127631 update_engine[1882]: E20251105 23:46:04.127357 1882 update_attempter.cc:619] Update failed. Nov 5 23:46:04.127631 update_engine[1882]: I20251105 23:46:04.127369 1882 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Nov 5 23:46:04.127631 update_engine[1882]: I20251105 23:46:04.127383 1882 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Nov 5 23:46:04.127631 update_engine[1882]: I20251105 23:46:04.127398 1882 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. Nov 5 23:46:04.129862 locksmithd[1941]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Nov 5 23:46:04.130564 update_engine[1882]: I20251105 23:46:04.129719 1882 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Nov 5 23:46:04.131752 update_engine[1882]: I20251105 23:46:04.130656 1882 omaha_request_action.cc:271] Posting an Omaha request to disabled Nov 5 23:46:04.131752 update_engine[1882]: I20251105 23:46:04.130694 1882 omaha_request_action.cc:272] Request: Nov 5 23:46:04.131752 update_engine[1882]: Nov 5 23:46:04.131752 update_engine[1882]: Nov 5 23:46:04.131752 update_engine[1882]: Nov 5 23:46:04.131752 update_engine[1882]: Nov 5 23:46:04.131752 update_engine[1882]: Nov 5 23:46:04.131752 update_engine[1882]: Nov 5 23:46:04.131752 update_engine[1882]: I20251105 23:46:04.130710 1882 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Nov 5 23:46:04.131752 update_engine[1882]: I20251105 23:46:04.130759 1882 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Nov 5 23:46:04.131752 update_engine[1882]: I20251105 23:46:04.131349 1882 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Nov 5 23:46:04.134331 update_engine[1882]: E20251105 23:46:04.132822 1882 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Nov 5 23:46:04.134331 update_engine[1882]: I20251105 23:46:04.133795 1882 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Nov 5 23:46:04.134331 update_engine[1882]: I20251105 23:46:04.133819 1882 omaha_request_action.cc:617] Omaha request response: Nov 5 23:46:04.134331 update_engine[1882]: I20251105 23:46:04.133836 1882 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Nov 5 23:46:04.134331 update_engine[1882]: I20251105 23:46:04.133852 1882 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Nov 5 23:46:04.134331 update_engine[1882]: I20251105 23:46:04.133864 1882 update_attempter.cc:306] Processing Done. Nov 5 23:46:04.134331 update_engine[1882]: I20251105 23:46:04.133879 1882 update_attempter.cc:310] Error event sent. Nov 5 23:46:04.134331 update_engine[1882]: I20251105 23:46:04.133899 1882 update_check_scheduler.cc:74] Next update check in 46m30s Nov 5 23:46:04.135093 locksmithd[1941]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Nov 5 23:46:04.152184 systemd[1]: Started sshd@23-172.31.26.188:22-147.75.109.163:59454.service - OpenSSH per-connection server daemon (147.75.109.163:59454). Nov 5 23:46:04.378170 sshd[5866]: Accepted publickey for core from 147.75.109.163 port 59454 ssh2: RSA SHA256:RMubiTTBDsRj6wMnaDgb94uFku34NWf7aFjJruQworw Nov 5 23:46:04.383788 sshd-session[5866]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 23:46:04.399680 systemd-logind[1881]: New session 24 of user core. Nov 5 23:46:04.404791 systemd[1]: Started session-24.scope - Session 24 of User core. Nov 5 23:46:04.694903 sshd[5869]: Connection closed by 147.75.109.163 port 59454 Nov 5 23:46:04.694782 sshd-session[5866]: pam_unix(sshd:session): session closed for user core Nov 5 23:46:04.703208 systemd-logind[1881]: Session 24 logged out. Waiting for processes to exit. Nov 5 23:46:04.705087 systemd[1]: sshd@23-172.31.26.188:22-147.75.109.163:59454.service: Deactivated successfully. Nov 5 23:46:04.713437 systemd[1]: session-24.scope: Deactivated successfully. Nov 5 23:46:04.722939 systemd-logind[1881]: Removed session 24. 
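The update_engine entries are a separate failure mode from the image pulls: the client posts its Omaha request to the literal host "disabled", DNS resolution fails, error 2000 is folded into kActionCodeOmahaErrorInHTTPResponse (37), and the next check is scheduled 46m30s out. A minimal sketch of confirming that this comes from the host's update configuration rather than a network fault, assuming the Flatcar convention of a SERVER= override in /etc/flatcar/update.conf (the path and key are assumptions about this host, not read from the log):

```python
# Hedged sketch: report whether update_engine is pointed at a real Omaha server
# or at the literal string "disabled", which would explain the resolve error above.
from pathlib import Path

UPDATE_CONF = Path("/etc/flatcar/update.conf")  # assumed location on this host

def omaha_server() -> str | None:
    if not UPDATE_CONF.exists():
        return None
    for line in UPDATE_CONF.read_text().splitlines():
        line = line.strip()
        if line.startswith("SERVER="):
            return line.split("=", 1)[1]
    return None

if __name__ == "__main__":
    server = omaha_server()
    if server == "disabled":
        print("updates intentionally disabled; the 'Could not resolve host' error is expected")
    elif server is None:
        print("no SERVER override found; the default public Omaha endpoint applies")
    else:
        print(f"update_engine is configured to use {server}")
```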
Nov 5 23:46:06.836786 kubelet[3516]: E1105 23:46:06.836709 3516 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-759f658d45-5cwjh" podUID="07a15442-dee2-4408-9286-ad45a221772c" Nov 5 23:46:07.834190 kubelet[3516]: E1105 23:46:07.833285 3516 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-877d4847d-6rhkp" podUID="a7fd47be-5341-4035-917c-acf91009ebea" Nov 5 23:46:09.733381 systemd[1]: Started sshd@24-172.31.26.188:22-147.75.109.163:59468.service - OpenSSH per-connection server daemon (147.75.109.163:59468). Nov 5 23:46:09.951902 sshd[5884]: Accepted publickey for core from 147.75.109.163 port 59468 ssh2: RSA SHA256:RMubiTTBDsRj6wMnaDgb94uFku34NWf7aFjJruQworw Nov 5 23:46:09.955047 sshd-session[5884]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 23:46:09.966668 systemd-logind[1881]: New session 25 of user core. Nov 5 23:46:09.971953 systemd[1]: Started session-25.scope - Session 25 of User core. Nov 5 23:46:10.267611 sshd[5887]: Connection closed by 147.75.109.163 port 59468 Nov 5 23:46:10.270946 sshd-session[5884]: pam_unix(sshd:session): session closed for user core Nov 5 23:46:10.281485 systemd-logind[1881]: Session 25 logged out. Waiting for processes to exit. Nov 5 23:46:10.281782 systemd[1]: sshd@24-172.31.26.188:22-147.75.109.163:59468.service: Deactivated successfully. Nov 5 23:46:10.287215 systemd[1]: session-25.scope: Deactivated successfully. Nov 5 23:46:10.294015 systemd-logind[1881]: Removed session 25. 
Nov 5 23:46:11.834308 kubelet[3516]: E1105 23:46:11.834226 3516 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-v2m4t" podUID="03f18444-9872-4d73-bb60-c66c73cdfaff" Nov 5 23:46:13.838216 kubelet[3516]: E1105 23:46:13.838139 3516 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-7km8q" podUID="89a32bf2-ec2a-4f35-b294-b2467c662fb4" Nov 5 23:46:14.838843 kubelet[3516]: E1105 23:46:14.838733 3516 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-759f658d45-zzvbq" podUID="76aeec5c-7e05-490c-a0a0-b95d9945b382" Nov 5 23:46:14.840378 kubelet[3516]: E1105 23:46:14.838964 3516 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-84c996d876-qnn6p" podUID="20efcd41-054c-4821-9d54-ac97d532abc5" Nov 5 23:46:15.309155 systemd[1]: Started sshd@25-172.31.26.188:22-147.75.109.163:60236.service - OpenSSH per-connection server daemon (147.75.109.163:60236). 
Nov 5 23:46:15.516433 sshd[5902]: Accepted publickey for core from 147.75.109.163 port 60236 ssh2: RSA SHA256:RMubiTTBDsRj6wMnaDgb94uFku34NWf7aFjJruQworw Nov 5 23:46:15.519716 sshd-session[5902]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 23:46:15.532872 systemd-logind[1881]: New session 26 of user core. Nov 5 23:46:15.538914 systemd[1]: Started session-26.scope - Session 26 of User core. Nov 5 23:46:15.846142 sshd[5905]: Connection closed by 147.75.109.163 port 60236 Nov 5 23:46:15.846878 sshd-session[5902]: pam_unix(sshd:session): session closed for user core Nov 5 23:46:15.856970 systemd[1]: sshd@25-172.31.26.188:22-147.75.109.163:60236.service: Deactivated successfully. Nov 5 23:46:15.857758 systemd-logind[1881]: Session 26 logged out. Waiting for processes to exit. Nov 5 23:46:15.868398 systemd[1]: session-26.scope: Deactivated successfully. Nov 5 23:46:15.873236 systemd-logind[1881]: Removed session 26. Nov 5 23:46:17.833725 kubelet[3516]: E1105 23:46:17.833168 3516 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-759f658d45-5cwjh" podUID="07a15442-dee2-4408-9286-ad45a221772c"
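Because the same handful of images cycles through ErrImagePull and ImagePullBackOff for the remainder of the capture, a small parser over an exported copy of this journal makes the blast radius easier to see. A minimal sketch, assuming the log has been saved to a plain-text file; the filename and the regular expressions are assumptions that only target the escaped image references and pod= fields visible in the kubelet entries above:

```python
# Hedged sketch: tally which images fail to pull and which pods are affected,
# based on the kubelet "Error syncing pod" entries in an exported journal file.
import re
from collections import Counter, defaultdict

LOG_FILE = "node.journal.txt"  # assumed export of the journal shown above

# The image reference appears backslash-escaped inside the kubelet error strings.
IMAGE_RE = re.compile(r'Back-off pulling image \\+"([^"\\]+)\\+"')
POD_RE = re.compile(r'pod="([^"]+)"')

def summarize(path: str) -> None:
    failures = Counter()
    pods_per_image = defaultdict(set)
    with open(path, encoding="utf-8", errors="replace") as fh:
        for line in fh:
            if "Error syncing pod" not in line:
                continue
            pods = POD_RE.findall(line)
            for image in IMAGE_RE.findall(line):
                failures[image] += 1
                for pod in pods:
                    pods_per_image[image].add(pod)
    for image, count in failures.most_common():
        pods = ", ".join(sorted(pods_per_image[image]))
        print(f"{count:4d}  {image}  ({pods})")

if __name__ == "__main__":
    summarize(LOG_FILE)
```

This is only a rough tally: when several entries share one exported line, pods and images on that line are grouped together, which is close enough to spot that every failing reference here is a ghcr.io/flatcar/calico image at v3.30.4.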