May 27 02:46:54.092526 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083]
May 27 02:46:54.092570 kernel: Linux version 6.12.30-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT Tue May 27 01:20:04 -00 2025
May 27 02:46:54.092594 kernel: KASLR disabled due to lack of seed
May 27 02:46:54.092610 kernel: efi: EFI v2.7 by EDK II
May 27 02:46:54.092625 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7a733a98 MEMRESERVE=0x78551598
May 27 02:46:54.092639 kernel: secureboot: Secure boot disabled
May 27 02:46:54.092656 kernel: ACPI: Early table checksum verification disabled
May 27 02:46:54.092671 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON)
May 27 02:46:54.092687 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013)
May 27 02:46:54.092701 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001)
May 27 02:46:54.092721 kernel: ACPI: DSDT 0x0000000078640000 00159D (v02 AMAZON AMZNDSDT 00000001 INTL 20160527)
May 27 02:46:54.092736 kernel: ACPI: FACS 0x0000000078630000 000040
May 27 02:46:54.092751 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001)
May 27 02:46:54.092766 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001)
May 27 02:46:54.092784 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001)
May 27 02:46:54.092800 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001)
May 27 02:46:54.092820 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
May 27 02:46:54.092835 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001)
May 27 02:46:54.092851 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001)
May 27 02:46:54.092867 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200
May 27 02:46:54.092883 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200')
May 27 02:46:54.092898 kernel: printk: legacy bootconsole [uart0] enabled
May 27 02:46:54.092914 kernel: ACPI: Use ACPI SPCR as default console: Yes
May 27 02:46:54.092929 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff]
May 27 02:46:54.092945 kernel: NODE_DATA(0) allocated [mem 0x4b584cdc0-0x4b5853fff]
May 27 02:46:54.092982 kernel: Zone ranges:
May 27 02:46:54.093009 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
May 27 02:46:54.093025 kernel: DMA32 empty
May 27 02:46:54.093041 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff]
May 27 02:46:54.093057 kernel: Device empty
May 27 02:46:54.093073 kernel: Movable zone start for each node
May 27 02:46:54.093088 kernel: Early memory node ranges
May 27 02:46:54.093104 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff]
May 27 02:46:54.093120 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff]
May 27 02:46:54.093136 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff]
May 27 02:46:54.093151 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff]
May 27 02:46:54.093167 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff]
May 27 02:46:54.093182 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff]
May 27 02:46:54.093202 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff]
May 27 02:46:54.093218 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff]
May 27 02:46:54.093241 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff]
May 27 02:46:54.093258 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges
May 27 02:46:54.093275 kernel: psci: probing for conduit method from ACPI.
May 27 02:46:54.093295 kernel: psci: PSCIv1.0 detected in firmware.
May 27 02:46:54.093311 kernel: psci: Using standard PSCI v0.2 function IDs
May 27 02:46:54.093328 kernel: psci: Trusted OS migration not required
May 27 02:46:54.093344 kernel: psci: SMC Calling Convention v1.1
May 27 02:46:54.093360 kernel: percpu: Embedded 33 pages/cpu s98136 r8192 d28840 u135168
May 27 02:46:54.093377 kernel: pcpu-alloc: s98136 r8192 d28840 u135168 alloc=33*4096
May 27 02:46:54.093394 kernel: pcpu-alloc: [0] 0 [0] 1
May 27 02:46:54.093410 kernel: Detected PIPT I-cache on CPU0
May 27 02:46:54.093426 kernel: CPU features: detected: GIC system register CPU interface
May 27 02:46:54.093443 kernel: CPU features: detected: Spectre-v2
May 27 02:46:54.093459 kernel: CPU features: detected: Spectre-v3a
May 27 02:46:54.093475 kernel: CPU features: detected: Spectre-BHB
May 27 02:46:54.093495 kernel: CPU features: detected: ARM erratum 1742098
May 27 02:46:54.093512 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923
May 27 02:46:54.093528 kernel: alternatives: applying boot alternatives
May 27 02:46:54.093547 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=4c3f98aae7a61b3dcbab6391ba922461adab29dbcb79fd6e18169f93c5a4ab5a
May 27 02:46:54.093565 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 27 02:46:54.093581 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
May 27 02:46:54.093598 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 27 02:46:54.093614 kernel: Fallback order for Node 0: 0
May 27 02:46:54.093631 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1007616
May 27 02:46:54.093647 kernel: Policy zone: Normal
May 27 02:46:54.093667 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 27 02:46:54.093684 kernel: software IO TLB: area num 2.
May 27 02:46:54.093700 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB)
May 27 02:46:54.093717 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
May 27 02:46:54.093734 kernel: rcu: Preemptible hierarchical RCU implementation.
May 27 02:46:54.093751 kernel: rcu: RCU event tracing is enabled.
May 27 02:46:54.093768 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
May 27 02:46:54.093785 kernel: Trampoline variant of Tasks RCU enabled.
May 27 02:46:54.093802 kernel: Tracing variant of Tasks RCU enabled.
May 27 02:46:54.093818 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 27 02:46:54.093835 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
May 27 02:46:54.093852 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
May 27 02:46:54.093872 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
May 27 02:46:54.093889 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
May 27 02:46:54.093905 kernel: GICv3: 96 SPIs implemented
May 27 02:46:54.093921 kernel: GICv3: 0 Extended SPIs implemented
May 27 02:46:54.093938 kernel: Root IRQ handler: gic_handle_irq
May 27 02:46:54.093954 kernel: GICv3: GICv3 features: 16 PPIs
May 27 02:46:54.093999 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0
May 27 02:46:54.094017 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000
May 27 02:46:54.094034 kernel: ITS [mem 0x10080000-0x1009ffff]
May 27 02:46:54.094051 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000c0000 (indirect, esz 8, psz 64K, shr 1)
May 27 02:46:54.094068 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000d0000 (flat, esz 8, psz 64K, shr 1)
May 27 02:46:54.094091 kernel: GICv3: using LPI property table @0x00000004000e0000
May 27 02:46:54.094108 kernel: ITS: Using hypervisor restricted LPI range [128]
May 27 02:46:54.094125 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000f0000
May 27 02:46:54.094142 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
May 27 02:46:54.094158 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt).
May 27 02:46:54.094175 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns
May 27 02:46:54.094192 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns
May 27 02:46:54.094209 kernel: Console: colour dummy device 80x25
May 27 02:46:54.094226 kernel: printk: legacy console [tty1] enabled
May 27 02:46:54.094242 kernel: ACPI: Core revision 20240827
May 27 02:46:54.094260 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333)
May 27 02:46:54.094281 kernel: pid_max: default: 32768 minimum: 301
May 27 02:46:54.094298 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
May 27 02:46:54.094314 kernel: landlock: Up and running.
May 27 02:46:54.094331 kernel: SELinux: Initializing.
May 27 02:46:54.094348 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 27 02:46:54.094364 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 27 02:46:54.094381 kernel: rcu: Hierarchical SRCU implementation.
May 27 02:46:54.094398 kernel: rcu: Max phase no-delay instances is 400.
May 27 02:46:54.094415 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
May 27 02:46:54.094436 kernel: Remapping and enabling EFI services.
May 27 02:46:54.094452 kernel: smp: Bringing up secondary CPUs ...
May 27 02:46:54.094468 kernel: Detected PIPT I-cache on CPU1
May 27 02:46:54.094485 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000
May 27 02:46:54.094502 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000400100000
May 27 02:46:54.094519 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083]
May 27 02:46:54.094535 kernel: smp: Brought up 1 node, 2 CPUs
May 27 02:46:54.094552 kernel: SMP: Total of 2 processors activated.
May 27 02:46:54.094568 kernel: CPU: All CPU(s) started at EL1
May 27 02:46:54.094589 kernel: CPU features: detected: 32-bit EL0 Support
May 27 02:46:54.094616 kernel: CPU features: detected: 32-bit EL1 Support
May 27 02:46:54.094634 kernel: CPU features: detected: CRC32 instructions
May 27 02:46:54.094655 kernel: alternatives: applying system-wide alternatives
May 27 02:46:54.094691 kernel: Memory: 3813536K/4030464K available (11072K kernel code, 2276K rwdata, 8936K rodata, 39424K init, 1034K bss, 212156K reserved, 0K cma-reserved)
May 27 02:46:54.094711 kernel: devtmpfs: initialized
May 27 02:46:54.094729 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 27 02:46:54.094747 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
May 27 02:46:54.094770 kernel: 17024 pages in range for non-PLT usage
May 27 02:46:54.094788 kernel: 508544 pages in range for PLT usage
May 27 02:46:54.094805 kernel: pinctrl core: initialized pinctrl subsystem
May 27 02:46:54.094822 kernel: SMBIOS 3.0.0 present.
May 27 02:46:54.094840 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018
May 27 02:46:54.094857 kernel: DMI: Memory slots populated: 0/0
May 27 02:46:54.094875 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 27 02:46:54.094892 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
May 27 02:46:54.094910 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
May 27 02:46:54.094932 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
May 27 02:46:54.094950 kernel: audit: initializing netlink subsys (disabled)
May 27 02:46:54.095029 kernel: audit: type=2000 audit(0.227:1): state=initialized audit_enabled=0 res=1
May 27 02:46:54.095058 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 27 02:46:54.095077 kernel: cpuidle: using governor menu
May 27 02:46:54.095095 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
May 27 02:46:54.095113 kernel: ASID allocator initialised with 65536 entries
May 27 02:46:54.095130 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 27 02:46:54.095155 kernel: Serial: AMBA PL011 UART driver
May 27 02:46:54.095173 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
May 27 02:46:54.095191 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
May 27 02:46:54.095208 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
May 27 02:46:54.095225 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
May 27 02:46:54.095243 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
May 27 02:46:54.095261 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
May 27 02:46:54.095278 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
May 27 02:46:54.095295 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
May 27 02:46:54.095317 kernel: ACPI: Added _OSI(Module Device)
May 27 02:46:54.095334 kernel: ACPI: Added _OSI(Processor Device)
May 27 02:46:54.095353 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 27 02:46:54.095370 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 27 02:46:54.095388 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
May 27 02:46:54.095405 kernel: ACPI: Interpreter enabled
May 27 02:46:54.095436 kernel: ACPI: Using GIC for interrupt routing
May 27 02:46:54.095457 kernel: ACPI: MCFG table detected, 1 entries
May 27 02:46:54.095474 kernel: ACPI: CPU0 has been hot-added
May 27 02:46:54.095492 kernel: ACPI: CPU1 has been hot-added
May 27 02:46:54.095515 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-0f])
May 27 02:46:54.095806 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
May 27 02:46:54.096532 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
May 27 02:46:54.096747 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
May 27 02:46:54.096937 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x20ffffff] reserved by PNP0C02:00
May 27 02:46:54.097172 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x20ffffff] for [bus 00-0f]
May 27 02:46:54.097198 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window]
May 27 02:46:54.097224 kernel: acpiphp: Slot [1] registered
May 27 02:46:54.097242 kernel: acpiphp: Slot [2] registered
May 27 02:46:54.097260 kernel: acpiphp: Slot [3] registered
May 27 02:46:54.097277 kernel: acpiphp: Slot [4] registered
May 27 02:46:54.097294 kernel: acpiphp: Slot [5] registered
May 27 02:46:54.097312 kernel: acpiphp: Slot [6] registered
May 27 02:46:54.097329 kernel: acpiphp: Slot [7] registered
May 27 02:46:54.097347 kernel: acpiphp: Slot [8] registered
May 27 02:46:54.097364 kernel: acpiphp: Slot [9] registered
May 27 02:46:54.097385 kernel: acpiphp: Slot [10] registered
May 27 02:46:54.097403 kernel: acpiphp: Slot [11] registered
May 27 02:46:54.097420 kernel: acpiphp: Slot [12] registered
May 27 02:46:54.097438 kernel: acpiphp: Slot [13] registered
May 27 02:46:54.097455 kernel: acpiphp: Slot [14] registered
May 27 02:46:54.097472 kernel: acpiphp: Slot [15] registered
May 27 02:46:54.097490 kernel: acpiphp: Slot [16] registered
May 27 02:46:54.097508 kernel: acpiphp: Slot [17] registered
May 27 02:46:54.097525 kernel: acpiphp: Slot [18] registered
May 27 02:46:54.097542 kernel: acpiphp: Slot [19] registered
May 27 02:46:54.097563 kernel: acpiphp: Slot [20] registered
May 27 02:46:54.097581 kernel: acpiphp: Slot [21] registered
May 27 02:46:54.097598 kernel: acpiphp: Slot [22] registered
May 27 02:46:54.097615 kernel: acpiphp: Slot [23] registered
May 27 02:46:54.097632 kernel: acpiphp: Slot [24] registered
May 27 02:46:54.097650 kernel: acpiphp: Slot [25] registered
May 27 02:46:54.097667 kernel: acpiphp: Slot [26] registered
May 27 02:46:54.097684 kernel: acpiphp: Slot [27] registered
May 27 02:46:54.097702 kernel: acpiphp: Slot [28] registered
May 27 02:46:54.097723 kernel: acpiphp: Slot [29] registered
May 27 02:46:54.097740 kernel: acpiphp: Slot [30] registered
May 27 02:46:54.097757 kernel: acpiphp: Slot [31] registered
May 27 02:46:54.097775 kernel: PCI host bridge to bus 0000:00
May 27 02:46:54.097984 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window]
May 27 02:46:54.098174 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
May 27 02:46:54.098407 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window]
May 27 02:46:54.098587 kernel: pci_bus 0000:00: root bus resource [bus 00-0f]
May 27 02:46:54.098846 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000 conventional PCI endpoint
May 27 02:46:54.099090 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003 conventional PCI endpoint
May 27 02:46:54.099299 kernel: pci 0000:00:01.0: BAR 0 [mem 0x80118000-0x80118fff]
May 27 02:46:54.099507 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 PCIe Root Complex Integrated Endpoint
May 27 02:46:54.099703 kernel: pci 0000:00:04.0: BAR 0 [mem 0x80114000-0x80117fff]
May 27 02:46:54.099896 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold
May 27 02:46:54.100154 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 PCIe Root Complex Integrated Endpoint
May 27 02:46:54.100417 kernel: pci 0000:00:05.0: BAR 0 [mem 0x80110000-0x80113fff]
May 27 02:46:54.100615 kernel: pci 0000:00:05.0: BAR 2 [mem 0x80000000-0x800fffff pref]
May 27 02:46:54.100808 kernel: pci 0000:00:05.0: BAR 4 [mem 0x80100000-0x8010ffff]
May 27 02:46:54.101025 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold
May 27 02:46:54.101224 kernel: pci 0000:00:05.0: BAR 2 [mem 0x80000000-0x800fffff pref]: assigned
May 27 02:46:54.101426 kernel: pci 0000:00:05.0: BAR 4 [mem 0x80100000-0x8010ffff]: assigned
May 27 02:46:54.102176 kernel: pci 0000:00:04.0: BAR 0 [mem 0x80110000-0x80113fff]: assigned
May 27 02:46:54.104238 kernel: pci 0000:00:05.0: BAR 0 [mem 0x80114000-0x80117fff]: assigned
May 27 02:46:54.104472 kernel: pci 0000:00:01.0: BAR 0 [mem 0x80118000-0x80118fff]: assigned
May 27 02:46:54.104658 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window]
May 27 02:46:54.104839 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
May 27 02:46:54.105069 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window]
May 27 02:46:54.105098 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
May 27 02:46:54.105127 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
May 27 02:46:54.105146 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
May 27 02:46:54.105164 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
May 27 02:46:54.105182 kernel: iommu: Default domain type: Translated
May 27 02:46:54.105200 kernel: iommu: DMA domain TLB invalidation policy: strict mode
May 27 02:46:54.105218 kernel: efivars: Registered efivars operations
May 27 02:46:54.105236 kernel: vgaarb: loaded
May 27 02:46:54.105254 kernel: clocksource: Switched to clocksource arch_sys_counter
May 27 02:46:54.105271 kernel: VFS: Disk quotas dquot_6.6.0
May 27 02:46:54.105293 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 27 02:46:54.105311 kernel: pnp: PnP ACPI init
May 27 02:46:54.105567 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved
May 27 02:46:54.105600 kernel: pnp: PnP ACPI: found 1 devices
May 27 02:46:54.105620 kernel: NET: Registered PF_INET protocol family
May 27 02:46:54.105638 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
May 27 02:46:54.105658 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
May 27 02:46:54.105677 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 27 02:46:54.105704 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
May 27 02:46:54.105724 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
May 27 02:46:54.105743 kernel: TCP: Hash tables configured (established 32768 bind 32768)
May 27 02:46:54.105762 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 27 02:46:54.105780 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 27 02:46:54.105800 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 27 02:46:54.105819 kernel: PCI: CLS 0 bytes, default 64
May 27 02:46:54.105837 kernel: kvm [1]: HYP mode not available
May 27 02:46:54.105857 kernel: Initialise system trusted keyrings
May 27 02:46:54.105881 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
May 27 02:46:54.105901 kernel: Key type asymmetric registered
May 27 02:46:54.105920 kernel: Asymmetric key parser 'x509' registered
May 27 02:46:54.105938 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
May 27 02:46:54.105956 kernel: io scheduler mq-deadline registered
May 27 02:46:54.106014 kernel: io scheduler kyber registered
May 27 02:46:54.106034 kernel: io scheduler bfq registered
May 27 02:46:54.106320 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered
May 27 02:46:54.106351 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
May 27 02:46:54.106379 kernel: ACPI: button: Power Button [PWRB]
May 27 02:46:54.106398 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1
May 27 02:46:54.106417 kernel: ACPI: button: Sleep Button [SLPB]
May 27 02:46:54.106437 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 27 02:46:54.106457 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
May 27 02:46:54.106685 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012)
May 27 02:46:54.106715 kernel: printk: legacy console [ttyS0] disabled
May 27 02:46:54.106733 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A
May 27 02:46:54.106757 kernel: printk: legacy console [ttyS0] enabled
May 27 02:46:54.106775 kernel: printk: legacy bootconsole [uart0] disabled
May 27 02:46:54.106793 kernel: thunder_xcv, ver 1.0
May 27 02:46:54.106811 kernel: thunder_bgx, ver 1.0
May 27 02:46:54.106828 kernel: nicpf, ver 1.0
May 27 02:46:54.106846 kernel: nicvf, ver 1.0
May 27 02:46:54.109629 kernel: rtc-efi rtc-efi.0: registered as rtc0
May 27 02:46:54.109847 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-05-27T02:46:53 UTC (1748314013)
May 27 02:46:54.109872 kernel: hid: raw HID events driver (C) Jiri Kosina
May 27 02:46:54.109900 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 (0,80000003) counters available
May 27 02:46:54.109918 kernel: NET: Registered PF_INET6 protocol family
May 27 02:46:54.109936 kernel: watchdog: NMI not fully supported
May 27 02:46:54.109954 kernel: watchdog: Hard watchdog permanently disabled
May 27 02:46:54.110055 kernel: Segment Routing with IPv6
May 27 02:46:54.110074 kernel: In-situ OAM (IOAM) with IPv6
May 27 02:46:54.110093 kernel: NET: Registered PF_PACKET protocol family
May 27 02:46:54.110111 kernel: Key type dns_resolver registered
May 27 02:46:54.110129 kernel: registered taskstats version 1
May 27 02:46:54.110153 kernel: Loading compiled-in X.509 certificates
May 27 02:46:54.110171 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.30-flatcar: 6bbf5412ef1f8a32378a640b6d048f74e6d74df0'
May 27 02:46:54.110189 kernel: Demotion targets for Node 0: null
May 27 02:46:54.110206 kernel: Key type .fscrypt registered
May 27 02:46:54.110224 kernel: Key type fscrypt-provisioning registered
May 27 02:46:54.110242 kernel: ima: No TPM chip found, activating TPM-bypass!
May 27 02:46:54.110260 kernel: ima: Allocated hash algorithm: sha1
May 27 02:46:54.110277 kernel: ima: No architecture policies found
May 27 02:46:54.110295 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
May 27 02:46:54.110318 kernel: clk: Disabling unused clocks
May 27 02:46:54.110336 kernel: PM: genpd: Disabling unused power domains
May 27 02:46:54.110354 kernel: Warning: unable to open an initial console.
May 27 02:46:54.110372 kernel: Freeing unused kernel memory: 39424K
May 27 02:46:54.110390 kernel: Run /init as init process
May 27 02:46:54.110408 kernel: with arguments:
May 27 02:46:54.110425 kernel: /init
May 27 02:46:54.110442 kernel: with environment:
May 27 02:46:54.110459 kernel: HOME=/
May 27 02:46:54.110480 kernel: TERM=linux
May 27 02:46:54.110498 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
May 27 02:46:54.110517 systemd[1]: Successfully made /usr/ read-only.
May 27 02:46:54.110542 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
May 27 02:46:54.110562 systemd[1]: Detected virtualization amazon.
May 27 02:46:54.110581 systemd[1]: Detected architecture arm64.
May 27 02:46:54.110600 systemd[1]: Running in initrd.
May 27 02:46:54.110623 systemd[1]: No hostname configured, using default hostname.
May 27 02:46:54.110644 systemd[1]: Hostname set to .
May 27 02:46:54.110663 systemd[1]: Initializing machine ID from VM UUID.
May 27 02:46:54.110705 systemd[1]: Queued start job for default target initrd.target.
May 27 02:46:54.110725 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 27 02:46:54.110744 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 27 02:46:54.110765 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
May 27 02:46:54.110785 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 27 02:46:54.110810 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
May 27 02:46:54.110831 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
May 27 02:46:54.110853 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
May 27 02:46:54.110872 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
May 27 02:46:54.110892 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 27 02:46:54.110911 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 27 02:46:54.110930 systemd[1]: Reached target paths.target - Path Units.
May 27 02:46:54.110954 systemd[1]: Reached target slices.target - Slice Units.
May 27 02:46:54.110995 systemd[1]: Reached target swap.target - Swaps.
May 27 02:46:54.111016 systemd[1]: Reached target timers.target - Timer Units.
May 27 02:46:54.111035 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
May 27 02:46:54.111055 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 27 02:46:54.111075 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
May 27 02:46:54.111094 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
May 27 02:46:54.111113 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 27 02:46:54.111132 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 27 02:46:54.111160 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 27 02:46:54.111180 systemd[1]: Reached target sockets.target - Socket Units.
May 27 02:46:54.111199 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
May 27 02:46:54.111218 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 27 02:46:54.111237 systemd[1]: Finished network-cleanup.service - Network Cleanup.
May 27 02:46:54.111257 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
May 27 02:46:54.111277 systemd[1]: Starting systemd-fsck-usr.service...
May 27 02:46:54.111296 systemd[1]: Starting systemd-journald.service - Journal Service...
May 27 02:46:54.111320 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 27 02:46:54.111339 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 27 02:46:54.111375 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
May 27 02:46:54.111411 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 27 02:46:54.111439 systemd[1]: Finished systemd-fsck-usr.service.
May 27 02:46:54.111463 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 27 02:46:54.111530 systemd-journald[259]: Collecting audit messages is disabled.
May 27 02:46:54.111571 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
May 27 02:46:54.111595 kernel: Bridge firewalling registered
May 27 02:46:54.111619 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 27 02:46:54.111657 systemd-journald[259]: Journal started
May 27 02:46:54.111700 systemd-journald[259]: Runtime Journal (/run/log/journal/ec28ddf0c94c2767ceffa89babf960b2) is 8M, max 75.3M, 67.3M free.
May 27 02:46:54.054264 systemd-modules-load[261]: Inserted module 'overlay'
May 27 02:46:54.119487 systemd[1]: Started systemd-journald.service - Journal Service.
May 27 02:46:54.101867 systemd-modules-load[261]: Inserted module 'br_netfilter'
May 27 02:46:54.119767 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 27 02:46:54.127010 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 27 02:46:54.137555 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 27 02:46:54.145252 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 27 02:46:54.154420 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 27 02:46:54.172186 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 27 02:46:54.197051 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 27 02:46:54.202066 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 27 02:46:54.217686 systemd-tmpfiles[282]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
May 27 02:46:54.225022 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 27 02:46:54.230017 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
May 27 02:46:54.233956 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 27 02:46:54.248722 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 27 02:46:54.276275 dracut-cmdline[297]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=4c3f98aae7a61b3dcbab6391ba922461adab29dbcb79fd6e18169f93c5a4ab5a
May 27 02:46:54.349666 systemd-resolved[300]: Positive Trust Anchors:
May 27 02:46:54.349700 systemd-resolved[300]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 27 02:46:54.349762 systemd-resolved[300]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 27 02:46:54.435997 kernel: SCSI subsystem initialized
May 27 02:46:54.443999 kernel: Loading iSCSI transport class v2.0-870.
May 27 02:46:54.456000 kernel: iscsi: registered transport (tcp)
May 27 02:46:54.478003 kernel: iscsi: registered transport (qla4xxx)
May 27 02:46:54.478075 kernel: QLogic iSCSI HBA Driver
May 27 02:46:54.511144 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
May 27 02:46:54.544551 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
May 27 02:46:54.555387 systemd[1]: Reached target network-pre.target - Preparation for Network.
May 27 02:46:54.623005 kernel: random: crng init done
May 27 02:46:54.623330 systemd-resolved[300]: Defaulting to hostname 'linux'.
May 27 02:46:54.627062 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 27 02:46:54.632673 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 27 02:46:54.654219 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
May 27 02:46:54.661366 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
May 27 02:46:54.752019 kernel: raid6: neonx8 gen() 6495 MB/s May 27 02:46:54.768013 kernel: raid6: neonx4 gen() 6530 MB/s May 27 02:46:54.785019 kernel: raid6: neonx2 gen() 5421 MB/s May 27 02:46:54.802013 kernel: raid6: neonx1 gen() 3940 MB/s May 27 02:46:54.819008 kernel: raid6: int64x8 gen() 3660 MB/s May 27 02:46:54.836028 kernel: raid6: int64x4 gen() 3710 MB/s May 27 02:46:54.853030 kernel: raid6: int64x2 gen() 3563 MB/s May 27 02:46:54.870975 kernel: raid6: int64x1 gen() 2733 MB/s May 27 02:46:54.871045 kernel: raid6: using algorithm neonx4 gen() 6530 MB/s May 27 02:46:54.889014 kernel: raid6: .... xor() 4612 MB/s, rmw enabled May 27 02:46:54.889089 kernel: raid6: using neon recovery algorithm May 27 02:46:54.897576 kernel: xor: measuring software checksum speed May 27 02:46:54.897667 kernel: 8regs : 12998 MB/sec May 27 02:46:54.898700 kernel: 32regs : 13041 MB/sec May 27 02:46:54.900940 kernel: arm64_neon : 8623 MB/sec May 27 02:46:54.901033 kernel: xor: using function: 32regs (13041 MB/sec) May 27 02:46:54.996024 kernel: Btrfs loaded, zoned=no, fsverity=no May 27 02:46:55.008564 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. May 27 02:46:55.015254 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 27 02:46:55.073389 systemd-udevd[508]: Using default interface naming scheme 'v255'. May 27 02:46:55.085675 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 27 02:46:55.094791 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... May 27 02:46:55.136485 dracut-pre-trigger[513]: rd.md=0: removing MD RAID activation May 27 02:46:55.184838 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. May 27 02:46:55.191566 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 27 02:46:55.325756 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. 
May 27 02:46:55.335216 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... May 27 02:46:55.501004 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 May 27 02:46:55.501083 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 May 27 02:46:55.501119 kernel: nvme nvme0: pci function 0000:00:04.0 May 27 02:46:55.505444 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012) May 27 02:46:55.512007 kernel: nvme nvme0: 2/0/0 default/read/poll queues May 27 02:46:55.515678 kernel: ena 0000:00:05.0: ENA device version: 0.10 May 27 02:46:55.516199 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1 May 27 02:46:55.515312 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 27 02:46:55.515430 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 27 02:46:55.523315 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... May 27 02:46:55.530989 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80114000, mac addr 06:04:c7:73:1e:b1 May 27 02:46:55.531295 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. May 27 02:46:55.532783 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 27 02:46:55.537633 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. May 27 02:46:55.543109 kernel: GPT:9289727 != 16777215 May 27 02:46:55.543180 kernel: GPT:Alternate GPT header not at the end of the disk. May 27 02:46:55.543205 kernel: GPT:9289727 != 16777215 May 27 02:46:55.545798 kernel: GPT: Use GNU Parted to correct GPT errors. May 27 02:46:55.545881 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 May 27 02:46:55.549829 (udev-worker)[555]: Network interface NamePolicy= disabled on kernel command line. May 27 02:46:55.580590 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
May 27 02:46:55.601011 kernel: nvme nvme0: using unchecked data buffer May 27 02:46:55.766061 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM. May 27 02:46:55.809590 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. May 27 02:46:55.817004 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. May 27 02:46:55.858267 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT. May 27 02:46:55.881022 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A. May 27 02:46:55.883744 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A. May 27 02:46:55.901600 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. May 27 02:46:55.909482 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 27 02:46:55.912765 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 27 02:46:55.919006 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... May 27 02:46:55.926171 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... May 27 02:46:55.963000 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 May 27 02:46:55.963763 disk-uuid[686]: Primary Header is updated. May 27 02:46:55.963763 disk-uuid[686]: Secondary Entries is updated. May 27 02:46:55.963763 disk-uuid[686]: Secondary Header is updated. May 27 02:46:55.968394 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. May 27 02:46:56.992025 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 May 27 02:46:56.992632 disk-uuid[692]: The operation has completed successfully. May 27 02:46:57.187170 systemd[1]: disk-uuid.service: Deactivated successfully. 
May 27 02:46:57.189102 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. May 27 02:46:57.255324 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... May 27 02:46:57.293660 sh[954]: Success May 27 02:46:57.322009 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. May 27 02:46:57.322087 kernel: device-mapper: uevent: version 1.0.3 May 27 02:46:57.325009 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev May 27 02:46:57.338250 kernel: device-mapper: verity: sha256 using shash "sha256-ce" May 27 02:46:57.475360 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. May 27 02:46:57.481591 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... May 27 02:46:57.493419 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. May 27 02:46:57.523335 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay' May 27 02:46:57.523414 kernel: BTRFS: device fsid 5c6341ea-4eb5-44b6-ac57-c4d29847e384 devid 1 transid 41 /dev/mapper/usr (254:0) scanned by mount (978) May 27 02:46:57.527635 kernel: BTRFS info (device dm-0): first mount of filesystem 5c6341ea-4eb5-44b6-ac57-c4d29847e384 May 27 02:46:57.527690 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm May 27 02:46:57.528806 kernel: BTRFS info (device dm-0): using free-space-tree May 27 02:46:57.660665 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. May 27 02:46:57.667034 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. May 27 02:46:57.671677 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. May 27 02:46:57.676797 systemd[1]: Starting ignition-setup.service - Ignition (setup)... 
May 27 02:46:57.682220 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... May 27 02:46:57.747005 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/nvme0n1p6 (259:5) scanned by mount (1013) May 27 02:46:57.751151 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem eabe2c18-04ac-4289-8962-26387aada3f9 May 27 02:46:57.751224 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm May 27 02:46:57.752671 kernel: BTRFS info (device nvme0n1p6): using free-space-tree May 27 02:46:57.768008 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem eabe2c18-04ac-4289-8962-26387aada3f9 May 27 02:46:57.771027 systemd[1]: Finished ignition-setup.service - Ignition (setup). May 27 02:46:57.777790 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... May 27 02:46:57.867939 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 27 02:46:57.875869 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 27 02:46:57.948538 systemd-networkd[1147]: lo: Link UP May 27 02:46:57.949262 systemd-networkd[1147]: lo: Gained carrier May 27 02:46:57.952182 systemd-networkd[1147]: Enumeration completed May 27 02:46:57.952804 systemd-networkd[1147]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 27 02:46:57.952811 systemd-networkd[1147]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 27 02:46:57.954062 systemd[1]: Started systemd-networkd.service - Network Configuration. May 27 02:46:57.961377 systemd-networkd[1147]: eth0: Link UP May 27 02:46:57.961384 systemd-networkd[1147]: eth0: Gained carrier May 27 02:46:57.961405 systemd-networkd[1147]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
May 27 02:46:57.974906 systemd[1]: Reached target network.target - Network.
May 27 02:46:58.001057 systemd-networkd[1147]: eth0: DHCPv4 address 172.31.27.90/20, gateway 172.31.16.1 acquired from 172.31.16.1
May 27 02:46:58.202692 ignition[1072]: Ignition 2.21.0
May 27 02:46:58.202718 ignition[1072]: Stage: fetch-offline
May 27 02:46:58.203851 ignition[1072]: no configs at "/usr/lib/ignition/base.d"
May 27 02:46:58.203875 ignition[1072]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
May 27 02:46:58.204654 ignition[1072]: Ignition finished successfully
May 27 02:46:58.215273 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
May 27 02:46:58.221620 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
May 27 02:46:58.276690 ignition[1159]: Ignition 2.21.0
May 27 02:46:58.276721 ignition[1159]: Stage: fetch
May 27 02:46:58.277251 ignition[1159]: no configs at "/usr/lib/ignition/base.d"
May 27 02:46:58.277276 ignition[1159]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
May 27 02:46:58.277438 ignition[1159]: PUT http://169.254.169.254/latest/api/token: attempt #1
May 27 02:46:58.290332 ignition[1159]: PUT result: OK
May 27 02:46:58.294666 ignition[1159]: parsed url from cmdline: ""
May 27 02:46:58.294691 ignition[1159]: no config URL provided
May 27 02:46:58.294706 ignition[1159]: reading system config file "/usr/lib/ignition/user.ign"
May 27 02:46:58.294732 ignition[1159]: no config at "/usr/lib/ignition/user.ign"
May 27 02:46:58.294764 ignition[1159]: PUT http://169.254.169.254/latest/api/token: attempt #1
May 27 02:46:58.298815 ignition[1159]: PUT result: OK
May 27 02:46:58.298928 ignition[1159]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
May 27 02:46:58.301349 ignition[1159]: GET result: OK
May 27 02:46:58.301515 ignition[1159]: parsing config with SHA512: 2de43ef474fae4f6df46344067c2e1e69d22ecf6f8c9c35ec7fa1702f0b755176e8b198d52d2f33fc7f954fefcb651b1b1bcc45cfcdfd851bcf598b7bb9a3a34
May 27 02:46:58.317194 unknown[1159]: fetched base config from "system"
May 27 02:46:58.317215 unknown[1159]: fetched base config from "system"
May 27 02:46:58.317229 unknown[1159]: fetched user config from "aws"
May 27 02:46:58.317769 ignition[1159]: fetch: fetch complete
May 27 02:46:58.317780 ignition[1159]: fetch: fetch passed
May 27 02:46:58.317856 ignition[1159]: Ignition finished successfully
May 27 02:46:58.325474 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
May 27 02:46:58.337514 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
May 27 02:46:58.378556 ignition[1166]: Ignition 2.21.0
May 27 02:46:58.379343 ignition[1166]: Stage: kargs
May 27 02:46:58.379885 ignition[1166]: no configs at "/usr/lib/ignition/base.d"
May 27 02:46:58.379917 ignition[1166]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
May 27 02:46:58.380118 ignition[1166]: PUT http://169.254.169.254/latest/api/token: attempt #1
May 27 02:46:58.382365 ignition[1166]: PUT result: OK
May 27 02:46:58.393821 ignition[1166]: kargs: kargs passed
May 27 02:46:58.395799 ignition[1166]: Ignition finished successfully
May 27 02:46:58.401055 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
May 27 02:46:58.406815 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
May 27 02:46:58.444328 ignition[1172]: Ignition 2.21.0
May 27 02:46:58.444357 ignition[1172]: Stage: disks
May 27 02:46:58.445887 ignition[1172]: no configs at "/usr/lib/ignition/base.d"
May 27 02:46:58.445913 ignition[1172]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
May 27 02:46:58.446395 ignition[1172]: PUT http://169.254.169.254/latest/api/token: attempt #1
May 27 02:46:58.451618 ignition[1172]: PUT result: OK
May 27 02:46:58.459855 ignition[1172]: disks: disks passed
May 27 02:46:58.460004 ignition[1172]: Ignition finished successfully
May 27 02:46:58.464463 systemd[1]: Finished ignition-disks.service - Ignition (disks).
May 27 02:46:58.469943 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. May 27 02:46:58.474637 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. May 27 02:46:58.482136 systemd[1]: Reached target local-fs.target - Local File Systems. May 27 02:46:58.485045 systemd[1]: Reached target sysinit.target - System Initialization. May 27 02:46:58.489799 systemd[1]: Reached target basic.target - Basic System. May 27 02:46:58.496049 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... May 27 02:46:58.544210 systemd-fsck[1181]: ROOT: clean, 15/553520 files, 52789/553472 blocks May 27 02:46:58.550116 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. May 27 02:46:58.557255 systemd[1]: Mounting sysroot.mount - /sysroot... May 27 02:46:58.711093 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 5656cec4-efbd-4a2d-be98-2263e6ae16bd r/w with ordered data mode. Quota mode: none. May 27 02:46:58.712298 systemd[1]: Mounted sysroot.mount - /sysroot. May 27 02:46:58.716191 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. May 27 02:46:58.722629 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 27 02:46:58.727682 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... May 27 02:46:58.736153 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. May 27 02:46:58.736270 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). May 27 02:46:58.736322 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. May 27 02:46:58.760883 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. May 27 02:46:58.767327 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
May 27 02:46:58.784010 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/nvme0n1p6 (259:5) scanned by mount (1200) May 27 02:46:58.788934 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem eabe2c18-04ac-4289-8962-26387aada3f9 May 27 02:46:58.789008 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm May 27 02:46:58.790188 kernel: BTRFS info (device nvme0n1p6): using free-space-tree May 27 02:46:58.810165 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. May 27 02:46:59.168349 initrd-setup-root[1225]: cut: /sysroot/etc/passwd: No such file or directory May 27 02:46:59.188079 initrd-setup-root[1232]: cut: /sysroot/etc/group: No such file or directory May 27 02:46:59.196223 initrd-setup-root[1239]: cut: /sysroot/etc/shadow: No such file or directory May 27 02:46:59.202509 systemd-networkd[1147]: eth0: Gained IPv6LL May 27 02:46:59.218466 initrd-setup-root[1246]: cut: /sysroot/etc/gshadow: No such file or directory May 27 02:46:59.487762 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. May 27 02:46:59.494328 systemd[1]: Starting ignition-mount.service - Ignition (mount)... May 27 02:46:59.498813 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... May 27 02:46:59.526712 systemd[1]: sysroot-oem.mount: Deactivated successfully. May 27 02:46:59.529867 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem eabe2c18-04ac-4289-8962-26387aada3f9 May 27 02:46:59.565684 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
May 27 02:46:59.574360 ignition[1314]: INFO : Ignition 2.21.0 May 27 02:46:59.574360 ignition[1314]: INFO : Stage: mount May 27 02:46:59.577562 ignition[1314]: INFO : no configs at "/usr/lib/ignition/base.d" May 27 02:46:59.577562 ignition[1314]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" May 27 02:46:59.577562 ignition[1314]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 May 27 02:46:59.584404 ignition[1314]: INFO : PUT result: OK May 27 02:46:59.591668 ignition[1314]: INFO : mount: mount passed May 27 02:46:59.591668 ignition[1314]: INFO : Ignition finished successfully May 27 02:46:59.595219 systemd[1]: Finished ignition-mount.service - Ignition (mount). May 27 02:46:59.613276 systemd[1]: Starting ignition-files.service - Ignition (files)... May 27 02:46:59.715398 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 27 02:46:59.756009 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/nvme0n1p6 (259:5) scanned by mount (1328) May 27 02:46:59.759762 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem eabe2c18-04ac-4289-8962-26387aada3f9 May 27 02:46:59.759802 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm May 27 02:46:59.759829 kernel: BTRFS info (device nvme0n1p6): using free-space-tree May 27 02:46:59.782822 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
May 27 02:46:59.821701 ignition[1345]: INFO : Ignition 2.21.0
May 27 02:46:59.821701 ignition[1345]: INFO : Stage: files
May 27 02:46:59.825111 ignition[1345]: INFO : no configs at "/usr/lib/ignition/base.d"
May 27 02:46:59.825111 ignition[1345]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
May 27 02:46:59.825111 ignition[1345]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
May 27 02:46:59.831957 ignition[1345]: INFO : PUT result: OK
May 27 02:46:59.835880 ignition[1345]: DEBUG : files: compiled without relabeling support, skipping
May 27 02:46:59.847690 ignition[1345]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
May 27 02:46:59.850232 ignition[1345]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
May 27 02:46:59.856202 ignition[1345]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
May 27 02:46:59.859210 ignition[1345]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
May 27 02:46:59.860209 unknown[1345]: wrote ssh authorized keys file for user: core
May 27 02:46:59.861771 ignition[1345]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
May 27 02:46:59.875088 ignition[1345]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
May 27 02:46:59.880317 ignition[1345]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1
May 27 02:46:59.963286 ignition[1345]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
May 27 02:47:00.097695 ignition[1345]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
May 27 02:47:00.102008 ignition[1345]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
May 27 02:47:00.105662 ignition[1345]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
May 27 02:47:00.109064 ignition[1345]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
May 27 02:47:00.112770 ignition[1345]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
May 27 02:47:00.112770 ignition[1345]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
May 27 02:47:00.119921 ignition[1345]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
May 27 02:47:00.123220 ignition[1345]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 27 02:47:00.123220 ignition[1345]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 27 02:47:00.134661 ignition[1345]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
May 27 02:47:00.134661 ignition[1345]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
May 27 02:47:00.134661 ignition[1345]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
May 27 02:47:00.148001 ignition[1345]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
May 27 02:47:00.148001 ignition[1345]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
May 27 02:47:00.148001 ignition[1345]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-arm64.raw: attempt #1
May 27 02:47:00.682863 ignition[1345]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
May 27 02:47:03.189112 ignition[1345]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
May 27 02:47:03.193351 ignition[1345]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
May 27 02:47:03.202093 ignition[1345]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 27 02:47:03.212087 ignition[1345]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 27 02:47:03.212087 ignition[1345]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
May 27 02:47:03.212087 ignition[1345]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
May 27 02:47:03.212087 ignition[1345]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
May 27 02:47:03.224184 ignition[1345]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
May 27 02:47:03.224184 ignition[1345]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
May 27 02:47:03.224184 ignition[1345]: INFO : files: files passed
May 27 02:47:03.224184 ignition[1345]: INFO : Ignition finished successfully
May 27 02:47:03.232419 systemd[1]: Finished ignition-files.service - Ignition (files).
May 27 02:47:03.239070 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
May 27 02:47:03.245339 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... May 27 02:47:03.264351 systemd[1]: ignition-quench.service: Deactivated successfully. May 27 02:47:03.267739 systemd[1]: Finished ignition-quench.service - Ignition (record completion). May 27 02:47:03.287694 initrd-setup-root-after-ignition[1375]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 27 02:47:03.287694 initrd-setup-root-after-ignition[1375]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory May 27 02:47:03.295200 initrd-setup-root-after-ignition[1379]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 27 02:47:03.299686 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. May 27 02:47:03.318219 systemd[1]: Reached target ignition-complete.target - Ignition Complete. May 27 02:47:03.323065 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... May 27 02:47:03.415596 systemd[1]: initrd-parse-etc.service: Deactivated successfully. May 27 02:47:03.417929 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. May 27 02:47:03.423318 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. May 27 02:47:03.440991 systemd[1]: Reached target initrd.target - Initrd Default Target. May 27 02:47:03.445239 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. May 27 02:47:03.447663 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... May 27 02:47:03.486866 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 27 02:47:03.493128 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... May 27 02:47:03.533560 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. 
May 27 02:47:03.539218 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. May 27 02:47:03.542260 systemd[1]: Stopped target timers.target - Timer Units. May 27 02:47:03.546070 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. May 27 02:47:03.546179 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 27 02:47:03.549499 systemd[1]: Stopped target initrd.target - Initrd Default Target. May 27 02:47:03.550509 systemd[1]: Stopped target basic.target - Basic System. May 27 02:47:03.550824 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. May 27 02:47:03.563154 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. May 27 02:47:03.566395 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. May 27 02:47:03.571830 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. May 27 02:47:03.576717 systemd[1]: Stopped target remote-fs.target - Remote File Systems. May 27 02:47:03.581467 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. May 27 02:47:03.588626 systemd[1]: Stopped target sysinit.target - System Initialization. May 27 02:47:03.592732 systemd[1]: Stopped target local-fs.target - Local File Systems. May 27 02:47:03.608934 systemd[1]: Stopped target swap.target - Swaps. May 27 02:47:03.612501 systemd[1]: dracut-pre-mount.service: Deactivated successfully. May 27 02:47:03.612607 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. May 27 02:47:03.620390 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. May 27 02:47:03.620746 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 27 02:47:03.621481 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. May 27 02:47:03.629420 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
May 27 02:47:03.633484 systemd[1]: dracut-initqueue.service: Deactivated successfully. May 27 02:47:03.633580 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. May 27 02:47:03.642838 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. May 27 02:47:03.642934 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. May 27 02:47:03.651600 systemd[1]: ignition-files.service: Deactivated successfully. May 27 02:47:03.651682 systemd[1]: Stopped ignition-files.service - Ignition (files). May 27 02:47:03.666247 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... May 27 02:47:03.673264 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... May 27 02:47:03.681580 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. May 27 02:47:03.681710 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. May 27 02:47:03.684899 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. May 27 02:47:03.685033 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. May 27 02:47:03.700136 systemd[1]: initrd-cleanup.service: Deactivated successfully. May 27 02:47:03.702679 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. 
May 27 02:47:03.736342 ignition[1399]: INFO : Ignition 2.21.0 May 27 02:47:03.739173 ignition[1399]: INFO : Stage: umount May 27 02:47:03.739173 ignition[1399]: INFO : no configs at "/usr/lib/ignition/base.d" May 27 02:47:03.739173 ignition[1399]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" May 27 02:47:03.746468 ignition[1399]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 May 27 02:47:03.750426 ignition[1399]: INFO : PUT result: OK May 27 02:47:03.764022 ignition[1399]: INFO : umount: umount passed May 27 02:47:03.764022 ignition[1399]: INFO : Ignition finished successfully May 27 02:47:03.773717 systemd[1]: ignition-mount.service: Deactivated successfully. May 27 02:47:03.778094 systemd[1]: Stopped ignition-mount.service - Ignition (mount). May 27 02:47:03.784118 systemd[1]: sysroot-boot.mount: Deactivated successfully. May 27 02:47:03.785813 systemd[1]: ignition-disks.service: Deactivated successfully. May 27 02:47:03.785900 systemd[1]: Stopped ignition-disks.service - Ignition (disks). May 27 02:47:03.790761 systemd[1]: ignition-kargs.service: Deactivated successfully. May 27 02:47:03.790861 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). May 27 02:47:03.799054 systemd[1]: ignition-fetch.service: Deactivated successfully. May 27 02:47:03.799148 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). May 27 02:47:03.801514 systemd[1]: Stopped target network.target - Network. May 27 02:47:03.805686 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. May 27 02:47:03.805774 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). May 27 02:47:03.815757 systemd[1]: Stopped target paths.target - Path Units. May 27 02:47:03.816855 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. May 27 02:47:03.820782 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
May 27 02:47:03.824727 systemd[1]: Stopped target slices.target - Slice Units.
May 27 02:47:03.828675 systemd[1]: Stopped target sockets.target - Socket Units.
May 27 02:47:03.832336 systemd[1]: iscsid.socket: Deactivated successfully.
May 27 02:47:03.832408 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
May 27 02:47:03.832608 systemd[1]: iscsiuio.socket: Deactivated successfully.
May 27 02:47:03.832669 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 27 02:47:03.832904 systemd[1]: ignition-setup.service: Deactivated successfully.
May 27 02:47:03.833181 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
May 27 02:47:03.846866 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
May 27 02:47:03.847207 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
May 27 02:47:03.851642 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
May 27 02:47:03.852427 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
May 27 02:47:03.886779 systemd[1]: systemd-resolved.service: Deactivated successfully.
May 27 02:47:03.888608 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
May 27 02:47:03.899137 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
May 27 02:47:03.901741 systemd[1]: systemd-networkd.service: Deactivated successfully.
May 27 02:47:03.903460 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
May 27 02:47:03.911518 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
May 27 02:47:03.914601 systemd[1]: Stopped target network-pre.target - Preparation for Network.
May 27 02:47:03.917990 systemd[1]: systemd-networkd.socket: Deactivated successfully.
May 27 02:47:03.918088 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
May 27 02:47:03.926493 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
May 27 02:47:03.931041 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
May 27 02:47:03.931167 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 27 02:47:03.942491 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 27 02:47:03.942594 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
May 27 02:47:03.949498 systemd[1]: systemd-modules-load.service: Deactivated successfully.
May 27 02:47:03.949595 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
May 27 02:47:03.953290 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
May 27 02:47:03.953373 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
May 27 02:47:03.955617 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 27 02:47:03.961393 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
May 27 02:47:03.961512 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
May 27 02:47:03.962225 systemd[1]: sysroot-boot.service: Deactivated successfully.
May 27 02:47:03.962400 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
May 27 02:47:03.973108 systemd[1]: initrd-setup-root.service: Deactivated successfully.
May 27 02:47:03.973278 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
May 27 02:47:03.985179 systemd[1]: systemd-udevd.service: Deactivated successfully.
May 27 02:47:03.987784 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 27 02:47:03.994866 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
May 27 02:47:03.997562 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
May 27 02:47:04.002505 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
May 27 02:47:04.002577 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
May 27 02:47:04.004864 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
May 27 02:47:04.006206 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
May 27 02:47:04.010560 systemd[1]: dracut-cmdline.service: Deactivated successfully.
May 27 02:47:04.010666 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
May 27 02:47:04.011466 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 27 02:47:04.011546 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 27 02:47:04.038175 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
May 27 02:47:04.042005 systemd[1]: systemd-network-generator.service: Deactivated successfully.
May 27 02:47:04.042112 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
May 27 02:47:04.063035 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
May 27 02:47:04.063138 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 27 02:47:04.071159 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
May 27 02:47:04.071249 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 27 02:47:04.082812 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
May 27 02:47:04.082923 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
May 27 02:47:04.089392 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 27 02:47:04.089499 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 27 02:47:04.098228 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
May 27 02:47:04.098343 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully.
May 27 02:47:04.098424 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
May 27 02:47:04.098512 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
May 27 02:47:04.099332 systemd[1]: network-cleanup.service: Deactivated successfully.
May 27 02:47:04.099510 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
May 27 02:47:04.108459 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
May 27 02:47:04.108635 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
May 27 02:47:04.112420 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
May 27 02:47:04.114843 systemd[1]: Starting initrd-switch-root.service - Switch Root...
May 27 02:47:04.157471 systemd[1]: Switching root.
May 27 02:47:04.210781 systemd-journald[259]: Journal stopped
May 27 02:47:07.284530 systemd-journald[259]: Received SIGTERM from PID 1 (systemd).
May 27 02:47:07.284662 kernel: SELinux: policy capability network_peer_controls=1
May 27 02:47:07.284704 kernel: SELinux: policy capability open_perms=1
May 27 02:47:07.284734 kernel: SELinux: policy capability extended_socket_class=1
May 27 02:47:07.284764 kernel: SELinux: policy capability always_check_network=0
May 27 02:47:07.284795 kernel: SELinux: policy capability cgroup_seclabel=1
May 27 02:47:07.284824 kernel: SELinux: policy capability nnp_nosuid_transition=1
May 27 02:47:07.284858 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
May 27 02:47:07.284886 kernel: SELinux: policy capability ioctl_skip_cloexec=0
May 27 02:47:07.284915 kernel: SELinux: policy capability userspace_initial_context=0
May 27 02:47:07.284942 kernel: audit: type=1403 audit(1748314024.621:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
May 27 02:47:07.290071 systemd[1]: Successfully loaded SELinux policy in 70.410ms.
May 27 02:47:07.290155 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 23.601ms.
May 27 02:47:07.290192 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
May 27 02:47:07.290222 systemd[1]: Detected virtualization amazon.
May 27 02:47:07.290254 systemd[1]: Detected architecture arm64.
May 27 02:47:07.290291 systemd[1]: Detected first boot.
May 27 02:47:07.290324 systemd[1]: Initializing machine ID from VM UUID.
May 27 02:47:07.290354 zram_generator::config[1447]: No configuration found.
May 27 02:47:07.290388 kernel: NET: Registered PF_VSOCK protocol family
May 27 02:47:07.290417 systemd[1]: Populated /etc with preset unit settings.
May 27 02:47:07.290450 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
May 27 02:47:07.290478 systemd[1]: initrd-switch-root.service: Deactivated successfully.
May 27 02:47:07.290509 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
May 27 02:47:07.290542 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
May 27 02:47:07.290574 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
May 27 02:47:07.290622 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
May 27 02:47:07.290658 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
May 27 02:47:07.290689 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
May 27 02:47:07.290719 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
May 27 02:47:07.290750 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
May 27 02:47:07.290778 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
May 27 02:47:07.290810 systemd[1]: Created slice user.slice - User and Session Slice.
May 27 02:47:07.290843 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 27 02:47:07.290871 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 27 02:47:07.290902 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
May 27 02:47:07.290933 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
May 27 02:47:07.295121 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
May 27 02:47:07.295195 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 27 02:47:07.295228 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
May 27 02:47:07.295258 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 27 02:47:07.295294 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 27 02:47:07.295324 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
May 27 02:47:07.295355 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
May 27 02:47:07.295386 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
May 27 02:47:07.295415 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
May 27 02:47:07.295445 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 27 02:47:07.295475 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 27 02:47:07.295505 systemd[1]: Reached target slices.target - Slice Units.
May 27 02:47:07.295543 systemd[1]: Reached target swap.target - Swaps.
May 27 02:47:07.295576 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
May 27 02:47:07.295604 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
May 27 02:47:07.295632 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
May 27 02:47:07.295662 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 27 02:47:07.295690 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 27 02:47:07.295719 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 27 02:47:07.295748 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
May 27 02:47:07.295779 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
May 27 02:47:07.295808 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
May 27 02:47:07.295841 systemd[1]: Mounting media.mount - External Media Directory...
May 27 02:47:07.295869 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
May 27 02:47:07.295896 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
May 27 02:47:07.295924 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
May 27 02:47:07.295981 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
May 27 02:47:07.296024 systemd[1]: Reached target machines.target - Containers.
May 27 02:47:07.296054 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
May 27 02:47:07.296085 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 27 02:47:07.296120 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 27 02:47:07.296148 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
May 27 02:47:07.296179 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 27 02:47:07.296210 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 27 02:47:07.296238 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 27 02:47:07.296266 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
May 27 02:47:07.296299 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 27 02:47:07.296327 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
May 27 02:47:07.296360 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
May 27 02:47:07.296389 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
May 27 02:47:07.296416 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
May 27 02:47:07.296446 systemd[1]: Stopped systemd-fsck-usr.service.
May 27 02:47:07.296478 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 27 02:47:07.296506 systemd[1]: Starting systemd-journald.service - Journal Service...
May 27 02:47:07.296535 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 27 02:47:07.296563 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
May 27 02:47:07.296595 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
May 27 02:47:07.296628 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
May 27 02:47:07.296661 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 27 02:47:07.296699 systemd[1]: verity-setup.service: Deactivated successfully.
May 27 02:47:07.296729 kernel: ACPI: bus type drm_connector registered
May 27 02:47:07.296757 systemd[1]: Stopped verity-setup.service.
May 27 02:47:07.296785 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
May 27 02:47:07.300979 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
May 27 02:47:07.301036 systemd[1]: Mounted media.mount - External Media Directory.
May 27 02:47:07.301069 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
May 27 02:47:07.301099 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
May 27 02:47:07.301137 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
May 27 02:47:07.301169 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 27 02:47:07.301197 systemd[1]: modprobe@configfs.service: Deactivated successfully.
May 27 02:47:07.304091 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
May 27 02:47:07.304140 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 27 02:47:07.304171 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 27 02:47:07.304200 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 27 02:47:07.304231 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 27 02:47:07.304263 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 27 02:47:07.304295 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 27 02:47:07.304324 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
May 27 02:47:07.304353 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
May 27 02:47:07.304381 systemd[1]: Reached target network-pre.target - Preparation for Network.
May 27 02:47:07.304408 kernel: loop: module loaded
May 27 02:47:07.304436 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
May 27 02:47:07.304466 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
May 27 02:47:07.304496 systemd[1]: Reached target local-fs.target - Local File Systems.
May 27 02:47:07.307019 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
May 27 02:47:07.307101 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
May 27 02:47:07.307135 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 27 02:47:07.307166 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
May 27 02:47:07.307197 kernel: fuse: init (API version 7.41)
May 27 02:47:07.307228 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 27 02:47:07.307256 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
May 27 02:47:07.307285 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
May 27 02:47:07.307317 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 27 02:47:07.307345 systemd[1]: modprobe@fuse.service: Deactivated successfully.
May 27 02:47:07.307376 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
May 27 02:47:07.307404 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 27 02:47:07.307435 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 27 02:47:07.307463 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 27 02:47:07.307495 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
May 27 02:47:07.307524 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
May 27 02:47:07.307552 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 27 02:47:07.307583 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 27 02:47:07.307613 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
May 27 02:47:07.307642 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
May 27 02:47:07.307715 systemd-journald[1523]: Collecting audit messages is disabled.
May 27 02:47:07.307774 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
May 27 02:47:07.307810 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
May 27 02:47:07.307840 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
May 27 02:47:07.307870 kernel: loop0: detected capacity change from 0 to 107312
May 27 02:47:07.307896 systemd-journald[1523]: Journal started
May 27 02:47:07.307940 systemd-journald[1523]: Runtime Journal (/run/log/journal/ec28ddf0c94c2767ceffa89babf960b2) is 8M, max 75.3M, 67.3M free.
May 27 02:47:05.880166 systemd[1]: Queued start job for default target multi-user.target.
May 27 02:47:07.320051 systemd[1]: Started systemd-journald.service - Journal Service.
May 27 02:47:05.894198 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
May 27 02:47:05.895061 systemd[1]: systemd-journald.service: Deactivated successfully.
May 27 02:47:07.328330 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
May 27 02:47:07.368452 systemd-tmpfiles[1550]: ACLs are not supported, ignoring.
May 27 02:47:07.368477 systemd-tmpfiles[1550]: ACLs are not supported, ignoring.
May 27 02:47:07.392875 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
May 27 02:47:07.421047 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
May 27 02:47:07.432576 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 27 02:47:07.447101 systemd[1]: Starting systemd-sysusers.service - Create System Users...
May 27 02:47:07.450834 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
May 27 02:47:07.462109 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 27 02:47:07.477124 systemd-journald[1523]: Time spent on flushing to /var/log/journal/ec28ddf0c94c2767ceffa89babf960b2 is 74.441ms for 940 entries.
May 27 02:47:07.477124 systemd-journald[1523]: System Journal (/var/log/journal/ec28ddf0c94c2767ceffa89babf960b2) is 8M, max 195.6M, 187.6M free.
May 27 02:47:07.564488 systemd-journald[1523]: Received client request to flush runtime journal.
May 27 02:47:07.564571 kernel: loop1: detected capacity change from 0 to 138376
May 27 02:47:07.570065 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
May 27 02:47:07.574088 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 27 02:47:07.601084 systemd[1]: Finished systemd-sysusers.service - Create System Users.
May 27 02:47:07.614716 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 27 02:47:07.647006 kernel: loop2: detected capacity change from 0 to 211168
May 27 02:47:07.666127 systemd-tmpfiles[1598]: ACLs are not supported, ignoring.
May 27 02:47:07.666167 systemd-tmpfiles[1598]: ACLs are not supported, ignoring.
May 27 02:47:07.678070 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 27 02:47:07.951470 kernel: loop3: detected capacity change from 0 to 61240
May 27 02:47:07.960366 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
May 27 02:47:08.003580 kernel: loop4: detected capacity change from 0 to 107312
May 27 02:47:08.026093 kernel: loop5: detected capacity change from 0 to 138376
May 27 02:47:08.073039 kernel: loop6: detected capacity change from 0 to 211168
May 27 02:47:08.102711 ldconfig[1543]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
May 27 02:47:08.107873 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
May 27 02:47:08.113002 kernel: loop7: detected capacity change from 0 to 61240
May 27 02:47:08.130138 (sd-merge)[1603]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'.
May 27 02:47:08.131187 (sd-merge)[1603]: Merged extensions into '/usr'.
May 27 02:47:08.139750 systemd[1]: Reload requested from client PID 1549 ('systemd-sysext') (unit systemd-sysext.service)...
May 27 02:47:08.139784 systemd[1]: Reloading...
May 27 02:47:08.296044 zram_generator::config[1629]: No configuration found.
May 27 02:47:08.524574 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 27 02:47:08.705750 systemd[1]: Reloading finished in 565 ms.
May 27 02:47:08.743049 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
May 27 02:47:08.746536 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
May 27 02:47:08.761152 systemd[1]: Starting ensure-sysext.service...
May 27 02:47:08.766287 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 27 02:47:08.773361 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 27 02:47:08.803107 systemd[1]: Reload requested from client PID 1681 ('systemctl') (unit ensure-sysext.service)...
May 27 02:47:08.803139 systemd[1]: Reloading...
May 27 02:47:08.825920 systemd-tmpfiles[1682]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
May 27 02:47:08.826026 systemd-tmpfiles[1682]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
May 27 02:47:08.826565 systemd-tmpfiles[1682]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
May 27 02:47:08.827125 systemd-tmpfiles[1682]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
May 27 02:47:08.828823 systemd-tmpfiles[1682]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
May 27 02:47:08.829433 systemd-tmpfiles[1682]: ACLs are not supported, ignoring.
May 27 02:47:08.829563 systemd-tmpfiles[1682]: ACLs are not supported, ignoring.
May 27 02:47:08.837102 systemd-tmpfiles[1682]: Detected autofs mount point /boot during canonicalization of boot.
May 27 02:47:08.837127 systemd-tmpfiles[1682]: Skipping /boot
May 27 02:47:08.865585 systemd-tmpfiles[1682]: Detected autofs mount point /boot during canonicalization of boot.
May 27 02:47:08.865616 systemd-tmpfiles[1682]: Skipping /boot
May 27 02:47:08.902368 systemd-udevd[1683]: Using default interface naming scheme 'v255'.
May 27 02:47:09.045992 zram_generator::config[1730]: No configuration found.
May 27 02:47:09.247393 (udev-worker)[1723]: Network interface NamePolicy= disabled on kernel command line.
May 27 02:47:09.398874 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 27 02:47:09.669624 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
May 27 02:47:09.670407 systemd[1]: Reloading finished in 866 ms.
May 27 02:47:09.703837 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 27 02:47:09.729523 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 27 02:47:09.776654 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 27 02:47:09.785313 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
May 27 02:47:09.793402 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
May 27 02:47:09.803420 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 27 02:47:09.813388 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 27 02:47:09.823348 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
May 27 02:47:09.842517 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 27 02:47:09.845003 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 27 02:47:09.856849 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 27 02:47:09.866298 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 27 02:47:09.868551 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 27 02:47:09.868813 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 27 02:47:09.874269 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 27 02:47:09.875143 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 27 02:47:09.875401 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 27 02:47:09.881577 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
May 27 02:47:09.892766 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 27 02:47:09.898569 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 27 02:47:09.900776 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 27 02:47:09.901046 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 27 02:47:09.901401 systemd[1]: Reached target time-set.target - System Time Set.
May 27 02:47:09.914055 systemd[1]: Finished ensure-sysext.service.
May 27 02:47:09.980218 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
May 27 02:47:09.984623 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
May 27 02:47:09.989927 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 27 02:47:09.990382 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 27 02:47:10.004916 systemd[1]: Starting systemd-update-done.service - Update is Completed...
May 27 02:47:10.055913 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 27 02:47:10.056393 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 27 02:47:10.068269 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 27 02:47:10.070119 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 27 02:47:10.073700 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 27 02:47:10.074155 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 27 02:47:10.076810 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 27 02:47:10.076926 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 27 02:47:10.095746 augenrules[1932]: No rules May 27 02:47:10.098908 systemd[1]: audit-rules.service: Deactivated successfully. May 27 02:47:10.099429 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 27 02:47:10.103105 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. May 27 02:47:10.106799 systemd[1]: Finished systemd-update-done.service - Update is Completed. May 27 02:47:10.148397 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 27 02:47:10.162655 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 27 02:47:10.213553 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. May 27 02:47:10.236329 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... May 27 02:47:10.302097 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. May 27 02:47:10.355826 systemd[1]: Started systemd-userdbd.service - User Database Manager. May 27 02:47:10.376400 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 27 02:47:10.509661 systemd-networkd[1890]: lo: Link UP May 27 02:47:10.509685 systemd-networkd[1890]: lo: Gained carrier May 27 02:47:10.512634 systemd-networkd[1890]: Enumeration completed May 27 02:47:10.512825 systemd[1]: Started systemd-networkd.service - Network Configuration. May 27 02:47:10.516822 systemd-networkd[1890]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 27 02:47:10.516848 systemd-networkd[1890]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
May 27 02:47:10.519582 systemd-resolved[1892]: Positive Trust Anchors: May 27 02:47:10.520215 systemd-resolved[1892]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 27 02:47:10.520216 systemd-networkd[1890]: eth0: Link UP May 27 02:47:10.520285 systemd-resolved[1892]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 27 02:47:10.520489 systemd-networkd[1890]: eth0: Gained carrier May 27 02:47:10.520524 systemd-networkd[1890]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 27 02:47:10.521066 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... May 27 02:47:10.529102 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... May 27 02:47:10.530516 systemd-resolved[1892]: Defaulting to hostname 'linux'. May 27 02:47:10.534043 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 27 02:47:10.537426 systemd[1]: Reached target network.target - Network. May 27 02:47:10.540237 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 27 02:47:10.551485 systemd-networkd[1890]: eth0: DHCPv4 address 172.31.27.90/20, gateway 172.31.16.1 acquired from 172.31.16.1 May 27 02:47:10.554055 systemd[1]: Reached target sysinit.target - System Initialization. 
May 27 02:47:10.556281 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. May 27 02:47:10.558685 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. May 27 02:47:10.571138 systemd[1]: Started logrotate.timer - Daily rotation of log files. May 27 02:47:10.573816 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. May 27 02:47:10.576810 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. May 27 02:47:10.579732 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 27 02:47:10.579900 systemd[1]: Reached target paths.target - Path Units. May 27 02:47:10.582075 systemd[1]: Reached target timers.target - Timer Units. May 27 02:47:10.586658 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. May 27 02:47:10.591767 systemd[1]: Starting docker.socket - Docker Socket for the API... May 27 02:47:10.599166 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). May 27 02:47:10.602163 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). May 27 02:47:10.605566 systemd[1]: Reached target ssh-access.target - SSH Access Available. May 27 02:47:10.611164 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. May 27 02:47:10.614492 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. May 27 02:47:10.618066 systemd[1]: Listening on docker.socket - Docker Socket for the API. May 27 02:47:10.621103 systemd[1]: Reached target sockets.target - Socket Units. May 27 02:47:10.623147 systemd[1]: Reached target basic.target - Basic System. 
May 27 02:47:10.626243 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. May 27 02:47:10.626298 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. May 27 02:47:10.631148 systemd[1]: Starting containerd.service - containerd container runtime... May 27 02:47:10.641358 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... May 27 02:47:10.652650 systemd[1]: Starting dbus.service - D-Bus System Message Bus... May 27 02:47:10.659450 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... May 27 02:47:10.671277 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... May 27 02:47:10.680791 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... May 27 02:47:10.684792 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). May 27 02:47:10.690238 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... May 27 02:47:10.699641 systemd[1]: Started ntpd.service - Network Time Service. May 27 02:47:10.709357 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... May 27 02:47:10.722875 systemd[1]: Starting setup-oem.service - Setup OEM... May 27 02:47:10.732786 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... May 27 02:47:10.753487 jq[1967]: false May 27 02:47:10.754260 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... May 27 02:47:10.770429 systemd[1]: Starting systemd-logind.service - User Login Management... May 27 02:47:10.778279 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). 
May 27 02:47:10.779177 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. May 27 02:47:10.784347 systemd[1]: Starting update-engine.service - Update Engine... May 27 02:47:10.792793 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... May 27 02:47:10.803076 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. May 27 02:47:10.818038 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. May 27 02:47:10.823708 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 27 02:47:10.824162 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. May 27 02:47:10.878490 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 27 02:47:10.879053 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. May 27 02:47:10.916497 systemd[1]: motdgen.service: Deactivated successfully. May 27 02:47:10.919117 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. 
May 27 02:47:10.948477 extend-filesystems[1968]: Found loop4 May 27 02:47:10.964425 extend-filesystems[1968]: Found loop5 May 27 02:47:10.964425 extend-filesystems[1968]: Found loop6 May 27 02:47:10.964425 extend-filesystems[1968]: Found loop7 May 27 02:47:10.964425 extend-filesystems[1968]: Found nvme0n1 May 27 02:47:10.964425 extend-filesystems[1968]: Found nvme0n1p1 May 27 02:47:10.964425 extend-filesystems[1968]: Found nvme0n1p2 May 27 02:47:10.964425 extend-filesystems[1968]: Found nvme0n1p3 May 27 02:47:10.964425 extend-filesystems[1968]: Found usr May 27 02:47:10.964425 extend-filesystems[1968]: Found nvme0n1p4 May 27 02:47:10.964425 extend-filesystems[1968]: Found nvme0n1p6 May 27 02:47:10.964425 extend-filesystems[1968]: Found nvme0n1p7 May 27 02:47:10.964425 extend-filesystems[1968]: Found nvme0n1p9 May 27 02:47:10.964425 extend-filesystems[1968]: Checking size of /dev/nvme0n1p9 May 27 02:47:11.043659 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks May 27 02:47:11.043747 jq[1981]: true May 27 02:47:11.043932 ntpd[1970]: 27 May 02:47:10 ntpd[1970]: ntpd 4.2.8p17@1.4004-o Tue May 27 00:38:41 UTC 2025 (1): Starting May 27 02:47:11.043932 ntpd[1970]: 27 May 02:47:10 ntpd[1970]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp May 27 02:47:11.043932 ntpd[1970]: 27 May 02:47:10 ntpd[1970]: ---------------------------------------------------- May 27 02:47:11.043932 ntpd[1970]: 27 May 02:47:10 ntpd[1970]: ntp-4 is maintained by Network Time Foundation, May 27 02:47:11.043932 ntpd[1970]: 27 May 02:47:10 ntpd[1970]: Inc. (NTF), a non-profit 501(c)(3) public-benefit May 27 02:47:11.043932 ntpd[1970]: 27 May 02:47:10 ntpd[1970]: corporation. Support and training for ntp-4 are
May 27 02:47:11.043932 ntpd[1970]: 27 May 02:47:10 ntpd[1970]: available at https://www.nwtime.org/support May 27 02:47:11.043932 ntpd[1970]: 27 May 02:47:10 ntpd[1970]: ---------------------------------------------------- May 27 02:47:11.043932 ntpd[1970]: 27 May 02:47:10 ntpd[1970]: proto: precision = 0.096 usec (-23) May 27 02:47:11.043932 ntpd[1970]: 27 May 02:47:10 ntpd[1970]: basedate set to 2025-05-15 May 27 02:47:11.043932 ntpd[1970]: 27 May 02:47:10 ntpd[1970]: gps base set to 2025-05-18 (week 2367) May 27 02:47:11.043932 ntpd[1970]: 27 May 02:47:11 ntpd[1970]: Listen and drop on 0 v6wildcard [::]:123 May 27 02:47:11.043932 ntpd[1970]: 27 May 02:47:11 ntpd[1970]: Listen and drop on 1 v4wildcard 0.0.0.0:123 May 27 02:47:11.043932 ntpd[1970]: 27 May 02:47:11 ntpd[1970]: Listen normally on 2 lo 127.0.0.1:123 May 27 02:47:11.043932 ntpd[1970]: 27 May 02:47:11 ntpd[1970]: Listen normally on 3 eth0 172.31.27.90:123 May 27 02:47:11.043932 ntpd[1970]: 27 May 02:47:11 ntpd[1970]: Listen normally on 4 lo [::1]:123 May 27 02:47:11.043932 ntpd[1970]: 27 May 02:47:11 ntpd[1970]: bind(21) AF_INET6 fe80::404:c7ff:fe73:1eb1%2#123 flags 0x11 failed: Cannot assign requested address May 27 02:47:11.043932 ntpd[1970]: 27 May 02:47:11 ntpd[1970]: unable to create socket on eth0 (5) for fe80::404:c7ff:fe73:1eb1%2#123 May 27 02:47:11.043932 ntpd[1970]: 27 May 02:47:11 ntpd[1970]: failed to init interface for address fe80::404:c7ff:fe73:1eb1%2 May 27 02:47:11.043932 ntpd[1970]: 27 May 02:47:11 ntpd[1970]: Listening on routing socket on fd #21 for interface updates May 27 02:47:10.956804 ntpd[1970]: ntpd 4.2.8p17@1.4004-o Tue May 27 00:38:41 UTC 2025 (1): Starting May 27 02:47:10.978745 (ntainerd)[2001]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR May 27 02:47:11.045872 tar[1986]: linux-arm64/LICENSE May 27 02:47:11.045872 tar[1986]: linux-arm64/helm
May 27 02:47:11.046275 extend-filesystems[1968]: Resized partition /dev/nvme0n1p9 May 27 02:47:11.050735 update_engine[1978]: I20250527 02:47:10.968639 1978 main.cc:92] Flatcar Update Engine starting May 27 02:47:10.956850 ntpd[1970]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp May 27 02:47:11.051589 extend-filesystems[2014]: resize2fs 1.47.2 (1-Jan-2025) May 27 02:47:10.956869 ntpd[1970]: ---------------------------------------------------- May 27 02:47:11.054686 systemd[1]: Started dbus.service - D-Bus System Message Bus. May 27 02:47:10.956886 ntpd[1970]: ntp-4 is maintained by Network Time Foundation, May 27 02:47:10.956903 ntpd[1970]: Inc. (NTF), a non-profit 501(c)(3) public-benefit May 27 02:47:10.956919 ntpd[1970]: corporation. Support and training for ntp-4 are May 27 02:47:10.956937 ntpd[1970]: available at https://www.nwtime.org/support May 27 02:47:10.956953 ntpd[1970]: ---------------------------------------------------- May 27 02:47:10.971162 ntpd[1970]: proto: precision = 0.096 usec (-23) May 27 02:47:10.980397 ntpd[1970]: basedate set to 2025-05-15 May 27 02:47:10.980431 ntpd[1970]: gps base set to 2025-05-18 (week 2367) May 27 02:47:11.002113 ntpd[1970]: Listen and drop on 0 v6wildcard [::]:123 May 27 02:47:11.002196 ntpd[1970]: Listen and drop on 1 v4wildcard 0.0.0.0:123 May 27 02:47:11.002452 ntpd[1970]: Listen normally on 2 lo 127.0.0.1:123 May 27 02:47:11.002513 ntpd[1970]: Listen normally on 3 eth0 172.31.27.90:123 May 27 02:47:11.002579 ntpd[1970]: Listen normally on 4 lo [::1]:123 May 27 02:47:11.002668 ntpd[1970]: bind(21) AF_INET6 fe80::404:c7ff:fe73:1eb1%2#123 flags 0x11 failed: Cannot assign requested address May 27 02:47:11.002707 ntpd[1970]: unable to create socket on eth0 (5) for fe80::404:c7ff:fe73:1eb1%2#123 May 27 02:47:11.002735 ntpd[1970]: failed to init interface for address fe80::404:c7ff:fe73:1eb1%2 May 27 02:47:11.002796 ntpd[1970]: Listening on routing socket on fd #21 for interface updates
May 27 02:47:11.054400 dbus-daemon[1965]: [system] SELinux support is enabled May 27 02:47:11.067057 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 27 02:47:11.067171 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. May 27 02:47:11.076910 ntpd[1970]: 27 May 02:47:11 ntpd[1970]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized May 27 02:47:11.076910 ntpd[1970]: 27 May 02:47:11 ntpd[1970]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized May 27 02:47:11.074347 ntpd[1970]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized May 27 02:47:11.072239 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 27 02:47:11.074395 ntpd[1970]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized May 27 02:47:11.072281 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
May 27 02:47:11.100030 jq[2008]: true May 27 02:47:11.110322 coreos-metadata[1964]: May 27 02:47:11.109 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 May 27 02:47:11.110322 coreos-metadata[1964]: May 27 02:47:11.110 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 May 27 02:47:11.110322 coreos-metadata[1964]: May 27 02:47:11.110 INFO Fetch successful May 27 02:47:11.110322 coreos-metadata[1964]: May 27 02:47:11.110 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 May 27 02:47:11.110322 coreos-metadata[1964]: May 27 02:47:11.110 INFO Fetch successful May 27 02:47:11.110322 coreos-metadata[1964]: May 27 02:47:11.110 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 May 27 02:47:11.110322 coreos-metadata[1964]: May 27 02:47:11.110 INFO Fetch successful May 27 02:47:11.110322 coreos-metadata[1964]: May 27 02:47:11.110 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 May 27 02:47:11.110322 coreos-metadata[1964]: May 27 02:47:11.110 INFO Fetch successful May 27 02:47:11.110322 coreos-metadata[1964]: May 27 02:47:11.110 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 May 27 02:47:11.110322 coreos-metadata[1964]: May 27 02:47:11.110 INFO Fetch failed with 404: resource not found May 27 02:47:11.110322 coreos-metadata[1964]: May 27 02:47:11.110 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 May 27 02:47:11.110322 coreos-metadata[1964]: May 27 02:47:11.110 INFO Fetch successful May 27 02:47:11.110322 coreos-metadata[1964]: May 27 02:47:11.110 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 May 27 02:47:11.113250 coreos-metadata[1964]: May 27 02:47:11.112 INFO Fetch successful May 27 02:47:11.113250 coreos-metadata[1964]: May 27 02:47:11.112 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1
May 27 02:47:11.115437 coreos-metadata[1964]: May 27 02:47:11.115 INFO Fetch successful May 27 02:47:11.115437 coreos-metadata[1964]: May 27 02:47:11.115 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 May 27 02:47:11.115437 coreos-metadata[1964]: May 27 02:47:11.115 INFO Fetch successful May 27 02:47:11.115437 coreos-metadata[1964]: May 27 02:47:11.115 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 May 27 02:47:11.116606 coreos-metadata[1964]: May 27 02:47:11.116 INFO Fetch successful May 27 02:47:11.131414 dbus-daemon[1965]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.2' (uid=244 pid=1890 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") May 27 02:47:11.145799 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 May 27 02:47:11.142066 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... May 27 02:47:11.162238 update_engine[1978]: I20250527 02:47:11.151377 1978 update_check_scheduler.cc:74] Next update check in 6m35s May 27 02:47:11.166381 systemd[1]: Started update-engine.service - Update Engine. May 27 02:47:11.171889 systemd[1]: Finished setup-oem.service - Setup OEM. May 27 02:47:11.175652 extend-filesystems[2014]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required May 27 02:47:11.175652 extend-filesystems[2014]: old_desc_blocks = 1, new_desc_blocks = 1 May 27 02:47:11.175652 extend-filesystems[2014]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. May 27 02:47:11.208116 extend-filesystems[1968]: Resized filesystem in /dev/nvme0n1p9 May 27 02:47:11.182125 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. May 27 02:47:11.260692 systemd[1]: Started locksmithd.service - Cluster reboot manager.
May 27 02:47:11.263887 systemd[1]: extend-filesystems.service: Deactivated successfully. May 27 02:47:11.264366 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. May 27 02:47:11.331464 systemd-logind[1977]: Watching system buttons on /dev/input/event0 (Power Button) May 27 02:47:11.331516 systemd-logind[1977]: Watching system buttons on /dev/input/event1 (Sleep Button) May 27 02:47:11.340007 systemd-logind[1977]: New seat seat0. May 27 02:47:11.342937 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. May 27 02:47:11.346498 systemd[1]: Started systemd-logind.service - User Login Management. May 27 02:47:11.349027 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. May 27 02:47:11.403726 bash[2059]: Updated "/home/core/.ssh/authorized_keys" May 27 02:47:11.406490 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. May 27 02:47:11.415434 systemd[1]: Starting sshkeys.service... May 27 02:47:11.495582 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. May 27 02:47:11.508130 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... May 27 02:47:11.677158 locksmithd[2033]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 27 02:47:11.821745 systemd[1]: Started systemd-hostnamed.service - Hostname Service. May 27 02:47:11.827236 dbus-daemon[1965]: [system] Successfully activated service 'org.freedesktop.hostname1' May 27 02:47:11.831108 dbus-daemon[1965]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.7' (uid=0 pid=2023 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") May 27 02:47:11.843929 systemd[1]: Starting polkit.service - Authorization Manager... 
May 27 02:47:11.960095 ntpd[1970]: 27 May 02:47:11 ntpd[1970]: bind(24) AF_INET6 fe80::404:c7ff:fe73:1eb1%2#123 flags 0x11 failed: Cannot assign requested address May 27 02:47:11.960095 ntpd[1970]: 27 May 02:47:11 ntpd[1970]: unable to create socket on eth0 (6) for fe80::404:c7ff:fe73:1eb1%2#123 May 27 02:47:11.960095 ntpd[1970]: 27 May 02:47:11 ntpd[1970]: failed to init interface for address fe80::404:c7ff:fe73:1eb1%2 May 27 02:47:11.960311 coreos-metadata[2088]: May 27 02:47:11.959 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 May 27 02:47:11.957546 ntpd[1970]: bind(24) AF_INET6 fe80::404:c7ff:fe73:1eb1%2#123 flags 0x11 failed: Cannot assign requested address May 27 02:47:11.957603 ntpd[1970]: unable to create socket on eth0 (6) for fe80::404:c7ff:fe73:1eb1%2#123 May 27 02:47:11.957630 ntpd[1970]: failed to init interface for address fe80::404:c7ff:fe73:1eb1%2 May 27 02:47:11.965014 coreos-metadata[2088]: May 27 02:47:11.964 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 May 27 02:47:11.969533 coreos-metadata[2088]: May 27 02:47:11.969 INFO Fetch successful May 27 02:47:11.969533 coreos-metadata[2088]: May 27 02:47:11.969 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 May 27 02:47:11.976994 coreos-metadata[2088]: May 27 02:47:11.975 INFO Fetch successful May 27 02:47:11.980911 unknown[2088]: wrote ssh authorized keys file for user: core May 27 02:47:12.048060 containerd[2001]: time="2025-05-27T02:47:12Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 May 27 02:47:12.059092 containerd[2001]: time="2025-05-27T02:47:12.057576864Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4 May 27 02:47:12.073668 update-ssh-keys[2155]: Updated "/home/core/.ssh/authorized_keys" 
May 27 02:47:12.077040 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). May 27 02:47:12.098681 systemd[1]: Finished sshkeys.service. May 27 02:47:12.141773 containerd[2001]: time="2025-05-27T02:47:12.141707700Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="16.08µs" May 27 02:47:12.144124 containerd[2001]: time="2025-05-27T02:47:12.143912316Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 May 27 02:47:12.144124 containerd[2001]: time="2025-05-27T02:47:12.144039744Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 May 27 02:47:12.144913 containerd[2001]: time="2025-05-27T02:47:12.144874068Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 May 27 02:47:12.145763 containerd[2001]: time="2025-05-27T02:47:12.145731300Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 May 27 02:47:12.147991 containerd[2001]: time="2025-05-27T02:47:12.147032256Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 May 27 02:47:12.147991 containerd[2001]: time="2025-05-27T02:47:12.147178248Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 May 27 02:47:12.147991 containerd[2001]: time="2025-05-27T02:47:12.147203700Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 May 27 02:47:12.147991 containerd[2001]: time="2025-05-27T02:47:12.147590724Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
May 27 02:47:12.147991 containerd[2001]: time="2025-05-27T02:47:12.147618624Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 May 27 02:47:12.147991 containerd[2001]: time="2025-05-27T02:47:12.147646284Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 May 27 02:47:12.147991 containerd[2001]: time="2025-05-27T02:47:12.147675444Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 May 27 02:47:12.147991 containerd[2001]: time="2025-05-27T02:47:12.147827508Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 May 27 02:47:12.150307 containerd[2001]: time="2025-05-27T02:47:12.150263988Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 May 27 02:47:12.151179 containerd[2001]: time="2025-05-27T02:47:12.151062456Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 May 27 02:47:12.151587 containerd[2001]: time="2025-05-27T02:47:12.151555812Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 May 27 02:47:12.157193 containerd[2001]: time="2025-05-27T02:47:12.154659396Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 May 27 02:47:12.157193 containerd[2001]: time="2025-05-27T02:47:12.155151048Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 May 27 02:47:12.157193 containerd[2001]: time="2025-05-27T02:47:12.155307612Z" level=info msg="metadata content store policy set" policy=shared
May 27 02:47:12.170939 containerd[2001]: time="2025-05-27T02:47:12.170869572Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 May 27 02:47:12.171070 containerd[2001]: time="2025-05-27T02:47:12.171001968Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 May 27 02:47:12.171136 containerd[2001]: time="2025-05-27T02:47:12.171105912Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 May 27 02:47:12.171181 containerd[2001]: time="2025-05-27T02:47:12.171141036Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 May 27 02:47:12.171181 containerd[2001]: time="2025-05-27T02:47:12.171171084Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 May 27 02:47:12.171287 containerd[2001]: time="2025-05-27T02:47:12.171203544Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 May 27 02:47:12.171287 containerd[2001]: time="2025-05-27T02:47:12.171235272Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 May 27 02:47:12.171287 containerd[2001]: time="2025-05-27T02:47:12.171264216Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 May 27 02:47:12.171403 containerd[2001]: time="2025-05-27T02:47:12.171292152Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 May 27 02:47:12.171403 containerd[2001]: time="2025-05-27T02:47:12.171317844Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 May 27 02:47:12.171403 containerd[2001]: time="2025-05-27T02:47:12.171341748Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
May 27 02:47:12.171403 containerd[2001]: time="2025-05-27T02:47:12.171370452Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 May 27 02:47:12.171647 containerd[2001]: time="2025-05-27T02:47:12.171605916Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 May 27 02:47:12.171702 containerd[2001]: time="2025-05-27T02:47:12.171655380Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 May 27 02:47:12.171746 containerd[2001]: time="2025-05-27T02:47:12.171691440Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 May 27 02:47:12.171746 containerd[2001]: time="2025-05-27T02:47:12.171719424Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 May 27 02:47:12.171823 containerd[2001]: time="2025-05-27T02:47:12.171748092Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 May 27 02:47:12.171823 containerd[2001]: time="2025-05-27T02:47:12.171781032Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 May 27 02:47:12.171823 containerd[2001]: time="2025-05-27T02:47:12.171810468Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 May 27 02:47:12.171956 containerd[2001]: time="2025-05-27T02:47:12.171836064Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 May 27 02:47:12.171956 containerd[2001]: time="2025-05-27T02:47:12.171865140Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 May 27 02:47:12.171956 containerd[2001]: time="2025-05-27T02:47:12.171890784Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
May 27 02:47:12.171956 containerd[2001]: time="2025-05-27T02:47:12.171924708Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 May 27 02:47:12.172190 containerd[2001]: time="2025-05-27T02:47:12.172100676Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" May 27 02:47:12.172190 containerd[2001]: time="2025-05-27T02:47:12.172134600Z" level=info msg="Start snapshots syncer" May 27 02:47:12.172272 containerd[2001]: time="2025-05-27T02:47:12.172195272Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 May 27 02:47:12.172740 containerd[2001]: time="2025-05-27T02:47:12.172667004Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
May 27 02:47:12.172931 containerd[2001]: time="2025-05-27T02:47:12.172763268Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 May 27 02:47:12.177002 containerd[2001]: time="2025-05-27T02:47:12.176430468Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 May 27 02:47:12.177002 containerd[2001]: time="2025-05-27T02:47:12.176694720Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 May 27 02:47:12.177002 containerd[2001]: time="2025-05-27T02:47:12.176737128Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 May 27 02:47:12.177002 containerd[2001]: time="2025-05-27T02:47:12.176764692Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 May 27 02:47:12.177002 containerd[2001]: time="2025-05-27T02:47:12.176793432Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 May 27 02:47:12.177002 containerd[2001]: time="2025-05-27T02:47:12.176822856Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 May 27 02:47:12.177002 containerd[2001]: time="2025-05-27T02:47:12.176851152Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 May 27 02:47:12.177002 containerd[2001]: time="2025-05-27T02:47:12.176877960Z" level=info msg="loading 
plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 May 27 02:47:12.177002 containerd[2001]: time="2025-05-27T02:47:12.176930508Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 May 27 02:47:12.177002 containerd[2001]: time="2025-05-27T02:47:12.176958444Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 May 27 02:47:12.177522 containerd[2001]: time="2025-05-27T02:47:12.177019932Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 May 27 02:47:12.177522 containerd[2001]: time="2025-05-27T02:47:12.177107196Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 May 27 02:47:12.177522 containerd[2001]: time="2025-05-27T02:47:12.177139848Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 May 27 02:47:12.177522 containerd[2001]: time="2025-05-27T02:47:12.177240300Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 May 27 02:47:12.177522 containerd[2001]: time="2025-05-27T02:47:12.177266628Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 May 27 02:47:12.177522 containerd[2001]: time="2025-05-27T02:47:12.177288612Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 May 27 02:47:12.177522 containerd[2001]: time="2025-05-27T02:47:12.177321444Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 May 27 02:47:12.177522 containerd[2001]: time="2025-05-27T02:47:12.177347820Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 May 27 
02:47:12.177522 containerd[2001]: time="2025-05-27T02:47:12.177399864Z" level=info msg="runtime interface created" May 27 02:47:12.177522 containerd[2001]: time="2025-05-27T02:47:12.177415320Z" level=info msg="created NRI interface" May 27 02:47:12.177522 containerd[2001]: time="2025-05-27T02:47:12.177435144Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 May 27 02:47:12.177522 containerd[2001]: time="2025-05-27T02:47:12.177463848Z" level=info msg="Connect containerd service" May 27 02:47:12.177522 containerd[2001]: time="2025-05-27T02:47:12.177514344Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" May 27 02:47:12.184712 containerd[2001]: time="2025-05-27T02:47:12.184184892Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 27 02:47:12.190197 systemd-networkd[1890]: eth0: Gained IPv6LL May 27 02:47:12.202770 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. May 27 02:47:12.206709 systemd[1]: Reached target network-online.target - Network is Online. May 27 02:47:12.213601 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. May 27 02:47:12.221417 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 27 02:47:12.231558 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... May 27 02:47:12.388777 polkitd[2146]: Started polkitd version 126 May 27 02:47:12.405124 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. 
May 27 02:47:12.426294 polkitd[2146]: Loading rules from directory /etc/polkit-1/rules.d
May 27 02:47:12.426933 polkitd[2146]: Loading rules from directory /run/polkit-1/rules.d
May 27 02:47:12.428309 polkitd[2146]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4)
May 27 02:47:12.428865 polkitd[2146]: Loading rules from directory /usr/local/share/polkit-1/rules.d
May 27 02:47:12.428925 polkitd[2146]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4)
May 27 02:47:12.429037 polkitd[2146]: Loading rules from directory /usr/share/polkit-1/rules.d
May 27 02:47:12.434541 polkitd[2146]: Finished loading, compiling and executing 2 rules
May 27 02:47:12.435268 systemd[1]: Started polkit.service - Authorization Manager.
May 27 02:47:12.444270 dbus-daemon[1965]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
May 27 02:47:12.445062 polkitd[2146]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
May 27 02:47:12.497057 containerd[2001]: time="2025-05-27T02:47:12.496804934Z" level=info msg="Start subscribing containerd event"
May 27 02:47:12.497057 containerd[2001]: time="2025-05-27T02:47:12.496925306Z" level=info msg="Start recovering state"
May 27 02:47:12.497209 containerd[2001]: time="2025-05-27T02:47:12.497171558Z" level=info msg="Start event monitor"
May 27 02:47:12.497256 containerd[2001]: time="2025-05-27T02:47:12.497223398Z" level=info msg="Start cni network conf syncer for default"
May 27 02:47:12.497256 containerd[2001]: time="2025-05-27T02:47:12.497245694Z" level=info msg="Start streaming server"
May 27 02:47:12.497337 containerd[2001]: time="2025-05-27T02:47:12.497265242Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
May 27 02:47:12.497337 containerd[2001]: time="2025-05-27T02:47:12.497282078Z" level=info msg="runtime interface starting up..."
May 27 02:47:12.497337 containerd[2001]: time="2025-05-27T02:47:12.497321582Z" level=info msg="starting plugins..."
May 27 02:47:12.497479 containerd[2001]: time="2025-05-27T02:47:12.497350550Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
May 27 02:47:12.500013 containerd[2001]: time="2025-05-27T02:47:12.499301582Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
May 27 02:47:12.500013 containerd[2001]: time="2025-05-27T02:47:12.499479302Z" level=info msg=serving... address=/run/containerd/containerd.sock
May 27 02:47:12.502289 containerd[2001]: time="2025-05-27T02:47:12.502243970Z" level=info msg="containerd successfully booted in 0.459404s"
May 27 02:47:12.502368 systemd[1]: Started containerd.service - containerd container runtime.
May 27 02:47:12.512800 amazon-ssm-agent[2167]: Initializing new seelog logger
May 27 02:47:12.513301 amazon-ssm-agent[2167]: New Seelog Logger Creation Complete
May 27 02:47:12.513527 amazon-ssm-agent[2167]: 2025/05/27 02:47:12 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
May 27 02:47:12.513879 amazon-ssm-agent[2167]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
May 27 02:47:12.515746 amazon-ssm-agent[2167]: 2025/05/27 02:47:12 processing appconfig overrides
May 27 02:47:12.517090 amazon-ssm-agent[2167]: 2025/05/27 02:47:12 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
May 27 02:47:12.520027 amazon-ssm-agent[2167]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
May 27 02:47:12.520027 amazon-ssm-agent[2167]: 2025/05/27 02:47:12 processing appconfig overrides
May 27 02:47:12.520027 amazon-ssm-agent[2167]: 2025/05/27 02:47:12 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
May 27 02:47:12.520027 amazon-ssm-agent[2167]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
May 27 02:47:12.520027 amazon-ssm-agent[2167]: 2025/05/27 02:47:12 processing appconfig overrides
May 27 02:47:12.520543 amazon-ssm-agent[2167]: 2025-05-27 02:47:12.5169 INFO Proxy environment variables:
May 27 02:47:12.523808 amazon-ssm-agent[2167]: 2025/05/27 02:47:12 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
May 27 02:47:12.523808 amazon-ssm-agent[2167]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
May 27 02:47:12.523959 amazon-ssm-agent[2167]: 2025/05/27 02:47:12 processing appconfig overrides
May 27 02:47:12.526423 systemd-hostnamed[2023]: Hostname set to <ip-172-31-27-90> (transient)
May 27 02:47:12.527056 systemd-resolved[1892]: System hostname changed to 'ip-172-31-27-90'.
May 27 02:47:12.628070 amazon-ssm-agent[2167]: 2025-05-27 02:47:12.5170 INFO https_proxy:
May 27 02:47:12.727580 amazon-ssm-agent[2167]: 2025-05-27 02:47:12.5170 INFO http_proxy:
May 27 02:47:12.825863 amazon-ssm-agent[2167]: 2025-05-27 02:47:12.5170 INFO no_proxy:
May 27 02:47:12.924986 amazon-ssm-agent[2167]: 2025-05-27 02:47:12.5191 INFO Checking if agent identity type OnPrem can be assumed
May 27 02:47:13.025838 amazon-ssm-agent[2167]: 2025-05-27 02:47:12.5192 INFO Checking if agent identity type EC2 can be assumed
May 27 02:47:13.125050 amazon-ssm-agent[2167]: 2025-05-27 02:47:12.6491 INFO Agent will take identity from EC2
May 27 02:47:13.127076 tar[1986]: linux-arm64/README.md
May 27 02:47:13.176080 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
May 27 02:47:13.224149 amazon-ssm-agent[2167]: 2025-05-27 02:47:12.6561 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.3.0.0
May 27 02:47:13.323053 amazon-ssm-agent[2167]: 2025-05-27 02:47:12.6562 INFO [amazon-ssm-agent] OS: linux, Arch: arm64
May 27 02:47:13.423026 amazon-ssm-agent[2167]: 2025-05-27 02:47:12.6562 INFO [amazon-ssm-agent] Starting Core Agent
May 27 02:47:13.522551 amazon-ssm-agent[2167]: 2025-05-27 02:47:12.6562 INFO [amazon-ssm-agent] Registrar detected. Attempting registration
May 27 02:47:13.623620 amazon-ssm-agent[2167]: 2025-05-27 02:47:12.6562 INFO [Registrar] Starting registrar module
May 27 02:47:13.723251 amazon-ssm-agent[2167]: 2025-05-27 02:47:12.6681 INFO [EC2Identity] Checking disk for registration info
May 27 02:47:13.793168 amazon-ssm-agent[2167]: 2025/05/27 02:47:13 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
May 27 02:47:13.793347 amazon-ssm-agent[2167]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
May 27 02:47:13.793701 amazon-ssm-agent[2167]: 2025/05/27 02:47:13 processing appconfig overrides
May 27 02:47:13.821166 amazon-ssm-agent[2167]: 2025-05-27 02:47:12.6682 INFO [EC2Identity] No registration info found for ec2 instance, attempting registration
May 27 02:47:13.821166 amazon-ssm-agent[2167]: 2025-05-27 02:47:12.6682 INFO [EC2Identity] Generating registration keypair
May 27 02:47:13.821166 amazon-ssm-agent[2167]: 2025-05-27 02:47:13.7451 INFO [EC2Identity] Checking write access before registering
May 27 02:47:13.821416 amazon-ssm-agent[2167]: 2025-05-27 02:47:13.7458 INFO [EC2Identity] Registering EC2 instance with Systems Manager
May 27 02:47:13.821416 amazon-ssm-agent[2167]: 2025-05-27 02:47:13.7929 INFO [EC2Identity] EC2 registration was successful.
May 27 02:47:13.821416 amazon-ssm-agent[2167]: 2025-05-27 02:47:13.7929 INFO [amazon-ssm-agent] Registration attempted. Resuming core agent startup.
May 27 02:47:13.821416 amazon-ssm-agent[2167]: 2025-05-27 02:47:13.7930 INFO [CredentialRefresher] credentialRefresher has started
May 27 02:47:13.821416 amazon-ssm-agent[2167]: 2025-05-27 02:47:13.7930 INFO [CredentialRefresher] Starting credentials refresher loop
May 27 02:47:13.821416 amazon-ssm-agent[2167]: 2025-05-27 02:47:13.8207 INFO EC2RoleProvider Successfully connected with instance profile role credentials
May 27 02:47:13.821416 amazon-ssm-agent[2167]: 2025-05-27 02:47:13.8210 INFO [CredentialRefresher] Credentials ready
May 27 02:47:13.823761 amazon-ssm-agent[2167]: 2025-05-27 02:47:13.8213 INFO [CredentialRefresher] Next credential rotation will be in 29.9999901485 minutes
May 27 02:47:14.521766 sshd_keygen[1996]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
May 27 02:47:14.561050 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
May 27 02:47:14.567216 systemd[1]: Starting issuegen.service - Generate /run/issue...
May 27 02:47:14.573466 systemd[1]: Started sshd@0-172.31.27.90:22-139.178.68.195:34118.service - OpenSSH per-connection server daemon (139.178.68.195:34118).
May 27 02:47:14.601906 systemd[1]: issuegen.service: Deactivated successfully.
May 27 02:47:14.602433 systemd[1]: Finished issuegen.service - Generate /run/issue.
May 27 02:47:14.611434 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
May 27 02:47:14.643110 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
May 27 02:47:14.649230 systemd[1]: Started getty@tty1.service - Getty on tty1.
May 27 02:47:14.655693 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
May 27 02:47:14.659765 systemd[1]: Reached target getty.target - Login Prompts.
May 27 02:47:14.832421 sshd[2217]: Accepted publickey for core from 139.178.68.195 port 34118 ssh2: RSA SHA256:wB7DXbDl54cvXXypqfSM11xNUGSlmUqWSyH8J9Yllv0
May 27 02:47:14.835900 sshd-session[2217]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 02:47:14.849559 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
May 27 02:47:14.855516 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
May 27 02:47:14.874067 amazon-ssm-agent[2167]: 2025-05-27 02:47:14.8724 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process
May 27 02:47:14.887949 systemd-logind[1977]: New session 1 of user core.
May 27 02:47:14.903924 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
May 27 02:47:14.915628 systemd[1]: Starting user@500.service - User Manager for UID 500...
May 27 02:47:14.943687 (systemd)[2231]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
May 27 02:47:14.949555 systemd-logind[1977]: New session c1 of user core.
May 27 02:47:14.957556 ntpd[1970]: Listen normally on 7 eth0 [fe80::404:c7ff:fe73:1eb1%2]:123
May 27 02:47:14.958184 ntpd[1970]: 27 May 02:47:14 ntpd[1970]: Listen normally on 7 eth0 [fe80::404:c7ff:fe73:1eb1%2]:123
May 27 02:47:14.974154 amazon-ssm-agent[2167]: 2025-05-27 02:47:14.8938 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2229) started
May 27 02:47:15.073514 amazon-ssm-agent[2167]: 2025-05-27 02:47:14.8938 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds
May 27 02:47:15.284518 systemd[2231]: Queued start job for default target default.target.
May 27 02:47:15.294746 systemd[2231]: Created slice app.slice - User Application Slice.
May 27 02:47:15.295024 systemd[2231]: Reached target paths.target - Paths.
May 27 02:47:15.295114 systemd[2231]: Reached target timers.target - Timers.
May 27 02:47:15.299770 systemd[2231]: Starting dbus.socket - D-Bus User Message Bus Socket...
May 27 02:47:15.334126 systemd[2231]: Listening on dbus.socket - D-Bus User Message Bus Socket.
May 27 02:47:15.334658 systemd[2231]: Reached target sockets.target - Sockets.
May 27 02:47:15.334950 systemd[2231]: Reached target basic.target - Basic System.
May 27 02:47:15.335200 systemd[2231]: Reached target default.target - Main User Target.
May 27 02:47:15.335227 systemd[1]: Started user@500.service - User Manager for UID 500.
May 27 02:47:15.337738 systemd[2231]: Startup finished in 372ms.
May 27 02:47:15.346053 systemd[1]: Started session-1.scope - Session 1 of User core.
May 27 02:47:15.508468 systemd[1]: Started sshd@1-172.31.27.90:22-139.178.68.195:40996.service - OpenSSH per-connection server daemon (139.178.68.195:40996).
May 27 02:47:15.710430 sshd[2253]: Accepted publickey for core from 139.178.68.195 port 40996 ssh2: RSA SHA256:wB7DXbDl54cvXXypqfSM11xNUGSlmUqWSyH8J9Yllv0
May 27 02:47:15.713443 sshd-session[2253]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 02:47:15.721508 systemd-logind[1977]: New session 2 of user core.
May 27 02:47:15.731198 systemd[1]: Started session-2.scope - Session 2 of User core.
May 27 02:47:15.859087 sshd[2255]: Connection closed by 139.178.68.195 port 40996
May 27 02:47:15.860268 sshd-session[2253]: pam_unix(sshd:session): session closed for user core
May 27 02:47:15.869105 systemd-logind[1977]: Session 2 logged out. Waiting for processes to exit.
May 27 02:47:15.869512 systemd[1]: sshd@1-172.31.27.90:22-139.178.68.195:40996.service: Deactivated successfully.
May 27 02:47:15.872815 systemd[1]: session-2.scope: Deactivated successfully.
May 27 02:47:15.878324 systemd-logind[1977]: Removed session 2.
May 27 02:47:15.898219 systemd[1]: Started sshd@2-172.31.27.90:22-139.178.68.195:41010.service - OpenSSH per-connection server daemon (139.178.68.195:41010).
May 27 02:47:16.095669 sshd[2261]: Accepted publickey for core from 139.178.68.195 port 41010 ssh2: RSA SHA256:wB7DXbDl54cvXXypqfSM11xNUGSlmUqWSyH8J9Yllv0
May 27 02:47:16.098495 sshd-session[2261]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 02:47:16.107058 systemd-logind[1977]: New session 3 of user core.
May 27 02:47:16.121302 systemd[1]: Started session-3.scope - Session 3 of User core.
May 27 02:47:16.252005 sshd[2263]: Connection closed by 139.178.68.195 port 41010
May 27 02:47:16.251132 sshd-session[2261]: pam_unix(sshd:session): session closed for user core
May 27 02:47:16.258210 systemd[1]: sshd@2-172.31.27.90:22-139.178.68.195:41010.service: Deactivated successfully.
May 27 02:47:16.261930 systemd[1]: session-3.scope: Deactivated successfully.
May 27 02:47:16.268845 systemd-logind[1977]: Session 3 logged out. Waiting for processes to exit.
May 27 02:47:16.271817 systemd-logind[1977]: Removed session 3.
May 27 02:47:16.569166 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 27 02:47:16.573243 systemd[1]: Reached target multi-user.target - Multi-User System.
May 27 02:47:16.577329 systemd[1]: Startup finished in 3.740s (kernel) + 10.926s (initrd) + 12.023s (userspace) = 26.690s.
May 27 02:47:16.590135 (kubelet)[2273]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 27 02:47:17.606050 systemd-resolved[1892]: Clock change detected. Flushing caches.
May 27 02:47:18.133328 kubelet[2273]: E0527 02:47:18.133238    2273 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 27 02:47:18.138086 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 27 02:47:18.138405 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 27 02:47:18.140961 systemd[1]: kubelet.service: Consumed 1.410s CPU time, 256.9M memory peak.
May 27 02:47:25.893713 systemd[1]: Started sshd@3-172.31.27.90:22-139.178.68.195:54950.service - OpenSSH per-connection server daemon (139.178.68.195:54950).
May 27 02:47:26.097872 sshd[2285]: Accepted publickey for core from 139.178.68.195 port 54950 ssh2: RSA SHA256:wB7DXbDl54cvXXypqfSM11xNUGSlmUqWSyH8J9Yllv0
May 27 02:47:26.101256 sshd-session[2285]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 02:47:26.110787 systemd-logind[1977]: New session 4 of user core.
May 27 02:47:26.120157 systemd[1]: Started session-4.scope - Session 4 of User core.
May 27 02:47:26.246902 sshd[2287]: Connection closed by 139.178.68.195 port 54950
May 27 02:47:26.247730 sshd-session[2285]: pam_unix(sshd:session): session closed for user core
May 27 02:47:26.255181 systemd[1]: sshd@3-172.31.27.90:22-139.178.68.195:54950.service: Deactivated successfully.
May 27 02:47:26.259581 systemd[1]: session-4.scope: Deactivated successfully.
May 27 02:47:26.263289 systemd-logind[1977]: Session 4 logged out. Waiting for processes to exit.
May 27 02:47:26.266138 systemd-logind[1977]: Removed session 4.
May 27 02:47:26.281235 systemd[1]: Started sshd@4-172.31.27.90:22-139.178.68.195:54966.service - OpenSSH per-connection server daemon (139.178.68.195:54966).
May 27 02:47:26.477050 sshd[2293]: Accepted publickey for core from 139.178.68.195 port 54966 ssh2: RSA SHA256:wB7DXbDl54cvXXypqfSM11xNUGSlmUqWSyH8J9Yllv0
May 27 02:47:26.479449 sshd-session[2293]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 02:47:26.487497 systemd-logind[1977]: New session 5 of user core.
May 27 02:47:26.508157 systemd[1]: Started session-5.scope - Session 5 of User core.
May 27 02:47:26.626497 sshd[2295]: Connection closed by 139.178.68.195 port 54966
May 27 02:47:26.627464 sshd-session[2293]: pam_unix(sshd:session): session closed for user core
May 27 02:47:26.633355 systemd[1]: session-5.scope: Deactivated successfully.
May 27 02:47:26.633545 systemd-logind[1977]: Session 5 logged out. Waiting for processes to exit.
May 27 02:47:26.635727 systemd[1]: sshd@4-172.31.27.90:22-139.178.68.195:54966.service: Deactivated successfully.
May 27 02:47:26.642521 systemd-logind[1977]: Removed session 5.
May 27 02:47:26.662584 systemd[1]: Started sshd@5-172.31.27.90:22-139.178.68.195:54968.service - OpenSSH per-connection server daemon (139.178.68.195:54968).
May 27 02:47:26.882112 sshd[2301]: Accepted publickey for core from 139.178.68.195 port 54968 ssh2: RSA SHA256:wB7DXbDl54cvXXypqfSM11xNUGSlmUqWSyH8J9Yllv0
May 27 02:47:26.884529 sshd-session[2301]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 02:47:26.892593 systemd-logind[1977]: New session 6 of user core.
May 27 02:47:26.901122 systemd[1]: Started session-6.scope - Session 6 of User core.
May 27 02:47:27.028862 sshd[2303]: Connection closed by 139.178.68.195 port 54968
May 27 02:47:27.029319 sshd-session[2301]: pam_unix(sshd:session): session closed for user core
May 27 02:47:27.037271 systemd[1]: sshd@5-172.31.27.90:22-139.178.68.195:54968.service: Deactivated successfully.
May 27 02:47:27.040804 systemd[1]: session-6.scope: Deactivated successfully.
May 27 02:47:27.043130 systemd-logind[1977]: Session 6 logged out. Waiting for processes to exit.
May 27 02:47:27.046869 systemd-logind[1977]: Removed session 6.
May 27 02:47:27.076102 systemd[1]: Started sshd@6-172.31.27.90:22-139.178.68.195:54974.service - OpenSSH per-connection server daemon (139.178.68.195:54974).
May 27 02:47:27.273226 sshd[2309]: Accepted publickey for core from 139.178.68.195 port 54974 ssh2: RSA SHA256:wB7DXbDl54cvXXypqfSM11xNUGSlmUqWSyH8J9Yllv0
May 27 02:47:27.275686 sshd-session[2309]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 02:47:27.284045 systemd-logind[1977]: New session 7 of user core.
May 27 02:47:27.300119 systemd[1]: Started session-7.scope - Session 7 of User core.
May 27 02:47:27.418252 sudo[2312]:     core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
May 27 02:47:27.419404 sudo[2312]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 27 02:47:27.438343 sudo[2312]: pam_unix(sudo:session): session closed for user root
May 27 02:47:27.462360 sshd[2311]: Connection closed by 139.178.68.195 port 54974
May 27 02:47:27.463525 sshd-session[2309]: pam_unix(sshd:session): session closed for user core
May 27 02:47:27.471625 systemd[1]: sshd@6-172.31.27.90:22-139.178.68.195:54974.service: Deactivated successfully.
May 27 02:47:27.474805 systemd[1]: session-7.scope: Deactivated successfully.
May 27 02:47:27.476789 systemd-logind[1977]: Session 7 logged out. Waiting for processes to exit.
May 27 02:47:27.479988 systemd-logind[1977]: Removed session 7.
May 27 02:47:27.500813 systemd[1]: Started sshd@7-172.31.27.90:22-139.178.68.195:54982.service - OpenSSH per-connection server daemon (139.178.68.195:54982).
May 27 02:47:27.706095 sshd[2318]: Accepted publickey for core from 139.178.68.195 port 54982 ssh2: RSA SHA256:wB7DXbDl54cvXXypqfSM11xNUGSlmUqWSyH8J9Yllv0
May 27 02:47:27.709322 sshd-session[2318]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 02:47:27.717370 systemd-logind[1977]: New session 8 of user core.
May 27 02:47:27.738157 systemd[1]: Started session-8.scope - Session 8 of User core.
May 27 02:47:27.842482 sudo[2322]:     core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
May 27 02:47:27.843197 sudo[2322]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 27 02:47:27.850789 sudo[2322]: pam_unix(sudo:session): session closed for user root
May 27 02:47:27.861033 sudo[2321]:     core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
May 27 02:47:27.861660 sudo[2321]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 27 02:47:27.879235 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 27 02:47:27.958596 augenrules[2344]: No rules
May 27 02:47:27.961312 systemd[1]: audit-rules.service: Deactivated successfully.
May 27 02:47:27.962976 systemd[1]: Finished audit-rules.service - Load Audit Rules.
May 27 02:47:27.965630 sudo[2321]: pam_unix(sudo:session): session closed for user root
May 27 02:47:27.989789 sshd[2320]: Connection closed by 139.178.68.195 port 54982
May 27 02:47:27.990640 sshd-session[2318]: pam_unix(sshd:session): session closed for user core
May 27 02:47:27.997935 systemd[1]: sshd@7-172.31.27.90:22-139.178.68.195:54982.service: Deactivated successfully.
May 27 02:47:28.001547 systemd[1]: session-8.scope: Deactivated successfully.
May 27 02:47:28.003584 systemd-logind[1977]: Session 8 logged out. Waiting for processes to exit.
May 27 02:47:28.006344 systemd-logind[1977]: Removed session 8.
May 27 02:47:28.027465 systemd[1]: Started sshd@8-172.31.27.90:22-139.178.68.195:54984.service - OpenSSH per-connection server daemon (139.178.68.195:54984).
May 27 02:47:28.148352 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
May 27 02:47:28.151242 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 27 02:47:28.226023 sshd[2353]: Accepted publickey for core from 139.178.68.195 port 54984 ssh2: RSA SHA256:wB7DXbDl54cvXXypqfSM11xNUGSlmUqWSyH8J9Yllv0
May 27 02:47:28.228664 sshd-session[2353]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 02:47:28.241917 systemd-logind[1977]: New session 9 of user core.
May 27 02:47:28.250191 systemd[1]: Started session-9.scope - Session 9 of User core.
May 27 02:47:28.357292 sudo[2359]:     core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
May 27 02:47:28.358122 sudo[2359]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 27 02:47:28.563142 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 27 02:47:28.578432 (kubelet)[2377]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 27 02:47:28.671151 kubelet[2377]: E0527 02:47:28.671067    2377 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 27 02:47:28.679223 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 27 02:47:28.679776 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 27 02:47:28.682039 systemd[1]: kubelet.service: Consumed 331ms CPU time, 107.8M memory peak.
May 27 02:47:28.911894 systemd[1]: Starting docker.service - Docker Application Container Engine...
May 27 02:47:28.943408 (dockerd)[2389]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
May 27 02:47:29.298905 dockerd[2389]: time="2025-05-27T02:47:29.296822274Z" level=info msg="Starting up"
May 27 02:47:29.301661 dockerd[2389]: time="2025-05-27T02:47:29.301608030Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
May 27 02:47:29.410327 dockerd[2389]: time="2025-05-27T02:47:29.410068542Z" level=info msg="Loading containers: start."
May 27 02:47:29.424888 kernel: Initializing XFRM netlink socket
May 27 02:47:29.747593 (udev-worker)[2411]: Network interface NamePolicy= disabled on kernel command line.
May 27 02:47:29.821000 systemd-networkd[1890]: docker0: Link UP
May 27 02:47:29.832942 dockerd[2389]: time="2025-05-27T02:47:29.832003760Z" level=info msg="Loading containers: done."
May 27 02:47:29.856689 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck255590593-merged.mount: Deactivated successfully.
May 27 02:47:29.866099 dockerd[2389]: time="2025-05-27T02:47:29.866016741Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
May 27 02:47:29.866341 dockerd[2389]: time="2025-05-27T02:47:29.866153205Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1
May 27 02:47:29.866403 dockerd[2389]: time="2025-05-27T02:47:29.866357505Z" level=info msg="Initializing buildkit"
May 27 02:47:29.917598 dockerd[2389]: time="2025-05-27T02:47:29.917529741Z" level=info msg="Completed buildkit initialization"
May 27 02:47:29.933868 dockerd[2389]: time="2025-05-27T02:47:29.933098973Z" level=info msg="Daemon has completed initialization"
May 27 02:47:29.933909 systemd[1]: Started docker.service - Docker Application Container Engine.
May 27 02:47:29.934548 dockerd[2389]: time="2025-05-27T02:47:29.933759009Z" level=info msg="API listen on /run/docker.sock"
May 27 02:47:30.832735 containerd[2001]: time="2025-05-27T02:47:30.832670361Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.1\""
May 27 02:47:31.488405 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount496279535.mount: Deactivated successfully.
May 27 02:47:32.924878 containerd[2001]: time="2025-05-27T02:47:32.924214836Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 02:47:32.927097 containerd[2001]: time="2025-05-27T02:47:32.927046656Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.1: active requests=0, bytes read=27349350"
May 27 02:47:32.929558 containerd[2001]: time="2025-05-27T02:47:32.929503152Z" level=info msg="ImageCreate event name:\"sha256:9a2b7cf4f8540534c6ec5b758462c6d7885c6e734652172078bba899c0e3089a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 02:47:32.935036 containerd[2001]: time="2025-05-27T02:47:32.934946568Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:d8ae2fb01c39aa1c7add84f3d54425cf081c24c11e3946830292a8cfa4293548\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 02:47:32.937215 containerd[2001]: time="2025-05-27T02:47:32.936941088Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.1\" with image id \"sha256:9a2b7cf4f8540534c6ec5b758462c6d7885c6e734652172078bba899c0e3089a\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.1\", repo digest \"registry.k8s.io/kube-apiserver@sha256:d8ae2fb01c39aa1c7add84f3d54425cf081c24c11e3946830292a8cfa4293548\", size \"27346150\" in 2.103576863s"
May 27 02:47:32.937215 containerd[2001]: time="2025-05-27T02:47:32.937001352Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.1\" returns image reference \"sha256:9a2b7cf4f8540534c6ec5b758462c6d7885c6e734652172078bba899c0e3089a\""
May 27 02:47:32.939975 containerd[2001]: time="2025-05-27T02:47:32.939927480Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.1\""
May 27 02:47:34.511897 containerd[2001]: time="2025-05-27T02:47:34.511813344Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 02:47:34.513496 containerd[2001]: time="2025-05-27T02:47:34.513425472Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.1: active requests=0, bytes read=23531735"
May 27 02:47:34.514393 containerd[2001]: time="2025-05-27T02:47:34.514342752Z" level=info msg="ImageCreate event name:\"sha256:674996a72aa5900cbbbcd410437021fa4c62a7f829a56f58eb23ac430f2ae383\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 02:47:34.520290 containerd[2001]: time="2025-05-27T02:47:34.520228044Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:7c9bea694e3a3c01ed6a5ee02d55a6124cc08e0b2eec6caa33f2c396b8cbc3f8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 02:47:34.522877 containerd[2001]: time="2025-05-27T02:47:34.522136716Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.1\" with image id \"sha256:674996a72aa5900cbbbcd410437021fa4c62a7f829a56f58eb23ac430f2ae383\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.1\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:7c9bea694e3a3c01ed6a5ee02d55a6124cc08e0b2eec6caa33f2c396b8cbc3f8\", size \"25086427\" in 1.581986636s"
May 27 02:47:34.522877 containerd[2001]: time="2025-05-27T02:47:34.522206496Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.1\" returns image reference \"sha256:674996a72aa5900cbbbcd410437021fa4c62a7f829a56f58eb23ac430f2ae383\""
May 27 02:47:34.523432 containerd[2001]: time="2025-05-27T02:47:34.523374804Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.1\""
May 27 02:47:35.677155 containerd[2001]: time="2025-05-27T02:47:35.676952197Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 02:47:35.678894 containerd[2001]: time="2025-05-27T02:47:35.678814094Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.1: active requests=0, bytes read=18293731"
May 27 02:47:35.679405 containerd[2001]: time="2025-05-27T02:47:35.679353422Z" level=info msg="ImageCreate event name:\"sha256:014094c90caacf743dc5fb4281363492da1df31cd8218aeceab3be3326277d2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 02:47:35.685161 containerd[2001]: time="2025-05-27T02:47:35.685074542Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:395b7de7cdbdcc3c3a3db270844a3f71d757e2447a1e4db76b4cce46fba7fd55\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 02:47:35.687289 containerd[2001]: time="2025-05-27T02:47:35.687232334Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.1\" with image id \"sha256:014094c90caacf743dc5fb4281363492da1df31cd8218aeceab3be3326277d2e\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.1\", repo digest \"registry.k8s.io/kube-scheduler@sha256:395b7de7cdbdcc3c3a3db270844a3f71d757e2447a1e4db76b4cce46fba7fd55\", size \"19848441\" in 1.16352219s"
May 27 02:47:35.687507 containerd[2001]: time="2025-05-27T02:47:35.687472910Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.1\" returns image reference \"sha256:014094c90caacf743dc5fb4281363492da1df31cd8218aeceab3be3326277d2e\""
May 27 02:47:35.688709 containerd[2001]: time="2025-05-27T02:47:35.688640414Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.1\""
May 27 02:47:36.935101 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3087020689.mount: Deactivated successfully.
May 27 02:47:37.608001 containerd[2001]: time="2025-05-27T02:47:37.607911291Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 02:47:37.610147 containerd[2001]: time="2025-05-27T02:47:37.610056279Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.1: active requests=0, bytes read=28196004"
May 27 02:47:37.612988 containerd[2001]: time="2025-05-27T02:47:37.612883419Z" level=info msg="ImageCreate event name:\"sha256:3e58848989f556e36aa29d7852ab1712163960651e074d11cae9d31fb27192db\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 02:47:37.617495 containerd[2001]: time="2025-05-27T02:47:37.617379183Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:7ddf379897139ae8ade8b33cb9373b70c632a4d5491da6e234f5d830e0a50807\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 02:47:37.618862 containerd[2001]: time="2025-05-27T02:47:37.618626715Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.1\" with image id \"sha256:3e58848989f556e36aa29d7852ab1712163960651e074d11cae9d31fb27192db\", repo tag \"registry.k8s.io/kube-proxy:v1.33.1\", repo digest \"registry.k8s.io/kube-proxy@sha256:7ddf379897139ae8ade8b33cb9373b70c632a4d5491da6e234f5d830e0a50807\", size \"28195023\" in 1.929915609s"
May 27 02:47:37.618862 containerd[2001]: time="2025-05-27T02:47:37.618688131Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.1\" returns image reference \"sha256:3e58848989f556e36aa29d7852ab1712163960651e074d11cae9d31fb27192db\""
May 27 02:47:37.620511 containerd[2001]: time="2025-05-27T02:47:37.620372163Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\""
May 27 02:47:38.188319 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2259696161.mount: Deactivated successfully.
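As an aside for anyone mining these records: containerd reports each completed pull as a `Pulled image \"<name>\" ... in <duration>` line, so per-image pull times can be recovered mechanically. A minimal sketch (Python is an assumption; `PULLED` and `pull_seconds` are hypothetical helpers; the field layout and `\"` escaping are taken from the lines above):

```python
import re

# containerd logs each completed pull as: msg="Pulled image \"<name>\" ... in <duration>"
# where embedded quotes appear escaped as \" in the journal text and the
# duration ends in "s" or "ms".
PULLED = re.compile(
    r'msg="Pulled image \\"(?P<image>[^\\]+)\\".*in (?P<dur>[\d.]+)(?P<unit>ms|s)"'
)

def pull_seconds(line):
    """Return (image, seconds) for a containerd 'Pulled image' record, else None."""
    m = PULLED.search(line)
    if not m:
        return None
    secs = float(m.group("dur"))
    if m.group("unit") == "ms":
        secs /= 1000.0
    return m.group("image"), secs

# Abridged copy of the kube-proxy record above (digests shortened for the example):
sample = (r'containerd[2001]: time="2025-05-27T02:47:37.618626715Z" level=info '
          r'msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.1\" with image id '
          r'\"sha256:3e58\", size \"28195023\" in 1.929915609s"')
print(pull_seconds(sample))  # -> ('registry.k8s.io/kube-proxy:v1.33.1', 1.929915609)
```

Applied to the pulls in this log it would report roughly 2.10 s (kube-apiserver) down to 0.53 s (pause:3.10), matching the durations containerd prints.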
May 27 02:47:38.898765 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
May 27 02:47:38.902594 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 27 02:47:39.277115 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 27 02:47:39.296417 (kubelet)[2722]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 27 02:47:39.426607 kubelet[2722]: E0527 02:47:39.426292 2722 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 27 02:47:39.434585 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 27 02:47:39.436049 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 27 02:47:39.437109 systemd[1]: kubelet.service: Consumed 339ms CPU time, 104.8M memory peak.
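The kubelet exits here because /var/lib/kubelet/config.yaml does not exist yet, which is typical on a node where kubeadm has not yet written its config; systemd keeps rescheduling restarts until it appears. A small sketch (Python assumed; `CONFIG_ERR` and `missing_kubelet_config` are hypothetical helpers) for pulling the offending path out of such a record:

```python
import re

# The kubelet error above embeds the config path as "path: <path>," inside the
# err="failed to load kubelet config file, ..." message.
CONFIG_ERR = re.compile(r"failed to load kubelet config file, path: (?P<path>[^,]+),")

def missing_kubelet_config(line):
    """Return the config path named in a kubelet config-load failure, else None."""
    m = CONFIG_ERR.search(line)
    return m.group("path") if m else None

# Abridged copy of the failing record above:
sample = ('kubelet[2722]: E0527 02:47:39.426292 2722 run.go:72] "command failed" '
          'err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, '
          'error: open /var/lib/kubelet/config.yaml: no such file or directory"')
print(missing_kubelet_config(sample))  # -> /var/lib/kubelet/config.yaml
```

Filtering a journal dump through this makes it easy to confirm that every restart in the loop is failing on the same missing file.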
May 27 02:47:39.681875 containerd[2001]: time="2025-05-27T02:47:39.681780257Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 02:47:39.685047 containerd[2001]: time="2025-05-27T02:47:39.684984725Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=19152117"
May 27 02:47:39.687458 containerd[2001]: time="2025-05-27T02:47:39.687359573Z" level=info msg="ImageCreate event name:\"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 02:47:39.693331 containerd[2001]: time="2025-05-27T02:47:39.693213365Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 02:47:39.695950 containerd[2001]: time="2025-05-27T02:47:39.695245841Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"19148915\" in 2.074814866s"
May 27 02:47:39.695950 containerd[2001]: time="2025-05-27T02:47:39.695327681Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\""
May 27 02:47:39.696259 containerd[2001]: time="2025-05-27T02:47:39.696224921Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
May 27 02:47:40.199808 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount355910841.mount: Deactivated successfully.
May 27 02:47:40.216209 containerd[2001]: time="2025-05-27T02:47:40.216125440Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 27 02:47:40.219136 containerd[2001]: time="2025-05-27T02:47:40.219052528Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703"
May 27 02:47:40.221814 containerd[2001]: time="2025-05-27T02:47:40.221716384Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 27 02:47:40.227954 containerd[2001]: time="2025-05-27T02:47:40.227805124Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 27 02:47:40.229709 containerd[2001]: time="2025-05-27T02:47:40.229278256Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 533.001075ms"
May 27 02:47:40.229709 containerd[2001]: time="2025-05-27T02:47:40.229336408Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
May 27 02:47:40.230080 containerd[2001]: time="2025-05-27T02:47:40.230021248Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\""
May 27 02:47:42.136530 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
May 27 02:47:42.830154 containerd[2001]: time="2025-05-27T02:47:42.830071497Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 02:47:42.831958 containerd[2001]: time="2025-05-27T02:47:42.831759189Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=69230163"
May 27 02:47:42.833117 containerd[2001]: time="2025-05-27T02:47:42.832975197Z" level=info msg="ImageCreate event name:\"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 02:47:42.839532 containerd[2001]: time="2025-05-27T02:47:42.839468121Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 02:47:42.842472 containerd[2001]: time="2025-05-27T02:47:42.842192469Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"70026017\" in 2.612104417s"
May 27 02:47:42.842472 containerd[2001]: time="2025-05-27T02:47:42.842265885Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\""
May 27 02:47:49.648293 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
May 27 02:47:49.654184 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 27 02:47:50.007611 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 27 02:47:50.026707 (kubelet)[2776]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 27 02:47:50.103656 kubelet[2776]: E0527 02:47:50.103580 2776 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 27 02:47:50.108729 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 27 02:47:50.109671 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 27 02:47:50.110626 systemd[1]: kubelet.service: Consumed 297ms CPU time, 106.9M memory peak.
May 27 02:47:50.744938 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
May 27 02:47:50.745314 systemd[1]: kubelet.service: Consumed 297ms CPU time, 106.9M memory peak.
May 27 02:47:50.749242 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 27 02:47:50.803787 systemd[1]: Reload requested from client PID 2790 ('systemctl') (unit session-9.scope)...
May 27 02:47:50.803823 systemd[1]: Reloading...
May 27 02:47:51.042906 zram_generator::config[2837]: No configuration found.
May 27 02:47:51.247208 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 27 02:47:51.507076 systemd[1]: Reloading finished in 702 ms.
May 27 02:47:51.627624 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
May 27 02:47:51.627877 systemd[1]: kubelet.service: Failed with result 'signal'.
May 27 02:47:51.628557 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
May 27 02:47:51.628701 systemd[1]: kubelet.service: Consumed 220ms CPU time, 95M memory peak.
May 27 02:47:51.632255 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 27 02:47:51.996561 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 27 02:47:52.021488 (kubelet)[2897]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
May 27 02:47:52.099338 kubelet[2897]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 27 02:47:52.099338 kubelet[2897]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
May 27 02:47:52.099338 kubelet[2897]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 27 02:47:52.099895 kubelet[2897]: I0527 02:47:52.099381 2897 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
May 27 02:47:55.425881 kubelet[2897]: I0527 02:47:55.425774 2897 server.go:530] "Kubelet version" kubeletVersion="v1.33.0"
May 27 02:47:55.425881 kubelet[2897]: I0527 02:47:55.425824 2897 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
May 27 02:47:55.426941 kubelet[2897]: I0527 02:47:55.426234 2897 server.go:956] "Client rotation is on, will bootstrap in background"
May 27 02:47:55.463672 kubelet[2897]: E0527 02:47:55.463612 2897 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.31.27.90:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.27.90:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
May 27 02:47:55.467494 kubelet[2897]: I0527 02:47:55.467254 2897 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
May 27 02:47:55.486893 kubelet[2897]: I0527 02:47:55.486654 2897 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
May 27 02:47:55.492176 kubelet[2897]: I0527 02:47:55.492080 2897 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
May 27 02:47:55.492609 kubelet[2897]: I0527 02:47:55.492562 2897 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
May 27 02:47:55.492899 kubelet[2897]: I0527 02:47:55.492613 2897 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-27-90","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
May 27 02:47:55.493085 kubelet[2897]: I0527 02:47:55.492932 2897 topology_manager.go:138] "Creating topology manager with none policy"
May 27 02:47:55.493085 kubelet[2897]: I0527 02:47:55.492953 2897 container_manager_linux.go:303] "Creating device plugin manager"
May 27 02:47:55.494618 kubelet[2897]: I0527 02:47:55.494569 2897 state_mem.go:36] "Initialized new in-memory state store"
May 27 02:47:55.500712 kubelet[2897]: I0527 02:47:55.500525 2897 kubelet.go:480] "Attempting to sync node with API server"
May 27 02:47:55.500712 kubelet[2897]: I0527 02:47:55.500574 2897 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
May 27 02:47:55.502881 kubelet[2897]: I0527 02:47:55.502466 2897 kubelet.go:386] "Adding apiserver pod source"
May 27 02:47:55.502881 kubelet[2897]: I0527 02:47:55.502523 2897 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
May 27 02:47:55.508017 kubelet[2897]: I0527 02:47:55.507981 2897 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1"
May 27 02:47:55.509170 kubelet[2897]: I0527 02:47:55.509138 2897 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
May 27 02:47:55.509419 kubelet[2897]: W0527 02:47:55.509399 2897 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
May 27 02:47:55.515287 kubelet[2897]: I0527 02:47:55.515255 2897 watchdog_linux.go:99] "Systemd watchdog is not enabled"
May 27 02:47:55.515469 kubelet[2897]: I0527 02:47:55.515449 2897 server.go:1289] "Started kubelet"
May 27 02:47:55.515912 kubelet[2897]: E0527 02:47:55.515805 2897 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.27.90:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-27-90&limit=500&resourceVersion=0\": dial tcp 172.31.27.90:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
May 27 02:47:55.518457 kubelet[2897]: E0527 02:47:55.518401 2897 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.27.90:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.27.90:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
May 27 02:47:55.522176 kubelet[2897]: I0527 02:47:55.521755 2897 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
May 27 02:47:55.524046 kubelet[2897]: I0527 02:47:55.523976 2897 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
May 27 02:47:55.525682 kubelet[2897]: I0527 02:47:55.525626 2897 server.go:317] "Adding debug handlers to kubelet server"
May 27 02:47:55.527389 kubelet[2897]: I0527 02:47:55.527299 2897 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
May 27 02:47:55.527721 kubelet[2897]: I0527 02:47:55.527682 2897 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
May 27 02:47:55.528808 kubelet[2897]: I0527 02:47:55.528778 2897 volume_manager.go:297] "Starting Kubelet Volume Manager"
May 27 02:47:55.529674 kubelet[2897]: E0527 02:47:55.529482 2897 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-27-90\" not found"
May 27 02:47:55.530358 kubelet[2897]: I0527 02:47:55.530325 2897 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
May 27 02:47:55.530572 kubelet[2897]: I0527 02:47:55.530552 2897 reconciler.go:26] "Reconciler: start to sync state"
May 27 02:47:55.531502 kubelet[2897]: E0527 02:47:55.531440 2897 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.27.90:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.27.90:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
May 27 02:47:55.532106 kubelet[2897]: E0527 02:47:55.532043 2897 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.27.90:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-27-90?timeout=10s\": dial tcp 172.31.27.90:6443: connect: connection refused" interval="200ms"
May 27 02:47:55.532616 kubelet[2897]: I0527 02:47:55.532590 2897 factory.go:223] Registration of the systemd container factory successfully
May 27 02:47:55.532928 kubelet[2897]: I0527 02:47:55.532898 2897 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
May 27 02:47:55.534925 kubelet[2897]: I0527 02:47:55.534894 2897 factory.go:223] Registration of the containerd container factory successfully
May 27 02:47:55.540550 kubelet[2897]: I0527 02:47:55.535471 2897 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
May 27 02:47:55.541993 kubelet[2897]: E0527 02:47:55.539550 2897 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.27.90:6443/api/v1/namespaces/default/events\": dial tcp 172.31.27.90:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-27-90.1843426382e96848 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-27-90,UID:ip-172-31-27-90,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-27-90,},FirstTimestamp:2025-05-27 02:47:55.5154146 +0000 UTC m=+3.486399534,LastTimestamp:2025-05-27 02:47:55.5154146 +0000 UTC m=+3.486399534,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-27-90,}"
May 27 02:47:55.569268 kubelet[2897]: E0527 02:47:55.568520 2897 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
May 27 02:47:55.575780 kubelet[2897]: I0527 02:47:55.575348 2897 cpu_manager.go:221] "Starting CPU manager" policy="none"
May 27 02:47:55.575780 kubelet[2897]: I0527 02:47:55.575379 2897 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
May 27 02:47:55.575780 kubelet[2897]: I0527 02:47:55.575408 2897 state_mem.go:36] "Initialized new in-memory state store"
May 27 02:47:55.579756 kubelet[2897]: I0527 02:47:55.579720 2897 policy_none.go:49] "None policy: Start"
May 27 02:47:55.580352 kubelet[2897]: I0527 02:47:55.579956 2897 memory_manager.go:186] "Starting memorymanager" policy="None"
May 27 02:47:55.580352 kubelet[2897]: I0527 02:47:55.579990 2897 state_mem.go:35] "Initializing new in-memory state store"
May 27 02:47:55.592827 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
May 27 02:47:55.600131 kubelet[2897]: I0527 02:47:55.600013 2897 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
May 27 02:47:55.608008 kubelet[2897]: I0527 02:47:55.607292 2897 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
May 27 02:47:55.608008 kubelet[2897]: I0527 02:47:55.607962 2897 status_manager.go:230] "Starting to sync pod status with apiserver"
May 27 02:47:55.609376 kubelet[2897]: I0527 02:47:55.609330 2897 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
May 27 02:47:55.609376 kubelet[2897]: I0527 02:47:55.609365 2897 kubelet.go:2436] "Starting kubelet main sync loop"
May 27 02:47:55.609568 kubelet[2897]: E0527 02:47:55.609432 2897 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
May 27 02:47:55.611996 kubelet[2897]: E0527 02:47:55.611719 2897 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.27.90:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.27.90:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
May 27 02:47:55.618231 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
May 27 02:47:55.626183 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
May 27 02:47:55.630596 kubelet[2897]: E0527 02:47:55.630548 2897 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-27-90\" not found"
May 27 02:47:55.637785 kubelet[2897]: E0527 02:47:55.637746 2897 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
May 27 02:47:55.638091 kubelet[2897]: I0527 02:47:55.638062 2897 eviction_manager.go:189] "Eviction manager: starting control loop"
May 27 02:47:55.638175 kubelet[2897]: I0527 02:47:55.638090 2897 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
May 27 02:47:55.641489 kubelet[2897]: I0527 02:47:55.641440 2897 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
May 27 02:47:55.645199 kubelet[2897]: E0527 02:47:55.644935 2897 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
May 27 02:47:55.645364 kubelet[2897]: E0527 02:47:55.645224 2897 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-27-90\" not found"
May 27 02:47:55.731468 systemd[1]: Created slice kubepods-burstable-pod1c0d6e259e3032b12e81fc8a0559bf6d.slice - libcontainer container kubepods-burstable-pod1c0d6e259e3032b12e81fc8a0559bf6d.slice.
May 27 02:47:55.737107 kubelet[2897]: I0527 02:47:55.736522 2897 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/aef1de9f3f99ec48a5250963614ffe2d-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-27-90\" (UID: \"aef1de9f3f99ec48a5250963614ffe2d\") " pod="kube-system/kube-controller-manager-ip-172-31-27-90"
May 27 02:47:55.737107 kubelet[2897]: I0527 02:47:55.736591 2897 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/aef1de9f3f99ec48a5250963614ffe2d-k8s-certs\") pod \"kube-controller-manager-ip-172-31-27-90\" (UID: \"aef1de9f3f99ec48a5250963614ffe2d\") " pod="kube-system/kube-controller-manager-ip-172-31-27-90"
May 27 02:47:55.737107 kubelet[2897]: I0527 02:47:55.736632 2897 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/aef1de9f3f99ec48a5250963614ffe2d-kubeconfig\") pod \"kube-controller-manager-ip-172-31-27-90\" (UID: \"aef1de9f3f99ec48a5250963614ffe2d\") " pod="kube-system/kube-controller-manager-ip-172-31-27-90"
May 27 02:47:55.737107 kubelet[2897]: I0527 02:47:55.736666 2897 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/aef1de9f3f99ec48a5250963614ffe2d-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-27-90\" (UID: \"aef1de9f3f99ec48a5250963614ffe2d\") " pod="kube-system/kube-controller-manager-ip-172-31-27-90"
May 27 02:47:55.737107 kubelet[2897]: I0527 02:47:55.736706 2897 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4e488bcebd606a9dda53a6384e883616-kubeconfig\") pod \"kube-scheduler-ip-172-31-27-90\" (UID: \"4e488bcebd606a9dda53a6384e883616\") " pod="kube-system/kube-scheduler-ip-172-31-27-90"
May 27 02:47:55.737446 kubelet[2897]: I0527 02:47:55.736739 2897 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1c0d6e259e3032b12e81fc8a0559bf6d-ca-certs\") pod \"kube-apiserver-ip-172-31-27-90\" (UID: \"1c0d6e259e3032b12e81fc8a0559bf6d\") " pod="kube-system/kube-apiserver-ip-172-31-27-90"
May 27 02:47:55.737446 kubelet[2897]: E0527 02:47:55.736760 2897 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.27.90:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-27-90?timeout=10s\": dial tcp 172.31.27.90:6443: connect: connection refused" interval="400ms"
May 27 02:47:55.737446 kubelet[2897]: I0527 02:47:55.736776 2897 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1c0d6e259e3032b12e81fc8a0559bf6d-k8s-certs\") pod \"kube-apiserver-ip-172-31-27-90\" (UID: \"1c0d6e259e3032b12e81fc8a0559bf6d\") " pod="kube-system/kube-apiserver-ip-172-31-27-90"
May 27 02:47:55.738639 kubelet[2897]: I0527 02:47:55.736827 2897 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1c0d6e259e3032b12e81fc8a0559bf6d-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-27-90\" (UID: \"1c0d6e259e3032b12e81fc8a0559bf6d\") " pod="kube-system/kube-apiserver-ip-172-31-27-90"
May 27 02:47:55.738639 kubelet[2897]: I0527 02:47:55.738114 2897 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/aef1de9f3f99ec48a5250963614ffe2d-ca-certs\") pod \"kube-controller-manager-ip-172-31-27-90\" (UID: \"aef1de9f3f99ec48a5250963614ffe2d\") " pod="kube-system/kube-controller-manager-ip-172-31-27-90"
May 27 02:47:55.742004 kubelet[2897]: I0527 02:47:55.741963 2897 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-27-90"
May 27 02:47:55.742555 kubelet[2897]: E0527 02:47:55.742491 2897 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.27.90:6443/api/v1/nodes\": dial tcp 172.31.27.90:6443: connect: connection refused" node="ip-172-31-27-90"
May 27 02:47:55.757628 kubelet[2897]: E0527 02:47:55.757576 2897 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-27-90\" not found" node="ip-172-31-27-90"
May 27 02:47:55.766952 systemd[1]: Created slice kubepods-burstable-podaef1de9f3f99ec48a5250963614ffe2d.slice - libcontainer container kubepods-burstable-podaef1de9f3f99ec48a5250963614ffe2d.slice.
May 27 02:47:55.781770 kubelet[2897]: E0527 02:47:55.781720 2897 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-27-90\" not found" node="ip-172-31-27-90"
May 27 02:47:55.786765 systemd[1]: Created slice kubepods-burstable-pod4e488bcebd606a9dda53a6384e883616.slice - libcontainer container kubepods-burstable-pod4e488bcebd606a9dda53a6384e883616.slice.
May 27 02:47:55.790878 kubelet[2897]: E0527 02:47:55.790710 2897 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-27-90\" not found" node="ip-172-31-27-90"
May 27 02:47:55.945568 kubelet[2897]: I0527 02:47:55.945518 2897 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-27-90"
May 27 02:47:55.946179 kubelet[2897]: E0527 02:47:55.946135 2897 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.27.90:6443/api/v1/nodes\": dial tcp 172.31.27.90:6443: connect: connection refused" node="ip-172-31-27-90"
May 27 02:47:56.059757 containerd[2001]: time="2025-05-27T02:47:56.059294095Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-27-90,Uid:1c0d6e259e3032b12e81fc8a0559bf6d,Namespace:kube-system,Attempt:0,}"
May 27 02:47:56.083927 containerd[2001]: time="2025-05-27T02:47:56.083828803Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-27-90,Uid:aef1de9f3f99ec48a5250963614ffe2d,Namespace:kube-system,Attempt:0,}"
May 27 02:47:56.092427 containerd[2001]: time="2025-05-27T02:47:56.092351863Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-27-90,Uid:4e488bcebd606a9dda53a6384e883616,Namespace:kube-system,Attempt:0,}"
May 27 02:47:56.117166 containerd[2001]: time="2025-05-27T02:47:56.117106039Z" level=info msg="connecting to shim fd0b2a85e72eb3aaf9728848a8cb7d70aa312de57e0f952194f81acd451ce810" address="unix:///run/containerd/s/31c713fdf1f568457f6a8bff3db995c7e024dab41f50ac33cfee37ec2732fd82" namespace=k8s.io protocol=ttrpc version=3
May 27 02:47:56.138197 kubelet[2897]: E0527 02:47:56.138117 2897 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.27.90:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-27-90?timeout=10s\": dial tcp 172.31.27.90:6443: connect: connection refused" interval="800ms"
May 27 02:47:56.186592 containerd[2001]: time="2025-05-27T02:47:56.186405151Z" level=info msg="connecting to shim 0da6ee294ba46c9abe74c31d2938c09adb0f2efa489a7a86ac632d01c084857e" address="unix:///run/containerd/s/ba456ecbb3a039f4ee54aca62e497413e4f8d6fa2ce46dda3be8afb4effcaaad" namespace=k8s.io protocol=ttrpc version=3
May 27 02:47:56.193477 containerd[2001]: time="2025-05-27T02:47:56.193191667Z" level=info msg="connecting to shim 098418c46eab4ce4ff60284c0d6b650194fe305f55fb36a6b443df1f9d5a6904" address="unix:///run/containerd/s/bdd260172242c4b9902cf38e0de3c5ba22aa077aa82f11b85f2390f8de7c5368" namespace=k8s.io protocol=ttrpc version=3
May 27 02:47:56.201118 systemd[1]: Started cri-containerd-fd0b2a85e72eb3aaf9728848a8cb7d70aa312de57e0f952194f81acd451ce810.scope - libcontainer container fd0b2a85e72eb3aaf9728848a8cb7d70aa312de57e0f952194f81acd451ce810.
May 27 02:47:56.269221 systemd[1]: Started cri-containerd-0da6ee294ba46c9abe74c31d2938c09adb0f2efa489a7a86ac632d01c084857e.scope - libcontainer container 0da6ee294ba46c9abe74c31d2938c09adb0f2efa489a7a86ac632d01c084857e.
May 27 02:47:56.279591 systemd[1]: Started cri-containerd-098418c46eab4ce4ff60284c0d6b650194fe305f55fb36a6b443df1f9d5a6904.scope - libcontainer container 098418c46eab4ce4ff60284c0d6b650194fe305f55fb36a6b443df1f9d5a6904.
May 27 02:47:56.340429 containerd[2001]: time="2025-05-27T02:47:56.340244624Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-27-90,Uid:1c0d6e259e3032b12e81fc8a0559bf6d,Namespace:kube-system,Attempt:0,} returns sandbox id \"fd0b2a85e72eb3aaf9728848a8cb7d70aa312de57e0f952194f81acd451ce810\""
May 27 02:47:56.353938 kubelet[2897]: I0527 02:47:56.353725 2897 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-27-90"
May 27 02:47:56.355626 kubelet[2897]: E0527 02:47:56.355505 2897 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.27.90:6443/api/v1/nodes\": dial tcp 172.31.27.90:6443: connect: connection refused" node="ip-172-31-27-90"
May 27 02:47:56.355785 containerd[2001]: time="2025-05-27T02:47:56.355585400Z" level=info msg="CreateContainer within sandbox \"fd0b2a85e72eb3aaf9728848a8cb7d70aa312de57e0f952194f81acd451ce810\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
May 27 02:47:56.389862 containerd[2001]: time="2025-05-27T02:47:56.386808428Z" level=info msg="Container c33558de446d1318f395151bd17ada286042ebce26f135d940eb508df3ec7016: CDI devices from CRI Config.CDIDevices: []"
May 27 02:47:56.414249 containerd[2001]: time="2025-05-27T02:47:56.414127520Z" level=info msg="CreateContainer within sandbox \"fd0b2a85e72eb3aaf9728848a8cb7d70aa312de57e0f952194f81acd451ce810\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"c33558de446d1318f395151bd17ada286042ebce26f135d940eb508df3ec7016\""
May 27 02:47:56.415507 containerd[2001]: time="2025-05-27T02:47:56.415455057Z" level=info msg="StartContainer for \"c33558de446d1318f395151bd17ada286042ebce26f135d940eb508df3ec7016\""
May 27 02:47:56.419289 containerd[2001]: time="2025-05-27T02:47:56.419118945Z" level=info msg="connecting to shim c33558de446d1318f395151bd17ada286042ebce26f135d940eb508df3ec7016" address="unix:///run/containerd/s/31c713fdf1f568457f6a8bff3db995c7e024dab41f50ac33cfee37ec2732fd82" protocol=ttrpc version=3
May 27 02:47:56.432292 containerd[2001]: time="2025-05-27T02:47:56.432238521Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-27-90,Uid:aef1de9f3f99ec48a5250963614ffe2d,Namespace:kube-system,Attempt:0,} returns sandbox id \"0da6ee294ba46c9abe74c31d2938c09adb0f2efa489a7a86ac632d01c084857e\""
May 27 02:47:56.440308 containerd[2001]: time="2025-05-27T02:47:56.440199393Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-27-90,Uid:4e488bcebd606a9dda53a6384e883616,Namespace:kube-system,Attempt:0,} returns sandbox id \"098418c46eab4ce4ff60284c0d6b650194fe305f55fb36a6b443df1f9d5a6904\""
May 27 02:47:56.443097 containerd[2001]: time="2025-05-27T02:47:56.443046561Z" level=info msg="CreateContainer within sandbox \"0da6ee294ba46c9abe74c31d2938c09adb0f2efa489a7a86ac632d01c084857e\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
May 27 02:47:56.461817 containerd[2001]: time="2025-05-27T02:47:56.461746101Z" level=info msg="Container 21654bbe386a508627d109c6b4cbeb2d735f6b97bbd0afc3fa0a23fc411e7e57: CDI devices from CRI Config.CDIDevices: []"
May 27 02:47:56.466249 systemd[1]: Started cri-containerd-c33558de446d1318f395151bd17ada286042ebce26f135d940eb508df3ec7016.scope - libcontainer container c33558de446d1318f395151bd17ada286042ebce26f135d940eb508df3ec7016.
May 27 02:47:56.469376 containerd[2001]: time="2025-05-27T02:47:56.469314933Z" level=info msg="CreateContainer within sandbox \"098418c46eab4ce4ff60284c0d6b650194fe305f55fb36a6b443df1f9d5a6904\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
May 27 02:47:56.487973 update_engine[1978]: I20250527 02:47:56.487892 1978 update_attempter.cc:509] Updating boot flags...
May 27 02:47:56.489551 containerd[2001]: time="2025-05-27T02:47:56.488567733Z" level=info msg="CreateContainer within sandbox \"0da6ee294ba46c9abe74c31d2938c09adb0f2efa489a7a86ac632d01c084857e\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"21654bbe386a508627d109c6b4cbeb2d735f6b97bbd0afc3fa0a23fc411e7e57\""
May 27 02:47:56.491872 containerd[2001]: time="2025-05-27T02:47:56.490695897Z" level=info msg="StartContainer for \"21654bbe386a508627d109c6b4cbeb2d735f6b97bbd0afc3fa0a23fc411e7e57\""
May 27 02:47:56.496327 containerd[2001]: time="2025-05-27T02:47:56.496241661Z" level=info msg="connecting to shim 21654bbe386a508627d109c6b4cbeb2d735f6b97bbd0afc3fa0a23fc411e7e57" address="unix:///run/containerd/s/ba456ecbb3a039f4ee54aca62e497413e4f8d6fa2ce46dda3be8afb4effcaaad" protocol=ttrpc version=3
May 27 02:47:56.497442 containerd[2001]: time="2025-05-27T02:47:56.497397309Z" level=info msg="Container ddf3c31bef3691df2bc144f26dbd4212728ce25f8ac67affd5924b372ea48b26: CDI devices from CRI Config.CDIDevices: []"
May 27 02:47:56.527129 containerd[2001]: time="2025-05-27T02:47:56.526946589Z" level=info msg="CreateContainer within sandbox \"098418c46eab4ce4ff60284c0d6b650194fe305f55fb36a6b443df1f9d5a6904\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"ddf3c31bef3691df2bc144f26dbd4212728ce25f8ac67affd5924b372ea48b26\""
May 27 02:47:56.529854 containerd[2001]: time="2025-05-27T02:47:56.529789941Z" level=info msg="StartContainer for \"ddf3c31bef3691df2bc144f26dbd4212728ce25f8ac67affd5924b372ea48b26\""
May 27 02:47:56.534181 containerd[2001]: time="2025-05-27T02:47:56.534126537Z" level=info msg="connecting to shim ddf3c31bef3691df2bc144f26dbd4212728ce25f8ac67affd5924b372ea48b26" address="unix:///run/containerd/s/bdd260172242c4b9902cf38e0de3c5ba22aa077aa82f11b85f2390f8de7c5368" protocol=ttrpc version=3
May 27 02:47:56.571241 systemd[1]: Started cri-containerd-21654bbe386a508627d109c6b4cbeb2d735f6b97bbd0afc3fa0a23fc411e7e57.scope - libcontainer container 21654bbe386a508627d109c6b4cbeb2d735f6b97bbd0afc3fa0a23fc411e7e57.
May 27 02:47:56.643462 kubelet[2897]: E0527 02:47:56.642323 2897 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.27.90:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.27.90:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
May 27 02:47:56.651620 kubelet[2897]: E0527 02:47:56.649827 2897 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.27.90:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-27-90&limit=500&resourceVersion=0\": dial tcp 172.31.27.90:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
May 27 02:47:56.759160 systemd[1]: Started cri-containerd-ddf3c31bef3691df2bc144f26dbd4212728ce25f8ac67affd5924b372ea48b26.scope - libcontainer container ddf3c31bef3691df2bc144f26dbd4212728ce25f8ac67affd5924b372ea48b26.
May 27 02:47:56.775418 containerd[2001]: time="2025-05-27T02:47:56.775104934Z" level=info msg="StartContainer for \"c33558de446d1318f395151bd17ada286042ebce26f135d940eb508df3ec7016\" returns successfully"
May 27 02:47:56.869486 containerd[2001]: time="2025-05-27T02:47:56.869408507Z" level=info msg="StartContainer for \"21654bbe386a508627d109c6b4cbeb2d735f6b97bbd0afc3fa0a23fc411e7e57\" returns successfully"
May 27 02:47:56.883308 kubelet[2897]: E0527 02:47:56.883239 2897 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.27.90:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.27.90:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
May 27 02:47:56.940908 kubelet[2897]: E0527 02:47:56.939432 2897 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.27.90:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-27-90?timeout=10s\": dial tcp 172.31.27.90:6443: connect: connection refused" interval="1.6s"
May 27 02:47:56.995583 kubelet[2897]: E0527 02:47:56.995537 2897 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.27.90:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.27.90:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
May 27 02:47:57.166393 kubelet[2897]: I0527 02:47:57.164991 2897 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-27-90"
May 27 02:47:57.311879 containerd[2001]: time="2025-05-27T02:47:57.309377529Z" level=info msg="StartContainer for \"ddf3c31bef3691df2bc144f26dbd4212728ce25f8ac67affd5924b372ea48b26\" returns successfully"
May 27 02:47:57.824314 kubelet[2897]: E0527 02:47:57.823968 2897 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-27-90\" not found" node="ip-172-31-27-90"
May 27 02:47:57.841784 kubelet[2897]: E0527 02:47:57.838814 2897 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-27-90\" not found" node="ip-172-31-27-90"
May 27 02:47:57.849097 kubelet[2897]: E0527 02:47:57.849062 2897 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-27-90\" not found" node="ip-172-31-27-90"
May 27 02:47:58.852242 kubelet[2897]: E0527 02:47:58.851812 2897 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-27-90\" not found" node="ip-172-31-27-90"
May 27 02:47:58.854069 kubelet[2897]: E0527 02:47:58.853678 2897 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-27-90\" not found" node="ip-172-31-27-90"
May 27 02:47:58.856867 kubelet[2897]: E0527 02:47:58.855740 2897 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-27-90\" not found" node="ip-172-31-27-90"
May 27 02:47:59.853764 kubelet[2897]: E0527 02:47:59.852816 2897 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-27-90\" not found" node="ip-172-31-27-90"
May 27 02:47:59.853764 kubelet[2897]: E0527 02:47:59.853472 2897 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-27-90\" not found" node="ip-172-31-27-90"
May 27 02:48:01.057022 kubelet[2897]: E0527 02:48:01.056933 2897 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-27-90\" not found" node="ip-172-31-27-90"
May 27 02:48:01.175988 kubelet[2897]: E0527 02:48:01.175137 2897 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-27-90\" not found" node="ip-172-31-27-90"
May 27 02:48:01.202314 kubelet[2897]: I0527 02:48:01.202264 2897 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-27-90"
May 27 02:48:01.231044 kubelet[2897]: I0527 02:48:01.230988 2897 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-27-90"
May 27 02:48:01.281946 kubelet[2897]: E0527 02:48:01.281870 2897 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-27-90\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ip-172-31-27-90"
May 27 02:48:01.282286 kubelet[2897]: I0527 02:48:01.281916 2897 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-27-90"
May 27 02:48:01.290292 kubelet[2897]: E0527 02:48:01.290216 2897 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-27-90\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ip-172-31-27-90"
May 27 02:48:01.291867 kubelet[2897]: I0527 02:48:01.290262 2897 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-27-90"
May 27 02:48:01.304703 kubelet[2897]: E0527 02:48:01.304635 2897 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-27-90\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ip-172-31-27-90"
May 27 02:48:01.520572 kubelet[2897]: I0527 02:48:01.520262 2897 apiserver.go:52] "Watching apiserver"
May 27 02:48:01.631439 kubelet[2897]: I0527 02:48:01.631383 2897 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
May 27 02:48:04.365707 systemd[1]: Reload requested from client PID 3363 ('systemctl') (unit session-9.scope)...
May 27 02:48:04.366238 systemd[1]: Reloading...
May 27 02:48:04.551883 zram_generator::config[3413]: No configuration found.
May 27 02:48:04.739662 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 27 02:48:05.057072 systemd[1]: Reloading finished in 690 ms.
May 27 02:48:05.102436 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
May 27 02:48:05.128226 systemd[1]: kubelet.service: Deactivated successfully.
May 27 02:48:05.130020 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
May 27 02:48:05.130123 systemd[1]: kubelet.service: Consumed 4.315s CPU time, 128.5M memory peak.
May 27 02:48:05.134780 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 27 02:48:05.509622 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 27 02:48:05.528482 (kubelet)[3467]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
May 27 02:48:05.622195 kubelet[3467]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 27 02:48:05.622195 kubelet[3467]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
May 27 02:48:05.622195 kubelet[3467]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 27 02:48:05.622739 kubelet[3467]: I0527 02:48:05.622307 3467 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
May 27 02:48:05.639888 kubelet[3467]: I0527 02:48:05.639587 3467 server.go:530] "Kubelet version" kubeletVersion="v1.33.0"
May 27 02:48:05.639888 kubelet[3467]: I0527 02:48:05.639630 3467 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
May 27 02:48:05.640414 kubelet[3467]: I0527 02:48:05.640391 3467 server.go:956] "Client rotation is on, will bootstrap in background"
May 27 02:48:05.644328 kubelet[3467]: I0527 02:48:05.644271 3467 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem"
May 27 02:48:05.655900 kubelet[3467]: I0527 02:48:05.655331 3467 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
May 27 02:48:05.670885 kubelet[3467]: I0527 02:48:05.670331 3467 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
May 27 02:48:05.677153 kubelet[3467]: I0527 02:48:05.677104 3467 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
May 27 02:48:05.677655 kubelet[3467]: I0527 02:48:05.677610 3467 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
May 27 02:48:05.678533 kubelet[3467]: I0527 02:48:05.677656 3467 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-27-90","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
May 27 02:48:05.678721 kubelet[3467]: I0527 02:48:05.678542 3467 topology_manager.go:138] "Creating topology manager with none policy"
May 27 02:48:05.678721 kubelet[3467]: I0527 02:48:05.678564 3467 container_manager_linux.go:303] "Creating device plugin manager"
May 27 02:48:05.678721 kubelet[3467]: I0527 02:48:05.678645 3467 state_mem.go:36] "Initialized new in-memory state store"
May 27 02:48:05.679081 kubelet[3467]: I0527 02:48:05.679035 3467 kubelet.go:480] "Attempting to sync node with API server"
May 27 02:48:05.679081 kubelet[3467]: I0527 02:48:05.679087 3467 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
May 27 02:48:05.679229 kubelet[3467]: I0527 02:48:05.679135 3467 kubelet.go:386] "Adding apiserver pod source"
May 27 02:48:05.679229 kubelet[3467]: I0527 02:48:05.679176 3467 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
May 27 02:48:05.689855 kubelet[3467]: I0527 02:48:05.688232 3467 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1"
May 27 02:48:05.691860 kubelet[3467]: I0527 02:48:05.690558 3467 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
May 27 02:48:05.702856 kubelet[3467]: I0527 02:48:05.702199 3467 watchdog_linux.go:99] "Systemd watchdog is not enabled"
May 27 02:48:05.702856 kubelet[3467]: I0527 02:48:05.702289 3467 server.go:1289] "Started kubelet"
May 27 02:48:05.704306 kubelet[3467]: I0527 02:48:05.704258 3467 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
May 27 02:48:05.705666 kubelet[3467]: I0527 02:48:05.705580 3467 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
May 27 02:48:05.708822 kubelet[3467]: I0527 02:48:05.708752 3467 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
May 27 02:48:05.708822 kubelet[3467]: I0527 02:48:05.706066 3467 server.go:317] "Adding debug handlers to kubelet server"
May 27 02:48:05.711286 kubelet[3467]: I0527 02:48:05.711253 3467 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
May 27 02:48:05.714184 kubelet[3467]: I0527 02:48:05.714101 3467 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
May 27 02:48:05.723971 kubelet[3467]: E0527 02:48:05.723200 3467 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-27-90\" not found"
May 27 02:48:05.752883 kubelet[3467]: I0527 02:48:05.752662 3467 volume_manager.go:297] "Starting Kubelet Volume Manager"
May 27 02:48:05.753051 kubelet[3467]: I0527 02:48:05.752897 3467 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
May 27 02:48:05.753163 kubelet[3467]: I0527 02:48:05.753131 3467 reconciler.go:26] "Reconciler: start to sync state"
May 27 02:48:05.753163 kubelet[3467]: I0527 02:48:05.753693 3467 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
May 27 02:48:05.800965 kubelet[3467]: I0527 02:48:05.799822 3467 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
May 27 02:48:05.810297 kubelet[3467]: I0527 02:48:05.809828 3467 factory.go:223] Registration of the containerd container factory successfully
May 27 02:48:05.810297 kubelet[3467]: I0527 02:48:05.809993 3467 factory.go:223] Registration of the systemd container factory successfully
May 27 02:48:05.812562 kubelet[3467]: I0527 02:48:05.812270 3467 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
May 27 02:48:05.812817 kubelet[3467]: I0527 02:48:05.812793 3467 status_manager.go:230] "Starting to sync pod status with apiserver"
May 27 02:48:05.813854 kubelet[3467]: I0527 02:48:05.812991 3467 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
May 27 02:48:05.813854 kubelet[3467]: I0527 02:48:05.813014 3467 kubelet.go:2436] "Starting kubelet main sync loop"
May 27 02:48:05.822007 kubelet[3467]: E0527 02:48:05.821949 3467 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
May 27 02:48:05.835322 kubelet[3467]: E0527 02:48:05.835103 3467 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
May 27 02:48:05.922201 kubelet[3467]: E0527 02:48:05.922149 3467 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
May 27 02:48:05.937876 kubelet[3467]: I0527 02:48:05.937829 3467 cpu_manager.go:221] "Starting CPU manager" policy="none"
May 27 02:48:05.938162 kubelet[3467]: I0527 02:48:05.938127 3467 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
May 27 02:48:05.938273 kubelet[3467]: I0527 02:48:05.938256 3467 state_mem.go:36] "Initialized new in-memory state store"
May 27 02:48:05.938583 kubelet[3467]: I0527 02:48:05.938559 3467 state_mem.go:88] "Updated default CPUSet" cpuSet=""
May 27 02:48:05.938702 kubelet[3467]: I0527 02:48:05.938666 3467 state_mem.go:96] "Updated CPUSet assignments" assignments={}
May 27 02:48:05.938800 kubelet[3467]: I0527 02:48:05.938783 3467 policy_none.go:49] "None policy: Start"
May 27 02:48:05.938957 kubelet[3467]: I0527 02:48:05.938937 3467 memory_manager.go:186] "Starting memorymanager" policy="None"
May 27 02:48:05.939054 kubelet[3467]: I0527 02:48:05.939037 3467 state_mem.go:35] "Initializing new in-memory state store"
May 27 02:48:05.939337 kubelet[3467]: I0527 02:48:05.939319 3467 state_mem.go:75] "Updated machine memory state"
May 27 02:48:05.959869 kubelet[3467]: E0527 02:48:05.959342 3467 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
May 27 02:48:05.959869 kubelet[3467]: I0527 02:48:05.959618 3467 eviction_manager.go:189] "Eviction manager: starting control loop"
May 27 02:48:05.959869 kubelet[3467]: I0527 02:48:05.959637 3467 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
May 27 02:48:05.960648 kubelet[3467]: I0527 02:48:05.960622 3467 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
May 27 02:48:05.964151 kubelet[3467]: E0527 02:48:05.964114 3467 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
May 27 02:48:06.090894 kubelet[3467]: I0527 02:48:06.089617 3467 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-27-90"
May 27 02:48:06.103353 kubelet[3467]: I0527 02:48:06.103289 3467 kubelet_node_status.go:124] "Node was previously registered" node="ip-172-31-27-90"
May 27 02:48:06.103515 kubelet[3467]: I0527 02:48:06.103407 3467 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-27-90"
May 27 02:48:06.126376 kubelet[3467]: I0527 02:48:06.126324 3467 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-27-90"
May 27 02:48:06.127448 kubelet[3467]: I0527 02:48:06.127367 3467 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-27-90"
May 27 02:48:06.130858 kubelet[3467]: I0527 02:48:06.130559 3467 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-27-90"
May 27 02:48:06.156193 kubelet[3467]: I0527 02:48:06.156067 3467 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4e488bcebd606a9dda53a6384e883616-kubeconfig\") pod \"kube-scheduler-ip-172-31-27-90\" (UID: \"4e488bcebd606a9dda53a6384e883616\") " pod="kube-system/kube-scheduler-ip-172-31-27-90"
May 27 02:48:06.156352 kubelet[3467]: I0527 02:48:06.156233 3467 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1c0d6e259e3032b12e81fc8a0559bf6d-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-27-90\" (UID: \"1c0d6e259e3032b12e81fc8a0559bf6d\") " pod="kube-system/kube-apiserver-ip-172-31-27-90"
May 27 02:48:06.156352 kubelet[3467]: I0527 02:48:06.156298 3467 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/aef1de9f3f99ec48a5250963614ffe2d-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-27-90\" (UID: \"aef1de9f3f99ec48a5250963614ffe2d\") " pod="kube-system/kube-controller-manager-ip-172-31-27-90"
May 27 02:48:06.156462 kubelet[3467]: I0527 02:48:06.156344 3467 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/aef1de9f3f99ec48a5250963614ffe2d-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-27-90\" (UID: \"aef1de9f3f99ec48a5250963614ffe2d\") " pod="kube-system/kube-controller-manager-ip-172-31-27-90"
May 27 02:48:06.156462 kubelet[3467]: I0527 02:48:06.156383 3467 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1c0d6e259e3032b12e81fc8a0559bf6d-ca-certs\") pod \"kube-apiserver-ip-172-31-27-90\" (UID: \"1c0d6e259e3032b12e81fc8a0559bf6d\") " pod="kube-system/kube-apiserver-ip-172-31-27-90"
May 27 02:48:06.156462 kubelet[3467]: I0527 02:48:06.156418 3467 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1c0d6e259e3032b12e81fc8a0559bf6d-k8s-certs\") pod \"kube-apiserver-ip-172-31-27-90\" (UID: \"1c0d6e259e3032b12e81fc8a0559bf6d\") " pod="kube-system/kube-apiserver-ip-172-31-27-90"
May 27 02:48:06.156462 kubelet[3467]: I0527 02:48:06.156459 3467 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/aef1de9f3f99ec48a5250963614ffe2d-ca-certs\") pod \"kube-controller-manager-ip-172-31-27-90\" (UID: \"aef1de9f3f99ec48a5250963614ffe2d\") " pod="kube-system/kube-controller-manager-ip-172-31-27-90"
May 27 02:48:06.156666 kubelet[3467]: I0527 02:48:06.156495 3467 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/aef1de9f3f99ec48a5250963614ffe2d-k8s-certs\") pod \"kube-controller-manager-ip-172-31-27-90\" (UID: \"aef1de9f3f99ec48a5250963614ffe2d\") " pod="kube-system/kube-controller-manager-ip-172-31-27-90"
May 27 02:48:06.156666 kubelet[3467]: I0527 02:48:06.156532 3467 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/aef1de9f3f99ec48a5250963614ffe2d-kubeconfig\") pod \"kube-controller-manager-ip-172-31-27-90\" (UID: \"aef1de9f3f99ec48a5250963614ffe2d\") " pod="kube-system/kube-controller-manager-ip-172-31-27-90"
May 27 02:48:06.681802 kubelet[3467]: I0527 02:48:06.681455 3467 apiserver.go:52] "Watching apiserver"
May 27 02:48:06.753256 kubelet[3467]: I0527 02:48:06.753178 3467 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
May 27 02:48:06.881342 kubelet[3467]: I0527 02:48:06.881282
3467 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-27-90" May 27 02:48:06.893910 kubelet[3467]: E0527 02:48:06.893849 3467 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-27-90\" already exists" pod="kube-system/kube-apiserver-ip-172-31-27-90" May 27 02:48:06.959053 kubelet[3467]: I0527 02:48:06.957297 3467 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-27-90" podStartSLOduration=0.957271365 podStartE2EDuration="957.271365ms" podCreationTimestamp="2025-05-27 02:48:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 02:48:06.937035465 +0000 UTC m=+1.397573696" watchObservedRunningTime="2025-05-27 02:48:06.957271365 +0000 UTC m=+1.417809572" May 27 02:48:06.982776 kubelet[3467]: I0527 02:48:06.982565 3467 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-27-90" podStartSLOduration=0.982543605 podStartE2EDuration="982.543605ms" podCreationTimestamp="2025-05-27 02:48:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 02:48:06.961456509 +0000 UTC m=+1.421994716" watchObservedRunningTime="2025-05-27 02:48:06.982543605 +0000 UTC m=+1.443081824" May 27 02:48:07.014220 kubelet[3467]: I0527 02:48:07.014103 3467 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-27-90" podStartSLOduration=1.014052005 podStartE2EDuration="1.014052005s" podCreationTimestamp="2025-05-27 02:48:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 02:48:06.982876533 +0000 UTC m=+1.443414824" watchObservedRunningTime="2025-05-27 
02:48:07.014052005 +0000 UTC m=+1.474590224" May 27 02:48:09.711654 kubelet[3467]: I0527 02:48:09.711127 3467 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 27 02:48:09.714769 containerd[2001]: time="2025-05-27T02:48:09.714698675Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." May 27 02:48:09.720500 kubelet[3467]: I0527 02:48:09.720289 3467 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 27 02:48:10.775812 systemd[1]: Created slice kubepods-besteffort-pod83e6f51a_9bbc_4c87_88fe_ea2fb09f80d8.slice - libcontainer container kubepods-besteffort-pod83e6f51a_9bbc_4c87_88fe_ea2fb09f80d8.slice. May 27 02:48:10.790869 kubelet[3467]: I0527 02:48:10.788212 3467 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/83e6f51a-9bbc-4c87-88fe-ea2fb09f80d8-xtables-lock\") pod \"kube-proxy-b95bk\" (UID: \"83e6f51a-9bbc-4c87-88fe-ea2fb09f80d8\") " pod="kube-system/kube-proxy-b95bk" May 27 02:48:10.791616 kubelet[3467]: I0527 02:48:10.791571 3467 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pj2jm\" (UniqueName: \"kubernetes.io/projected/83e6f51a-9bbc-4c87-88fe-ea2fb09f80d8-kube-api-access-pj2jm\") pod \"kube-proxy-b95bk\" (UID: \"83e6f51a-9bbc-4c87-88fe-ea2fb09f80d8\") " pod="kube-system/kube-proxy-b95bk" May 27 02:48:10.791775 kubelet[3467]: I0527 02:48:10.791747 3467 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/83e6f51a-9bbc-4c87-88fe-ea2fb09f80d8-kube-proxy\") pod \"kube-proxy-b95bk\" (UID: \"83e6f51a-9bbc-4c87-88fe-ea2fb09f80d8\") " pod="kube-system/kube-proxy-b95bk" May 27 02:48:10.792060 kubelet[3467]: I0527 02:48:10.791986 3467 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/83e6f51a-9bbc-4c87-88fe-ea2fb09f80d8-lib-modules\") pod \"kube-proxy-b95bk\" (UID: \"83e6f51a-9bbc-4c87-88fe-ea2fb09f80d8\") " pod="kube-system/kube-proxy-b95bk" May 27 02:48:10.993785 kubelet[3467]: I0527 02:48:10.993726 3467 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/f0fef8da-52f8-47e7-9d2d-5cdd050a6a3f-var-lib-calico\") pod \"tigera-operator-844669ff44-sj6r7\" (UID: \"f0fef8da-52f8-47e7-9d2d-5cdd050a6a3f\") " pod="tigera-operator/tigera-operator-844669ff44-sj6r7" May 27 02:48:10.993951 kubelet[3467]: I0527 02:48:10.993809 3467 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jnjx7\" (UniqueName: \"kubernetes.io/projected/f0fef8da-52f8-47e7-9d2d-5cdd050a6a3f-kube-api-access-jnjx7\") pod \"tigera-operator-844669ff44-sj6r7\" (UID: \"f0fef8da-52f8-47e7-9d2d-5cdd050a6a3f\") " pod="tigera-operator/tigera-operator-844669ff44-sj6r7" May 27 02:48:11.007509 systemd[1]: Created slice kubepods-besteffort-podf0fef8da_52f8_47e7_9d2d_5cdd050a6a3f.slice - libcontainer container kubepods-besteffort-podf0fef8da_52f8_47e7_9d2d_5cdd050a6a3f.slice. 
May 27 02:48:11.091784 containerd[2001]: time="2025-05-27T02:48:11.091229865Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-b95bk,Uid:83e6f51a-9bbc-4c87-88fe-ea2fb09f80d8,Namespace:kube-system,Attempt:0,}" May 27 02:48:11.141243 containerd[2001]: time="2025-05-27T02:48:11.140976562Z" level=info msg="connecting to shim 79c3f68f424443ca528d9dee00f7fb8125ba42a3b51e21aeb01c1d3ec1750bea" address="unix:///run/containerd/s/29f666465a7a842113d6fb0b18d7b0afc7de65ca28f010e88f907bafd6903138" namespace=k8s.io protocol=ttrpc version=3 May 27 02:48:11.188166 systemd[1]: Started cri-containerd-79c3f68f424443ca528d9dee00f7fb8125ba42a3b51e21aeb01c1d3ec1750bea.scope - libcontainer container 79c3f68f424443ca528d9dee00f7fb8125ba42a3b51e21aeb01c1d3ec1750bea. May 27 02:48:11.245803 containerd[2001]: time="2025-05-27T02:48:11.245734054Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-b95bk,Uid:83e6f51a-9bbc-4c87-88fe-ea2fb09f80d8,Namespace:kube-system,Attempt:0,} returns sandbox id \"79c3f68f424443ca528d9dee00f7fb8125ba42a3b51e21aeb01c1d3ec1750bea\"" May 27 02:48:11.257861 containerd[2001]: time="2025-05-27T02:48:11.257762446Z" level=info msg="CreateContainer within sandbox \"79c3f68f424443ca528d9dee00f7fb8125ba42a3b51e21aeb01c1d3ec1750bea\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 27 02:48:11.278549 containerd[2001]: time="2025-05-27T02:48:11.278138794Z" level=info msg="Container 713a02d6f14061298a62b0ba87bef00bdb602d4e5ad0577c400d373ffe5467e8: CDI devices from CRI Config.CDIDevices: []" May 27 02:48:11.294704 containerd[2001]: time="2025-05-27T02:48:11.294619990Z" level=info msg="CreateContainer within sandbox \"79c3f68f424443ca528d9dee00f7fb8125ba42a3b51e21aeb01c1d3ec1750bea\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"713a02d6f14061298a62b0ba87bef00bdb602d4e5ad0577c400d373ffe5467e8\"" May 27 02:48:11.296693 containerd[2001]: time="2025-05-27T02:48:11.296041786Z" level=info 
msg="StartContainer for \"713a02d6f14061298a62b0ba87bef00bdb602d4e5ad0577c400d373ffe5467e8\"" May 27 02:48:11.299989 containerd[2001]: time="2025-05-27T02:48:11.299909170Z" level=info msg="connecting to shim 713a02d6f14061298a62b0ba87bef00bdb602d4e5ad0577c400d373ffe5467e8" address="unix:///run/containerd/s/29f666465a7a842113d6fb0b18d7b0afc7de65ca28f010e88f907bafd6903138" protocol=ttrpc version=3 May 27 02:48:11.316310 containerd[2001]: time="2025-05-27T02:48:11.316091447Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-844669ff44-sj6r7,Uid:f0fef8da-52f8-47e7-9d2d-5cdd050a6a3f,Namespace:tigera-operator,Attempt:0,}" May 27 02:48:11.336487 systemd[1]: Started cri-containerd-713a02d6f14061298a62b0ba87bef00bdb602d4e5ad0577c400d373ffe5467e8.scope - libcontainer container 713a02d6f14061298a62b0ba87bef00bdb602d4e5ad0577c400d373ffe5467e8. May 27 02:48:11.367816 containerd[2001]: time="2025-05-27T02:48:11.367148363Z" level=info msg="connecting to shim 4f62bb13d215bf0003910aee0a0b609716e187ff06216de0083c69af08dc0a5e" address="unix:///run/containerd/s/9eb22858d27a83f571d963a28854bb658ffe8a1903f640d7dd5e66cf5903b114" namespace=k8s.io protocol=ttrpc version=3 May 27 02:48:11.426176 systemd[1]: Started cri-containerd-4f62bb13d215bf0003910aee0a0b609716e187ff06216de0083c69af08dc0a5e.scope - libcontainer container 4f62bb13d215bf0003910aee0a0b609716e187ff06216de0083c69af08dc0a5e. 
May 27 02:48:11.454851 containerd[2001]: time="2025-05-27T02:48:11.454521335Z" level=info msg="StartContainer for \"713a02d6f14061298a62b0ba87bef00bdb602d4e5ad0577c400d373ffe5467e8\" returns successfully" May 27 02:48:11.565329 containerd[2001]: time="2025-05-27T02:48:11.565187748Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-844669ff44-sj6r7,Uid:f0fef8da-52f8-47e7-9d2d-5cdd050a6a3f,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"4f62bb13d215bf0003910aee0a0b609716e187ff06216de0083c69af08dc0a5e\"" May 27 02:48:11.571788 containerd[2001]: time="2025-05-27T02:48:11.571248396Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.0\"" May 27 02:48:11.976915 kubelet[3467]: I0527 02:48:11.976757 3467 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-b95bk" podStartSLOduration=1.9767350700000001 podStartE2EDuration="1.97673507s" podCreationTimestamp="2025-05-27 02:48:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 02:48:11.935894486 +0000 UTC m=+6.396432801" watchObservedRunningTime="2025-05-27 02:48:11.97673507 +0000 UTC m=+6.437273265" May 27 02:48:13.202946 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3407503393.mount: Deactivated successfully. 
May 27 02:48:14.138809 containerd[2001]: time="2025-05-27T02:48:14.138752137Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 02:48:14.140754 containerd[2001]: time="2025-05-27T02:48:14.140697577Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.0: active requests=0, bytes read=22143480" May 27 02:48:14.143195 containerd[2001]: time="2025-05-27T02:48:14.143105005Z" level=info msg="ImageCreate event name:\"sha256:171854d50ba608218142ad5d32c7dd12ce55d536f02872e56e7c04c1f0a96a6b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 02:48:14.147554 containerd[2001]: time="2025-05-27T02:48:14.147475297Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:e0a34b265aebce1a2db906d8dad99190706e8bf3910cae626b9c2eb6bbb21775\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 02:48:14.149085 containerd[2001]: time="2025-05-27T02:48:14.148885537Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.0\" with image id \"sha256:171854d50ba608218142ad5d32c7dd12ce55d536f02872e56e7c04c1f0a96a6b\", repo tag \"quay.io/tigera/operator:v1.38.0\", repo digest \"quay.io/tigera/operator@sha256:e0a34b265aebce1a2db906d8dad99190706e8bf3910cae626b9c2eb6bbb21775\", size \"22139475\" in 2.576747389s" May 27 02:48:14.149085 containerd[2001]: time="2025-05-27T02:48:14.148940497Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.0\" returns image reference \"sha256:171854d50ba608218142ad5d32c7dd12ce55d536f02872e56e7c04c1f0a96a6b\"" May 27 02:48:14.157884 containerd[2001]: time="2025-05-27T02:48:14.157412809Z" level=info msg="CreateContainer within sandbox \"4f62bb13d215bf0003910aee0a0b609716e187ff06216de0083c69af08dc0a5e\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" May 27 02:48:14.176580 containerd[2001]: time="2025-05-27T02:48:14.176524933Z" level=info msg="Container 
ee63982a68f1fc2b5adbf865ca249ffd0e829f98e2f0fbbb1abb7ea208e60c8d: CDI devices from CRI Config.CDIDevices: []" May 27 02:48:14.195455 containerd[2001]: time="2025-05-27T02:48:14.195395557Z" level=info msg="CreateContainer within sandbox \"4f62bb13d215bf0003910aee0a0b609716e187ff06216de0083c69af08dc0a5e\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"ee63982a68f1fc2b5adbf865ca249ffd0e829f98e2f0fbbb1abb7ea208e60c8d\"" May 27 02:48:14.196785 containerd[2001]: time="2025-05-27T02:48:14.196727197Z" level=info msg="StartContainer for \"ee63982a68f1fc2b5adbf865ca249ffd0e829f98e2f0fbbb1abb7ea208e60c8d\"" May 27 02:48:14.199759 containerd[2001]: time="2025-05-27T02:48:14.199637317Z" level=info msg="connecting to shim ee63982a68f1fc2b5adbf865ca249ffd0e829f98e2f0fbbb1abb7ea208e60c8d" address="unix:///run/containerd/s/9eb22858d27a83f571d963a28854bb658ffe8a1903f640d7dd5e66cf5903b114" protocol=ttrpc version=3 May 27 02:48:14.245144 systemd[1]: Started cri-containerd-ee63982a68f1fc2b5adbf865ca249ffd0e829f98e2f0fbbb1abb7ea208e60c8d.scope - libcontainer container ee63982a68f1fc2b5adbf865ca249ffd0e829f98e2f0fbbb1abb7ea208e60c8d. 
May 27 02:48:14.303641 containerd[2001]: time="2025-05-27T02:48:14.303573037Z" level=info msg="StartContainer for \"ee63982a68f1fc2b5adbf865ca249ffd0e829f98e2f0fbbb1abb7ea208e60c8d\" returns successfully" May 27 02:48:14.957306 kubelet[3467]: I0527 02:48:14.957213 3467 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-844669ff44-sj6r7" podStartSLOduration=2.376704588 podStartE2EDuration="4.957189977s" podCreationTimestamp="2025-05-27 02:48:10 +0000 UTC" firstStartedPulling="2025-05-27 02:48:11.570041496 +0000 UTC m=+6.030579727" lastFinishedPulling="2025-05-27 02:48:14.150526909 +0000 UTC m=+8.611065116" observedRunningTime="2025-05-27 02:48:14.957020513 +0000 UTC m=+9.417558720" watchObservedRunningTime="2025-05-27 02:48:14.957189977 +0000 UTC m=+9.417728184" May 27 02:48:20.999597 sudo[2359]: pam_unix(sudo:session): session closed for user root May 27 02:48:21.027873 sshd[2358]: Connection closed by 139.178.68.195 port 54984 May 27 02:48:21.030142 sshd-session[2353]: pam_unix(sshd:session): session closed for user core May 27 02:48:21.041910 systemd[1]: sshd@8-172.31.27.90:22-139.178.68.195:54984.service: Deactivated successfully. May 27 02:48:21.052139 systemd[1]: session-9.scope: Deactivated successfully. May 27 02:48:21.054583 systemd[1]: session-9.scope: Consumed 11.522s CPU time, 234.5M memory peak. May 27 02:48:21.061500 systemd-logind[1977]: Session 9 logged out. Waiting for processes to exit. May 27 02:48:21.068463 systemd-logind[1977]: Removed session 9. May 27 02:48:33.150146 systemd[1]: Created slice kubepods-besteffort-pod6254a01e_8e0c_43d2_bfd6_b8b9139be1f6.slice - libcontainer container kubepods-besteffort-pod6254a01e_8e0c_43d2_bfd6_b8b9139be1f6.slice. 
May 27 02:48:33.259961 kubelet[3467]: I0527 02:48:33.259882 3467 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-84hk2\" (UniqueName: \"kubernetes.io/projected/6254a01e-8e0c-43d2-bfd6-b8b9139be1f6-kube-api-access-84hk2\") pod \"calico-typha-5ffb56554b-9zhsp\" (UID: \"6254a01e-8e0c-43d2-bfd6-b8b9139be1f6\") " pod="calico-system/calico-typha-5ffb56554b-9zhsp" May 27 02:48:33.260486 kubelet[3467]: I0527 02:48:33.259972 3467 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6254a01e-8e0c-43d2-bfd6-b8b9139be1f6-tigera-ca-bundle\") pod \"calico-typha-5ffb56554b-9zhsp\" (UID: \"6254a01e-8e0c-43d2-bfd6-b8b9139be1f6\") " pod="calico-system/calico-typha-5ffb56554b-9zhsp" May 27 02:48:33.260486 kubelet[3467]: I0527 02:48:33.260020 3467 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/6254a01e-8e0c-43d2-bfd6-b8b9139be1f6-typha-certs\") pod \"calico-typha-5ffb56554b-9zhsp\" (UID: \"6254a01e-8e0c-43d2-bfd6-b8b9139be1f6\") " pod="calico-system/calico-typha-5ffb56554b-9zhsp" May 27 02:48:33.459714 containerd[2001]: time="2025-05-27T02:48:33.459550821Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5ffb56554b-9zhsp,Uid:6254a01e-8e0c-43d2-bfd6-b8b9139be1f6,Namespace:calico-system,Attempt:0,}" May 27 02:48:33.469617 systemd[1]: Created slice kubepods-besteffort-pod5403733d_f853_41c7_981a_831bad61f688.slice - libcontainer container kubepods-besteffort-pod5403733d_f853_41c7_981a_831bad61f688.slice. 
May 27 02:48:33.546463 containerd[2001]: time="2025-05-27T02:48:33.546183357Z" level=info msg="connecting to shim 4a40c470da2a03f690f3bf99d2534f92f39fdafbc61454d5e0144e25c9f638aa" address="unix:///run/containerd/s/6ce86b31e837912c513c8ba6b19c684d9baee24fd2676499206c031ead8150b3" namespace=k8s.io protocol=ttrpc version=3 May 27 02:48:33.565132 kubelet[3467]: I0527 02:48:33.565054 3467 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/5403733d-f853-41c7-981a-831bad61f688-flexvol-driver-host\") pod \"calico-node-hjfsz\" (UID: \"5403733d-f853-41c7-981a-831bad61f688\") " pod="calico-system/calico-node-hjfsz" May 27 02:48:33.565132 kubelet[3467]: I0527 02:48:33.565135 3467 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/5403733d-f853-41c7-981a-831bad61f688-var-lib-calico\") pod \"calico-node-hjfsz\" (UID: \"5403733d-f853-41c7-981a-831bad61f688\") " pod="calico-system/calico-node-hjfsz" May 27 02:48:33.565354 kubelet[3467]: I0527 02:48:33.565180 3467 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/5403733d-f853-41c7-981a-831bad61f688-var-run-calico\") pod \"calico-node-hjfsz\" (UID: \"5403733d-f853-41c7-981a-831bad61f688\") " pod="calico-system/calico-node-hjfsz" May 27 02:48:33.565354 kubelet[3467]: I0527 02:48:33.565220 3467 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/5403733d-f853-41c7-981a-831bad61f688-cni-bin-dir\") pod \"calico-node-hjfsz\" (UID: \"5403733d-f853-41c7-981a-831bad61f688\") " pod="calico-system/calico-node-hjfsz" May 27 02:48:33.565354 kubelet[3467]: I0527 02:48:33.565256 3467 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/5403733d-f853-41c7-981a-831bad61f688-policysync\") pod \"calico-node-hjfsz\" (UID: \"5403733d-f853-41c7-981a-831bad61f688\") " pod="calico-system/calico-node-hjfsz" May 27 02:48:33.565354 kubelet[3467]: I0527 02:48:33.565291 3467 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5403733d-f853-41c7-981a-831bad61f688-lib-modules\") pod \"calico-node-hjfsz\" (UID: \"5403733d-f853-41c7-981a-831bad61f688\") " pod="calico-system/calico-node-hjfsz" May 27 02:48:33.565354 kubelet[3467]: I0527 02:48:33.565328 3467 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/5403733d-f853-41c7-981a-831bad61f688-cni-log-dir\") pod \"calico-node-hjfsz\" (UID: \"5403733d-f853-41c7-981a-831bad61f688\") " pod="calico-system/calico-node-hjfsz" May 27 02:48:33.565601 kubelet[3467]: I0527 02:48:33.565365 3467 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5403733d-f853-41c7-981a-831bad61f688-tigera-ca-bundle\") pod \"calico-node-hjfsz\" (UID: \"5403733d-f853-41c7-981a-831bad61f688\") " pod="calico-system/calico-node-hjfsz" May 27 02:48:33.565601 kubelet[3467]: I0527 02:48:33.565406 3467 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/5403733d-f853-41c7-981a-831bad61f688-cni-net-dir\") pod \"calico-node-hjfsz\" (UID: \"5403733d-f853-41c7-981a-831bad61f688\") " pod="calico-system/calico-node-hjfsz" May 27 02:48:33.565601 kubelet[3467]: I0527 02:48:33.565441 3467 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" 
(UniqueName: \"kubernetes.io/host-path/5403733d-f853-41c7-981a-831bad61f688-xtables-lock\") pod \"calico-node-hjfsz\" (UID: \"5403733d-f853-41c7-981a-831bad61f688\") " pod="calico-system/calico-node-hjfsz" May 27 02:48:33.565601 kubelet[3467]: I0527 02:48:33.565476 3467 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bgb78\" (UniqueName: \"kubernetes.io/projected/5403733d-f853-41c7-981a-831bad61f688-kube-api-access-bgb78\") pod \"calico-node-hjfsz\" (UID: \"5403733d-f853-41c7-981a-831bad61f688\") " pod="calico-system/calico-node-hjfsz" May 27 02:48:33.565601 kubelet[3467]: I0527 02:48:33.565514 3467 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/5403733d-f853-41c7-981a-831bad61f688-node-certs\") pod \"calico-node-hjfsz\" (UID: \"5403733d-f853-41c7-981a-831bad61f688\") " pod="calico-system/calico-node-hjfsz" May 27 02:48:33.631317 systemd[1]: Started cri-containerd-4a40c470da2a03f690f3bf99d2534f92f39fdafbc61454d5e0144e25c9f638aa.scope - libcontainer container 4a40c470da2a03f690f3bf99d2534f92f39fdafbc61454d5e0144e25c9f638aa. May 27 02:48:33.673884 kubelet[3467]: E0527 02:48:33.673219 3467 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 02:48:33.673884 kubelet[3467]: W0527 02:48:33.673264 3467 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 02:48:33.673884 kubelet[3467]: E0527 02:48:33.673311 3467 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 02:48:33.674139 kubelet[3467]: E0527 02:48:33.673960 3467 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 02:48:33.674139 kubelet[3467]: W0527 02:48:33.673984 3467 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 02:48:33.674139 kubelet[3467]: E0527 02:48:33.674013 3467 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 02:48:33.676224 kubelet[3467]: E0527 02:48:33.676106 3467 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 02:48:33.676224 kubelet[3467]: W0527 02:48:33.676144 3467 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 02:48:33.676224 kubelet[3467]: E0527 02:48:33.676178 3467 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 02:48:33.677144 kubelet[3467]: E0527 02:48:33.676610 3467 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 02:48:33.677144 kubelet[3467]: W0527 02:48:33.676628 3467 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 02:48:33.677144 kubelet[3467]: E0527 02:48:33.676652 3467 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 02:48:33.677144 kubelet[3467]: E0527 02:48:33.676973 3467 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 02:48:33.677144 kubelet[3467]: W0527 02:48:33.676991 3467 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 02:48:33.677144 kubelet[3467]: E0527 02:48:33.677013 3467 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 02:48:33.678294 kubelet[3467]: E0527 02:48:33.677357 3467 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 02:48:33.678294 kubelet[3467]: W0527 02:48:33.677420 3467 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 02:48:33.678294 kubelet[3467]: E0527 02:48:33.677444 3467 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 02:48:33.678294 kubelet[3467]: E0527 02:48:33.678032 3467 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 02:48:33.678294 kubelet[3467]: W0527 02:48:33.678054 3467 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 02:48:33.678294 kubelet[3467]: E0527 02:48:33.678079 3467 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 02:48:33.679217 kubelet[3467]: E0527 02:48:33.678960 3467 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 02:48:33.679217 kubelet[3467]: W0527 02:48:33.678997 3467 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 02:48:33.679217 kubelet[3467]: E0527 02:48:33.679047 3467 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 02:48:33.680596 kubelet[3467]: E0527 02:48:33.680162 3467 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 02:48:33.680596 kubelet[3467]: W0527 02:48:33.680190 3467 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 02:48:33.680596 kubelet[3467]: E0527 02:48:33.680220 3467 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 02:48:33.681007 kubelet[3467]: E0527 02:48:33.680968 3467 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 02:48:33.681007 kubelet[3467]: W0527 02:48:33.681000 3467 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 02:48:33.681345 kubelet[3467]: E0527 02:48:33.681029 3467 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 02:48:33.682366 kubelet[3467]: E0527 02:48:33.682306 3467 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 02:48:33.682366 kubelet[3467]: W0527 02:48:33.682344 3467 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 02:48:33.682548 kubelet[3467]: E0527 02:48:33.682378 3467 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 02:48:33.686677 kubelet[3467]: E0527 02:48:33.686629 3467 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 02:48:33.686677 kubelet[3467]: W0527 02:48:33.686666 3467 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 02:48:33.690080 kubelet[3467]: E0527 02:48:33.686698 3467 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 02:48:33.690080 kubelet[3467]: E0527 02:48:33.689069 3467 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 02:48:33.690080 kubelet[3467]: W0527 02:48:33.689097 3467 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 02:48:33.690080 kubelet[3467]: E0527 02:48:33.689126 3467 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 02:48:33.722887 kubelet[3467]: E0527 02:48:33.722070 3467 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 02:48:33.722887 kubelet[3467]: W0527 02:48:33.722109 3467 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 02:48:33.722887 kubelet[3467]: E0527 02:48:33.722140 3467 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 02:48:33.783546 containerd[2001]: time="2025-05-27T02:48:33.783453790Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-hjfsz,Uid:5403733d-f853-41c7-981a-831bad61f688,Namespace:calico-system,Attempt:0,}" May 27 02:48:33.853873 containerd[2001]: time="2025-05-27T02:48:33.851999374Z" level=info msg="connecting to shim 9e4575ff6bea45154933687c2deef77587883a5808b6a158b03cef4460678350" address="unix:///run/containerd/s/47039e623da86c20de7fb7c021ccb032cdcde9b07474aad0d13d7c28b4485ee9" namespace=k8s.io protocol=ttrpc version=3 May 27 02:48:33.871290 kubelet[3467]: E0527 02:48:33.869926 3467 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-sx96z" podUID="5e891259-bbea-4c4e-9cf8-bdfb46083aeb" May 27 02:48:33.936164 kubelet[3467]: E0527 02:48:33.936119 3467 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 02:48:33.936164 kubelet[3467]: W0527 02:48:33.936155 3467 driver-call.go:149] FlexVolume: driver call failed: executable: 
/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 02:48:33.936384 kubelet[3467]: E0527 02:48:33.936188 3467 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 02:48:33.938350 kubelet[3467]: E0527 02:48:33.938290 3467 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 02:48:33.938640 kubelet[3467]: W0527 02:48:33.938332 3467 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 02:48:33.938640 kubelet[3467]: E0527 02:48:33.938407 3467 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 02:48:33.941208 kubelet[3467]: E0527 02:48:33.941157 3467 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 02:48:33.941208 kubelet[3467]: W0527 02:48:33.941196 3467 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 02:48:33.941439 kubelet[3467]: E0527 02:48:33.941229 3467 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 02:48:33.944469 kubelet[3467]: E0527 02:48:33.944416 3467 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 02:48:33.944469 kubelet[3467]: W0527 02:48:33.944454 3467 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 02:48:33.944665 kubelet[3467]: E0527 02:48:33.944488 3467 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 02:48:33.945121 kubelet[3467]: E0527 02:48:33.944887 3467 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 02:48:33.945121 kubelet[3467]: W0527 02:48:33.944915 3467 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 02:48:33.945121 kubelet[3467]: E0527 02:48:33.944939 3467 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 02:48:33.946870 kubelet[3467]: E0527 02:48:33.946019 3467 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 02:48:33.946870 kubelet[3467]: W0527 02:48:33.946056 3467 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 02:48:33.946870 kubelet[3467]: E0527 02:48:33.946090 3467 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 02:48:33.948397 kubelet[3467]: E0527 02:48:33.948344 3467 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 02:48:33.948397 kubelet[3467]: W0527 02:48:33.948383 3467 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 02:48:33.948611 kubelet[3467]: E0527 02:48:33.948417 3467 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 02:48:33.949772 kubelet[3467]: E0527 02:48:33.949723 3467 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 02:48:33.949772 kubelet[3467]: W0527 02:48:33.949760 3467 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 02:48:33.950233 kubelet[3467]: E0527 02:48:33.949793 3467 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 02:48:33.951172 kubelet[3467]: E0527 02:48:33.951124 3467 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 02:48:33.951172 kubelet[3467]: W0527 02:48:33.951160 3467 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 02:48:33.951365 kubelet[3467]: E0527 02:48:33.951194 3467 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 02:48:33.953329 kubelet[3467]: E0527 02:48:33.953279 3467 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 02:48:33.953329 kubelet[3467]: W0527 02:48:33.953318 3467 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 02:48:33.953499 kubelet[3467]: E0527 02:48:33.953352 3467 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 02:48:33.955024 kubelet[3467]: E0527 02:48:33.954954 3467 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 02:48:33.955024 kubelet[3467]: W0527 02:48:33.954994 3467 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 02:48:33.955247 kubelet[3467]: E0527 02:48:33.955047 3467 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 02:48:33.956821 kubelet[3467]: E0527 02:48:33.956669 3467 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 02:48:33.956821 kubelet[3467]: W0527 02:48:33.956707 3467 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 02:48:33.956821 kubelet[3467]: E0527 02:48:33.956742 3467 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 02:48:33.958989 kubelet[3467]: E0527 02:48:33.958945 3467 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 02:48:33.958989 kubelet[3467]: W0527 02:48:33.958981 3467 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 02:48:33.960735 kubelet[3467]: E0527 02:48:33.959029 3467 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 02:48:33.961695 kubelet[3467]: E0527 02:48:33.961640 3467 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 02:48:33.961695 kubelet[3467]: W0527 02:48:33.961684 3467 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 02:48:33.961882 kubelet[3467]: E0527 02:48:33.961722 3467 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 02:48:33.964372 kubelet[3467]: E0527 02:48:33.964330 3467 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 02:48:33.964484 kubelet[3467]: W0527 02:48:33.964396 3467 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 02:48:33.964484 kubelet[3467]: E0527 02:48:33.964431 3467 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 02:48:33.967226 kubelet[3467]: E0527 02:48:33.967106 3467 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 02:48:33.967226 kubelet[3467]: W0527 02:48:33.967149 3467 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 02:48:33.967226 kubelet[3467]: E0527 02:48:33.967183 3467 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 02:48:33.968033 kubelet[3467]: E0527 02:48:33.967985 3467 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 02:48:33.968033 kubelet[3467]: W0527 02:48:33.968022 3467 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 02:48:33.968226 kubelet[3467]: E0527 02:48:33.968054 3467 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 02:48:33.968804 kubelet[3467]: E0527 02:48:33.968757 3467 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 02:48:33.969166 kubelet[3467]: W0527 02:48:33.968863 3467 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 02:48:33.969166 kubelet[3467]: E0527 02:48:33.968898 3467 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 02:48:33.969700 kubelet[3467]: E0527 02:48:33.969656 3467 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 02:48:33.970008 kubelet[3467]: W0527 02:48:33.969691 3467 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 02:48:33.970008 kubelet[3467]: E0527 02:48:33.969963 3467 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 02:48:33.970663 kubelet[3467]: E0527 02:48:33.970617 3467 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 02:48:33.970663 kubelet[3467]: W0527 02:48:33.970653 3467 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 02:48:33.971736 kubelet[3467]: E0527 02:48:33.970682 3467 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 02:48:33.972183 kubelet[3467]: E0527 02:48:33.972133 3467 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 02:48:33.972183 kubelet[3467]: W0527 02:48:33.972171 3467 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 02:48:33.972362 kubelet[3467]: E0527 02:48:33.972203 3467 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 02:48:33.972362 kubelet[3467]: I0527 02:48:33.972245 3467 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ms4vp\" (UniqueName: \"kubernetes.io/projected/5e891259-bbea-4c4e-9cf8-bdfb46083aeb-kube-api-access-ms4vp\") pod \"csi-node-driver-sx96z\" (UID: \"5e891259-bbea-4c4e-9cf8-bdfb46083aeb\") " pod="calico-system/csi-node-driver-sx96z" May 27 02:48:33.973346 systemd[1]: Started cri-containerd-9e4575ff6bea45154933687c2deef77587883a5808b6a158b03cef4460678350.scope - libcontainer container 9e4575ff6bea45154933687c2deef77587883a5808b6a158b03cef4460678350. May 27 02:48:33.976395 kubelet[3467]: E0527 02:48:33.976185 3467 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 02:48:33.976395 kubelet[3467]: W0527 02:48:33.976226 3467 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 02:48:33.976395 kubelet[3467]: E0527 02:48:33.976260 3467 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 02:48:33.978089 kubelet[3467]: I0527 02:48:33.977959 3467 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5e891259-bbea-4c4e-9cf8-bdfb46083aeb-kubelet-dir\") pod \"csi-node-driver-sx96z\" (UID: \"5e891259-bbea-4c4e-9cf8-bdfb46083aeb\") " pod="calico-system/csi-node-driver-sx96z" May 27 02:48:33.979240 kubelet[3467]: E0527 02:48:33.978936 3467 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 02:48:33.979240 kubelet[3467]: W0527 02:48:33.978975 3467 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 02:48:33.979240 kubelet[3467]: E0527 02:48:33.979028 3467 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 02:48:33.981783 kubelet[3467]: E0527 02:48:33.981731 3467 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 02:48:33.981783 kubelet[3467]: W0527 02:48:33.981768 3467 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 02:48:33.982059 kubelet[3467]: E0527 02:48:33.981802 3467 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 02:48:33.983935 kubelet[3467]: E0527 02:48:33.983877 3467 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 02:48:33.984208 kubelet[3467]: W0527 02:48:33.983924 3467 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 02:48:33.984208 kubelet[3467]: E0527 02:48:33.984195 3467 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 02:48:33.987524 kubelet[3467]: I0527 02:48:33.987403 3467 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/5e891259-bbea-4c4e-9cf8-bdfb46083aeb-socket-dir\") pod \"csi-node-driver-sx96z\" (UID: \"5e891259-bbea-4c4e-9cf8-bdfb46083aeb\") " pod="calico-system/csi-node-driver-sx96z" May 27 02:48:33.989597 kubelet[3467]: E0527 02:48:33.988412 3467 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 02:48:33.989597 kubelet[3467]: W0527 02:48:33.988450 3467 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 02:48:33.989597 kubelet[3467]: E0527 02:48:33.988633 3467 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 02:48:33.991683 kubelet[3467]: E0527 02:48:33.991587 3467 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 02:48:33.991683 kubelet[3467]: W0527 02:48:33.991624 3467 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 02:48:33.991683 kubelet[3467]: E0527 02:48:33.991660 3467 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 02:48:33.994230 kubelet[3467]: E0527 02:48:33.994179 3467 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 02:48:33.994230 kubelet[3467]: W0527 02:48:33.994218 3467 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 02:48:33.994881 kubelet[3467]: E0527 02:48:33.994253 3467 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 02:48:33.994881 kubelet[3467]: I0527 02:48:33.994708 3467 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/5e891259-bbea-4c4e-9cf8-bdfb46083aeb-registration-dir\") pod \"csi-node-driver-sx96z\" (UID: \"5e891259-bbea-4c4e-9cf8-bdfb46083aeb\") " pod="calico-system/csi-node-driver-sx96z" May 27 02:48:33.995915 kubelet[3467]: E0527 02:48:33.995817 3467 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 02:48:33.995915 kubelet[3467]: W0527 02:48:33.995904 3467 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 02:48:33.996119 kubelet[3467]: E0527 02:48:33.995938 3467 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 02:48:33.998996 kubelet[3467]: E0527 02:48:33.998943 3467 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 02:48:33.998996 kubelet[3467]: W0527 02:48:33.998986 3467 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 02:48:33.998996 kubelet[3467]: E0527 02:48:33.999040 3467 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 02:48:34.000652 kubelet[3467]: E0527 02:48:34.000609 3467 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 02:48:34.000951 kubelet[3467]: W0527 02:48:34.000644 3467 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 02:48:34.000951 kubelet[3467]: E0527 02:48:34.000711 3467 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 02:48:34.000951 kubelet[3467]: I0527 02:48:34.000777 3467 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/5e891259-bbea-4c4e-9cf8-bdfb46083aeb-varrun\") pod \"csi-node-driver-sx96z\" (UID: \"5e891259-bbea-4c4e-9cf8-bdfb46083aeb\") " pod="calico-system/csi-node-driver-sx96z" May 27 02:48:34.001413 kubelet[3467]: E0527 02:48:34.001315 3467 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 02:48:34.001413 kubelet[3467]: W0527 02:48:34.001389 3467 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 02:48:34.001676 kubelet[3467]: E0527 02:48:34.001417 3467 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 02:48:34.002065 kubelet[3467]: E0527 02:48:34.002013 3467 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 02:48:34.002157 kubelet[3467]: W0527 02:48:34.002076 3467 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 02:48:34.002157 kubelet[3467]: E0527 02:48:34.002104 3467 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 02:48:34.003153 kubelet[3467]: E0527 02:48:34.003096 3467 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 02:48:34.003153 kubelet[3467]: W0527 02:48:34.003136 3467 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 02:48:34.003945 kubelet[3467]: E0527 02:48:34.003168 3467 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 02:48:34.005134 kubelet[3467]: E0527 02:48:34.005077 3467 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 02:48:34.005134 kubelet[3467]: W0527 02:48:34.005117 3467 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 02:48:34.005295 kubelet[3467]: E0527 02:48:34.005150 3467 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 02:48:34.079515 containerd[2001]: time="2025-05-27T02:48:34.079397648Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5ffb56554b-9zhsp,Uid:6254a01e-8e0c-43d2-bfd6-b8b9139be1f6,Namespace:calico-system,Attempt:0,} returns sandbox id \"4a40c470da2a03f690f3bf99d2534f92f39fdafbc61454d5e0144e25c9f638aa\"" May 27 02:48:34.085762 containerd[2001]: time="2025-05-27T02:48:34.085689764Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.0\"" May 27 02:48:34.103131 kubelet[3467]: E0527 02:48:34.103085 3467 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 02:48:34.103131 kubelet[3467]: W0527 02:48:34.103122 3467 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 02:48:34.103432 kubelet[3467]: E0527 02:48:34.103183 3467 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 02:48:34.104399 kubelet[3467]: E0527 02:48:34.104354 3467 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 02:48:34.104399 kubelet[3467]: W0527 02:48:34.104387 3467 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 02:48:34.104584 kubelet[3467]: E0527 02:48:34.104419 3467 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 02:48:34.107609 kubelet[3467]: E0527 02:48:34.107538 3467 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 02:48:34.107866 kubelet[3467]: W0527 02:48:34.107725 3467 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 02:48:34.108182 kubelet[3467]: E0527 02:48:34.107770 3467 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 02:48:34.109953 kubelet[3467]: E0527 02:48:34.109788 3467 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 02:48:34.110339 kubelet[3467]: W0527 02:48:34.109821 3467 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 02:48:34.110339 kubelet[3467]: E0527 02:48:34.110163 3467 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 02:48:34.113330 kubelet[3467]: E0527 02:48:34.113271 3467 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 02:48:34.113330 kubelet[3467]: W0527 02:48:34.113313 3467 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 02:48:34.113880 kubelet[3467]: E0527 02:48:34.113348 3467 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 02:48:34.113880 kubelet[3467]: E0527 02:48:34.113678 3467 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 02:48:34.113880 kubelet[3467]: W0527 02:48:34.113695 3467 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 02:48:34.113880 kubelet[3467]: E0527 02:48:34.113715 3467 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 02:48:34.114087 kubelet[3467]: E0527 02:48:34.114000 3467 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 02:48:34.114087 kubelet[3467]: W0527 02:48:34.114015 3467 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 02:48:34.114087 kubelet[3467]: E0527 02:48:34.114034 3467 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 02:48:34.115419 kubelet[3467]: E0527 02:48:34.115324 3467 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 02:48:34.115419 kubelet[3467]: W0527 02:48:34.115358 3467 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 02:48:34.115419 kubelet[3467]: E0527 02:48:34.115388 3467 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 02:48:34.116460 kubelet[3467]: E0527 02:48:34.116271 3467 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 02:48:34.116460 kubelet[3467]: W0527 02:48:34.116307 3467 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 02:48:34.116460 kubelet[3467]: E0527 02:48:34.116338 3467 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 02:48:34.117858 kubelet[3467]: E0527 02:48:34.117166 3467 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 02:48:34.117858 kubelet[3467]: W0527 02:48:34.117199 3467 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 02:48:34.117858 kubelet[3467]: E0527 02:48:34.117228 3467 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 02:48:34.118366 kubelet[3467]: E0527 02:48:34.117911 3467 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 02:48:34.118366 kubelet[3467]: W0527 02:48:34.117933 3467 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 02:48:34.118366 kubelet[3467]: E0527 02:48:34.117959 3467 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 02:48:34.119261 kubelet[3467]: E0527 02:48:34.118984 3467 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 02:48:34.119261 kubelet[3467]: W0527 02:48:34.119054 3467 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 02:48:34.119261 kubelet[3467]: E0527 02:48:34.119084 3467 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 02:48:34.120284 kubelet[3467]: E0527 02:48:34.120189 3467 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 02:48:34.120284 kubelet[3467]: W0527 02:48:34.120281 3467 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 02:48:34.120431 kubelet[3467]: E0527 02:48:34.120367 3467 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 02:48:34.120985 kubelet[3467]: E0527 02:48:34.120917 3467 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 02:48:34.120985 kubelet[3467]: W0527 02:48:34.120976 3467 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 02:48:34.121112 kubelet[3467]: E0527 02:48:34.121002 3467 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 02:48:34.121848 kubelet[3467]: E0527 02:48:34.121607 3467 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 02:48:34.121848 kubelet[3467]: W0527 02:48:34.121639 3467 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 02:48:34.121848 kubelet[3467]: E0527 02:48:34.121666 3467 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 02:48:34.122695 kubelet[3467]: E0527 02:48:34.122404 3467 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 02:48:34.122695 kubelet[3467]: W0527 02:48:34.122437 3467 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 02:48:34.122868 kubelet[3467]: E0527 02:48:34.122759 3467 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 02:48:34.123545 kubelet[3467]: E0527 02:48:34.123300 3467 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 02:48:34.123545 kubelet[3467]: W0527 02:48:34.123333 3467 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 02:48:34.123545 kubelet[3467]: E0527 02:48:34.123361 3467 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 02:48:34.124654 kubelet[3467]: E0527 02:48:34.123853 3467 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 02:48:34.124654 kubelet[3467]: W0527 02:48:34.123876 3467 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 02:48:34.124654 kubelet[3467]: E0527 02:48:34.123900 3467 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 02:48:34.124654 kubelet[3467]: E0527 02:48:34.124286 3467 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 02:48:34.124654 kubelet[3467]: W0527 02:48:34.124318 3467 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 02:48:34.124654 kubelet[3467]: E0527 02:48:34.124367 3467 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 02:48:34.125905 kubelet[3467]: E0527 02:48:34.124720 3467 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 02:48:34.125905 kubelet[3467]: W0527 02:48:34.124736 3467 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 02:48:34.125905 kubelet[3467]: E0527 02:48:34.124755 3467 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 02:48:34.125905 kubelet[3467]: E0527 02:48:34.125062 3467 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 02:48:34.125905 kubelet[3467]: W0527 02:48:34.125078 3467 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 02:48:34.125905 kubelet[3467]: E0527 02:48:34.125098 3467 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 02:48:34.125905 kubelet[3467]: E0527 02:48:34.125388 3467 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 02:48:34.125905 kubelet[3467]: W0527 02:48:34.125405 3467 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 02:48:34.125905 kubelet[3467]: E0527 02:48:34.125423 3467 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 02:48:34.125905 kubelet[3467]: E0527 02:48:34.125739 3467 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 02:48:34.127487 kubelet[3467]: W0527 02:48:34.125755 3467 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 02:48:34.127487 kubelet[3467]: E0527 02:48:34.125774 3467 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 02:48:34.127487 kubelet[3467]: E0527 02:48:34.126217 3467 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 02:48:34.127487 kubelet[3467]: W0527 02:48:34.126235 3467 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 02:48:34.127487 kubelet[3467]: E0527 02:48:34.126257 3467 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 02:48:34.127487 kubelet[3467]: E0527 02:48:34.127080 3467 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 02:48:34.127487 kubelet[3467]: W0527 02:48:34.127103 3467 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 02:48:34.127487 kubelet[3467]: E0527 02:48:34.127132 3467 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 02:48:34.151940 kubelet[3467]: E0527 02:48:34.151886 3467 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 02:48:34.151940 kubelet[3467]: W0527 02:48:34.151925 3467 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 02:48:34.152161 kubelet[3467]: E0527 02:48:34.151959 3467 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 02:48:34.207959 containerd[2001]: time="2025-05-27T02:48:34.207742148Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-hjfsz,Uid:5403733d-f853-41c7-981a-831bad61f688,Namespace:calico-system,Attempt:0,} returns sandbox id \"9e4575ff6bea45154933687c2deef77587883a5808b6a158b03cef4460678350\"" May 27 02:48:35.298341 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount603631524.mount: Deactivated successfully. 
May 27 02:48:35.817188 kubelet[3467]: E0527 02:48:35.817131 3467 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-sx96z" podUID="5e891259-bbea-4c4e-9cf8-bdfb46083aeb" May 27 02:48:36.388800 containerd[2001]: time="2025-05-27T02:48:36.388563263Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 02:48:36.389929 containerd[2001]: time="2025-05-27T02:48:36.389850683Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.0: active requests=0, bytes read=33020269" May 27 02:48:36.391480 containerd[2001]: time="2025-05-27T02:48:36.391384703Z" level=info msg="ImageCreate event name:\"sha256:05ca98cdd7b8267a0dc5550048c0a195c8d42f85d92f090a669493485d8a6beb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 02:48:36.395729 containerd[2001]: time="2025-05-27T02:48:36.395648327Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d282f6c773c4631b9dc8379eb093c54ca34c7728d55d6509cb45da5e1f5baf8f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 02:48:36.396971 containerd[2001]: time="2025-05-27T02:48:36.396912563Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.0\" with image id \"sha256:05ca98cdd7b8267a0dc5550048c0a195c8d42f85d92f090a669493485d8a6beb\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d282f6c773c4631b9dc8379eb093c54ca34c7728d55d6509cb45da5e1f5baf8f\", size \"33020123\" in 2.310851339s" May 27 02:48:36.397061 containerd[2001]: time="2025-05-27T02:48:36.396971363Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.0\" returns image reference 
\"sha256:05ca98cdd7b8267a0dc5550048c0a195c8d42f85d92f090a669493485d8a6beb\"" May 27 02:48:36.399253 containerd[2001]: time="2025-05-27T02:48:36.398899907Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\"" May 27 02:48:36.428971 containerd[2001]: time="2025-05-27T02:48:36.428921999Z" level=info msg="CreateContainer within sandbox \"4a40c470da2a03f690f3bf99d2534f92f39fdafbc61454d5e0144e25c9f638aa\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" May 27 02:48:36.449484 containerd[2001]: time="2025-05-27T02:48:36.447928283Z" level=info msg="Container c85a8c599a9ae9ca4f85e2254be9c8b085955fef4e8ea767788808c9bfdb9ddb: CDI devices from CRI Config.CDIDevices: []" May 27 02:48:36.458195 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3221607703.mount: Deactivated successfully. May 27 02:48:36.468873 containerd[2001]: time="2025-05-27T02:48:36.468789671Z" level=info msg="CreateContainer within sandbox \"4a40c470da2a03f690f3bf99d2534f92f39fdafbc61454d5e0144e25c9f638aa\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"c85a8c599a9ae9ca4f85e2254be9c8b085955fef4e8ea767788808c9bfdb9ddb\"" May 27 02:48:36.471788 containerd[2001]: time="2025-05-27T02:48:36.471445247Z" level=info msg="StartContainer for \"c85a8c599a9ae9ca4f85e2254be9c8b085955fef4e8ea767788808c9bfdb9ddb\"" May 27 02:48:36.478866 containerd[2001]: time="2025-05-27T02:48:36.477361187Z" level=info msg="connecting to shim c85a8c599a9ae9ca4f85e2254be9c8b085955fef4e8ea767788808c9bfdb9ddb" address="unix:///run/containerd/s/6ce86b31e837912c513c8ba6b19c684d9baee24fd2676499206c031ead8150b3" protocol=ttrpc version=3 May 27 02:48:36.522158 systemd[1]: Started cri-containerd-c85a8c599a9ae9ca4f85e2254be9c8b085955fef4e8ea767788808c9bfdb9ddb.scope - libcontainer container c85a8c599a9ae9ca4f85e2254be9c8b085955fef4e8ea767788808c9bfdb9ddb. 
May 27 02:48:36.607888 containerd[2001]: time="2025-05-27T02:48:36.607643100Z" level=info msg="StartContainer for \"c85a8c599a9ae9ca4f85e2254be9c8b085955fef4e8ea767788808c9bfdb9ddb\" returns successfully" May 27 02:48:37.090384 kubelet[3467]: E0527 02:48:37.090262 3467 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 02:48:37.090384 kubelet[3467]: W0527 02:48:37.090338 3467 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 02:48:37.091251 kubelet[3467]: E0527 02:48:37.090373 3467 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 02:48:37.091987 kubelet[3467]: E0527 02:48:37.091934 3467 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 02:48:37.091987 kubelet[3467]: W0527 02:48:37.091976 3467 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 02:48:37.092448 kubelet[3467]: E0527 02:48:37.092008 3467 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 02:48:37.092448 kubelet[3467]: E0527 02:48:37.092388 3467 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 02:48:37.092448 kubelet[3467]: W0527 02:48:37.092408 3467 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 02:48:37.092448 kubelet[3467]: E0527 02:48:37.092428 3467 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 02:48:37.093335 kubelet[3467]: E0527 02:48:37.093292 3467 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 02:48:37.093335 kubelet[3467]: W0527 02:48:37.093327 3467 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 02:48:37.093335 kubelet[3467]: E0527 02:48:37.093358 3467 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 02:48:37.093784 kubelet[3467]: E0527 02:48:37.093749 3467 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 02:48:37.093784 kubelet[3467]: W0527 02:48:37.093777 3467 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 02:48:37.094210 kubelet[3467]: E0527 02:48:37.093800 3467 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 02:48:37.094210 kubelet[3467]: E0527 02:48:37.094107 3467 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 02:48:37.094210 kubelet[3467]: W0527 02:48:37.094128 3467 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 02:48:37.094210 kubelet[3467]: E0527 02:48:37.094148 3467 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 02:48:37.094210 kubelet[3467]: E0527 02:48:37.094430 3467 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 02:48:37.094210 kubelet[3467]: W0527 02:48:37.094446 3467 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 02:48:37.094210 kubelet[3467]: E0527 02:48:37.094466 3467 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 02:48:37.095606 kubelet[3467]: E0527 02:48:37.095561 3467 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 02:48:37.095606 kubelet[3467]: W0527 02:48:37.095600 3467 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 02:48:37.095753 kubelet[3467]: E0527 02:48:37.095632 3467 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 02:48:37.096860 kubelet[3467]: E0527 02:48:37.096395 3467 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 02:48:37.096860 kubelet[3467]: W0527 02:48:37.096431 3467 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 02:48:37.096860 kubelet[3467]: E0527 02:48:37.096463 3467 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 02:48:37.097236 kubelet[3467]: E0527 02:48:37.097189 3467 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 02:48:37.097236 kubelet[3467]: W0527 02:48:37.097227 3467 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 02:48:37.097361 kubelet[3467]: E0527 02:48:37.097256 3467 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 02:48:37.097722 kubelet[3467]: E0527 02:48:37.097675 3467 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 02:48:37.097722 kubelet[3467]: W0527 02:48:37.097704 3467 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 02:48:37.098598 kubelet[3467]: E0527 02:48:37.097728 3467 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 02:48:37.098598 kubelet[3467]: E0527 02:48:37.098559 3467 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 02:48:37.098598 kubelet[3467]: W0527 02:48:37.098591 3467 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 02:48:37.098775 kubelet[3467]: E0527 02:48:37.098621 3467 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 02:48:37.100932 kubelet[3467]: E0527 02:48:37.100888 3467 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 02:48:37.100932 kubelet[3467]: W0527 02:48:37.100921 3467 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 02:48:37.102031 kubelet[3467]: E0527 02:48:37.100951 3467 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 02:48:37.102031 kubelet[3467]: E0527 02:48:37.101286 3467 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 02:48:37.102031 kubelet[3467]: W0527 02:48:37.101303 3467 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 02:48:37.102031 kubelet[3467]: E0527 02:48:37.101324 3467 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 02:48:37.103243 kubelet[3467]: E0527 02:48:37.103192 3467 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 02:48:37.103243 kubelet[3467]: W0527 02:48:37.103226 3467 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 02:48:37.103452 kubelet[3467]: E0527 02:48:37.103259 3467 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 02:48:37.142915 kubelet[3467]: E0527 02:48:37.142865 3467 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 02:48:37.142915 kubelet[3467]: W0527 02:48:37.142905 3467 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 02:48:37.143309 kubelet[3467]: E0527 02:48:37.142935 3467 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
May 27 02:48:37.143510 kubelet[3467]: E0527 02:48:37.143474 3467 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 27 02:48:37.143510 kubelet[3467]: W0527 02:48:37.143504 3467 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 27 02:48:37.143685 kubelet[3467]: E0527 02:48:37.143530 3467 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 27 02:48:37.268180 kubelet[3467]: I0527 02:48:37.268039 3467 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-5ffb56554b-9zhsp" podStartSLOduration=1.95396994 podStartE2EDuration="4.268017263s" podCreationTimestamp="2025-05-27 02:48:33 +0000 UTC" firstStartedPulling="2025-05-27 02:48:34.084164408 +0000 UTC m=+28.544702603" lastFinishedPulling="2025-05-27 02:48:36.398211719 +0000 UTC m=+30.858749926" observedRunningTime="2025-05-27 02:48:37.267205631 +0000 UTC m=+31.727743850" watchObservedRunningTime="2025-05-27 02:48:37.268017263 +0000 UTC m=+31.728555494"
May 27 02:48:37.685267 containerd[2001]: time="2025-05-27T02:48:37.685184077Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 02:48:37.688074 containerd[2001]: time="2025-05-27T02:48:37.687999482Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0: active requests=0, bytes read=4264304"
May 27 02:48:37.690415 containerd[2001]: time="2025-05-27T02:48:37.690341306Z" level=info msg="ImageCreate event name:\"sha256:080eaf4c238c85534b61055c31b109c96ce3d20075391e58988541a442c7c701\"
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 02:48:37.694568 containerd[2001]: time="2025-05-27T02:48:37.694468382Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:ce76dd87f11d3fd0054c35ad2e0e9f833748d007f77a9bfe859d0ddcb66fcb2c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 02:48:37.695940 containerd[2001]: time="2025-05-27T02:48:37.695641634Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\" with image id \"sha256:080eaf4c238c85534b61055c31b109c96ce3d20075391e58988541a442c7c701\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:ce76dd87f11d3fd0054c35ad2e0e9f833748d007f77a9bfe859d0ddcb66fcb2c\", size \"5633505\" in 1.296680923s"
May 27 02:48:37.695940 containerd[2001]: time="2025-05-27T02:48:37.695698706Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\" returns image reference \"sha256:080eaf4c238c85534b61055c31b109c96ce3d20075391e58988541a442c7c701\""
May 27 02:48:37.704307 containerd[2001]: time="2025-05-27T02:48:37.704246582Z" level=info msg="CreateContainer within sandbox \"9e4575ff6bea45154933687c2deef77587883a5808b6a158b03cef4460678350\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
May 27 02:48:37.725065 containerd[2001]: time="2025-05-27T02:48:37.723183074Z" level=info msg="Container 224e473b5db5c1b6c2d81aecb0fec9c4afa3b48f47d4f7464a769da26907d81d: CDI devices from CRI Config.CDIDevices: []"
May 27 02:48:37.731287 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2493281388.mount: Deactivated successfully.
May 27 02:48:37.749332 containerd[2001]: time="2025-05-27T02:48:37.749146466Z" level=info msg="CreateContainer within sandbox \"9e4575ff6bea45154933687c2deef77587883a5808b6a158b03cef4460678350\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"224e473b5db5c1b6c2d81aecb0fec9c4afa3b48f47d4f7464a769da26907d81d\""
May 27 02:48:37.751497 containerd[2001]: time="2025-05-27T02:48:37.751430126Z" level=info msg="StartContainer for \"224e473b5db5c1b6c2d81aecb0fec9c4afa3b48f47d4f7464a769da26907d81d\""
May 27 02:48:37.756777 containerd[2001]: time="2025-05-27T02:48:37.756714650Z" level=info msg="connecting to shim 224e473b5db5c1b6c2d81aecb0fec9c4afa3b48f47d4f7464a769da26907d81d" address="unix:///run/containerd/s/47039e623da86c20de7fb7c021ccb032cdcde9b07474aad0d13d7c28b4485ee9" protocol=ttrpc version=3
May 27 02:48:37.800144 systemd[1]: Started cri-containerd-224e473b5db5c1b6c2d81aecb0fec9c4afa3b48f47d4f7464a769da26907d81d.scope - libcontainer container 224e473b5db5c1b6c2d81aecb0fec9c4afa3b48f47d4f7464a769da26907d81d.
May 27 02:48:37.814880 kubelet[3467]: E0527 02:48:37.813875 3467 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-sx96z" podUID="5e891259-bbea-4c4e-9cf8-bdfb46083aeb"
May 27 02:48:37.886298 containerd[2001]: time="2025-05-27T02:48:37.886173722Z" level=info msg="StartContainer for \"224e473b5db5c1b6c2d81aecb0fec9c4afa3b48f47d4f7464a769da26907d81d\" returns successfully"
May 27 02:48:37.907561 systemd[1]: cri-containerd-224e473b5db5c1b6c2d81aecb0fec9c4afa3b48f47d4f7464a769da26907d81d.scope: Deactivated successfully.
May 27 02:48:37.914974 containerd[2001]: time="2025-05-27T02:48:37.914918307Z" level=info msg="received exit event container_id:\"224e473b5db5c1b6c2d81aecb0fec9c4afa3b48f47d4f7464a769da26907d81d\" id:\"224e473b5db5c1b6c2d81aecb0fec9c4afa3b48f47d4f7464a769da26907d81d\" pid:4158 exited_at:{seconds:1748314117 nanos:913974543}"
May 27 02:48:37.915276 containerd[2001]: time="2025-05-27T02:48:37.915185691Z" level=info msg="TaskExit event in podsandbox handler container_id:\"224e473b5db5c1b6c2d81aecb0fec9c4afa3b48f47d4f7464a769da26907d81d\" id:\"224e473b5db5c1b6c2d81aecb0fec9c4afa3b48f47d4f7464a769da26907d81d\" pid:4158 exited_at:{seconds:1748314117 nanos:913974543}"
May 27 02:48:37.968028 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-224e473b5db5c1b6c2d81aecb0fec9c4afa3b48f47d4f7464a769da26907d81d-rootfs.mount: Deactivated successfully.
May 27 02:48:38.039304 kubelet[3467]: I0527 02:48:38.038903 3467 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
May 27 02:48:39.049680 containerd[2001]: time="2025-05-27T02:48:39.049626708Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.0\""
May 27 02:48:39.814814 kubelet[3467]: E0527 02:48:39.814129 3467 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-sx96z" podUID="5e891259-bbea-4c4e-9cf8-bdfb46083aeb"
May 27 02:48:41.814532 kubelet[3467]: E0527 02:48:41.813989 3467 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-sx96z" podUID="5e891259-bbea-4c4e-9cf8-bdfb46083aeb"
May 27 02:48:42.045909 containerd[2001]: time="2025-05-27T02:48:42.045181335Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 02:48:42.049131 containerd[2001]: time="2025-05-27T02:48:42.049005963Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.0: active requests=0, bytes read=65748976"
May 27 02:48:42.051483 containerd[2001]: time="2025-05-27T02:48:42.051403971Z" level=info msg="ImageCreate event name:\"sha256:0a1b3d5412de2974bc057a3463a132f935c307bc06d5b990ad54031e1f5a351d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 02:48:42.056686 containerd[2001]: time="2025-05-27T02:48:42.056576067Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:3dd06656abdc03fbd51782d5f6fe4d70e6825a1c0c5bce2a165bbd2ff9e0f7df\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 02:48:42.057925 containerd[2001]: time="2025-05-27T02:48:42.057723531Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.0\" with image id \"sha256:0a1b3d5412de2974bc057a3463a132f935c307bc06d5b990ad54031e1f5a351d\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:3dd06656abdc03fbd51782d5f6fe4d70e6825a1c0c5bce2a165bbd2ff9e0f7df\", size \"67118217\" in 3.008038251s"
May 27 02:48:42.057925 containerd[2001]: time="2025-05-27T02:48:42.057774831Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.0\" returns image reference \"sha256:0a1b3d5412de2974bc057a3463a132f935c307bc06d5b990ad54031e1f5a351d\""
May 27 02:48:42.068285 containerd[2001]: time="2025-05-27T02:48:42.067505931Z" level=info msg="CreateContainer within sandbox \"9e4575ff6bea45154933687c2deef77587883a5808b6a158b03cef4460678350\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
May 27 02:48:42.091861 containerd[2001]: time="2025-05-27T02:48:42.091253415Z" level=info msg="Container 839778e01db44742c9e1feaf5660da71c391ee70f2d5c47aabeec40f22a3060d: CDI devices from CRI Config.CDIDevices: []"
May 27 02:48:42.101732 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1780737060.mount: Deactivated successfully.
May 27 02:48:42.112628 containerd[2001]: time="2025-05-27T02:48:42.112480503Z" level=info msg="CreateContainer within sandbox \"9e4575ff6bea45154933687c2deef77587883a5808b6a158b03cef4460678350\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"839778e01db44742c9e1feaf5660da71c391ee70f2d5c47aabeec40f22a3060d\""
May 27 02:48:42.114077 containerd[2001]: time="2025-05-27T02:48:42.114014703Z" level=info msg="StartContainer for \"839778e01db44742c9e1feaf5660da71c391ee70f2d5c47aabeec40f22a3060d\""
May 27 02:48:42.118232 containerd[2001]: time="2025-05-27T02:48:42.117902704Z" level=info msg="connecting to shim 839778e01db44742c9e1feaf5660da71c391ee70f2d5c47aabeec40f22a3060d" address="unix:///run/containerd/s/47039e623da86c20de7fb7c021ccb032cdcde9b07474aad0d13d7c28b4485ee9" protocol=ttrpc version=3
May 27 02:48:42.159173 systemd[1]: Started cri-containerd-839778e01db44742c9e1feaf5660da71c391ee70f2d5c47aabeec40f22a3060d.scope - libcontainer container 839778e01db44742c9e1feaf5660da71c391ee70f2d5c47aabeec40f22a3060d.
May 27 02:48:42.243352 containerd[2001]: time="2025-05-27T02:48:42.243177316Z" level=info msg="StartContainer for \"839778e01db44742c9e1feaf5660da71c391ee70f2d5c47aabeec40f22a3060d\" returns successfully"
May 27 02:48:43.112825 containerd[2001]: time="2025-05-27T02:48:43.112661068Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
May 27 02:48:43.117808 systemd[1]: cri-containerd-839778e01db44742c9e1feaf5660da71c391ee70f2d5c47aabeec40f22a3060d.scope: Deactivated successfully.
May 27 02:48:43.118946 systemd[1]: cri-containerd-839778e01db44742c9e1feaf5660da71c391ee70f2d5c47aabeec40f22a3060d.scope: Consumed 880ms CPU time, 184.6M memory peak, 165.5M written to disk.
May 27 02:48:43.121326 containerd[2001]: time="2025-05-27T02:48:43.121254388Z" level=info msg="TaskExit event in podsandbox handler container_id:\"839778e01db44742c9e1feaf5660da71c391ee70f2d5c47aabeec40f22a3060d\" id:\"839778e01db44742c9e1feaf5660da71c391ee70f2d5c47aabeec40f22a3060d\" pid:4222 exited_at:{seconds:1748314123 nanos:120746908}"
May 27 02:48:43.121489 containerd[2001]: time="2025-05-27T02:48:43.121377100Z" level=info msg="received exit event container_id:\"839778e01db44742c9e1feaf5660da71c391ee70f2d5c47aabeec40f22a3060d\" id:\"839778e01db44742c9e1feaf5660da71c391ee70f2d5c47aabeec40f22a3060d\" pid:4222 exited_at:{seconds:1748314123 nanos:120746908}"
May 27 02:48:43.129379 kubelet[3467]: I0527 02:48:43.129317 3467 kubelet_node_status.go:501] "Fast updating node status as it just became ready"
May 27 02:48:43.183720 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-839778e01db44742c9e1feaf5660da71c391ee70f2d5c47aabeec40f22a3060d-rootfs.mount: Deactivated successfully.
May 27 02:48:43.240981 systemd[1]: Created slice kubepods-burstable-pod5fba4d43_bcf3_4cf3_ad99_96050afcbf37.slice - libcontainer container kubepods-burstable-pod5fba4d43_bcf3_4cf3_ad99_96050afcbf37.slice.
May 27 02:48:43.295399 systemd[1]: Created slice kubepods-besteffort-pod96dbf7ef_fdad_445b_a963_39e84fe9fd05.slice - libcontainer container kubepods-besteffort-pod96dbf7ef_fdad_445b_a963_39e84fe9fd05.slice.
May 27 02:48:43.317212 kubelet[3467]: I0527 02:48:43.317134 3467 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wwm48\" (UniqueName: \"kubernetes.io/projected/9c211f25-af7a-4f14-a1ea-1d4a59437dbe-kube-api-access-wwm48\") pod \"whisker-77986c5f5b-hbjwc\" (UID: \"9c211f25-af7a-4f14-a1ea-1d4a59437dbe\") " pod="calico-system/whisker-77986c5f5b-hbjwc"
May 27 02:48:43.317212 kubelet[3467]: I0527 02:48:43.317204 3467 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vtjhr\" (UniqueName: \"kubernetes.io/projected/5fba4d43-bcf3-4cf3-ad99-96050afcbf37-kube-api-access-vtjhr\") pod \"coredns-674b8bbfcf-gcs7c\" (UID: \"5fba4d43-bcf3-4cf3-ad99-96050afcbf37\") " pod="kube-system/coredns-674b8bbfcf-gcs7c"
May 27 02:48:43.317469 kubelet[3467]: I0527 02:48:43.317253 3467 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/9c211f25-af7a-4f14-a1ea-1d4a59437dbe-whisker-backend-key-pair\") pod \"whisker-77986c5f5b-hbjwc\" (UID: \"9c211f25-af7a-4f14-a1ea-1d4a59437dbe\") " pod="calico-system/whisker-77986c5f5b-hbjwc"
May 27 02:48:43.317469 kubelet[3467]: I0527 02:48:43.317304 3467 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/96dbf7ef-fdad-445b-a963-39e84fe9fd05-calico-apiserver-certs\") pod \"calico-apiserver-95d4cb8cc-kn5fg\" (UID: \"96dbf7ef-fdad-445b-a963-39e84fe9fd05\") " pod="calico-apiserver/calico-apiserver-95d4cb8cc-kn5fg"
May 27 02:48:43.317469 kubelet[3467]: I0527 02:48:43.317347 3467 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5fba4d43-bcf3-4cf3-ad99-96050afcbf37-config-volume\") pod \"coredns-674b8bbfcf-gcs7c\" (UID: \"5fba4d43-bcf3-4cf3-ad99-96050afcbf37\") " pod="kube-system/coredns-674b8bbfcf-gcs7c"
May 27 02:48:43.317469 kubelet[3467]: I0527 02:48:43.317385 3467 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mpqr8\" (UniqueName: \"kubernetes.io/projected/96dbf7ef-fdad-445b-a963-39e84fe9fd05-kube-api-access-mpqr8\") pod \"calico-apiserver-95d4cb8cc-kn5fg\" (UID: \"96dbf7ef-fdad-445b-a963-39e84fe9fd05\") " pod="calico-apiserver/calico-apiserver-95d4cb8cc-kn5fg"
May 27 02:48:43.317469 kubelet[3467]: I0527 02:48:43.317426 3467 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9c211f25-af7a-4f14-a1ea-1d4a59437dbe-whisker-ca-bundle\") pod \"whisker-77986c5f5b-hbjwc\" (UID: \"9c211f25-af7a-4f14-a1ea-1d4a59437dbe\") " pod="calico-system/whisker-77986c5f5b-hbjwc"
May 27 02:48:43.330477 systemd[1]: Created slice kubepods-burstable-pod4f9fd9e1_0322_4cb7_986b_0344d90fe242.slice - libcontainer container kubepods-burstable-pod4f9fd9e1_0322_4cb7_986b_0344d90fe242.slice.
May 27 02:48:43.355477 systemd[1]: Created slice kubepods-besteffort-poda4dbad33_e282_4ddd_bf2d_f118dac65c55.slice - libcontainer container kubepods-besteffort-poda4dbad33_e282_4ddd_bf2d_f118dac65c55.slice.
May 27 02:48:43.379304 systemd[1]: Created slice kubepods-besteffort-podbe3e00a4_5c9b_4f45_8c5d_bd5a86ce209c.slice - libcontainer container kubepods-besteffort-podbe3e00a4_5c9b_4f45_8c5d_bd5a86ce209c.slice.
May 27 02:48:43.398358 systemd[1]: Created slice kubepods-besteffort-pod55e730ec_46ee_429f_81a6_a70c10e9a76c.slice - libcontainer container kubepods-besteffort-pod55e730ec_46ee_429f_81a6_a70c10e9a76c.slice.
May 27 02:48:43.413802 systemd[1]: Created slice kubepods-besteffort-pod9c211f25_af7a_4f14_a1ea_1d4a59437dbe.slice - libcontainer container kubepods-besteffort-pod9c211f25_af7a_4f14_a1ea_1d4a59437dbe.slice.
May 27 02:48:43.418270 kubelet[3467]: I0527 02:48:43.418147 3467 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/55e730ec-46ee-429f-81a6-a70c10e9a76c-calico-apiserver-certs\") pod \"calico-apiserver-95d4cb8cc-lhfd7\" (UID: \"55e730ec-46ee-429f-81a6-a70c10e9a76c\") " pod="calico-apiserver/calico-apiserver-95d4cb8cc-lhfd7"
May 27 02:48:43.418576 kubelet[3467]: I0527 02:48:43.418284 3467 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/be3e00a4-5c9b-4f45-8c5d-bd5a86ce209c-goldmane-ca-bundle\") pod \"goldmane-78d55f7ddc-4qxmb\" (UID: \"be3e00a4-5c9b-4f45-8c5d-bd5a86ce209c\") " pod="calico-system/goldmane-78d55f7ddc-4qxmb"
May 27 02:48:43.418576 kubelet[3467]: I0527 02:48:43.418364 3467 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/be3e00a4-5c9b-4f45-8c5d-bd5a86ce209c-config\") pod \"goldmane-78d55f7ddc-4qxmb\" (UID: \"be3e00a4-5c9b-4f45-8c5d-bd5a86ce209c\") " pod="calico-system/goldmane-78d55f7ddc-4qxmb"
May 27 02:48:43.419910 kubelet[3467]: I0527 02:48:43.419071 3467 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4f9fd9e1-0322-4cb7-986b-0344d90fe242-config-volume\") pod \"coredns-674b8bbfcf-szq2g\" (UID: \"4f9fd9e1-0322-4cb7-986b-0344d90fe242\") " pod="kube-system/coredns-674b8bbfcf-szq2g"
May 27 02:48:43.419910 kubelet[3467]: I0527 02:48:43.419180 3467 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f8v2q\" (UniqueName: \"kubernetes.io/projected/a4dbad33-e282-4ddd-bf2d-f118dac65c55-kube-api-access-f8v2q\") pod \"calico-kube-controllers-5979f88f5b-f6zrw\" (UID: \"a4dbad33-e282-4ddd-bf2d-f118dac65c55\") " pod="calico-system/calico-kube-controllers-5979f88f5b-f6zrw"
May 27 02:48:43.419910 kubelet[3467]: I0527 02:48:43.419236 3467 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dffn4\" (UniqueName: \"kubernetes.io/projected/55e730ec-46ee-429f-81a6-a70c10e9a76c-kube-api-access-dffn4\") pod \"calico-apiserver-95d4cb8cc-lhfd7\" (UID: \"55e730ec-46ee-429f-81a6-a70c10e9a76c\") " pod="calico-apiserver/calico-apiserver-95d4cb8cc-lhfd7"
May 27 02:48:43.419910 kubelet[3467]: I0527 02:48:43.419282 3467 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sdldh\" (UniqueName: \"kubernetes.io/projected/be3e00a4-5c9b-4f45-8c5d-bd5a86ce209c-kube-api-access-sdldh\") pod \"goldmane-78d55f7ddc-4qxmb\" (UID: \"be3e00a4-5c9b-4f45-8c5d-bd5a86ce209c\") " pod="calico-system/goldmane-78d55f7ddc-4qxmb"
May 27 02:48:43.419910 kubelet[3467]: I0527 02:48:43.419419 3467 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/be3e00a4-5c9b-4f45-8c5d-bd5a86ce209c-goldmane-key-pair\") pod \"goldmane-78d55f7ddc-4qxmb\" (UID: \"be3e00a4-5c9b-4f45-8c5d-bd5a86ce209c\") " pod="calico-system/goldmane-78d55f7ddc-4qxmb"
May 27 02:48:43.420639 kubelet[3467]: I0527 02:48:43.419461 3467 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sw4cz\" (UniqueName: \"kubernetes.io/projected/4f9fd9e1-0322-4cb7-986b-0344d90fe242-kube-api-access-sw4cz\") pod \"coredns-674b8bbfcf-szq2g\" (UID: \"4f9fd9e1-0322-4cb7-986b-0344d90fe242\") " pod="kube-system/coredns-674b8bbfcf-szq2g"
May 27 02:48:43.420639 kubelet[3467]: I0527 02:48:43.419565 3467 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a4dbad33-e282-4ddd-bf2d-f118dac65c55-tigera-ca-bundle\") pod \"calico-kube-controllers-5979f88f5b-f6zrw\" (UID: \"a4dbad33-e282-4ddd-bf2d-f118dac65c55\") " pod="calico-system/calico-kube-controllers-5979f88f5b-f6zrw"
May 27 02:48:43.576974 containerd[2001]: time="2025-05-27T02:48:43.576527551Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-gcs7c,Uid:5fba4d43-bcf3-4cf3-ad99-96050afcbf37,Namespace:kube-system,Attempt:0,}"
May 27 02:48:43.624866 containerd[2001]: time="2025-05-27T02:48:43.624791035Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-95d4cb8cc-kn5fg,Uid:96dbf7ef-fdad-445b-a963-39e84fe9fd05,Namespace:calico-apiserver,Attempt:0,}"
May 27 02:48:43.641899 containerd[2001]: time="2025-05-27T02:48:43.641727295Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-szq2g,Uid:4f9fd9e1-0322-4cb7-986b-0344d90fe242,Namespace:kube-system,Attempt:0,}"
May 27 02:48:43.668111 containerd[2001]: time="2025-05-27T02:48:43.668056651Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5979f88f5b-f6zrw,Uid:a4dbad33-e282-4ddd-bf2d-f118dac65c55,Namespace:calico-system,Attempt:0,}"
May 27 02:48:43.691281 containerd[2001]: time="2025-05-27T02:48:43.691213639Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-78d55f7ddc-4qxmb,Uid:be3e00a4-5c9b-4f45-8c5d-bd5a86ce209c,Namespace:calico-system,Attempt:0,}"
May 27 02:48:43.716944 containerd[2001]: time="2025-05-27T02:48:43.716817715Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-95d4cb8cc-lhfd7,Uid:55e730ec-46ee-429f-81a6-a70c10e9a76c,Namespace:calico-apiserver,Attempt:0,}"
May 27 02:48:43.729213 containerd[2001]: time="2025-05-27T02:48:43.729098096Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-77986c5f5b-hbjwc,Uid:9c211f25-af7a-4f14-a1ea-1d4a59437dbe,Namespace:calico-system,Attempt:0,}"
May 27 02:48:43.827395 systemd[1]: Created slice kubepods-besteffort-pod5e891259_bbea_4c4e_9cf8_bdfb46083aeb.slice - libcontainer container kubepods-besteffort-pod5e891259_bbea_4c4e_9cf8_bdfb46083aeb.slice.
May 27 02:48:43.838869 containerd[2001]: time="2025-05-27T02:48:43.838647248Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-sx96z,Uid:5e891259-bbea-4c4e-9cf8-bdfb46083aeb,Namespace:calico-system,Attempt:0,}"
May 27 02:48:44.129651 containerd[2001]: time="2025-05-27T02:48:44.129560646Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.0\""
May 27 02:48:44.368872 containerd[2001]: time="2025-05-27T02:48:44.368594227Z" level=error msg="Failed to destroy network for sandbox \"78fed3f1ec971a5fd1bb06d3ed5ad40eb2c2360bb042c987bed80e0e8477dfce\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 27 02:48:44.375163 systemd[1]: run-netns-cni\x2dcf8738f2\x2d40e0\x2ddbe3\x2d2501\x2dcba7ff7c7dd3.mount: Deactivated successfully.
May 27 02:48:44.387699 containerd[2001]: time="2025-05-27T02:48:44.387516631Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-gcs7c,Uid:5fba4d43-bcf3-4cf3-ad99-96050afcbf37,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"78fed3f1ec971a5fd1bb06d3ed5ad40eb2c2360bb042c987bed80e0e8477dfce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 27 02:48:44.388474 kubelet[3467]: E0527 02:48:44.388396    3467 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"78fed3f1ec971a5fd1bb06d3ed5ad40eb2c2360bb042c987bed80e0e8477dfce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 27 02:48:44.389742 kubelet[3467]: E0527 02:48:44.388501    3467 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"78fed3f1ec971a5fd1bb06d3ed5ad40eb2c2360bb042c987bed80e0e8477dfce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-gcs7c"
May 27 02:48:44.389742 kubelet[3467]: E0527 02:48:44.388537    3467 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"78fed3f1ec971a5fd1bb06d3ed5ad40eb2c2360bb042c987bed80e0e8477dfce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-gcs7c"
May 27 02:48:44.389742 kubelet[3467]: E0527 02:48:44.388617    3467 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-gcs7c_kube-system(5fba4d43-bcf3-4cf3-ad99-96050afcbf37)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-gcs7c_kube-system(5fba4d43-bcf3-4cf3-ad99-96050afcbf37)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"78fed3f1ec971a5fd1bb06d3ed5ad40eb2c2360bb042c987bed80e0e8477dfce\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-gcs7c" podUID="5fba4d43-bcf3-4cf3-ad99-96050afcbf37"
May 27 02:48:44.403123 containerd[2001]: time="2025-05-27T02:48:44.402788851Z" level=error msg="Failed to destroy network for sandbox \"1b18cac95fbc25d42eb5353d98641d59828cd727dc39bef2cbad00ee72b81aa0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 27 02:48:44.407867 containerd[2001]: time="2025-05-27T02:48:44.407720479Z" level=error msg="Failed to destroy network for sandbox \"669f4f34d141860e66557bb4d527a263a22af46fef9f36c2de574ff1dc64c958\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 27 02:48:44.409599 systemd[1]: run-netns-cni\x2d955095a1\x2d992d\x2df0c9\x2dbfef\x2d3e8655689b93.mount: Deactivated successfully.
May 27 02:48:44.417798 containerd[2001]: time="2025-05-27T02:48:44.417360967Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-95d4cb8cc-lhfd7,Uid:55e730ec-46ee-429f-81a6-a70c10e9a76c,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"1b18cac95fbc25d42eb5353d98641d59828cd727dc39bef2cbad00ee72b81aa0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 27 02:48:44.419612 kubelet[3467]: E0527 02:48:44.418056    3467 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1b18cac95fbc25d42eb5353d98641d59828cd727dc39bef2cbad00ee72b81aa0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 27 02:48:44.419612 kubelet[3467]: E0527 02:48:44.418194    3467 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1b18cac95fbc25d42eb5353d98641d59828cd727dc39bef2cbad00ee72b81aa0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-95d4cb8cc-lhfd7"
May 27 02:48:44.419612 kubelet[3467]: E0527 02:48:44.418231    3467 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1b18cac95fbc25d42eb5353d98641d59828cd727dc39bef2cbad00ee72b81aa0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-95d4cb8cc-lhfd7"
May 27 02:48:44.418344 systemd[1]: run-netns-cni\x2dbcc49e74\x2da0bf\x2d394e\x2d28b0\x2de73e4a6a2f9a.mount: Deactivated successfully.
May 27 02:48:44.422050 kubelet[3467]: E0527 02:48:44.418723    3467 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-95d4cb8cc-lhfd7_calico-apiserver(55e730ec-46ee-429f-81a6-a70c10e9a76c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-95d4cb8cc-lhfd7_calico-apiserver(55e730ec-46ee-429f-81a6-a70c10e9a76c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1b18cac95fbc25d42eb5353d98641d59828cd727dc39bef2cbad00ee72b81aa0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-95d4cb8cc-lhfd7" podUID="55e730ec-46ee-429f-81a6-a70c10e9a76c"
May 27 02:48:44.426887 containerd[2001]: time="2025-05-27T02:48:44.425880247Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5979f88f5b-f6zrw,Uid:a4dbad33-e282-4ddd-bf2d-f118dac65c55,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"669f4f34d141860e66557bb4d527a263a22af46fef9f36c2de574ff1dc64c958\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 27 02:48:44.427603 kubelet[3467]: E0527 02:48:44.427542    3467 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"669f4f34d141860e66557bb4d527a263a22af46fef9f36c2de574ff1dc64c958\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 27 02:48:44.427705 kubelet[3467]: E0527 02:48:44.427641    3467 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"669f4f34d141860e66557bb4d527a263a22af46fef9f36c2de574ff1dc64c958\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5979f88f5b-f6zrw"
May 27 02:48:44.427892 kubelet[3467]: E0527 02:48:44.427815    3467 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"669f4f34d141860e66557bb4d527a263a22af46fef9f36c2de574ff1dc64c958\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5979f88f5b-f6zrw"
May 27 02:48:44.428256 kubelet[3467]: E0527 02:48:44.428139    3467 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5979f88f5b-f6zrw_calico-system(a4dbad33-e282-4ddd-bf2d-f118dac65c55)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5979f88f5b-f6zrw_calico-system(a4dbad33-e282-4ddd-bf2d-f118dac65c55)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"669f4f34d141860e66557bb4d527a263a22af46fef9f36c2de574ff1dc64c958\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5979f88f5b-f6zrw" podUID="a4dbad33-e282-4ddd-bf2d-f118dac65c55"
May 27 02:48:44.431782 containerd[2001]: time="2025-05-27T02:48:44.431201551Z" level=error msg="Failed to destroy network for sandbox \"dd1286e473fa1a23721f14507134b5c999fb6cd0145eb93791a76c3283519dfa\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 27 02:48:44.438369 systemd[1]: run-netns-cni\x2de4b5d81f\x2d4f35\x2d22d6\x2d38ca\x2df0a045d9b51c.mount: Deactivated successfully.
May 27 02:48:44.442414 containerd[2001]: time="2025-05-27T02:48:44.441131251Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-78d55f7ddc-4qxmb,Uid:be3e00a4-5c9b-4f45-8c5d-bd5a86ce209c,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"dd1286e473fa1a23721f14507134b5c999fb6cd0145eb93791a76c3283519dfa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 27 02:48:44.442582 kubelet[3467]: E0527 02:48:44.442131    3467 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dd1286e473fa1a23721f14507134b5c999fb6cd0145eb93791a76c3283519dfa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 27 02:48:44.442582 kubelet[3467]: E0527 02:48:44.442446    3467 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dd1286e473fa1a23721f14507134b5c999fb6cd0145eb93791a76c3283519dfa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-78d55f7ddc-4qxmb"
May 27 02:48:44.442582 kubelet[3467]: E0527 02:48:44.442489    3467 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dd1286e473fa1a23721f14507134b5c999fb6cd0145eb93791a76c3283519dfa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-78d55f7ddc-4qxmb"
May 27 02:48:44.442874 kubelet[3467]: E0527 02:48:44.442667    3467 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-78d55f7ddc-4qxmb_calico-system(be3e00a4-5c9b-4f45-8c5d-bd5a86ce209c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-78d55f7ddc-4qxmb_calico-system(be3e00a4-5c9b-4f45-8c5d-bd5a86ce209c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"dd1286e473fa1a23721f14507134b5c999fb6cd0145eb93791a76c3283519dfa\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-78d55f7ddc-4qxmb" podUID="be3e00a4-5c9b-4f45-8c5d-bd5a86ce209c"
May 27 02:48:44.448882 containerd[2001]: time="2025-05-27T02:48:44.448724011Z" level=error msg="Failed to destroy network for sandbox \"75d31eaa5dae75da0ced6f3a50539200f648a0cf40915bf757396915819be741\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 27 02:48:44.452319 containerd[2001]: time="2025-05-27T02:48:44.452110915Z" level=error msg="Failed to destroy network for sandbox \"119456536b29cea58f8f7fb2e9d1f4fdeb9b812e2f1c5474650d48c408aada8e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 27 02:48:44.454538 containerd[2001]: time="2025-05-27T02:48:44.454456951Z" level=error msg="Failed to destroy network for sandbox \"17b700bbf3f22ad6ceb3552a78c7714948149b4fc32321763a8511365f862f99\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 27 02:48:44.456659 containerd[2001]: time="2025-05-27T02:48:44.456535447Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-szq2g,Uid:4f9fd9e1-0322-4cb7-986b-0344d90fe242,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"119456536b29cea58f8f7fb2e9d1f4fdeb9b812e2f1c5474650d48c408aada8e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 27 02:48:44.457622 kubelet[3467]: E0527 02:48:44.456983    3467 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"119456536b29cea58f8f7fb2e9d1f4fdeb9b812e2f1c5474650d48c408aada8e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 27 02:48:44.457622 kubelet[3467]: E0527 02:48:44.457058    3467 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"119456536b29cea58f8f7fb2e9d1f4fdeb9b812e2f1c5474650d48c408aada8e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-szq2g"
May 27 02:48:44.457622 kubelet[3467]: E0527 02:48:44.457102    3467 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"119456536b29cea58f8f7fb2e9d1f4fdeb9b812e2f1c5474650d48c408aada8e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-szq2g"
May 27 02:48:44.457905 kubelet[3467]: E0527 02:48:44.457238    3467 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-szq2g_kube-system(4f9fd9e1-0322-4cb7-986b-0344d90fe242)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-szq2g_kube-system(4f9fd9e1-0322-4cb7-986b-0344d90fe242)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"119456536b29cea58f8f7fb2e9d1f4fdeb9b812e2f1c5474650d48c408aada8e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-szq2g" podUID="4f9fd9e1-0322-4cb7-986b-0344d90fe242"
May 27 02:48:44.459085 containerd[2001]: time="2025-05-27T02:48:44.458673079Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-77986c5f5b-hbjwc,Uid:9c211f25-af7a-4f14-a1ea-1d4a59437dbe,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"75d31eaa5dae75da0ced6f3a50539200f648a0cf40915bf757396915819be741\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 27 02:48:44.460392 kubelet[3467]: E0527 02:48:44.460272    3467 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"75d31eaa5dae75da0ced6f3a50539200f648a0cf40915bf757396915819be741\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 27 02:48:44.460392 kubelet[3467]: E0527 02:48:44.460355    3467 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"75d31eaa5dae75da0ced6f3a50539200f648a0cf40915bf757396915819be741\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-77986c5f5b-hbjwc"
May 27 02:48:44.460392 kubelet[3467]: E0527 02:48:44.460390    3467 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"75d31eaa5dae75da0ced6f3a50539200f648a0cf40915bf757396915819be741\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-77986c5f5b-hbjwc"
May 27 02:48:44.460930 kubelet[3467]: E0527 02:48:44.460464    3467 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-77986c5f5b-hbjwc_calico-system(9c211f25-af7a-4f14-a1ea-1d4a59437dbe)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-77986c5f5b-hbjwc_calico-system(9c211f25-af7a-4f14-a1ea-1d4a59437dbe)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"75d31eaa5dae75da0ced6f3a50539200f648a0cf40915bf757396915819be741\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-77986c5f5b-hbjwc" podUID="9c211f25-af7a-4f14-a1ea-1d4a59437dbe"
May 27 02:48:44.463624 containerd[2001]: time="2025-05-27T02:48:44.463476763Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-95d4cb8cc-kn5fg,Uid:96dbf7ef-fdad-445b-a963-39e84fe9fd05,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"17b700bbf3f22ad6ceb3552a78c7714948149b4fc32321763a8511365f862f99\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 27 02:48:44.464266 kubelet[3467]: E0527 02:48:44.464122    3467 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"17b700bbf3f22ad6ceb3552a78c7714948149b4fc32321763a8511365f862f99\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 27 02:48:44.464266 kubelet[3467]: E0527 02:48:44.464195    3467 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"17b700bbf3f22ad6ceb3552a78c7714948149b4fc32321763a8511365f862f99\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-95d4cb8cc-kn5fg"
May 27 02:48:44.465053 kubelet[3467]: E0527 02:48:44.464366    3467 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"17b700bbf3f22ad6ceb3552a78c7714948149b4fc32321763a8511365f862f99\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-95d4cb8cc-kn5fg"
May 27 02:48:44.465053 kubelet[3467]: E0527 02:48:44.464782    3467 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-95d4cb8cc-kn5fg_calico-apiserver(96dbf7ef-fdad-445b-a963-39e84fe9fd05)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-95d4cb8cc-kn5fg_calico-apiserver(96dbf7ef-fdad-445b-a963-39e84fe9fd05)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"17b700bbf3f22ad6ceb3552a78c7714948149b4fc32321763a8511365f862f99\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-95d4cb8cc-kn5fg" podUID="96dbf7ef-fdad-445b-a963-39e84fe9fd05"
May 27 02:48:44.474092 containerd[2001]: time="2025-05-27T02:48:44.473930143Z" level=error msg="Failed to destroy network for sandbox \"af4b589d2bab968a2798137cd86a17c2ab30e106f675d840906632a36c893633\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 27 02:48:44.476897 containerd[2001]: time="2025-05-27T02:48:44.476551735Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-sx96z,Uid:5e891259-bbea-4c4e-9cf8-bdfb46083aeb,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"af4b589d2bab968a2798137cd86a17c2ab30e106f675d840906632a36c893633\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 27 02:48:44.477724 kubelet[3467]: E0527 02:48:44.477666    3467 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"af4b589d2bab968a2798137cd86a17c2ab30e106f675d840906632a36c893633\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 27 02:48:44.478015 kubelet[3467]: E0527 02:48:44.477908    3467 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"af4b589d2bab968a2798137cd86a17c2ab30e106f675d840906632a36c893633\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-sx96z"
May 27 02:48:44.478246 kubelet[3467]: E0527 02:48:44.477945    3467 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"af4b589d2bab968a2798137cd86a17c2ab30e106f675d840906632a36c893633\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-sx96z"
May 27 02:48:44.478558 kubelet[3467]: E0527 02:48:44.478452    3467 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-sx96z_calico-system(5e891259-bbea-4c4e-9cf8-bdfb46083aeb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-sx96z_calico-system(5e891259-bbea-4c4e-9cf8-bdfb46083aeb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"af4b589d2bab968a2798137cd86a17c2ab30e106f675d840906632a36c893633\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-sx96z" podUID="5e891259-bbea-4c4e-9cf8-bdfb46083aeb"
May 27 02:48:45.179715 systemd[1]: run-netns-cni\x2d52620a26\x2df662\x2d2415\x2dcd46\x2df3c533d23501.mount: Deactivated successfully.
May 27 02:48:45.180229 systemd[1]: run-netns-cni\x2d0fa5f419\x2dd7a5\x2d0b24\x2d3e03\x2df3917e275ef6.mount: Deactivated successfully.
May 27 02:48:45.180484 systemd[1]: run-netns-cni\x2d904a0b78\x2d5d2c\x2d572a\x2d5c99\x2ded2da8696b36.mount: Deactivated successfully.
May 27 02:48:45.180712 systemd[1]: run-netns-cni\x2dc3484022\x2dc5d3\x2da0a0\x2d5b95\x2d0cb68a917fc0.mount: Deactivated successfully.
May 27 02:48:50.285586 kubelet[3467]: I0527 02:48:50.285197    3467 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
May 27 02:48:51.207209 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1709378073.mount: Deactivated successfully.
May 27 02:48:51.260054 containerd[2001]: time="2025-05-27T02:48:51.259896421Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 02:48:51.261916 containerd[2001]: time="2025-05-27T02:48:51.261863329Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.0: active requests=0, bytes read=150465379"
May 27 02:48:51.264348 containerd[2001]: time="2025-05-27T02:48:51.264275389Z" level=info msg="ImageCreate event name:\"sha256:f7148fde8e28b27da58f84cac134cdc53b5df321cda13c660192f06839670732\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 02:48:51.268531 containerd[2001]: time="2025-05-27T02:48:51.268446565Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:7cb61ea47ca0a8e6d0526a42da4f1e399b37ccd13339d0776d272465cb7ee012\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 02:48:51.269787 containerd[2001]: time="2025-05-27T02:48:51.269577409Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.0\" with image id \"sha256:f7148fde8e28b27da58f84cac134cdc53b5df321cda13c660192f06839670732\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/node@sha256:7cb61ea47ca0a8e6d0526a42da4f1e399b37ccd13339d0776d272465cb7ee012\", size \"150465241\" in 7.139943299s"
May 27 02:48:51.269787 containerd[2001]: time="2025-05-27T02:48:51.269636821Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.0\" returns image reference \"sha256:f7148fde8e28b27da58f84cac134cdc53b5df321cda13c660192f06839670732\""
May 27 02:48:51.322395 containerd[2001]: time="2025-05-27T02:48:51.322341325Z" level=info msg="CreateContainer within sandbox \"9e4575ff6bea45154933687c2deef77587883a5808b6a158b03cef4460678350\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}"
May 27 02:48:51.339768 containerd[2001]: time="2025-05-27T02:48:51.339451429Z" level=info msg="Container b30f334e9a6e9bbe68914e4c49e3f3292efd8d9e0745907c879d462a5c9a130b: CDI devices from CRI Config.CDIDevices: []"
May 27 02:48:51.366675 containerd[2001]: time="2025-05-27T02:48:51.366624049Z" level=info msg="CreateContainer within sandbox \"9e4575ff6bea45154933687c2deef77587883a5808b6a158b03cef4460678350\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"b30f334e9a6e9bbe68914e4c49e3f3292efd8d9e0745907c879d462a5c9a130b\""
May 27 02:48:51.368797 containerd[2001]: time="2025-05-27T02:48:51.368630629Z" level=info msg="StartContainer for \"b30f334e9a6e9bbe68914e4c49e3f3292efd8d9e0745907c879d462a5c9a130b\""
May 27 02:48:51.375108 containerd[2001]: time="2025-05-27T02:48:51.375048889Z" level=info msg="connecting to shim b30f334e9a6e9bbe68914e4c49e3f3292efd8d9e0745907c879d462a5c9a130b" address="unix:///run/containerd/s/47039e623da86c20de7fb7c021ccb032cdcde9b07474aad0d13d7c28b4485ee9" protocol=ttrpc version=3
May 27 02:48:51.410172 systemd[1]: Started cri-containerd-b30f334e9a6e9bbe68914e4c49e3f3292efd8d9e0745907c879d462a5c9a130b.scope - libcontainer container b30f334e9a6e9bbe68914e4c49e3f3292efd8d9e0745907c879d462a5c9a130b.
May 27 02:48:51.527082 containerd[2001]: time="2025-05-27T02:48:51.526927754Z" level=info msg="StartContainer for \"b30f334e9a6e9bbe68914e4c49e3f3292efd8d9e0745907c879d462a5c9a130b\" returns successfully"
May 27 02:48:51.687209 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information.
May 27 02:48:51.687358 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved.
May 27 02:48:52.005862 kubelet[3467]: I0527 02:48:52.005020    3467 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9c211f25-af7a-4f14-a1ea-1d4a59437dbe-whisker-ca-bundle\") pod \"9c211f25-af7a-4f14-a1ea-1d4a59437dbe\" (UID: \"9c211f25-af7a-4f14-a1ea-1d4a59437dbe\") "
May 27 02:48:52.005862 kubelet[3467]: I0527 02:48:52.005127    3467 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/9c211f25-af7a-4f14-a1ea-1d4a59437dbe-whisker-backend-key-pair\") pod \"9c211f25-af7a-4f14-a1ea-1d4a59437dbe\" (UID: \"9c211f25-af7a-4f14-a1ea-1d4a59437dbe\") "
May 27 02:48:52.005862 kubelet[3467]: I0527 02:48:52.005176    3467 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wwm48\" (UniqueName: \"kubernetes.io/projected/9c211f25-af7a-4f14-a1ea-1d4a59437dbe-kube-api-access-wwm48\") pod \"9c211f25-af7a-4f14-a1ea-1d4a59437dbe\" (UID: \"9c211f25-af7a-4f14-a1ea-1d4a59437dbe\") "
May 27 02:48:52.006508 kubelet[3467]: I0527 02:48:52.006164    3467 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9c211f25-af7a-4f14-a1ea-1d4a59437dbe-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "9c211f25-af7a-4f14-a1ea-1d4a59437dbe" (UID: "9c211f25-af7a-4f14-a1ea-1d4a59437dbe"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
May 27 02:48:52.015480 kubelet[3467]: I0527 02:48:52.015401    3467 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9c211f25-af7a-4f14-a1ea-1d4a59437dbe-kube-api-access-wwm48" (OuterVolumeSpecName: "kube-api-access-wwm48") pod "9c211f25-af7a-4f14-a1ea-1d4a59437dbe" (UID: "9c211f25-af7a-4f14-a1ea-1d4a59437dbe"). InnerVolumeSpecName "kube-api-access-wwm48". PluginName "kubernetes.io/projected", VolumeGIDValue ""
May 27 02:48:52.016163 kubelet[3467]: I0527 02:48:52.016070    3467 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c211f25-af7a-4f14-a1ea-1d4a59437dbe-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "9c211f25-af7a-4f14-a1ea-1d4a59437dbe" (UID: "9c211f25-af7a-4f14-a1ea-1d4a59437dbe"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue ""
May 27 02:48:52.105698 kubelet[3467]: I0527 02:48:52.105637    3467 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9c211f25-af7a-4f14-a1ea-1d4a59437dbe-whisker-ca-bundle\") on node \"ip-172-31-27-90\" DevicePath \"\""
May 27 02:48:52.105929 kubelet[3467]: I0527 02:48:52.105711    3467 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/9c211f25-af7a-4f14-a1ea-1d4a59437dbe-whisker-backend-key-pair\") on node \"ip-172-31-27-90\" DevicePath \"\""
May 27 02:48:52.105929 kubelet[3467]: I0527 02:48:52.105741    3467 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-wwm48\" (UniqueName: \"kubernetes.io/projected/9c211f25-af7a-4f14-a1ea-1d4a59437dbe-kube-api-access-wwm48\") on node \"ip-172-31-27-90\" DevicePath \"\""
May 27 02:48:52.193010 systemd[1]: Removed slice kubepods-besteffort-pod9c211f25_af7a_4f14_a1ea_1d4a59437dbe.slice - libcontainer container kubepods-besteffort-pod9c211f25_af7a_4f14_a1ea_1d4a59437dbe.slice.
May 27 02:48:52.208956 systemd[1]: var-lib-kubelet-pods-9c211f25\x2daf7a\x2d4f14\x2da1ea\x2d1d4a59437dbe-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dwwm48.mount: Deactivated successfully.
May 27 02:48:52.209241 systemd[1]: var-lib-kubelet-pods-9c211f25\x2daf7a\x2d4f14\x2da1ea\x2d1d4a59437dbe-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully.
May 27 02:48:52.258943 kubelet[3467]: I0527 02:48:52.258051 3467 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-hjfsz" podStartSLOduration=2.199138905 podStartE2EDuration="19.25799705s" podCreationTimestamp="2025-05-27 02:48:33 +0000 UTC" firstStartedPulling="2025-05-27 02:48:34.212200268 +0000 UTC m=+28.672738475" lastFinishedPulling="2025-05-27 02:48:51.271058413 +0000 UTC m=+45.731596620" observedRunningTime="2025-05-27 02:48:52.22447283 +0000 UTC m=+46.685011037" watchObservedRunningTime="2025-05-27 02:48:52.25799705 +0000 UTC m=+46.718535269" May 27 02:48:52.348801 systemd[1]: Created slice kubepods-besteffort-podcaa565e1_9a46_42db_9b86_2b08e49a62f0.slice - libcontainer container kubepods-besteffort-podcaa565e1_9a46_42db_9b86_2b08e49a62f0.slice. May 27 02:48:52.409194 kubelet[3467]: I0527 02:48:52.409115 3467 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/caa565e1-9a46-42db-9b86-2b08e49a62f0-whisker-backend-key-pair\") pod \"whisker-7f8f586ffd-2th8f\" (UID: \"caa565e1-9a46-42db-9b86-2b08e49a62f0\") " pod="calico-system/whisker-7f8f586ffd-2th8f" May 27 02:48:52.409194 kubelet[3467]: I0527 02:48:52.409191 3467 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/caa565e1-9a46-42db-9b86-2b08e49a62f0-whisker-ca-bundle\") pod \"whisker-7f8f586ffd-2th8f\" (UID: \"caa565e1-9a46-42db-9b86-2b08e49a62f0\") " pod="calico-system/whisker-7f8f586ffd-2th8f" May 27 02:48:52.409530 kubelet[3467]: I0527 02:48:52.409259 3467 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fck8c\" (UniqueName: \"kubernetes.io/projected/caa565e1-9a46-42db-9b86-2b08e49a62f0-kube-api-access-fck8c\") pod \"whisker-7f8f586ffd-2th8f\" (UID: 
\"caa565e1-9a46-42db-9b86-2b08e49a62f0\") " pod="calico-system/whisker-7f8f586ffd-2th8f" May 27 02:48:52.488546 containerd[2001]: time="2025-05-27T02:48:52.488473707Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b30f334e9a6e9bbe68914e4c49e3f3292efd8d9e0745907c879d462a5c9a130b\" id:\"f50eb41e04f415fcd0edd8405ef806a1ef7ea2aa7988aa6c24191d3363cb0f01\" pid:4547 exit_status:1 exited_at:{seconds:1748314132 nanos:487572831}" May 27 02:48:52.655953 containerd[2001]: time="2025-05-27T02:48:52.655885936Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7f8f586ffd-2th8f,Uid:caa565e1-9a46-42db-9b86-2b08e49a62f0,Namespace:calico-system,Attempt:0,}" May 27 02:48:52.917271 (udev-worker)[4516]: Network interface NamePolicy= disabled on kernel command line. May 27 02:48:52.922969 systemd-networkd[1890]: cali3f27212facb: Link UP May 27 02:48:52.926319 systemd-networkd[1890]: cali3f27212facb: Gained carrier May 27 02:48:52.955487 containerd[2001]: 2025-05-27 02:48:52.708 [INFO][4572] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 27 02:48:52.955487 containerd[2001]: 2025-05-27 02:48:52.781 [INFO][4572] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--27--90-k8s-whisker--7f8f586ffd--2th8f-eth0 whisker-7f8f586ffd- calico-system caa565e1-9a46-42db-9b86-2b08e49a62f0 933 0 2025-05-27 02:48:52 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:7f8f586ffd projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ip-172-31-27-90 whisker-7f8f586ffd-2th8f eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali3f27212facb [] [] }} ContainerID="de362d4a7ba784cba42011f1ba50ec2dc28231657cbcbaa73890e28f94addbd7" Namespace="calico-system" Pod="whisker-7f8f586ffd-2th8f" WorkloadEndpoint="ip--172--31--27--90-k8s-whisker--7f8f586ffd--2th8f-" May 27 
02:48:52.955487 containerd[2001]: 2025-05-27 02:48:52.782 [INFO][4572] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="de362d4a7ba784cba42011f1ba50ec2dc28231657cbcbaa73890e28f94addbd7" Namespace="calico-system" Pod="whisker-7f8f586ffd-2th8f" WorkloadEndpoint="ip--172--31--27--90-k8s-whisker--7f8f586ffd--2th8f-eth0" May 27 02:48:52.955487 containerd[2001]: 2025-05-27 02:48:52.832 [INFO][4583] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="de362d4a7ba784cba42011f1ba50ec2dc28231657cbcbaa73890e28f94addbd7" HandleID="k8s-pod-network.de362d4a7ba784cba42011f1ba50ec2dc28231657cbcbaa73890e28f94addbd7" Workload="ip--172--31--27--90-k8s-whisker--7f8f586ffd--2th8f-eth0" May 27 02:48:52.955981 containerd[2001]: 2025-05-27 02:48:52.832 [INFO][4583] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="de362d4a7ba784cba42011f1ba50ec2dc28231657cbcbaa73890e28f94addbd7" HandleID="k8s-pod-network.de362d4a7ba784cba42011f1ba50ec2dc28231657cbcbaa73890e28f94addbd7" Workload="ip--172--31--27--90-k8s-whisker--7f8f586ffd--2th8f-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d76c0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-27-90", "pod":"whisker-7f8f586ffd-2th8f", "timestamp":"2025-05-27 02:48:52.832627529 +0000 UTC"}, Hostname:"ip-172-31-27-90", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 27 02:48:52.955981 containerd[2001]: 2025-05-27 02:48:52.832 [INFO][4583] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 27 02:48:52.955981 containerd[2001]: 2025-05-27 02:48:52.833 [INFO][4583] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 27 02:48:52.955981 containerd[2001]: 2025-05-27 02:48:52.833 [INFO][4583] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-27-90' May 27 02:48:52.955981 containerd[2001]: 2025-05-27 02:48:52.847 [INFO][4583] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.de362d4a7ba784cba42011f1ba50ec2dc28231657cbcbaa73890e28f94addbd7" host="ip-172-31-27-90" May 27 02:48:52.955981 containerd[2001]: 2025-05-27 02:48:52.855 [INFO][4583] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-27-90" May 27 02:48:52.955981 containerd[2001]: 2025-05-27 02:48:52.866 [INFO][4583] ipam/ipam.go 511: Trying affinity for 192.168.47.0/26 host="ip-172-31-27-90" May 27 02:48:52.955981 containerd[2001]: 2025-05-27 02:48:52.869 [INFO][4583] ipam/ipam.go 158: Attempting to load block cidr=192.168.47.0/26 host="ip-172-31-27-90" May 27 02:48:52.955981 containerd[2001]: 2025-05-27 02:48:52.872 [INFO][4583] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.47.0/26 host="ip-172-31-27-90" May 27 02:48:52.955981 containerd[2001]: 2025-05-27 02:48:52.873 [INFO][4583] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.47.0/26 handle="k8s-pod-network.de362d4a7ba784cba42011f1ba50ec2dc28231657cbcbaa73890e28f94addbd7" host="ip-172-31-27-90" May 27 02:48:52.956900 containerd[2001]: 2025-05-27 02:48:52.875 [INFO][4583] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.de362d4a7ba784cba42011f1ba50ec2dc28231657cbcbaa73890e28f94addbd7 May 27 02:48:52.956900 containerd[2001]: 2025-05-27 02:48:52.881 [INFO][4583] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.47.0/26 handle="k8s-pod-network.de362d4a7ba784cba42011f1ba50ec2dc28231657cbcbaa73890e28f94addbd7" host="ip-172-31-27-90" May 27 02:48:52.956900 containerd[2001]: 2025-05-27 02:48:52.892 [INFO][4583] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.47.1/26] block=192.168.47.0/26 
handle="k8s-pod-network.de362d4a7ba784cba42011f1ba50ec2dc28231657cbcbaa73890e28f94addbd7" host="ip-172-31-27-90" May 27 02:48:52.956900 containerd[2001]: 2025-05-27 02:48:52.892 [INFO][4583] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.47.1/26] handle="k8s-pod-network.de362d4a7ba784cba42011f1ba50ec2dc28231657cbcbaa73890e28f94addbd7" host="ip-172-31-27-90" May 27 02:48:52.956900 containerd[2001]: 2025-05-27 02:48:52.892 [INFO][4583] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 27 02:48:52.956900 containerd[2001]: 2025-05-27 02:48:52.892 [INFO][4583] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.47.1/26] IPv6=[] ContainerID="de362d4a7ba784cba42011f1ba50ec2dc28231657cbcbaa73890e28f94addbd7" HandleID="k8s-pod-network.de362d4a7ba784cba42011f1ba50ec2dc28231657cbcbaa73890e28f94addbd7" Workload="ip--172--31--27--90-k8s-whisker--7f8f586ffd--2th8f-eth0" May 27 02:48:52.957399 containerd[2001]: 2025-05-27 02:48:52.904 [INFO][4572] cni-plugin/k8s.go 418: Populated endpoint ContainerID="de362d4a7ba784cba42011f1ba50ec2dc28231657cbcbaa73890e28f94addbd7" Namespace="calico-system" Pod="whisker-7f8f586ffd-2th8f" WorkloadEndpoint="ip--172--31--27--90-k8s-whisker--7f8f586ffd--2th8f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--27--90-k8s-whisker--7f8f586ffd--2th8f-eth0", GenerateName:"whisker-7f8f586ffd-", Namespace:"calico-system", SelfLink:"", UID:"caa565e1-9a46-42db-9b86-2b08e49a62f0", ResourceVersion:"933", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 2, 48, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7f8f586ffd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, 
Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-27-90", ContainerID:"", Pod:"whisker-7f8f586ffd-2th8f", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.47.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali3f27212facb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 02:48:52.957399 containerd[2001]: 2025-05-27 02:48:52.905 [INFO][4572] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.47.1/32] ContainerID="de362d4a7ba784cba42011f1ba50ec2dc28231657cbcbaa73890e28f94addbd7" Namespace="calico-system" Pod="whisker-7f8f586ffd-2th8f" WorkloadEndpoint="ip--172--31--27--90-k8s-whisker--7f8f586ffd--2th8f-eth0" May 27 02:48:52.957862 containerd[2001]: 2025-05-27 02:48:52.905 [INFO][4572] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3f27212facb ContainerID="de362d4a7ba784cba42011f1ba50ec2dc28231657cbcbaa73890e28f94addbd7" Namespace="calico-system" Pod="whisker-7f8f586ffd-2th8f" WorkloadEndpoint="ip--172--31--27--90-k8s-whisker--7f8f586ffd--2th8f-eth0" May 27 02:48:52.957862 containerd[2001]: 2025-05-27 02:48:52.927 [INFO][4572] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="de362d4a7ba784cba42011f1ba50ec2dc28231657cbcbaa73890e28f94addbd7" Namespace="calico-system" Pod="whisker-7f8f586ffd-2th8f" WorkloadEndpoint="ip--172--31--27--90-k8s-whisker--7f8f586ffd--2th8f-eth0" May 27 02:48:52.958043 containerd[2001]: 2025-05-27 02:48:52.928 [INFO][4572] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="de362d4a7ba784cba42011f1ba50ec2dc28231657cbcbaa73890e28f94addbd7" Namespace="calico-system" 
Pod="whisker-7f8f586ffd-2th8f" WorkloadEndpoint="ip--172--31--27--90-k8s-whisker--7f8f586ffd--2th8f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--27--90-k8s-whisker--7f8f586ffd--2th8f-eth0", GenerateName:"whisker-7f8f586ffd-", Namespace:"calico-system", SelfLink:"", UID:"caa565e1-9a46-42db-9b86-2b08e49a62f0", ResourceVersion:"933", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 2, 48, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7f8f586ffd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-27-90", ContainerID:"de362d4a7ba784cba42011f1ba50ec2dc28231657cbcbaa73890e28f94addbd7", Pod:"whisker-7f8f586ffd-2th8f", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.47.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali3f27212facb", MAC:"7a:f0:4c:b5:a1:83", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 02:48:52.958234 containerd[2001]: 2025-05-27 02:48:52.948 [INFO][4572] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="de362d4a7ba784cba42011f1ba50ec2dc28231657cbcbaa73890e28f94addbd7" Namespace="calico-system" Pod="whisker-7f8f586ffd-2th8f" WorkloadEndpoint="ip--172--31--27--90-k8s-whisker--7f8f586ffd--2th8f-eth0" May 27 02:48:53.003088 containerd[2001]: 
time="2025-05-27T02:48:53.003000770Z" level=info msg="connecting to shim de362d4a7ba784cba42011f1ba50ec2dc28231657cbcbaa73890e28f94addbd7" address="unix:///run/containerd/s/aaef627d56521d6c0f8b14d8144cbe87c08d3b063d222b55fe5965c83b9f93bb" namespace=k8s.io protocol=ttrpc version=3 May 27 02:48:53.046175 systemd[1]: Started cri-containerd-de362d4a7ba784cba42011f1ba50ec2dc28231657cbcbaa73890e28f94addbd7.scope - libcontainer container de362d4a7ba784cba42011f1ba50ec2dc28231657cbcbaa73890e28f94addbd7. May 27 02:48:53.129647 containerd[2001]: time="2025-05-27T02:48:53.129493490Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7f8f586ffd-2th8f,Uid:caa565e1-9a46-42db-9b86-2b08e49a62f0,Namespace:calico-system,Attempt:0,} returns sandbox id \"de362d4a7ba784cba42011f1ba50ec2dc28231657cbcbaa73890e28f94addbd7\"" May 27 02:48:53.133796 containerd[2001]: time="2025-05-27T02:48:53.133714334Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\"" May 27 02:48:53.313904 containerd[2001]: time="2025-05-27T02:48:53.313675527Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b30f334e9a6e9bbe68914e4c49e3f3292efd8d9e0745907c879d462a5c9a130b\" id:\"feb3a69f77407c1f0b6c565c73b65745025f1319e39e33faaf4ccd97fb7e3cee\" pid:4654 exit_status:1 exited_at:{seconds:1748314133 nanos:313125255}" May 27 02:48:53.325343 containerd[2001]: time="2025-05-27T02:48:53.325233747Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 27 02:48:53.327854 containerd[2001]: time="2025-05-27T02:48:53.327679623Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference 
\"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" May 27 02:48:53.328196 containerd[2001]: time="2025-05-27T02:48:53.327813939Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.0: active requests=0, bytes read=86" May 27 02:48:53.332300 kubelet[3467]: E0527 02:48:53.332174 3467 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 27 02:48:53.332300 kubelet[3467]: E0527 02:48:53.332262 3467 kuberuntime_image.go:42] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 27 02:48:53.336312 kubelet[3467]: E0527 02:48:53.336205 3467 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.0,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:6721adc515ad48b18fccbca3f868b1ac,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-fck8c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7f8f586ffd-2th8f_calico-system(caa565e1-9a46-42db-9b86-2b08e49a62f0): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 27 02:48:53.339604 containerd[2001]: 
time="2025-05-27T02:48:53.339258975Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\"" May 27 02:48:53.535212 containerd[2001]: time="2025-05-27T02:48:53.535153480Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 27 02:48:53.539081 containerd[2001]: time="2025-05-27T02:48:53.538870960Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" May 27 02:48:53.539081 containerd[2001]: time="2025-05-27T02:48:53.538875712Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.0: active requests=0, bytes read=86" May 27 02:48:53.539905 kubelet[3467]: E0527 02:48:53.539518 3467 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 27 02:48:53.539905 kubelet[3467]: E0527 02:48:53.539586 3467 kuberuntime_image.go:42] "Failed to pull image" err="failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 27 02:48:53.540188 kubelet[3467]: E0527 02:48:53.539786 3467 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fck8c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMou
nt:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7f8f586ffd-2th8f_calico-system(caa565e1-9a46-42db-9b86-2b08e49a62f0): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 27 02:48:53.541718 kubelet[3467]: E0527 02:48:53.541490 3467 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-7f8f586ffd-2th8f" podUID="caa565e1-9a46-42db-9b86-2b08e49a62f0" May 27 02:48:53.821542 kubelet[3467]: I0527 02:48:53.821477 3467 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9c211f25-af7a-4f14-a1ea-1d4a59437dbe" path="/var/lib/kubelet/pods/9c211f25-af7a-4f14-a1ea-1d4a59437dbe/volumes" May 27 02:48:54.178105 kubelet[3467]: E0527 02:48:54.177346 3467 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-7f8f586ffd-2th8f" podUID="caa565e1-9a46-42db-9b86-2b08e49a62f0" May 27 02:48:54.560426 systemd-networkd[1890]: vxlan.calico: Link UP May 27 02:48:54.560444 systemd-networkd[1890]: vxlan.calico: Gained carrier May 27 02:48:54.575246 systemd-networkd[1890]: cali3f27212facb: Gained IPv6LL May 27 02:48:54.604781 (udev-worker)[4519]: Network interface NamePolicy= disabled on kernel command line. 
May 27 02:48:55.816303 containerd[2001]: time="2025-05-27T02:48:55.816163772Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-szq2g,Uid:4f9fd9e1-0322-4cb7-986b-0344d90fe242,Namespace:kube-system,Attempt:0,}" May 27 02:48:56.055539 (udev-worker)[4832]: Network interface NamePolicy= disabled on kernel command line. May 27 02:48:56.069373 systemd-networkd[1890]: cali4b20f0e23af: Link UP May 27 02:48:56.074602 systemd-networkd[1890]: cali4b20f0e23af: Gained carrier May 27 02:48:56.146416 containerd[2001]: 2025-05-27 02:48:55.921 [INFO][4871] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--27--90-k8s-coredns--674b8bbfcf--szq2g-eth0 coredns-674b8bbfcf- kube-system 4f9fd9e1-0322-4cb7-986b-0344d90fe242 859 0 2025-05-27 02:48:10 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-27-90 coredns-674b8bbfcf-szq2g eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali4b20f0e23af [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="70e50d570ebf93fa883b8ac0a6391c3f5c97c654f97c436d5d9c76debf762de9" Namespace="kube-system" Pod="coredns-674b8bbfcf-szq2g" WorkloadEndpoint="ip--172--31--27--90-k8s-coredns--674b8bbfcf--szq2g-" May 27 02:48:56.146416 containerd[2001]: 2025-05-27 02:48:55.921 [INFO][4871] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="70e50d570ebf93fa883b8ac0a6391c3f5c97c654f97c436d5d9c76debf762de9" Namespace="kube-system" Pod="coredns-674b8bbfcf-szq2g" WorkloadEndpoint="ip--172--31--27--90-k8s-coredns--674b8bbfcf--szq2g-eth0" May 27 02:48:56.146416 containerd[2001]: 2025-05-27 02:48:55.967 [INFO][4883] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="70e50d570ebf93fa883b8ac0a6391c3f5c97c654f97c436d5d9c76debf762de9" 
HandleID="k8s-pod-network.70e50d570ebf93fa883b8ac0a6391c3f5c97c654f97c436d5d9c76debf762de9" Workload="ip--172--31--27--90-k8s-coredns--674b8bbfcf--szq2g-eth0" May 27 02:48:56.146742 containerd[2001]: 2025-05-27 02:48:55.968 [INFO][4883] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="70e50d570ebf93fa883b8ac0a6391c3f5c97c654f97c436d5d9c76debf762de9" HandleID="k8s-pod-network.70e50d570ebf93fa883b8ac0a6391c3f5c97c654f97c436d5d9c76debf762de9" Workload="ip--172--31--27--90-k8s-coredns--674b8bbfcf--szq2g-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002310c0), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-27-90", "pod":"coredns-674b8bbfcf-szq2g", "timestamp":"2025-05-27 02:48:55.96759752 +0000 UTC"}, Hostname:"ip-172-31-27-90", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 27 02:48:56.146742 containerd[2001]: 2025-05-27 02:48:55.968 [INFO][4883] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 27 02:48:56.146742 containerd[2001]: 2025-05-27 02:48:55.970 [INFO][4883] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 27 02:48:56.146742 containerd[2001]: 2025-05-27 02:48:55.970 [INFO][4883] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-27-90' May 27 02:48:56.146742 containerd[2001]: 2025-05-27 02:48:55.994 [INFO][4883] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.70e50d570ebf93fa883b8ac0a6391c3f5c97c654f97c436d5d9c76debf762de9" host="ip-172-31-27-90" May 27 02:48:56.146742 containerd[2001]: 2025-05-27 02:48:56.003 [INFO][4883] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-27-90" May 27 02:48:56.146742 containerd[2001]: 2025-05-27 02:48:56.010 [INFO][4883] ipam/ipam.go 511: Trying affinity for 192.168.47.0/26 host="ip-172-31-27-90" May 27 02:48:56.146742 containerd[2001]: 2025-05-27 02:48:56.014 [INFO][4883] ipam/ipam.go 158: Attempting to load block cidr=192.168.47.0/26 host="ip-172-31-27-90" May 27 02:48:56.146742 containerd[2001]: 2025-05-27 02:48:56.020 [INFO][4883] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.47.0/26 host="ip-172-31-27-90" May 27 02:48:56.146742 containerd[2001]: 2025-05-27 02:48:56.020 [INFO][4883] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.47.0/26 handle="k8s-pod-network.70e50d570ebf93fa883b8ac0a6391c3f5c97c654f97c436d5d9c76debf762de9" host="ip-172-31-27-90" May 27 02:48:56.147284 containerd[2001]: 2025-05-27 02:48:56.022 [INFO][4883] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.70e50d570ebf93fa883b8ac0a6391c3f5c97c654f97c436d5d9c76debf762de9 May 27 02:48:56.147284 containerd[2001]: 2025-05-27 02:48:56.033 [INFO][4883] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.47.0/26 handle="k8s-pod-network.70e50d570ebf93fa883b8ac0a6391c3f5c97c654f97c436d5d9c76debf762de9" host="ip-172-31-27-90" May 27 02:48:56.147284 containerd[2001]: 2025-05-27 02:48:56.045 [INFO][4883] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.47.2/26] block=192.168.47.0/26 
handle="k8s-pod-network.70e50d570ebf93fa883b8ac0a6391c3f5c97c654f97c436d5d9c76debf762de9" host="ip-172-31-27-90" May 27 02:48:56.147284 containerd[2001]: 2025-05-27 02:48:56.046 [INFO][4883] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.47.2/26] handle="k8s-pod-network.70e50d570ebf93fa883b8ac0a6391c3f5c97c654f97c436d5d9c76debf762de9" host="ip-172-31-27-90" May 27 02:48:56.147284 containerd[2001]: 2025-05-27 02:48:56.046 [INFO][4883] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 27 02:48:56.147284 containerd[2001]: 2025-05-27 02:48:56.046 [INFO][4883] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.47.2/26] IPv6=[] ContainerID="70e50d570ebf93fa883b8ac0a6391c3f5c97c654f97c436d5d9c76debf762de9" HandleID="k8s-pod-network.70e50d570ebf93fa883b8ac0a6391c3f5c97c654f97c436d5d9c76debf762de9" Workload="ip--172--31--27--90-k8s-coredns--674b8bbfcf--szq2g-eth0" May 27 02:48:56.147575 containerd[2001]: 2025-05-27 02:48:56.051 [INFO][4871] cni-plugin/k8s.go 418: Populated endpoint ContainerID="70e50d570ebf93fa883b8ac0a6391c3f5c97c654f97c436d5d9c76debf762de9" Namespace="kube-system" Pod="coredns-674b8bbfcf-szq2g" WorkloadEndpoint="ip--172--31--27--90-k8s-coredns--674b8bbfcf--szq2g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--27--90-k8s-coredns--674b8bbfcf--szq2g-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"4f9fd9e1-0322-4cb7-986b-0344d90fe242", ResourceVersion:"859", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 2, 48, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-27-90", ContainerID:"", Pod:"coredns-674b8bbfcf-szq2g", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.47.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4b20f0e23af", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 02:48:56.147575 containerd[2001]: 2025-05-27 02:48:56.052 [INFO][4871] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.47.2/32] ContainerID="70e50d570ebf93fa883b8ac0a6391c3f5c97c654f97c436d5d9c76debf762de9" Namespace="kube-system" Pod="coredns-674b8bbfcf-szq2g" WorkloadEndpoint="ip--172--31--27--90-k8s-coredns--674b8bbfcf--szq2g-eth0" May 27 02:48:56.147575 containerd[2001]: 2025-05-27 02:48:56.052 [INFO][4871] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4b20f0e23af ContainerID="70e50d570ebf93fa883b8ac0a6391c3f5c97c654f97c436d5d9c76debf762de9" Namespace="kube-system" Pod="coredns-674b8bbfcf-szq2g" WorkloadEndpoint="ip--172--31--27--90-k8s-coredns--674b8bbfcf--szq2g-eth0" May 27 02:48:56.147575 containerd[2001]: 2025-05-27 02:48:56.078 [INFO][4871] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="70e50d570ebf93fa883b8ac0a6391c3f5c97c654f97c436d5d9c76debf762de9" Namespace="kube-system" 
Pod="coredns-674b8bbfcf-szq2g" WorkloadEndpoint="ip--172--31--27--90-k8s-coredns--674b8bbfcf--szq2g-eth0" May 27 02:48:56.147575 containerd[2001]: 2025-05-27 02:48:56.080 [INFO][4871] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="70e50d570ebf93fa883b8ac0a6391c3f5c97c654f97c436d5d9c76debf762de9" Namespace="kube-system" Pod="coredns-674b8bbfcf-szq2g" WorkloadEndpoint="ip--172--31--27--90-k8s-coredns--674b8bbfcf--szq2g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--27--90-k8s-coredns--674b8bbfcf--szq2g-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"4f9fd9e1-0322-4cb7-986b-0344d90fe242", ResourceVersion:"859", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 2, 48, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-27-90", ContainerID:"70e50d570ebf93fa883b8ac0a6391c3f5c97c654f97c436d5d9c76debf762de9", Pod:"coredns-674b8bbfcf-szq2g", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.47.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4b20f0e23af", MAC:"72:c1:43:a4:68:70", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 02:48:56.147575 containerd[2001]: 2025-05-27 02:48:56.139 [INFO][4871] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="70e50d570ebf93fa883b8ac0a6391c3f5c97c654f97c436d5d9c76debf762de9" Namespace="kube-system" Pod="coredns-674b8bbfcf-szq2g" WorkloadEndpoint="ip--172--31--27--90-k8s-coredns--674b8bbfcf--szq2g-eth0" May 27 02:48:56.175649 systemd-networkd[1890]: vxlan.calico: Gained IPv6LL May 27 02:48:56.230485 containerd[2001]: time="2025-05-27T02:48:56.228626730Z" level=info msg="connecting to shim 70e50d570ebf93fa883b8ac0a6391c3f5c97c654f97c436d5d9c76debf762de9" address="unix:///run/containerd/s/1ac7735c2ebc27f627bfc4ff14122c854131af7042e1b7c518b05825cc8feec2" namespace=k8s.io protocol=ttrpc version=3 May 27 02:48:56.332167 systemd[1]: Started cri-containerd-70e50d570ebf93fa883b8ac0a6391c3f5c97c654f97c436d5d9c76debf762de9.scope - libcontainer container 70e50d570ebf93fa883b8ac0a6391c3f5c97c654f97c436d5d9c76debf762de9. 
May 27 02:48:56.429384 containerd[2001]: time="2025-05-27T02:48:56.429152899Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-szq2g,Uid:4f9fd9e1-0322-4cb7-986b-0344d90fe242,Namespace:kube-system,Attempt:0,} returns sandbox id \"70e50d570ebf93fa883b8ac0a6391c3f5c97c654f97c436d5d9c76debf762de9\"" May 27 02:48:56.442953 containerd[2001]: time="2025-05-27T02:48:56.442870423Z" level=info msg="CreateContainer within sandbox \"70e50d570ebf93fa883b8ac0a6391c3f5c97c654f97c436d5d9c76debf762de9\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 27 02:48:56.465971 containerd[2001]: time="2025-05-27T02:48:56.465897295Z" level=info msg="Container 8faeba00d0c34edbc1b84c1ca1fd9faafd744f2210841a2b58a69d6cd570c426: CDI devices from CRI Config.CDIDevices: []" May 27 02:48:56.487955 containerd[2001]: time="2025-05-27T02:48:56.487558399Z" level=info msg="CreateContainer within sandbox \"70e50d570ebf93fa883b8ac0a6391c3f5c97c654f97c436d5d9c76debf762de9\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"8faeba00d0c34edbc1b84c1ca1fd9faafd744f2210841a2b58a69d6cd570c426\"" May 27 02:48:56.491978 containerd[2001]: time="2025-05-27T02:48:56.490173355Z" level=info msg="StartContainer for \"8faeba00d0c34edbc1b84c1ca1fd9faafd744f2210841a2b58a69d6cd570c426\"" May 27 02:48:56.495309 systemd[1]: Started sshd@9-172.31.27.90:22-139.178.68.195:49000.service - OpenSSH per-connection server daemon (139.178.68.195:49000). May 27 02:48:56.504442 containerd[2001]: time="2025-05-27T02:48:56.504363211Z" level=info msg="connecting to shim 8faeba00d0c34edbc1b84c1ca1fd9faafd744f2210841a2b58a69d6cd570c426" address="unix:///run/containerd/s/1ac7735c2ebc27f627bfc4ff14122c854131af7042e1b7c518b05825cc8feec2" protocol=ttrpc version=3 May 27 02:48:56.543701 systemd[1]: Started cri-containerd-8faeba00d0c34edbc1b84c1ca1fd9faafd744f2210841a2b58a69d6cd570c426.scope - libcontainer container 8faeba00d0c34edbc1b84c1ca1fd9faafd744f2210841a2b58a69d6cd570c426. 
May 27 02:48:56.625282 containerd[2001]: time="2025-05-27T02:48:56.625108232Z" level=info msg="StartContainer for \"8faeba00d0c34edbc1b84c1ca1fd9faafd744f2210841a2b58a69d6cd570c426\" returns successfully" May 27 02:48:56.726947 sshd[4953]: Accepted publickey for core from 139.178.68.195 port 49000 ssh2: RSA SHA256:wB7DXbDl54cvXXypqfSM11xNUGSlmUqWSyH8J9Yllv0 May 27 02:48:56.731284 sshd-session[4953]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 02:48:56.745223 systemd-logind[1977]: New session 10 of user core. May 27 02:48:56.755144 systemd[1]: Started session-10.scope - Session 10 of User core. May 27 02:48:56.817680 containerd[2001]: time="2025-05-27T02:48:56.817335573Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-95d4cb8cc-lhfd7,Uid:55e730ec-46ee-429f-81a6-a70c10e9a76c,Namespace:calico-apiserver,Attempt:0,}" May 27 02:48:56.820744 containerd[2001]: time="2025-05-27T02:48:56.820472877Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-78d55f7ddc-4qxmb,Uid:be3e00a4-5c9b-4f45-8c5d-bd5a86ce209c,Namespace:calico-system,Attempt:0,}" May 27 02:48:56.825313 containerd[2001]: time="2025-05-27T02:48:56.825071553Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-95d4cb8cc-kn5fg,Uid:96dbf7ef-fdad-445b-a963-39e84fe9fd05,Namespace:calico-apiserver,Attempt:0,}" May 27 02:48:57.282311 sshd[4988]: Connection closed by 139.178.68.195 port 49000 May 27 02:48:57.282778 sshd-session[4953]: pam_unix(sshd:session): session closed for user core May 27 02:48:57.297655 systemd[1]: sshd@9-172.31.27.90:22-139.178.68.195:49000.service: Deactivated successfully. May 27 02:48:57.312473 systemd[1]: session-10.scope: Deactivated successfully. May 27 02:48:57.316383 systemd-logind[1977]: Session 10 logged out. Waiting for processes to exit. May 27 02:48:57.324402 systemd-logind[1977]: Removed session 10. 
May 27 02:48:57.447562 systemd-networkd[1890]: calie747cfe6def: Link UP May 27 02:48:57.451231 systemd-networkd[1890]: calie747cfe6def: Gained carrier May 27 02:48:57.485520 kubelet[3467]: I0527 02:48:57.485093 3467 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-szq2g" podStartSLOduration=47.485047016 podStartE2EDuration="47.485047016s" podCreationTimestamp="2025-05-27 02:48:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 02:48:57.280521007 +0000 UTC m=+51.741059226" watchObservedRunningTime="2025-05-27 02:48:57.485047016 +0000 UTC m=+51.945585247" May 27 02:48:57.490142 containerd[2001]: 2025-05-27 02:48:57.138 [INFO][5005] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--27--90-k8s-calico--apiserver--95d4cb8cc--kn5fg-eth0 calico-apiserver-95d4cb8cc- calico-apiserver 96dbf7ef-fdad-445b-a963-39e84fe9fd05 864 0 2025-05-27 02:48:25 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:95d4cb8cc projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-27-90 calico-apiserver-95d4cb8cc-kn5fg eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calie747cfe6def [] [] }} ContainerID="8e9794cf85f9441cfe9d9fa8d4b8a0bbd7de8026b94fb62ab54747a5d97c4da9" Namespace="calico-apiserver" Pod="calico-apiserver-95d4cb8cc-kn5fg" WorkloadEndpoint="ip--172--31--27--90-k8s-calico--apiserver--95d4cb8cc--kn5fg-" May 27 02:48:57.490142 containerd[2001]: 2025-05-27 02:48:57.140 [INFO][5005] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8e9794cf85f9441cfe9d9fa8d4b8a0bbd7de8026b94fb62ab54747a5d97c4da9" Namespace="calico-apiserver" 
Pod="calico-apiserver-95d4cb8cc-kn5fg" WorkloadEndpoint="ip--172--31--27--90-k8s-calico--apiserver--95d4cb8cc--kn5fg-eth0" May 27 02:48:57.490142 containerd[2001]: 2025-05-27 02:48:57.341 [INFO][5032] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8e9794cf85f9441cfe9d9fa8d4b8a0bbd7de8026b94fb62ab54747a5d97c4da9" HandleID="k8s-pod-network.8e9794cf85f9441cfe9d9fa8d4b8a0bbd7de8026b94fb62ab54747a5d97c4da9" Workload="ip--172--31--27--90-k8s-calico--apiserver--95d4cb8cc--kn5fg-eth0" May 27 02:48:57.490142 containerd[2001]: 2025-05-27 02:48:57.341 [INFO][5032] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="8e9794cf85f9441cfe9d9fa8d4b8a0bbd7de8026b94fb62ab54747a5d97c4da9" HandleID="k8s-pod-network.8e9794cf85f9441cfe9d9fa8d4b8a0bbd7de8026b94fb62ab54747a5d97c4da9" Workload="ip--172--31--27--90-k8s-calico--apiserver--95d4cb8cc--kn5fg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400032a520), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-27-90", "pod":"calico-apiserver-95d4cb8cc-kn5fg", "timestamp":"2025-05-27 02:48:57.341177767 +0000 UTC"}, Hostname:"ip-172-31-27-90", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 27 02:48:57.490142 containerd[2001]: 2025-05-27 02:48:57.341 [INFO][5032] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 27 02:48:57.490142 containerd[2001]: 2025-05-27 02:48:57.341 [INFO][5032] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 27 02:48:57.490142 containerd[2001]: 2025-05-27 02:48:57.341 [INFO][5032] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-27-90' May 27 02:48:57.490142 containerd[2001]: 2025-05-27 02:48:57.370 [INFO][5032] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8e9794cf85f9441cfe9d9fa8d4b8a0bbd7de8026b94fb62ab54747a5d97c4da9" host="ip-172-31-27-90" May 27 02:48:57.490142 containerd[2001]: 2025-05-27 02:48:57.386 [INFO][5032] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-27-90" May 27 02:48:57.490142 containerd[2001]: 2025-05-27 02:48:57.400 [INFO][5032] ipam/ipam.go 511: Trying affinity for 192.168.47.0/26 host="ip-172-31-27-90" May 27 02:48:57.490142 containerd[2001]: 2025-05-27 02:48:57.404 [INFO][5032] ipam/ipam.go 158: Attempting to load block cidr=192.168.47.0/26 host="ip-172-31-27-90" May 27 02:48:57.490142 containerd[2001]: 2025-05-27 02:48:57.409 [INFO][5032] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.47.0/26 host="ip-172-31-27-90" May 27 02:48:57.490142 containerd[2001]: 2025-05-27 02:48:57.409 [INFO][5032] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.47.0/26 handle="k8s-pod-network.8e9794cf85f9441cfe9d9fa8d4b8a0bbd7de8026b94fb62ab54747a5d97c4da9" host="ip-172-31-27-90" May 27 02:48:57.490142 containerd[2001]: 2025-05-27 02:48:57.412 [INFO][5032] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.8e9794cf85f9441cfe9d9fa8d4b8a0bbd7de8026b94fb62ab54747a5d97c4da9 May 27 02:48:57.490142 containerd[2001]: 2025-05-27 02:48:57.419 [INFO][5032] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.47.0/26 handle="k8s-pod-network.8e9794cf85f9441cfe9d9fa8d4b8a0bbd7de8026b94fb62ab54747a5d97c4da9" host="ip-172-31-27-90" May 27 02:48:57.490142 containerd[2001]: 2025-05-27 02:48:57.433 [INFO][5032] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.47.3/26] block=192.168.47.0/26 
handle="k8s-pod-network.8e9794cf85f9441cfe9d9fa8d4b8a0bbd7de8026b94fb62ab54747a5d97c4da9" host="ip-172-31-27-90" May 27 02:48:57.490142 containerd[2001]: 2025-05-27 02:48:57.433 [INFO][5032] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.47.3/26] handle="k8s-pod-network.8e9794cf85f9441cfe9d9fa8d4b8a0bbd7de8026b94fb62ab54747a5d97c4da9" host="ip-172-31-27-90" May 27 02:48:57.490142 containerd[2001]: 2025-05-27 02:48:57.434 [INFO][5032] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 27 02:48:57.490142 containerd[2001]: 2025-05-27 02:48:57.434 [INFO][5032] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.47.3/26] IPv6=[] ContainerID="8e9794cf85f9441cfe9d9fa8d4b8a0bbd7de8026b94fb62ab54747a5d97c4da9" HandleID="k8s-pod-network.8e9794cf85f9441cfe9d9fa8d4b8a0bbd7de8026b94fb62ab54747a5d97c4da9" Workload="ip--172--31--27--90-k8s-calico--apiserver--95d4cb8cc--kn5fg-eth0" May 27 02:48:57.493550 containerd[2001]: 2025-05-27 02:48:57.438 [INFO][5005] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8e9794cf85f9441cfe9d9fa8d4b8a0bbd7de8026b94fb62ab54747a5d97c4da9" Namespace="calico-apiserver" Pod="calico-apiserver-95d4cb8cc-kn5fg" WorkloadEndpoint="ip--172--31--27--90-k8s-calico--apiserver--95d4cb8cc--kn5fg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--27--90-k8s-calico--apiserver--95d4cb8cc--kn5fg-eth0", GenerateName:"calico-apiserver-95d4cb8cc-", Namespace:"calico-apiserver", SelfLink:"", UID:"96dbf7ef-fdad-445b-a963-39e84fe9fd05", ResourceVersion:"864", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 2, 48, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"95d4cb8cc", "projectcalico.org/namespace":"calico-apiserver", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-27-90", ContainerID:"", Pod:"calico-apiserver-95d4cb8cc-kn5fg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.47.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie747cfe6def", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 02:48:57.493550 containerd[2001]: 2025-05-27 02:48:57.438 [INFO][5005] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.47.3/32] ContainerID="8e9794cf85f9441cfe9d9fa8d4b8a0bbd7de8026b94fb62ab54747a5d97c4da9" Namespace="calico-apiserver" Pod="calico-apiserver-95d4cb8cc-kn5fg" WorkloadEndpoint="ip--172--31--27--90-k8s-calico--apiserver--95d4cb8cc--kn5fg-eth0" May 27 02:48:57.493550 containerd[2001]: 2025-05-27 02:48:57.438 [INFO][5005] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie747cfe6def ContainerID="8e9794cf85f9441cfe9d9fa8d4b8a0bbd7de8026b94fb62ab54747a5d97c4da9" Namespace="calico-apiserver" Pod="calico-apiserver-95d4cb8cc-kn5fg" WorkloadEndpoint="ip--172--31--27--90-k8s-calico--apiserver--95d4cb8cc--kn5fg-eth0" May 27 02:48:57.493550 containerd[2001]: 2025-05-27 02:48:57.453 [INFO][5005] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8e9794cf85f9441cfe9d9fa8d4b8a0bbd7de8026b94fb62ab54747a5d97c4da9" Namespace="calico-apiserver" Pod="calico-apiserver-95d4cb8cc-kn5fg" WorkloadEndpoint="ip--172--31--27--90-k8s-calico--apiserver--95d4cb8cc--kn5fg-eth0" May 27 02:48:57.493550 containerd[2001]: 2025-05-27 02:48:57.455 [INFO][5005] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8e9794cf85f9441cfe9d9fa8d4b8a0bbd7de8026b94fb62ab54747a5d97c4da9" Namespace="calico-apiserver" Pod="calico-apiserver-95d4cb8cc-kn5fg" WorkloadEndpoint="ip--172--31--27--90-k8s-calico--apiserver--95d4cb8cc--kn5fg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--27--90-k8s-calico--apiserver--95d4cb8cc--kn5fg-eth0", GenerateName:"calico-apiserver-95d4cb8cc-", Namespace:"calico-apiserver", SelfLink:"", UID:"96dbf7ef-fdad-445b-a963-39e84fe9fd05", ResourceVersion:"864", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 2, 48, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"95d4cb8cc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-27-90", ContainerID:"8e9794cf85f9441cfe9d9fa8d4b8a0bbd7de8026b94fb62ab54747a5d97c4da9", Pod:"calico-apiserver-95d4cb8cc-kn5fg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.47.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie747cfe6def", MAC:"a6:79:6f:f6:5d:f4", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 02:48:57.493550 containerd[2001]: 2025-05-27 02:48:57.481 [INFO][5005] cni-plugin/k8s.go 532: 
Wrote updated endpoint to datastore ContainerID="8e9794cf85f9441cfe9d9fa8d4b8a0bbd7de8026b94fb62ab54747a5d97c4da9" Namespace="calico-apiserver" Pod="calico-apiserver-95d4cb8cc-kn5fg" WorkloadEndpoint="ip--172--31--27--90-k8s-calico--apiserver--95d4cb8cc--kn5fg-eth0" May 27 02:48:57.520213 systemd-networkd[1890]: cali4b20f0e23af: Gained IPv6LL May 27 02:48:57.611788 containerd[2001]: time="2025-05-27T02:48:57.610946924Z" level=info msg="connecting to shim 8e9794cf85f9441cfe9d9fa8d4b8a0bbd7de8026b94fb62ab54747a5d97c4da9" address="unix:///run/containerd/s/fda3da01f1bc736eb3fc2d9d91f857f3bb29c2897d1836db9097a2610047a874" namespace=k8s.io protocol=ttrpc version=3 May 27 02:48:57.623013 systemd-networkd[1890]: cali86582edeb65: Link UP May 27 02:48:57.625032 systemd-networkd[1890]: cali86582edeb65: Gained carrier May 27 02:48:57.716877 containerd[2001]: 2025-05-27 02:48:57.199 [INFO][4997] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--27--90-k8s-goldmane--78d55f7ddc--4qxmb-eth0 goldmane-78d55f7ddc- calico-system be3e00a4-5c9b-4f45-8c5d-bd5a86ce209c 861 0 2025-05-27 02:48:34 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:78d55f7ddc projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ip-172-31-27-90 goldmane-78d55f7ddc-4qxmb eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali86582edeb65 [] [] }} ContainerID="7d470dee1547eb7940fe2c5057a329d2002c6af8b065c13c9d25d433b2053ba0" Namespace="calico-system" Pod="goldmane-78d55f7ddc-4qxmb" WorkloadEndpoint="ip--172--31--27--90-k8s-goldmane--78d55f7ddc--4qxmb-" May 27 02:48:57.716877 containerd[2001]: 2025-05-27 02:48:57.200 [INFO][4997] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7d470dee1547eb7940fe2c5057a329d2002c6af8b065c13c9d25d433b2053ba0" Namespace="calico-system" 
Pod="goldmane-78d55f7ddc-4qxmb" WorkloadEndpoint="ip--172--31--27--90-k8s-goldmane--78d55f7ddc--4qxmb-eth0" May 27 02:48:57.716877 containerd[2001]: 2025-05-27 02:48:57.356 [INFO][5041] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7d470dee1547eb7940fe2c5057a329d2002c6af8b065c13c9d25d433b2053ba0" HandleID="k8s-pod-network.7d470dee1547eb7940fe2c5057a329d2002c6af8b065c13c9d25d433b2053ba0" Workload="ip--172--31--27--90-k8s-goldmane--78d55f7ddc--4qxmb-eth0" May 27 02:48:57.716877 containerd[2001]: 2025-05-27 02:48:57.358 [INFO][5041] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="7d470dee1547eb7940fe2c5057a329d2002c6af8b065c13c9d25d433b2053ba0" HandleID="k8s-pod-network.7d470dee1547eb7940fe2c5057a329d2002c6af8b065c13c9d25d433b2053ba0" Workload="ip--172--31--27--90-k8s-goldmane--78d55f7ddc--4qxmb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004cbc0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-27-90", "pod":"goldmane-78d55f7ddc-4qxmb", "timestamp":"2025-05-27 02:48:57.356682907 +0000 UTC"}, Hostname:"ip-172-31-27-90", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 27 02:48:57.716877 containerd[2001]: 2025-05-27 02:48:57.358 [INFO][5041] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 27 02:48:57.716877 containerd[2001]: 2025-05-27 02:48:57.434 [INFO][5041] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 27 02:48:57.716877 containerd[2001]: 2025-05-27 02:48:57.434 [INFO][5041] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-27-90' May 27 02:48:57.716877 containerd[2001]: 2025-05-27 02:48:57.471 [INFO][5041] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7d470dee1547eb7940fe2c5057a329d2002c6af8b065c13c9d25d433b2053ba0" host="ip-172-31-27-90" May 27 02:48:57.716877 containerd[2001]: 2025-05-27 02:48:57.501 [INFO][5041] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-27-90" May 27 02:48:57.716877 containerd[2001]: 2025-05-27 02:48:57.513 [INFO][5041] ipam/ipam.go 511: Trying affinity for 192.168.47.0/26 host="ip-172-31-27-90" May 27 02:48:57.716877 containerd[2001]: 2025-05-27 02:48:57.521 [INFO][5041] ipam/ipam.go 158: Attempting to load block cidr=192.168.47.0/26 host="ip-172-31-27-90" May 27 02:48:57.716877 containerd[2001]: 2025-05-27 02:48:57.527 [INFO][5041] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.47.0/26 host="ip-172-31-27-90" May 27 02:48:57.716877 containerd[2001]: 2025-05-27 02:48:57.528 [INFO][5041] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.47.0/26 handle="k8s-pod-network.7d470dee1547eb7940fe2c5057a329d2002c6af8b065c13c9d25d433b2053ba0" host="ip-172-31-27-90" May 27 02:48:57.716877 containerd[2001]: 2025-05-27 02:48:57.532 [INFO][5041] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.7d470dee1547eb7940fe2c5057a329d2002c6af8b065c13c9d25d433b2053ba0 May 27 02:48:57.716877 containerd[2001]: 2025-05-27 02:48:57.542 [INFO][5041] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.47.0/26 handle="k8s-pod-network.7d470dee1547eb7940fe2c5057a329d2002c6af8b065c13c9d25d433b2053ba0" host="ip-172-31-27-90" May 27 02:48:57.716877 containerd[2001]: 2025-05-27 02:48:57.593 [INFO][5041] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.47.4/26] block=192.168.47.0/26 
handle="k8s-pod-network.7d470dee1547eb7940fe2c5057a329d2002c6af8b065c13c9d25d433b2053ba0" host="ip-172-31-27-90" May 27 02:48:57.716877 containerd[2001]: 2025-05-27 02:48:57.593 [INFO][5041] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.47.4/26] handle="k8s-pod-network.7d470dee1547eb7940fe2c5057a329d2002c6af8b065c13c9d25d433b2053ba0" host="ip-172-31-27-90" May 27 02:48:57.716877 containerd[2001]: 2025-05-27 02:48:57.593 [INFO][5041] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 27 02:48:57.716877 containerd[2001]: 2025-05-27 02:48:57.593 [INFO][5041] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.47.4/26] IPv6=[] ContainerID="7d470dee1547eb7940fe2c5057a329d2002c6af8b065c13c9d25d433b2053ba0" HandleID="k8s-pod-network.7d470dee1547eb7940fe2c5057a329d2002c6af8b065c13c9d25d433b2053ba0" Workload="ip--172--31--27--90-k8s-goldmane--78d55f7ddc--4qxmb-eth0" May 27 02:48:57.719753 containerd[2001]: 2025-05-27 02:48:57.610 [INFO][4997] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7d470dee1547eb7940fe2c5057a329d2002c6af8b065c13c9d25d433b2053ba0" Namespace="calico-system" Pod="goldmane-78d55f7ddc-4qxmb" WorkloadEndpoint="ip--172--31--27--90-k8s-goldmane--78d55f7ddc--4qxmb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--27--90-k8s-goldmane--78d55f7ddc--4qxmb-eth0", GenerateName:"goldmane-78d55f7ddc-", Namespace:"calico-system", SelfLink:"", UID:"be3e00a4-5c9b-4f45-8c5d-bd5a86ce209c", ResourceVersion:"861", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 2, 48, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"78d55f7ddc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, 
Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-27-90", ContainerID:"", Pod:"goldmane-78d55f7ddc-4qxmb", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.47.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali86582edeb65", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 02:48:57.719753 containerd[2001]: 2025-05-27 02:48:57.610 [INFO][4997] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.47.4/32] ContainerID="7d470dee1547eb7940fe2c5057a329d2002c6af8b065c13c9d25d433b2053ba0" Namespace="calico-system" Pod="goldmane-78d55f7ddc-4qxmb" WorkloadEndpoint="ip--172--31--27--90-k8s-goldmane--78d55f7ddc--4qxmb-eth0" May 27 02:48:57.719753 containerd[2001]: 2025-05-27 02:48:57.610 [INFO][4997] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali86582edeb65 ContainerID="7d470dee1547eb7940fe2c5057a329d2002c6af8b065c13c9d25d433b2053ba0" Namespace="calico-system" Pod="goldmane-78d55f7ddc-4qxmb" WorkloadEndpoint="ip--172--31--27--90-k8s-goldmane--78d55f7ddc--4qxmb-eth0" May 27 02:48:57.719753 containerd[2001]: 2025-05-27 02:48:57.634 [INFO][4997] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7d470dee1547eb7940fe2c5057a329d2002c6af8b065c13c9d25d433b2053ba0" Namespace="calico-system" Pod="goldmane-78d55f7ddc-4qxmb" WorkloadEndpoint="ip--172--31--27--90-k8s-goldmane--78d55f7ddc--4qxmb-eth0" May 27 02:48:57.719753 containerd[2001]: 2025-05-27 02:48:57.638 [INFO][4997] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7d470dee1547eb7940fe2c5057a329d2002c6af8b065c13c9d25d433b2053ba0" 
Namespace="calico-system" Pod="goldmane-78d55f7ddc-4qxmb" WorkloadEndpoint="ip--172--31--27--90-k8s-goldmane--78d55f7ddc--4qxmb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--27--90-k8s-goldmane--78d55f7ddc--4qxmb-eth0", GenerateName:"goldmane-78d55f7ddc-", Namespace:"calico-system", SelfLink:"", UID:"be3e00a4-5c9b-4f45-8c5d-bd5a86ce209c", ResourceVersion:"861", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 2, 48, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"78d55f7ddc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-27-90", ContainerID:"7d470dee1547eb7940fe2c5057a329d2002c6af8b065c13c9d25d433b2053ba0", Pod:"goldmane-78d55f7ddc-4qxmb", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.47.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali86582edeb65", MAC:"e6:74:87:6c:81:be", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 02:48:57.719753 containerd[2001]: 2025-05-27 02:48:57.701 [INFO][4997] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7d470dee1547eb7940fe2c5057a329d2002c6af8b065c13c9d25d433b2053ba0" Namespace="calico-system" Pod="goldmane-78d55f7ddc-4qxmb" WorkloadEndpoint="ip--172--31--27--90-k8s-goldmane--78d55f7ddc--4qxmb-eth0" May 27 02:48:57.735398 
systemd[1]: Started cri-containerd-8e9794cf85f9441cfe9d9fa8d4b8a0bbd7de8026b94fb62ab54747a5d97c4da9.scope - libcontainer container 8e9794cf85f9441cfe9d9fa8d4b8a0bbd7de8026b94fb62ab54747a5d97c4da9. May 27 02:48:57.796219 containerd[2001]: time="2025-05-27T02:48:57.796090893Z" level=info msg="connecting to shim 7d470dee1547eb7940fe2c5057a329d2002c6af8b065c13c9d25d433b2053ba0" address="unix:///run/containerd/s/8ec589a396ac8e118abe05f07c192a0401aad2177320e0f7028aa37f04988aee" namespace=k8s.io protocol=ttrpc version=3 May 27 02:48:57.908163 systemd[1]: Started cri-containerd-7d470dee1547eb7940fe2c5057a329d2002c6af8b065c13c9d25d433b2053ba0.scope - libcontainer container 7d470dee1547eb7940fe2c5057a329d2002c6af8b065c13c9d25d433b2053ba0. May 27 02:48:57.952957 systemd-networkd[1890]: cali88354c4afc6: Link UP May 27 02:48:57.956525 systemd-networkd[1890]: cali88354c4afc6: Gained carrier May 27 02:48:58.020829 containerd[2001]: 2025-05-27 02:48:57.235 [INFO][4989] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--27--90-k8s-calico--apiserver--95d4cb8cc--lhfd7-eth0 calico-apiserver-95d4cb8cc- calico-apiserver 55e730ec-46ee-429f-81a6-a70c10e9a76c 862 0 2025-05-27 02:48:25 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:95d4cb8cc projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-27-90 calico-apiserver-95d4cb8cc-lhfd7 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali88354c4afc6 [] [] }} ContainerID="995aa3f7252ba631172f2f1d97512bdc0ffb04e84240ae05ad1f074a7e38bdd3" Namespace="calico-apiserver" Pod="calico-apiserver-95d4cb8cc-lhfd7" WorkloadEndpoint="ip--172--31--27--90-k8s-calico--apiserver--95d4cb8cc--lhfd7-" May 27 02:48:58.020829 containerd[2001]: 2025-05-27 02:48:57.238 [INFO][4989] 
cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="995aa3f7252ba631172f2f1d97512bdc0ffb04e84240ae05ad1f074a7e38bdd3" Namespace="calico-apiserver" Pod="calico-apiserver-95d4cb8cc-lhfd7" WorkloadEndpoint="ip--172--31--27--90-k8s-calico--apiserver--95d4cb8cc--lhfd7-eth0" May 27 02:48:58.020829 containerd[2001]: 2025-05-27 02:48:57.391 [INFO][5046] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="995aa3f7252ba631172f2f1d97512bdc0ffb04e84240ae05ad1f074a7e38bdd3" HandleID="k8s-pod-network.995aa3f7252ba631172f2f1d97512bdc0ffb04e84240ae05ad1f074a7e38bdd3" Workload="ip--172--31--27--90-k8s-calico--apiserver--95d4cb8cc--lhfd7-eth0" May 27 02:48:58.020829 containerd[2001]: 2025-05-27 02:48:57.392 [INFO][5046] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="995aa3f7252ba631172f2f1d97512bdc0ffb04e84240ae05ad1f074a7e38bdd3" HandleID="k8s-pod-network.995aa3f7252ba631172f2f1d97512bdc0ffb04e84240ae05ad1f074a7e38bdd3" Workload="ip--172--31--27--90-k8s-calico--apiserver--95d4cb8cc--lhfd7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002cfa60), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-27-90", "pod":"calico-apiserver-95d4cb8cc-lhfd7", "timestamp":"2025-05-27 02:48:57.391823395 +0000 UTC"}, Hostname:"ip-172-31-27-90", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 27 02:48:58.020829 containerd[2001]: 2025-05-27 02:48:57.392 [INFO][5046] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 27 02:48:58.020829 containerd[2001]: 2025-05-27 02:48:57.594 [INFO][5046] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 27 02:48:58.020829 containerd[2001]: 2025-05-27 02:48:57.594 [INFO][5046] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-27-90' May 27 02:48:58.020829 containerd[2001]: 2025-05-27 02:48:57.653 [INFO][5046] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.995aa3f7252ba631172f2f1d97512bdc0ffb04e84240ae05ad1f074a7e38bdd3" host="ip-172-31-27-90" May 27 02:48:58.020829 containerd[2001]: 2025-05-27 02:48:57.698 [INFO][5046] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-27-90" May 27 02:48:58.020829 containerd[2001]: 2025-05-27 02:48:57.767 [INFO][5046] ipam/ipam.go 511: Trying affinity for 192.168.47.0/26 host="ip-172-31-27-90" May 27 02:48:58.020829 containerd[2001]: 2025-05-27 02:48:57.787 [INFO][5046] ipam/ipam.go 158: Attempting to load block cidr=192.168.47.0/26 host="ip-172-31-27-90" May 27 02:48:58.020829 containerd[2001]: 2025-05-27 02:48:57.794 [INFO][5046] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.47.0/26 host="ip-172-31-27-90" May 27 02:48:58.020829 containerd[2001]: 2025-05-27 02:48:57.797 [INFO][5046] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.47.0/26 handle="k8s-pod-network.995aa3f7252ba631172f2f1d97512bdc0ffb04e84240ae05ad1f074a7e38bdd3" host="ip-172-31-27-90" May 27 02:48:58.020829 containerd[2001]: 2025-05-27 02:48:57.810 [INFO][5046] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.995aa3f7252ba631172f2f1d97512bdc0ffb04e84240ae05ad1f074a7e38bdd3 May 27 02:48:58.020829 containerd[2001]: 2025-05-27 02:48:57.828 [INFO][5046] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.47.0/26 handle="k8s-pod-network.995aa3f7252ba631172f2f1d97512bdc0ffb04e84240ae05ad1f074a7e38bdd3" host="ip-172-31-27-90" May 27 02:48:58.020829 containerd[2001]: 2025-05-27 02:48:57.925 [INFO][5046] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.47.5/26] block=192.168.47.0/26 
handle="k8s-pod-network.995aa3f7252ba631172f2f1d97512bdc0ffb04e84240ae05ad1f074a7e38bdd3" host="ip-172-31-27-90" May 27 02:48:58.020829 containerd[2001]: 2025-05-27 02:48:57.925 [INFO][5046] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.47.5/26] handle="k8s-pod-network.995aa3f7252ba631172f2f1d97512bdc0ffb04e84240ae05ad1f074a7e38bdd3" host="ip-172-31-27-90" May 27 02:48:58.020829 containerd[2001]: 2025-05-27 02:48:57.925 [INFO][5046] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 27 02:48:58.020829 containerd[2001]: 2025-05-27 02:48:57.925 [INFO][5046] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.47.5/26] IPv6=[] ContainerID="995aa3f7252ba631172f2f1d97512bdc0ffb04e84240ae05ad1f074a7e38bdd3" HandleID="k8s-pod-network.995aa3f7252ba631172f2f1d97512bdc0ffb04e84240ae05ad1f074a7e38bdd3" Workload="ip--172--31--27--90-k8s-calico--apiserver--95d4cb8cc--lhfd7-eth0" May 27 02:48:58.023758 containerd[2001]: 2025-05-27 02:48:57.932 [INFO][4989] cni-plugin/k8s.go 418: Populated endpoint ContainerID="995aa3f7252ba631172f2f1d97512bdc0ffb04e84240ae05ad1f074a7e38bdd3" Namespace="calico-apiserver" Pod="calico-apiserver-95d4cb8cc-lhfd7" WorkloadEndpoint="ip--172--31--27--90-k8s-calico--apiserver--95d4cb8cc--lhfd7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--27--90-k8s-calico--apiserver--95d4cb8cc--lhfd7-eth0", GenerateName:"calico-apiserver-95d4cb8cc-", Namespace:"calico-apiserver", SelfLink:"", UID:"55e730ec-46ee-429f-81a6-a70c10e9a76c", ResourceVersion:"862", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 2, 48, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"95d4cb8cc", "projectcalico.org/namespace":"calico-apiserver", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-27-90", ContainerID:"", Pod:"calico-apiserver-95d4cb8cc-lhfd7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.47.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali88354c4afc6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 02:48:58.023758 containerd[2001]: 2025-05-27 02:48:57.933 [INFO][4989] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.47.5/32] ContainerID="995aa3f7252ba631172f2f1d97512bdc0ffb04e84240ae05ad1f074a7e38bdd3" Namespace="calico-apiserver" Pod="calico-apiserver-95d4cb8cc-lhfd7" WorkloadEndpoint="ip--172--31--27--90-k8s-calico--apiserver--95d4cb8cc--lhfd7-eth0" May 27 02:48:58.023758 containerd[2001]: 2025-05-27 02:48:57.933 [INFO][4989] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali88354c4afc6 ContainerID="995aa3f7252ba631172f2f1d97512bdc0ffb04e84240ae05ad1f074a7e38bdd3" Namespace="calico-apiserver" Pod="calico-apiserver-95d4cb8cc-lhfd7" WorkloadEndpoint="ip--172--31--27--90-k8s-calico--apiserver--95d4cb8cc--lhfd7-eth0" May 27 02:48:58.023758 containerd[2001]: 2025-05-27 02:48:57.960 [INFO][4989] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="995aa3f7252ba631172f2f1d97512bdc0ffb04e84240ae05ad1f074a7e38bdd3" Namespace="calico-apiserver" Pod="calico-apiserver-95d4cb8cc-lhfd7" WorkloadEndpoint="ip--172--31--27--90-k8s-calico--apiserver--95d4cb8cc--lhfd7-eth0" May 27 02:48:58.023758 containerd[2001]: 2025-05-27 02:48:57.961 [INFO][4989] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="995aa3f7252ba631172f2f1d97512bdc0ffb04e84240ae05ad1f074a7e38bdd3" Namespace="calico-apiserver" Pod="calico-apiserver-95d4cb8cc-lhfd7" WorkloadEndpoint="ip--172--31--27--90-k8s-calico--apiserver--95d4cb8cc--lhfd7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--27--90-k8s-calico--apiserver--95d4cb8cc--lhfd7-eth0", GenerateName:"calico-apiserver-95d4cb8cc-", Namespace:"calico-apiserver", SelfLink:"", UID:"55e730ec-46ee-429f-81a6-a70c10e9a76c", ResourceVersion:"862", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 2, 48, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"95d4cb8cc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-27-90", ContainerID:"995aa3f7252ba631172f2f1d97512bdc0ffb04e84240ae05ad1f074a7e38bdd3", Pod:"calico-apiserver-95d4cb8cc-lhfd7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.47.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali88354c4afc6", MAC:"6a:b3:6f:e3:40:14", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 02:48:58.023758 containerd[2001]: 2025-05-27 02:48:58.015 [INFO][4989] cni-plugin/k8s.go 532: 
Wrote updated endpoint to datastore ContainerID="995aa3f7252ba631172f2f1d97512bdc0ffb04e84240ae05ad1f074a7e38bdd3" Namespace="calico-apiserver" Pod="calico-apiserver-95d4cb8cc-lhfd7" WorkloadEndpoint="ip--172--31--27--90-k8s-calico--apiserver--95d4cb8cc--lhfd7-eth0" May 27 02:48:58.099135 containerd[2001]: time="2025-05-27T02:48:58.099028243Z" level=info msg="connecting to shim 995aa3f7252ba631172f2f1d97512bdc0ffb04e84240ae05ad1f074a7e38bdd3" address="unix:///run/containerd/s/e9e398f33e9ea2eefcf0fb52ed538a6fdeac0557c84abac3c130e75ace0cbfb3" namespace=k8s.io protocol=ttrpc version=3 May 27 02:48:58.179860 systemd[1]: Started cri-containerd-995aa3f7252ba631172f2f1d97512bdc0ffb04e84240ae05ad1f074a7e38bdd3.scope - libcontainer container 995aa3f7252ba631172f2f1d97512bdc0ffb04e84240ae05ad1f074a7e38bdd3. May 27 02:48:58.278398 containerd[2001]: time="2025-05-27T02:48:58.278171672Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-95d4cb8cc-kn5fg,Uid:96dbf7ef-fdad-445b-a963-39e84fe9fd05,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"8e9794cf85f9441cfe9d9fa8d4b8a0bbd7de8026b94fb62ab54747a5d97c4da9\"" May 27 02:48:58.286154 containerd[2001]: time="2025-05-27T02:48:58.286092680Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.0\"" May 27 02:48:58.317488 containerd[2001]: time="2025-05-27T02:48:58.317372504Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-78d55f7ddc-4qxmb,Uid:be3e00a4-5c9b-4f45-8c5d-bd5a86ce209c,Namespace:calico-system,Attempt:0,} returns sandbox id \"7d470dee1547eb7940fe2c5057a329d2002c6af8b065c13c9d25d433b2053ba0\"" May 27 02:48:58.430929 containerd[2001]: time="2025-05-27T02:48:58.430418589Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-95d4cb8cc-lhfd7,Uid:55e730ec-46ee-429f-81a6-a70c10e9a76c,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"995aa3f7252ba631172f2f1d97512bdc0ffb04e84240ae05ad1f074a7e38bdd3\"" May 27 02:48:58.671186 
systemd-networkd[1890]: calie747cfe6def: Gained IPv6LL May 27 02:48:58.815786 containerd[2001]: time="2025-05-27T02:48:58.815500474Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-gcs7c,Uid:5fba4d43-bcf3-4cf3-ad99-96050afcbf37,Namespace:kube-system,Attempt:0,}" May 27 02:48:58.816221 containerd[2001]: time="2025-05-27T02:48:58.815500606Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-sx96z,Uid:5e891259-bbea-4c4e-9cf8-bdfb46083aeb,Namespace:calico-system,Attempt:0,}" May 27 02:48:58.817605 containerd[2001]: time="2025-05-27T02:48:58.817506826Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5979f88f5b-f6zrw,Uid:a4dbad33-e282-4ddd-bf2d-f118dac65c55,Namespace:calico-system,Attempt:0,}" May 27 02:48:59.056292 systemd-networkd[1890]: cali86582edeb65: Gained IPv6LL May 27 02:48:59.219125 systemd-networkd[1890]: cali0e1d84e5839: Link UP May 27 02:48:59.221182 systemd-networkd[1890]: cali0e1d84e5839: Gained carrier May 27 02:48:59.247169 systemd-networkd[1890]: cali88354c4afc6: Gained IPv6LL May 27 02:48:59.257979 containerd[2001]: 2025-05-27 02:48:58.961 [INFO][5223] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--27--90-k8s-csi--node--driver--sx96z-eth0 csi-node-driver- calico-system 5e891259-bbea-4c4e-9cf8-bdfb46083aeb 730 0 2025-05-27 02:48:33 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:78f6f74485 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ip-172-31-27-90 csi-node-driver-sx96z eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali0e1d84e5839 [] [] }} ContainerID="84e84282659bb8aa2bac8ec94421d44daf120681f060718858644f61ef1c4ec1" Namespace="calico-system" 
Pod="csi-node-driver-sx96z" WorkloadEndpoint="ip--172--31--27--90-k8s-csi--node--driver--sx96z-" May 27 02:48:59.257979 containerd[2001]: 2025-05-27 02:48:58.961 [INFO][5223] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="84e84282659bb8aa2bac8ec94421d44daf120681f060718858644f61ef1c4ec1" Namespace="calico-system" Pod="csi-node-driver-sx96z" WorkloadEndpoint="ip--172--31--27--90-k8s-csi--node--driver--sx96z-eth0" May 27 02:48:59.257979 containerd[2001]: 2025-05-27 02:48:59.053 [INFO][5258] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="84e84282659bb8aa2bac8ec94421d44daf120681f060718858644f61ef1c4ec1" HandleID="k8s-pod-network.84e84282659bb8aa2bac8ec94421d44daf120681f060718858644f61ef1c4ec1" Workload="ip--172--31--27--90-k8s-csi--node--driver--sx96z-eth0" May 27 02:48:59.257979 containerd[2001]: 2025-05-27 02:48:59.063 [INFO][5258] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="84e84282659bb8aa2bac8ec94421d44daf120681f060718858644f61ef1c4ec1" HandleID="k8s-pod-network.84e84282659bb8aa2bac8ec94421d44daf120681f060718858644f61ef1c4ec1" Workload="ip--172--31--27--90-k8s-csi--node--driver--sx96z-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003730a0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-27-90", "pod":"csi-node-driver-sx96z", "timestamp":"2025-05-27 02:48:59.053878844 +0000 UTC"}, Hostname:"ip-172-31-27-90", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 27 02:48:59.257979 containerd[2001]: 2025-05-27 02:48:59.064 [INFO][5258] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 27 02:48:59.257979 containerd[2001]: 2025-05-27 02:48:59.064 [INFO][5258] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 27 02:48:59.257979 containerd[2001]: 2025-05-27 02:48:59.064 [INFO][5258] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-27-90' May 27 02:48:59.257979 containerd[2001]: 2025-05-27 02:48:59.107 [INFO][5258] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.84e84282659bb8aa2bac8ec94421d44daf120681f060718858644f61ef1c4ec1" host="ip-172-31-27-90" May 27 02:48:59.257979 containerd[2001]: 2025-05-27 02:48:59.124 [INFO][5258] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-27-90" May 27 02:48:59.257979 containerd[2001]: 2025-05-27 02:48:59.138 [INFO][5258] ipam/ipam.go 511: Trying affinity for 192.168.47.0/26 host="ip-172-31-27-90" May 27 02:48:59.257979 containerd[2001]: 2025-05-27 02:48:59.143 [INFO][5258] ipam/ipam.go 158: Attempting to load block cidr=192.168.47.0/26 host="ip-172-31-27-90" May 27 02:48:59.257979 containerd[2001]: 2025-05-27 02:48:59.152 [INFO][5258] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.47.0/26 host="ip-172-31-27-90" May 27 02:48:59.257979 containerd[2001]: 2025-05-27 02:48:59.154 [INFO][5258] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.47.0/26 handle="k8s-pod-network.84e84282659bb8aa2bac8ec94421d44daf120681f060718858644f61ef1c4ec1" host="ip-172-31-27-90" May 27 02:48:59.257979 containerd[2001]: 2025-05-27 02:48:59.159 [INFO][5258] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.84e84282659bb8aa2bac8ec94421d44daf120681f060718858644f61ef1c4ec1 May 27 02:48:59.257979 containerd[2001]: 2025-05-27 02:48:59.172 [INFO][5258] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.47.0/26 handle="k8s-pod-network.84e84282659bb8aa2bac8ec94421d44daf120681f060718858644f61ef1c4ec1" host="ip-172-31-27-90" May 27 02:48:59.257979 containerd[2001]: 2025-05-27 02:48:59.191 [INFO][5258] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.47.6/26] block=192.168.47.0/26 
handle="k8s-pod-network.84e84282659bb8aa2bac8ec94421d44daf120681f060718858644f61ef1c4ec1" host="ip-172-31-27-90" May 27 02:48:59.257979 containerd[2001]: 2025-05-27 02:48:59.191 [INFO][5258] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.47.6/26] handle="k8s-pod-network.84e84282659bb8aa2bac8ec94421d44daf120681f060718858644f61ef1c4ec1" host="ip-172-31-27-90" May 27 02:48:59.257979 containerd[2001]: 2025-05-27 02:48:59.191 [INFO][5258] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 27 02:48:59.257979 containerd[2001]: 2025-05-27 02:48:59.191 [INFO][5258] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.47.6/26] IPv6=[] ContainerID="84e84282659bb8aa2bac8ec94421d44daf120681f060718858644f61ef1c4ec1" HandleID="k8s-pod-network.84e84282659bb8aa2bac8ec94421d44daf120681f060718858644f61ef1c4ec1" Workload="ip--172--31--27--90-k8s-csi--node--driver--sx96z-eth0" May 27 02:48:59.261058 containerd[2001]: 2025-05-27 02:48:59.201 [INFO][5223] cni-plugin/k8s.go 418: Populated endpoint ContainerID="84e84282659bb8aa2bac8ec94421d44daf120681f060718858644f61ef1c4ec1" Namespace="calico-system" Pod="csi-node-driver-sx96z" WorkloadEndpoint="ip--172--31--27--90-k8s-csi--node--driver--sx96z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--27--90-k8s-csi--node--driver--sx96z-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"5e891259-bbea-4c4e-9cf8-bdfb46083aeb", ResourceVersion:"730", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 2, 48, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78f6f74485", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-27-90", ContainerID:"", Pod:"csi-node-driver-sx96z", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.47.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali0e1d84e5839", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 02:48:59.261058 containerd[2001]: 2025-05-27 02:48:59.206 [INFO][5223] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.47.6/32] ContainerID="84e84282659bb8aa2bac8ec94421d44daf120681f060718858644f61ef1c4ec1" Namespace="calico-system" Pod="csi-node-driver-sx96z" WorkloadEndpoint="ip--172--31--27--90-k8s-csi--node--driver--sx96z-eth0" May 27 02:48:59.261058 containerd[2001]: 2025-05-27 02:48:59.207 [INFO][5223] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0e1d84e5839 ContainerID="84e84282659bb8aa2bac8ec94421d44daf120681f060718858644f61ef1c4ec1" Namespace="calico-system" Pod="csi-node-driver-sx96z" WorkloadEndpoint="ip--172--31--27--90-k8s-csi--node--driver--sx96z-eth0" May 27 02:48:59.261058 containerd[2001]: 2025-05-27 02:48:59.222 [INFO][5223] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="84e84282659bb8aa2bac8ec94421d44daf120681f060718858644f61ef1c4ec1" Namespace="calico-system" Pod="csi-node-driver-sx96z" WorkloadEndpoint="ip--172--31--27--90-k8s-csi--node--driver--sx96z-eth0" May 27 02:48:59.261058 containerd[2001]: 2025-05-27 02:48:59.223 [INFO][5223] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="84e84282659bb8aa2bac8ec94421d44daf120681f060718858644f61ef1c4ec1" Namespace="calico-system" Pod="csi-node-driver-sx96z" WorkloadEndpoint="ip--172--31--27--90-k8s-csi--node--driver--sx96z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--27--90-k8s-csi--node--driver--sx96z-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"5e891259-bbea-4c4e-9cf8-bdfb46083aeb", ResourceVersion:"730", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 2, 48, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78f6f74485", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-27-90", ContainerID:"84e84282659bb8aa2bac8ec94421d44daf120681f060718858644f61ef1c4ec1", Pod:"csi-node-driver-sx96z", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.47.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali0e1d84e5839", MAC:"6e:0f:2e:4e:91:a3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 02:48:59.261058 containerd[2001]: 2025-05-27 02:48:59.252 [INFO][5223] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="84e84282659bb8aa2bac8ec94421d44daf120681f060718858644f61ef1c4ec1" 
Namespace="calico-system" Pod="csi-node-driver-sx96z" WorkloadEndpoint="ip--172--31--27--90-k8s-csi--node--driver--sx96z-eth0" May 27 02:48:59.349001 containerd[2001]: time="2025-05-27T02:48:59.348917085Z" level=info msg="connecting to shim 84e84282659bb8aa2bac8ec94421d44daf120681f060718858644f61ef1c4ec1" address="unix:///run/containerd/s/aacb8463505f880757018a3f35d19253061847b8ded5bb08aeb8d3e973ab62b3" namespace=k8s.io protocol=ttrpc version=3 May 27 02:48:59.391006 systemd-networkd[1890]: cali8c20b907150: Link UP May 27 02:48:59.400156 systemd-networkd[1890]: cali8c20b907150: Gained carrier May 27 02:48:59.453209 systemd[1]: Started cri-containerd-84e84282659bb8aa2bac8ec94421d44daf120681f060718858644f61ef1c4ec1.scope - libcontainer container 84e84282659bb8aa2bac8ec94421d44daf120681f060718858644f61ef1c4ec1. May 27 02:48:59.462311 containerd[2001]: 2025-05-27 02:48:59.004 [INFO][5228] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--27--90-k8s-coredns--674b8bbfcf--gcs7c-eth0 coredns-674b8bbfcf- kube-system 5fba4d43-bcf3-4cf3-ad99-96050afcbf37 857 0 2025-05-27 02:48:10 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-27-90 coredns-674b8bbfcf-gcs7c eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali8c20b907150 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="b61ca48fcfb498fce4af9420bce157c30fad76dfa91245ad6a5ac09fca4fa372" Namespace="kube-system" Pod="coredns-674b8bbfcf-gcs7c" WorkloadEndpoint="ip--172--31--27--90-k8s-coredns--674b8bbfcf--gcs7c-" May 27 02:48:59.462311 containerd[2001]: 2025-05-27 02:48:59.006 [INFO][5228] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b61ca48fcfb498fce4af9420bce157c30fad76dfa91245ad6a5ac09fca4fa372" Namespace="kube-system" 
Pod="coredns-674b8bbfcf-gcs7c" WorkloadEndpoint="ip--172--31--27--90-k8s-coredns--674b8bbfcf--gcs7c-eth0" May 27 02:48:59.462311 containerd[2001]: 2025-05-27 02:48:59.130 [INFO][5267] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b61ca48fcfb498fce4af9420bce157c30fad76dfa91245ad6a5ac09fca4fa372" HandleID="k8s-pod-network.b61ca48fcfb498fce4af9420bce157c30fad76dfa91245ad6a5ac09fca4fa372" Workload="ip--172--31--27--90-k8s-coredns--674b8bbfcf--gcs7c-eth0" May 27 02:48:59.462311 containerd[2001]: 2025-05-27 02:48:59.132 [INFO][5267] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b61ca48fcfb498fce4af9420bce157c30fad76dfa91245ad6a5ac09fca4fa372" HandleID="k8s-pod-network.b61ca48fcfb498fce4af9420bce157c30fad76dfa91245ad6a5ac09fca4fa372" Workload="ip--172--31--27--90-k8s-coredns--674b8bbfcf--gcs7c-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000332140), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-27-90", "pod":"coredns-674b8bbfcf-gcs7c", "timestamp":"2025-05-27 02:48:59.130655324 +0000 UTC"}, Hostname:"ip-172-31-27-90", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 27 02:48:59.462311 containerd[2001]: 2025-05-27 02:48:59.132 [INFO][5267] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 27 02:48:59.462311 containerd[2001]: 2025-05-27 02:48:59.191 [INFO][5267] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 27 02:48:59.462311 containerd[2001]: 2025-05-27 02:48:59.192 [INFO][5267] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-27-90' May 27 02:48:59.462311 containerd[2001]: 2025-05-27 02:48:59.260 [INFO][5267] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b61ca48fcfb498fce4af9420bce157c30fad76dfa91245ad6a5ac09fca4fa372" host="ip-172-31-27-90" May 27 02:48:59.462311 containerd[2001]: 2025-05-27 02:48:59.277 [INFO][5267] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-27-90" May 27 02:48:59.462311 containerd[2001]: 2025-05-27 02:48:59.292 [INFO][5267] ipam/ipam.go 511: Trying affinity for 192.168.47.0/26 host="ip-172-31-27-90" May 27 02:48:59.462311 containerd[2001]: 2025-05-27 02:48:59.297 [INFO][5267] ipam/ipam.go 158: Attempting to load block cidr=192.168.47.0/26 host="ip-172-31-27-90" May 27 02:48:59.462311 containerd[2001]: 2025-05-27 02:48:59.305 [INFO][5267] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.47.0/26 host="ip-172-31-27-90" May 27 02:48:59.462311 containerd[2001]: 2025-05-27 02:48:59.305 [INFO][5267] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.47.0/26 handle="k8s-pod-network.b61ca48fcfb498fce4af9420bce157c30fad76dfa91245ad6a5ac09fca4fa372" host="ip-172-31-27-90" May 27 02:48:59.462311 containerd[2001]: 2025-05-27 02:48:59.316 [INFO][5267] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.b61ca48fcfb498fce4af9420bce157c30fad76dfa91245ad6a5ac09fca4fa372 May 27 02:48:59.462311 containerd[2001]: 2025-05-27 02:48:59.337 [INFO][5267] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.47.0/26 handle="k8s-pod-network.b61ca48fcfb498fce4af9420bce157c30fad76dfa91245ad6a5ac09fca4fa372" host="ip-172-31-27-90" May 27 02:48:59.462311 containerd[2001]: 2025-05-27 02:48:59.359 [INFO][5267] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.47.7/26] block=192.168.47.0/26 
handle="k8s-pod-network.b61ca48fcfb498fce4af9420bce157c30fad76dfa91245ad6a5ac09fca4fa372" host="ip-172-31-27-90" May 27 02:48:59.462311 containerd[2001]: 2025-05-27 02:48:59.360 [INFO][5267] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.47.7/26] handle="k8s-pod-network.b61ca48fcfb498fce4af9420bce157c30fad76dfa91245ad6a5ac09fca4fa372" host="ip-172-31-27-90" May 27 02:48:59.462311 containerd[2001]: 2025-05-27 02:48:59.360 [INFO][5267] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 27 02:48:59.462311 containerd[2001]: 2025-05-27 02:48:59.360 [INFO][5267] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.47.7/26] IPv6=[] ContainerID="b61ca48fcfb498fce4af9420bce157c30fad76dfa91245ad6a5ac09fca4fa372" HandleID="k8s-pod-network.b61ca48fcfb498fce4af9420bce157c30fad76dfa91245ad6a5ac09fca4fa372" Workload="ip--172--31--27--90-k8s-coredns--674b8bbfcf--gcs7c-eth0" May 27 02:48:59.466088 containerd[2001]: 2025-05-27 02:48:59.369 [INFO][5228] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b61ca48fcfb498fce4af9420bce157c30fad76dfa91245ad6a5ac09fca4fa372" Namespace="kube-system" Pod="coredns-674b8bbfcf-gcs7c" WorkloadEndpoint="ip--172--31--27--90-k8s-coredns--674b8bbfcf--gcs7c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--27--90-k8s-coredns--674b8bbfcf--gcs7c-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"5fba4d43-bcf3-4cf3-ad99-96050afcbf37", ResourceVersion:"857", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 2, 48, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-27-90", ContainerID:"", Pod:"coredns-674b8bbfcf-gcs7c", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.47.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8c20b907150", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 02:48:59.466088 containerd[2001]: 2025-05-27 02:48:59.371 [INFO][5228] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.47.7/32] ContainerID="b61ca48fcfb498fce4af9420bce157c30fad76dfa91245ad6a5ac09fca4fa372" Namespace="kube-system" Pod="coredns-674b8bbfcf-gcs7c" WorkloadEndpoint="ip--172--31--27--90-k8s-coredns--674b8bbfcf--gcs7c-eth0" May 27 02:48:59.466088 containerd[2001]: 2025-05-27 02:48:59.371 [INFO][5228] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8c20b907150 ContainerID="b61ca48fcfb498fce4af9420bce157c30fad76dfa91245ad6a5ac09fca4fa372" Namespace="kube-system" Pod="coredns-674b8bbfcf-gcs7c" WorkloadEndpoint="ip--172--31--27--90-k8s-coredns--674b8bbfcf--gcs7c-eth0" May 27 02:48:59.466088 containerd[2001]: 2025-05-27 02:48:59.398 [INFO][5228] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b61ca48fcfb498fce4af9420bce157c30fad76dfa91245ad6a5ac09fca4fa372" Namespace="kube-system" 
Pod="coredns-674b8bbfcf-gcs7c" WorkloadEndpoint="ip--172--31--27--90-k8s-coredns--674b8bbfcf--gcs7c-eth0" May 27 02:48:59.466088 containerd[2001]: 2025-05-27 02:48:59.403 [INFO][5228] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b61ca48fcfb498fce4af9420bce157c30fad76dfa91245ad6a5ac09fca4fa372" Namespace="kube-system" Pod="coredns-674b8bbfcf-gcs7c" WorkloadEndpoint="ip--172--31--27--90-k8s-coredns--674b8bbfcf--gcs7c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--27--90-k8s-coredns--674b8bbfcf--gcs7c-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"5fba4d43-bcf3-4cf3-ad99-96050afcbf37", ResourceVersion:"857", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 2, 48, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-27-90", ContainerID:"b61ca48fcfb498fce4af9420bce157c30fad76dfa91245ad6a5ac09fca4fa372", Pod:"coredns-674b8bbfcf-gcs7c", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.47.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8c20b907150", MAC:"c6:85:f2:b8:9f:26", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 02:48:59.466088 containerd[2001]: 2025-05-27 02:48:59.446 [INFO][5228] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b61ca48fcfb498fce4af9420bce157c30fad76dfa91245ad6a5ac09fca4fa372" Namespace="kube-system" Pod="coredns-674b8bbfcf-gcs7c" WorkloadEndpoint="ip--172--31--27--90-k8s-coredns--674b8bbfcf--gcs7c-eth0" May 27 02:48:59.547009 containerd[2001]: time="2025-05-27T02:48:59.546177154Z" level=info msg="connecting to shim b61ca48fcfb498fce4af9420bce157c30fad76dfa91245ad6a5ac09fca4fa372" address="unix:///run/containerd/s/b6dfb9df63982d18170560f7ee41396e3cfb6c8b2b5c4eee40e56641be317b47" namespace=k8s.io protocol=ttrpc version=3 May 27 02:48:59.573204 systemd-networkd[1890]: calie4c58fc3ad5: Link UP May 27 02:48:59.579443 systemd-networkd[1890]: calie4c58fc3ad5: Gained carrier May 27 02:48:59.659462 containerd[2001]: 2025-05-27 02:48:59.050 [INFO][5243] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--27--90-k8s-calico--kube--controllers--5979f88f5b--f6zrw-eth0 calico-kube-controllers-5979f88f5b- calico-system a4dbad33-e282-4ddd-bf2d-f118dac65c55 860 0 2025-05-27 02:48:34 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:5979f88f5b projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ip-172-31-27-90 calico-kube-controllers-5979f88f5b-f6zrw eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] 
calie4c58fc3ad5 [] [] }} ContainerID="b7a1917653e0487ff2cffc8cb5ffb7e6732092e600ed9685b25757bbbff37a82" Namespace="calico-system" Pod="calico-kube-controllers-5979f88f5b-f6zrw" WorkloadEndpoint="ip--172--31--27--90-k8s-calico--kube--controllers--5979f88f5b--f6zrw-" May 27 02:48:59.659462 containerd[2001]: 2025-05-27 02:48:59.051 [INFO][5243] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b7a1917653e0487ff2cffc8cb5ffb7e6732092e600ed9685b25757bbbff37a82" Namespace="calico-system" Pod="calico-kube-controllers-5979f88f5b-f6zrw" WorkloadEndpoint="ip--172--31--27--90-k8s-calico--kube--controllers--5979f88f5b--f6zrw-eth0" May 27 02:48:59.659462 containerd[2001]: 2025-05-27 02:48:59.197 [INFO][5275] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b7a1917653e0487ff2cffc8cb5ffb7e6732092e600ed9685b25757bbbff37a82" HandleID="k8s-pod-network.b7a1917653e0487ff2cffc8cb5ffb7e6732092e600ed9685b25757bbbff37a82" Workload="ip--172--31--27--90-k8s-calico--kube--controllers--5979f88f5b--f6zrw-eth0" May 27 02:48:59.659462 containerd[2001]: 2025-05-27 02:48:59.198 [INFO][5275] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b7a1917653e0487ff2cffc8cb5ffb7e6732092e600ed9685b25757bbbff37a82" HandleID="k8s-pod-network.b7a1917653e0487ff2cffc8cb5ffb7e6732092e600ed9685b25757bbbff37a82" Workload="ip--172--31--27--90-k8s-calico--kube--controllers--5979f88f5b--f6zrw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40000dca70), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-27-90", "pod":"calico-kube-controllers-5979f88f5b-f6zrw", "timestamp":"2025-05-27 02:48:59.197795864 +0000 UTC"}, Hostname:"ip-172-31-27-90", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 27 02:48:59.659462 containerd[2001]: 2025-05-27 02:48:59.198 [INFO][5275] 
ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 27 02:48:59.659462 containerd[2001]: 2025-05-27 02:48:59.360 [INFO][5275] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 27 02:48:59.659462 containerd[2001]: 2025-05-27 02:48:59.361 [INFO][5275] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-27-90' May 27 02:48:59.659462 containerd[2001]: 2025-05-27 02:48:59.420 [INFO][5275] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b7a1917653e0487ff2cffc8cb5ffb7e6732092e600ed9685b25757bbbff37a82" host="ip-172-31-27-90" May 27 02:48:59.659462 containerd[2001]: 2025-05-27 02:48:59.444 [INFO][5275] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-27-90" May 27 02:48:59.659462 containerd[2001]: 2025-05-27 02:48:59.474 [INFO][5275] ipam/ipam.go 511: Trying affinity for 192.168.47.0/26 host="ip-172-31-27-90" May 27 02:48:59.659462 containerd[2001]: 2025-05-27 02:48:59.491 [INFO][5275] ipam/ipam.go 158: Attempting to load block cidr=192.168.47.0/26 host="ip-172-31-27-90" May 27 02:48:59.659462 containerd[2001]: 2025-05-27 02:48:59.500 [INFO][5275] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.47.0/26 host="ip-172-31-27-90" May 27 02:48:59.659462 containerd[2001]: 2025-05-27 02:48:59.500 [INFO][5275] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.47.0/26 handle="k8s-pod-network.b7a1917653e0487ff2cffc8cb5ffb7e6732092e600ed9685b25757bbbff37a82" host="ip-172-31-27-90" May 27 02:48:59.659462 containerd[2001]: 2025-05-27 02:48:59.504 [INFO][5275] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.b7a1917653e0487ff2cffc8cb5ffb7e6732092e600ed9685b25757bbbff37a82 May 27 02:48:59.659462 containerd[2001]: 2025-05-27 02:48:59.513 [INFO][5275] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.47.0/26 handle="k8s-pod-network.b7a1917653e0487ff2cffc8cb5ffb7e6732092e600ed9685b25757bbbff37a82" 
host="ip-172-31-27-90" May 27 02:48:59.659462 containerd[2001]: 2025-05-27 02:48:59.543 [INFO][5275] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.47.8/26] block=192.168.47.0/26 handle="k8s-pod-network.b7a1917653e0487ff2cffc8cb5ffb7e6732092e600ed9685b25757bbbff37a82" host="ip-172-31-27-90" May 27 02:48:59.659462 containerd[2001]: 2025-05-27 02:48:59.544 [INFO][5275] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.47.8/26] handle="k8s-pod-network.b7a1917653e0487ff2cffc8cb5ffb7e6732092e600ed9685b25757bbbff37a82" host="ip-172-31-27-90" May 27 02:48:59.659462 containerd[2001]: 2025-05-27 02:48:59.544 [INFO][5275] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 27 02:48:59.659462 containerd[2001]: 2025-05-27 02:48:59.544 [INFO][5275] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.47.8/26] IPv6=[] ContainerID="b7a1917653e0487ff2cffc8cb5ffb7e6732092e600ed9685b25757bbbff37a82" HandleID="k8s-pod-network.b7a1917653e0487ff2cffc8cb5ffb7e6732092e600ed9685b25757bbbff37a82" Workload="ip--172--31--27--90-k8s-calico--kube--controllers--5979f88f5b--f6zrw-eth0" May 27 02:48:59.663355 containerd[2001]: 2025-05-27 02:48:59.562 [INFO][5243] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b7a1917653e0487ff2cffc8cb5ffb7e6732092e600ed9685b25757bbbff37a82" Namespace="calico-system" Pod="calico-kube-controllers-5979f88f5b-f6zrw" WorkloadEndpoint="ip--172--31--27--90-k8s-calico--kube--controllers--5979f88f5b--f6zrw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--27--90-k8s-calico--kube--controllers--5979f88f5b--f6zrw-eth0", GenerateName:"calico-kube-controllers-5979f88f5b-", Namespace:"calico-system", SelfLink:"", UID:"a4dbad33-e282-4ddd-bf2d-f118dac65c55", ResourceVersion:"860", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 2, 48, 34, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5979f88f5b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-27-90", ContainerID:"", Pod:"calico-kube-controllers-5979f88f5b-f6zrw", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.47.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calie4c58fc3ad5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 02:48:59.663355 containerd[2001]: 2025-05-27 02:48:59.562 [INFO][5243] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.47.8/32] ContainerID="b7a1917653e0487ff2cffc8cb5ffb7e6732092e600ed9685b25757bbbff37a82" Namespace="calico-system" Pod="calico-kube-controllers-5979f88f5b-f6zrw" WorkloadEndpoint="ip--172--31--27--90-k8s-calico--kube--controllers--5979f88f5b--f6zrw-eth0" May 27 02:48:59.663355 containerd[2001]: 2025-05-27 02:48:59.562 [INFO][5243] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie4c58fc3ad5 ContainerID="b7a1917653e0487ff2cffc8cb5ffb7e6732092e600ed9685b25757bbbff37a82" Namespace="calico-system" Pod="calico-kube-controllers-5979f88f5b-f6zrw" WorkloadEndpoint="ip--172--31--27--90-k8s-calico--kube--controllers--5979f88f5b--f6zrw-eth0" May 27 02:48:59.663355 containerd[2001]: 2025-05-27 02:48:59.587 [INFO][5243] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="b7a1917653e0487ff2cffc8cb5ffb7e6732092e600ed9685b25757bbbff37a82" Namespace="calico-system" Pod="calico-kube-controllers-5979f88f5b-f6zrw" WorkloadEndpoint="ip--172--31--27--90-k8s-calico--kube--controllers--5979f88f5b--f6zrw-eth0" May 27 02:48:59.663355 containerd[2001]: 2025-05-27 02:48:59.596 [INFO][5243] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b7a1917653e0487ff2cffc8cb5ffb7e6732092e600ed9685b25757bbbff37a82" Namespace="calico-system" Pod="calico-kube-controllers-5979f88f5b-f6zrw" WorkloadEndpoint="ip--172--31--27--90-k8s-calico--kube--controllers--5979f88f5b--f6zrw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--27--90-k8s-calico--kube--controllers--5979f88f5b--f6zrw-eth0", GenerateName:"calico-kube-controllers-5979f88f5b-", Namespace:"calico-system", SelfLink:"", UID:"a4dbad33-e282-4ddd-bf2d-f118dac65c55", ResourceVersion:"860", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 2, 48, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5979f88f5b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-27-90", ContainerID:"b7a1917653e0487ff2cffc8cb5ffb7e6732092e600ed9685b25757bbbff37a82", Pod:"calico-kube-controllers-5979f88f5b-f6zrw", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.47.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calie4c58fc3ad5", MAC:"3a:b4:6b:c7:5c:10", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 02:48:59.663355 containerd[2001]: 2025-05-27 02:48:59.640 [INFO][5243] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b7a1917653e0487ff2cffc8cb5ffb7e6732092e600ed9685b25757bbbff37a82" Namespace="calico-system" Pod="calico-kube-controllers-5979f88f5b-f6zrw" WorkloadEndpoint="ip--172--31--27--90-k8s-calico--kube--controllers--5979f88f5b--f6zrw-eth0" May 27 02:48:59.668023 containerd[2001]: time="2025-05-27T02:48:59.667739867Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-sx96z,Uid:5e891259-bbea-4c4e-9cf8-bdfb46083aeb,Namespace:calico-system,Attempt:0,} returns sandbox id \"84e84282659bb8aa2bac8ec94421d44daf120681f060718858644f61ef1c4ec1\"" May 27 02:48:59.724514 systemd[1]: Started cri-containerd-b61ca48fcfb498fce4af9420bce157c30fad76dfa91245ad6a5ac09fca4fa372.scope - libcontainer container b61ca48fcfb498fce4af9420bce157c30fad76dfa91245ad6a5ac09fca4fa372. May 27 02:48:59.798157 containerd[2001]: time="2025-05-27T02:48:59.797558903Z" level=info msg="connecting to shim b7a1917653e0487ff2cffc8cb5ffb7e6732092e600ed9685b25757bbbff37a82" address="unix:///run/containerd/s/8203b574862a153a823fe8195502401a0bb3aa57fe86fdb59aff73bf6b9cc55d" namespace=k8s.io protocol=ttrpc version=3 May 27 02:48:59.905463 systemd[1]: Started cri-containerd-b7a1917653e0487ff2cffc8cb5ffb7e6732092e600ed9685b25757bbbff37a82.scope - libcontainer container b7a1917653e0487ff2cffc8cb5ffb7e6732092e600ed9685b25757bbbff37a82. 
May 27 02:48:59.991284 containerd[2001]: time="2025-05-27T02:48:59.990661200Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-gcs7c,Uid:5fba4d43-bcf3-4cf3-ad99-96050afcbf37,Namespace:kube-system,Attempt:0,} returns sandbox id \"b61ca48fcfb498fce4af9420bce157c30fad76dfa91245ad6a5ac09fca4fa372\"" May 27 02:49:00.002897 containerd[2001]: time="2025-05-27T02:49:00.002659556Z" level=info msg="CreateContainer within sandbox \"b61ca48fcfb498fce4af9420bce157c30fad76dfa91245ad6a5ac09fca4fa372\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 27 02:49:00.033237 containerd[2001]: time="2025-05-27T02:49:00.033147668Z" level=info msg="Container 6d983c7dc530131e95b22c27900ad1f4c360e776e900209d9007146c49c73be1: CDI devices from CRI Config.CDIDevices: []" May 27 02:49:00.066000 containerd[2001]: time="2025-05-27T02:49:00.064495425Z" level=info msg="CreateContainer within sandbox \"b61ca48fcfb498fce4af9420bce157c30fad76dfa91245ad6a5ac09fca4fa372\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"6d983c7dc530131e95b22c27900ad1f4c360e776e900209d9007146c49c73be1\"" May 27 02:49:00.067861 containerd[2001]: time="2025-05-27T02:49:00.067609821Z" level=info msg="StartContainer for \"6d983c7dc530131e95b22c27900ad1f4c360e776e900209d9007146c49c73be1\"" May 27 02:49:00.071700 containerd[2001]: time="2025-05-27T02:49:00.071148333Z" level=info msg="connecting to shim 6d983c7dc530131e95b22c27900ad1f4c360e776e900209d9007146c49c73be1" address="unix:///run/containerd/s/b6dfb9df63982d18170560f7ee41396e3cfb6c8b2b5c4eee40e56641be317b47" protocol=ttrpc version=3 May 27 02:49:00.184646 systemd[1]: Started cri-containerd-6d983c7dc530131e95b22c27900ad1f4c360e776e900209d9007146c49c73be1.scope - libcontainer container 6d983c7dc530131e95b22c27900ad1f4c360e776e900209d9007146c49c73be1. 
May 27 02:49:00.395860 containerd[2001]: time="2025-05-27T02:49:00.394730626Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5979f88f5b-f6zrw,Uid:a4dbad33-e282-4ddd-bf2d-f118dac65c55,Namespace:calico-system,Attempt:0,} returns sandbox id \"b7a1917653e0487ff2cffc8cb5ffb7e6732092e600ed9685b25757bbbff37a82\"" May 27 02:49:00.400101 containerd[2001]: time="2025-05-27T02:49:00.398309530Z" level=info msg="StartContainer for \"6d983c7dc530131e95b22c27900ad1f4c360e776e900209d9007146c49c73be1\" returns successfully" May 27 02:49:00.847226 systemd-networkd[1890]: cali0e1d84e5839: Gained IPv6LL May 27 02:49:01.039188 systemd-networkd[1890]: cali8c20b907150: Gained IPv6LL May 27 02:49:01.419722 kubelet[3467]: I0527 02:49:01.419621 3467 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-gcs7c" podStartSLOduration=51.419597291 podStartE2EDuration="51.419597291s" podCreationTimestamp="2025-05-27 02:48:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 02:49:01.375405623 +0000 UTC m=+55.835943830" watchObservedRunningTime="2025-05-27 02:49:01.419597291 +0000 UTC m=+55.880135498" May 27 02:49:01.488096 systemd-networkd[1890]: calie4c58fc3ad5: Gained IPv6LL May 27 02:49:01.914431 containerd[2001]: time="2025-05-27T02:49:01.914280962Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 02:49:01.917567 containerd[2001]: time="2025-05-27T02:49:01.917303246Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.0: active requests=0, bytes read=44453213" May 27 02:49:01.919963 containerd[2001]: time="2025-05-27T02:49:01.919791638Z" level=info msg="ImageCreate event name:\"sha256:0d503660232383641bf9af3b7e4ef066c0e96a8ec586f123e5b56b6a196c983d\" labels:{key:\"io.cri-containerd.image\" 
value:\"managed\"}" May 27 02:49:01.927701 containerd[2001]: time="2025-05-27T02:49:01.927198662Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:ad7d2e76f15777636c5d91c108d7655659b38fe8970255050ffa51223eb96ff4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 02:49:01.929533 containerd[2001]: time="2025-05-27T02:49:01.928984994Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.0\" with image id \"sha256:0d503660232383641bf9af3b7e4ef066c0e96a8ec586f123e5b56b6a196c983d\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ad7d2e76f15777636c5d91c108d7655659b38fe8970255050ffa51223eb96ff4\", size \"45822470\" in 3.642826266s" May 27 02:49:01.929533 containerd[2001]: time="2025-05-27T02:49:01.929040806Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.0\" returns image reference \"sha256:0d503660232383641bf9af3b7e4ef066c0e96a8ec586f123e5b56b6a196c983d\"" May 27 02:49:01.933022 containerd[2001]: time="2025-05-27T02:49:01.932678978Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\"" May 27 02:49:01.949534 containerd[2001]: time="2025-05-27T02:49:01.947876894Z" level=info msg="CreateContainer within sandbox \"8e9794cf85f9441cfe9d9fa8d4b8a0bbd7de8026b94fb62ab54747a5d97c4da9\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 27 02:49:01.994534 containerd[2001]: time="2025-05-27T02:49:01.994347086Z" level=info msg="Container bb57a7726fa872132c06471bc40a673ba68e6406ccee83d4914adf3bd5223c4a: CDI devices from CRI Config.CDIDevices: []" May 27 02:49:01.996371 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3804988526.mount: Deactivated successfully. 
May 27 02:49:02.018610 containerd[2001]: time="2025-05-27T02:49:02.018500302Z" level=info msg="CreateContainer within sandbox \"8e9794cf85f9441cfe9d9fa8d4b8a0bbd7de8026b94fb62ab54747a5d97c4da9\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"bb57a7726fa872132c06471bc40a673ba68e6406ccee83d4914adf3bd5223c4a\"" May 27 02:49:02.020300 containerd[2001]: time="2025-05-27T02:49:02.019468282Z" level=info msg="StartContainer for \"bb57a7726fa872132c06471bc40a673ba68e6406ccee83d4914adf3bd5223c4a\"" May 27 02:49:02.024901 containerd[2001]: time="2025-05-27T02:49:02.024755998Z" level=info msg="connecting to shim bb57a7726fa872132c06471bc40a673ba68e6406ccee83d4914adf3bd5223c4a" address="unix:///run/containerd/s/fda3da01f1bc736eb3fc2d9d91f857f3bb29c2897d1836db9097a2610047a874" protocol=ttrpc version=3 May 27 02:49:02.121243 systemd[1]: Started cri-containerd-bb57a7726fa872132c06471bc40a673ba68e6406ccee83d4914adf3bd5223c4a.scope - libcontainer container bb57a7726fa872132c06471bc40a673ba68e6406ccee83d4914adf3bd5223c4a. 
May 27 02:49:02.154884 containerd[2001]: time="2025-05-27T02:49:02.154643951Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 27 02:49:02.157564 containerd[2001]: time="2025-05-27T02:49:02.157486847Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.0: active requests=0, bytes read=86" May 27 02:49:02.157726 containerd[2001]: time="2025-05-27T02:49:02.157574687Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" May 27 02:49:02.159258 kubelet[3467]: E0527 02:49:02.159144 3467 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 27 02:49:02.159258 kubelet[3467]: E0527 02:49:02.159253 3467 kuberuntime_image.go:42] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to 
https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 27 02:49:02.159658 kubelet[3467]: E0527 02:49:02.159539 3467 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sdldh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-78d55f7ddc-4qxmb_calico-system(be3e00a4-5c9b-4f45-8c5d-bd5a86ce209c): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 27 02:49:02.161301 containerd[2001]: time="2025-05-27T02:49:02.161179679Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.0\"" May 27 02:49:02.161602 kubelet[3467]: E0527 02:49:02.161503 3467 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to 
resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-4qxmb" podUID="be3e00a4-5c9b-4f45-8c5d-bd5a86ce209c" May 27 02:49:02.250112 containerd[2001]: time="2025-05-27T02:49:02.249058620Z" level=info msg="StartContainer for \"bb57a7726fa872132c06471bc40a673ba68e6406ccee83d4914adf3bd5223c4a\" returns successfully" May 27 02:49:02.326928 systemd[1]: Started sshd@10-172.31.27.90:22-139.178.68.195:49016.service - OpenSSH per-connection server daemon (139.178.68.195:49016). May 27 02:49:02.374902 kubelet[3467]: E0527 02:49:02.374296 3467 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-4qxmb" podUID="be3e00a4-5c9b-4f45-8c5d-bd5a86ce209c" May 27 02:49:02.436394 kubelet[3467]: I0527 02:49:02.436305 3467 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-95d4cb8cc-kn5fg" podStartSLOduration=33.787592326 podStartE2EDuration="37.436281444s" podCreationTimestamp="2025-05-27 02:48:25 +0000 UTC" firstStartedPulling="2025-05-27 02:48:58.283751336 +0000 UTC m=+52.744289543" lastFinishedPulling="2025-05-27 02:49:01.932440358 +0000 UTC m=+56.392978661" observedRunningTime="2025-05-27 02:49:02.409783068 +0000 UTC m=+56.870321287" 
watchObservedRunningTime="2025-05-27 02:49:02.436281444 +0000 UTC m=+56.896819651" May 27 02:49:02.516784 containerd[2001]: time="2025-05-27T02:49:02.516640069Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 02:49:02.518810 containerd[2001]: time="2025-05-27T02:49:02.518722561Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.0: active requests=0, bytes read=77" May 27 02:49:02.524952 containerd[2001]: time="2025-05-27T02:49:02.524785273Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.0\" with image id \"sha256:0d503660232383641bf9af3b7e4ef066c0e96a8ec586f123e5b56b6a196c983d\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ad7d2e76f15777636c5d91c108d7655659b38fe8970255050ffa51223eb96ff4\", size \"45822470\" in 363.422258ms" May 27 02:49:02.525541 containerd[2001]: time="2025-05-27T02:49:02.524906629Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.0\" returns image reference \"sha256:0d503660232383641bf9af3b7e4ef066c0e96a8ec586f123e5b56b6a196c983d\"" May 27 02:49:02.528147 containerd[2001]: time="2025-05-27T02:49:02.527519161Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.0\"" May 27 02:49:02.535097 containerd[2001]: time="2025-05-27T02:49:02.535035481Z" level=info msg="CreateContainer within sandbox \"995aa3f7252ba631172f2f1d97512bdc0ffb04e84240ae05ad1f074a7e38bdd3\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 27 02:49:02.558690 containerd[2001]: time="2025-05-27T02:49:02.558623701Z" level=info msg="Container ab2a64b0720b630c277ea2d1825994d04a42daca18798b66471ebde1a3ced62b: CDI devices from CRI Config.CDIDevices: []" May 27 02:49:02.570066 sshd[5533]: Accepted publickey for core from 139.178.68.195 port 49016 ssh2: RSA SHA256:wB7DXbDl54cvXXypqfSM11xNUGSlmUqWSyH8J9Yllv0 May 27 
02:49:02.581159 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3700539889.mount: Deactivated successfully. May 27 02:49:02.589962 sshd-session[5533]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 02:49:02.607270 systemd-logind[1977]: New session 11 of user core. May 27 02:49:02.609154 containerd[2001]: time="2025-05-27T02:49:02.609030085Z" level=info msg="CreateContainer within sandbox \"995aa3f7252ba631172f2f1d97512bdc0ffb04e84240ae05ad1f074a7e38bdd3\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"ab2a64b0720b630c277ea2d1825994d04a42daca18798b66471ebde1a3ced62b\"" May 27 02:49:02.612751 systemd[1]: Started session-11.scope - Session 11 of User core. May 27 02:49:02.613901 containerd[2001]: time="2025-05-27T02:49:02.613826797Z" level=info msg="StartContainer for \"ab2a64b0720b630c277ea2d1825994d04a42daca18798b66471ebde1a3ced62b\"" May 27 02:49:02.627000 containerd[2001]: time="2025-05-27T02:49:02.625936657Z" level=info msg="connecting to shim ab2a64b0720b630c277ea2d1825994d04a42daca18798b66471ebde1a3ced62b" address="unix:///run/containerd/s/e9e398f33e9ea2eefcf0fb52ed538a6fdeac0557c84abac3c130e75ace0cbfb3" protocol=ttrpc version=3 May 27 02:49:02.666434 systemd[1]: Started cri-containerd-ab2a64b0720b630c277ea2d1825994d04a42daca18798b66471ebde1a3ced62b.scope - libcontainer container ab2a64b0720b630c277ea2d1825994d04a42daca18798b66471ebde1a3ced62b. May 27 02:49:02.934053 containerd[2001]: time="2025-05-27T02:49:02.933780975Z" level=info msg="StartContainer for \"ab2a64b0720b630c277ea2d1825994d04a42daca18798b66471ebde1a3ced62b\" returns successfully" May 27 02:49:02.939209 sshd[5539]: Connection closed by 139.178.68.195 port 49016 May 27 02:49:02.939772 sshd-session[5533]: pam_unix(sshd:session): session closed for user core May 27 02:49:02.951372 systemd-logind[1977]: Session 11 logged out. Waiting for processes to exit. 
May 27 02:49:02.952687 systemd[1]: sshd@10-172.31.27.90:22-139.178.68.195:49016.service: Deactivated successfully. May 27 02:49:02.962510 systemd[1]: session-11.scope: Deactivated successfully. May 27 02:49:02.968161 systemd-logind[1977]: Removed session 11. May 27 02:49:03.404573 kubelet[3467]: I0527 02:49:03.404029 3467 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-95d4cb8cc-lhfd7" podStartSLOduration=34.310684077 podStartE2EDuration="38.403940533s" podCreationTimestamp="2025-05-27 02:48:25 +0000 UTC" firstStartedPulling="2025-05-27 02:48:58.433607781 +0000 UTC m=+52.894145988" lastFinishedPulling="2025-05-27 02:49:02.526864237 +0000 UTC m=+56.987402444" observedRunningTime="2025-05-27 02:49:03.403769317 +0000 UTC m=+57.864307584" watchObservedRunningTime="2025-05-27 02:49:03.403940533 +0000 UTC m=+57.864478908" May 27 02:49:03.558489 ntpd[1970]: Listen normally on 8 vxlan.calico 192.168.47.0:123 May 27 02:49:03.558611 ntpd[1970]: Listen normally on 9 cali3f27212facb [fe80::ecee:eeff:feee:eeee%4]:123 May 27 02:49:03.558697 ntpd[1970]: Listen normally on 10 vxlan.calico [fe80::64fa:f9ff:fe2b:83f1%5]:123 May 27 02:49:03.558762 ntpd[1970]: Listen normally on 11 cali4b20f0e23af [fe80::ecee:eeff:feee:eeee%8]:123 May 27 02:49:03.558827 ntpd[1970]: Listen normally on 12 calie747cfe6def [fe80::ecee:eeff:feee:eeee%9]:123 May 27 02:49:03.558942 ntpd[1970]: Listen normally on 13 cali86582edeb65 [fe80::ecee:eeff:feee:eeee%10]:123 May 27 02:49:03.559006 ntpd[1970]: Listen normally on 14 cali88354c4afc6 [fe80::ecee:eeff:feee:eeee%11]:123 May 27 02:49:03.559073 ntpd[1970]: Listen normally on 15 cali0e1d84e5839 [fe80::ecee:eeff:feee:eeee%12]:123 May 27 02:49:03.559136 ntpd[1970]: Listen normally on 16 cali8c20b907150 [fe80::ecee:eeff:feee:eeee%13]:123 May 27 02:49:03.559198 ntpd[1970]: Listen normally on 17 calie4c58fc3ad5 [fe80::ecee:eeff:feee:eeee%14]:123 May 27 02:49:04.159272 containerd[2001]: time="2025-05-27T02:49:04.159203869Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 02:49:04.162455 containerd[2001]: time="2025-05-27T02:49:04.162354577Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.0: active requests=0, bytes read=8226240" May 27 02:49:04.164211 containerd[2001]: time="2025-05-27T02:49:04.164017885Z" level=info msg="ImageCreate event name:\"sha256:ebe7e098653491dec9f15f87d7f5d33f47b09d1d6f3ef83deeaaa6237024c045\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 02:49:04.173618 containerd[2001]:
time="2025-05-27T02:49:04.173535565Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:27883a4104876fe239311dd93ce6efd0c4a87de7163d57a4c8d96bd65a287ffd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 02:49:04.176063 containerd[2001]: time="2025-05-27T02:49:04.175878109Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.0\" with image id \"sha256:ebe7e098653491dec9f15f87d7f5d33f47b09d1d6f3ef83deeaaa6237024c045\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:27883a4104876fe239311dd93ce6efd0c4a87de7163d57a4c8d96bd65a287ffd\", size \"9595481\" in 1.648302008s" May 27 02:49:04.176063 containerd[2001]: time="2025-05-27T02:49:04.175934269Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.0\" returns image reference \"sha256:ebe7e098653491dec9f15f87d7f5d33f47b09d1d6f3ef83deeaaa6237024c045\"" May 27 02:49:04.180929 containerd[2001]: time="2025-05-27T02:49:04.179655493Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\"" May 27 02:49:04.187608 containerd[2001]: time="2025-05-27T02:49:04.187548433Z" level=info msg="CreateContainer within sandbox \"84e84282659bb8aa2bac8ec94421d44daf120681f060718858644f61ef1c4ec1\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" May 27 02:49:04.217216 containerd[2001]: time="2025-05-27T02:49:04.217083793Z" level=info msg="Container d1c255ee420f8b56faeed07a0675bf270e3c3bbb033e855490e61cc3c418be1c: CDI devices from CRI Config.CDIDevices: []" May 27 02:49:04.242727 containerd[2001]: time="2025-05-27T02:49:04.242668933Z" level=info msg="CreateContainer within sandbox \"84e84282659bb8aa2bac8ec94421d44daf120681f060718858644f61ef1c4ec1\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"d1c255ee420f8b56faeed07a0675bf270e3c3bbb033e855490e61cc3c418be1c\"" May 27 02:49:04.245520 containerd[2001]: time="2025-05-27T02:49:04.244153189Z" level=info msg="StartContainer 
for \"d1c255ee420f8b56faeed07a0675bf270e3c3bbb033e855490e61cc3c418be1c\"" May 27 02:49:04.251275 containerd[2001]: time="2025-05-27T02:49:04.250959817Z" level=info msg="connecting to shim d1c255ee420f8b56faeed07a0675bf270e3c3bbb033e855490e61cc3c418be1c" address="unix:///run/containerd/s/aacb8463505f880757018a3f35d19253061847b8ded5bb08aeb8d3e973ab62b3" protocol=ttrpc version=3 May 27 02:49:04.340564 systemd[1]: Started cri-containerd-d1c255ee420f8b56faeed07a0675bf270e3c3bbb033e855490e61cc3c418be1c.scope - libcontainer container d1c255ee420f8b56faeed07a0675bf270e3c3bbb033e855490e61cc3c418be1c. May 27 02:49:04.397619 kubelet[3467]: I0527 02:49:04.397027 3467 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 27 02:49:04.549691 containerd[2001]: time="2025-05-27T02:49:04.549325407Z" level=info msg="StartContainer for \"d1c255ee420f8b56faeed07a0675bf270e3c3bbb033e855490e61cc3c418be1c\" returns successfully" May 27 02:49:07.983125 systemd[1]: Started sshd@11-172.31.27.90:22-139.178.68.195:51406.service - OpenSSH per-connection server daemon (139.178.68.195:51406). May 27 02:49:08.185778 sshd[5639]: Accepted publickey for core from 139.178.68.195 port 51406 ssh2: RSA SHA256:wB7DXbDl54cvXXypqfSM11xNUGSlmUqWSyH8J9Yllv0 May 27 02:49:08.189040 sshd-session[5639]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 02:49:08.198944 systemd-logind[1977]: New session 12 of user core. May 27 02:49:08.206110 systemd[1]: Started session-12.scope - Session 12 of User core. May 27 02:49:08.479362 sshd[5641]: Connection closed by 139.178.68.195 port 51406 May 27 02:49:08.479822 sshd-session[5639]: pam_unix(sshd:session): session closed for user core May 27 02:49:08.490019 systemd[1]: sshd@11-172.31.27.90:22-139.178.68.195:51406.service: Deactivated successfully. May 27 02:49:08.495060 systemd[1]: session-12.scope: Deactivated successfully. May 27 02:49:08.497862 systemd-logind[1977]: Session 12 logged out. 
Waiting for processes to exit. May 27 02:49:08.501479 systemd-logind[1977]: Removed session 12. May 27 02:49:08.519575 systemd[1]: Started sshd@12-172.31.27.90:22-139.178.68.195:51410.service - OpenSSH per-connection server daemon (139.178.68.195:51410). May 27 02:49:08.723692 sshd[5654]: Accepted publickey for core from 139.178.68.195 port 51410 ssh2: RSA SHA256:wB7DXbDl54cvXXypqfSM11xNUGSlmUqWSyH8J9Yllv0 May 27 02:49:08.726387 sshd-session[5654]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 02:49:08.735167 systemd-logind[1977]: New session 13 of user core. May 27 02:49:08.742102 systemd[1]: Started session-13.scope - Session 13 of User core. May 27 02:49:09.077237 sshd[5656]: Connection closed by 139.178.68.195 port 51410 May 27 02:49:09.078259 sshd-session[5654]: pam_unix(sshd:session): session closed for user core May 27 02:49:09.091608 systemd[1]: sshd@12-172.31.27.90:22-139.178.68.195:51410.service: Deactivated successfully. May 27 02:49:09.102120 systemd[1]: session-13.scope: Deactivated successfully. May 27 02:49:09.106959 systemd-logind[1977]: Session 13 logged out. Waiting for processes to exit. May 27 02:49:09.131588 systemd[1]: Started sshd@13-172.31.27.90:22-139.178.68.195:51414.service - OpenSSH per-connection server daemon (139.178.68.195:51414). May 27 02:49:09.135043 systemd-logind[1977]: Removed session 13. May 27 02:49:09.326606 sshd[5666]: Accepted publickey for core from 139.178.68.195 port 51414 ssh2: RSA SHA256:wB7DXbDl54cvXXypqfSM11xNUGSlmUqWSyH8J9Yllv0 May 27 02:49:09.329129 sshd-session[5666]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 02:49:09.338935 systemd-logind[1977]: New session 14 of user core. May 27 02:49:09.348120 systemd[1]: Started session-14.scope - Session 14 of User core. 
May 27 02:49:09.694239 sshd[5668]: Connection closed by 139.178.68.195 port 51414 May 27 02:49:09.695223 sshd-session[5666]: pam_unix(sshd:session): session closed for user core May 27 02:49:09.711408 systemd-logind[1977]: Session 14 logged out. Waiting for processes to exit. May 27 02:49:09.712495 systemd[1]: sshd@13-172.31.27.90:22-139.178.68.195:51414.service: Deactivated successfully. May 27 02:49:09.720429 systemd[1]: session-14.scope: Deactivated successfully. May 27 02:49:09.731297 systemd-logind[1977]: Removed session 14. May 27 02:49:11.628072 containerd[2001]: time="2025-05-27T02:49:11.628004326Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 02:49:11.631455 containerd[2001]: time="2025-05-27T02:49:11.631344922Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.0: active requests=0, bytes read=48045219" May 27 02:49:11.634533 containerd[2001]: time="2025-05-27T02:49:11.634453318Z" level=info msg="ImageCreate event name:\"sha256:4188fe2931435deda58a0dc1767a2f6ad2bb27e47662ccec626bd07006f56373\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 02:49:11.641690 containerd[2001]: time="2025-05-27T02:49:11.641620822Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:eb5bc5c9e7a71f1d8ea69bbcc8e54b84fb7ec1e32d919c8b148f80b770f20182\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 02:49:11.644311 containerd[2001]: time="2025-05-27T02:49:11.644171986Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\" with image id \"sha256:4188fe2931435deda58a0dc1767a2f6ad2bb27e47662ccec626bd07006f56373\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:eb5bc5c9e7a71f1d8ea69bbcc8e54b84fb7ec1e32d919c8b148f80b770f20182\", size \"49414428\" in 7.464434689s" May 
27 02:49:11.644614 containerd[2001]: time="2025-05-27T02:49:11.644404858Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\" returns image reference \"sha256:4188fe2931435deda58a0dc1767a2f6ad2bb27e47662ccec626bd07006f56373\"" May 27 02:49:11.647722 containerd[2001]: time="2025-05-27T02:49:11.647512438Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\"" May 27 02:49:11.691410 containerd[2001]: time="2025-05-27T02:49:11.689186626Z" level=info msg="CreateContainer within sandbox \"b7a1917653e0487ff2cffc8cb5ffb7e6732092e600ed9685b25757bbbff37a82\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" May 27 02:49:11.718255 containerd[2001]: time="2025-05-27T02:49:11.718103195Z" level=info msg="Container c38ce32c91dc9da74599ce377e4eb3d547e11a9d9f76c99ddb95b762ba1d8ec3: CDI devices from CRI Config.CDIDevices: []" May 27 02:49:11.748199 containerd[2001]: time="2025-05-27T02:49:11.748115003Z" level=info msg="CreateContainer within sandbox \"b7a1917653e0487ff2cffc8cb5ffb7e6732092e600ed9685b25757bbbff37a82\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"c38ce32c91dc9da74599ce377e4eb3d547e11a9d9f76c99ddb95b762ba1d8ec3\"" May 27 02:49:11.751689 containerd[2001]: time="2025-05-27T02:49:11.751108283Z" level=info msg="StartContainer for \"c38ce32c91dc9da74599ce377e4eb3d547e11a9d9f76c99ddb95b762ba1d8ec3\"" May 27 02:49:11.755891 containerd[2001]: time="2025-05-27T02:49:11.755626403Z" level=info msg="connecting to shim c38ce32c91dc9da74599ce377e4eb3d547e11a9d9f76c99ddb95b762ba1d8ec3" address="unix:///run/containerd/s/8203b574862a153a823fe8195502401a0bb3aa57fe86fdb59aff73bf6b9cc55d" protocol=ttrpc version=3 May 27 02:49:11.834179 systemd[1]: Started cri-containerd-c38ce32c91dc9da74599ce377e4eb3d547e11a9d9f76c99ddb95b762ba1d8ec3.scope - libcontainer container c38ce32c91dc9da74599ce377e4eb3d547e11a9d9f76c99ddb95b762ba1d8ec3. 
May 27 02:49:12.038539 containerd[2001]: time="2025-05-27T02:49:12.038469932Z" level=info msg="StartContainer for \"c38ce32c91dc9da74599ce377e4eb3d547e11a9d9f76c99ddb95b762ba1d8ec3\" returns successfully" May 27 02:49:12.531312 containerd[2001]: time="2025-05-27T02:49:12.531195767Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c38ce32c91dc9da74599ce377e4eb3d547e11a9d9f76c99ddb95b762ba1d8ec3\" id:\"9eee3738259fe75d3a573a5c3f823b9809349d7a49c133d76119e2c64c95401b\" pid:5747 exited_at:{seconds:1748314152 nanos:530407631}" May 27 02:49:12.555654 kubelet[3467]: I0527 02:49:12.554741 3467 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-5979f88f5b-f6zrw" podStartSLOduration=27.313834379 podStartE2EDuration="38.554717687s" podCreationTimestamp="2025-05-27 02:48:34 +0000 UTC" firstStartedPulling="2025-05-27 02:49:00.406067962 +0000 UTC m=+54.866606157" lastFinishedPulling="2025-05-27 02:49:11.646951258 +0000 UTC m=+66.107489465" observedRunningTime="2025-05-27 02:49:12.473599762 +0000 UTC m=+66.934137993" watchObservedRunningTime="2025-05-27 02:49:12.554717687 +0000 UTC m=+67.015255906" May 27 02:49:13.516380 containerd[2001]: time="2025-05-27T02:49:13.516321131Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 02:49:13.519095 containerd[2001]: time="2025-05-27T02:49:13.519035279Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0: active requests=0, bytes read=13749925" May 27 02:49:13.522266 containerd[2001]: time="2025-05-27T02:49:13.522154751Z" level=info msg="ImageCreate event name:\"sha256:a5d5f2a68204ed0dbc50f8778616ee92a63c0e342d178a4620e6271484e5c8b2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 02:49:13.526919 containerd[2001]: time="2025-05-27T02:49:13.526766388Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:dca5c16181edde2e860463615523ce457cd9dcfca85b7cfdcd6f3ea7de6f2ac8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 02:49:13.528345 containerd[2001]: time="2025-05-27T02:49:13.528127692Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\" with image id \"sha256:a5d5f2a68204ed0dbc50f8778616ee92a63c0e342d178a4620e6271484e5c8b2\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:dca5c16181edde2e860463615523ce457cd9dcfca85b7cfdcd6f3ea7de6f2ac8\", size \"15119118\" in 1.880546146s" May 27 02:49:13.528345 containerd[2001]: time="2025-05-27T02:49:13.528192948Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\" returns image reference \"sha256:a5d5f2a68204ed0dbc50f8778616ee92a63c0e342d178a4620e6271484e5c8b2\"" May 27 02:49:13.533415 containerd[2001]: time="2025-05-27T02:49:13.532225776Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\"" May 27 02:49:13.536340 containerd[2001]: time="2025-05-27T02:49:13.536271852Z" level=info msg="CreateContainer within sandbox \"84e84282659bb8aa2bac8ec94421d44daf120681f060718858644f61ef1c4ec1\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" May 27 02:49:13.563353 containerd[2001]: time="2025-05-27T02:49:13.563282892Z" level=info msg="Container ce07dcf31ba09deb1fca1e419530c934c2107c38f2042d14fd8154c96fd53303: CDI devices from CRI Config.CDIDevices: []" May 27 02:49:13.590776 containerd[2001]: time="2025-05-27T02:49:13.590644944Z" level=info msg="CreateContainer within sandbox \"84e84282659bb8aa2bac8ec94421d44daf120681f060718858644f61ef1c4ec1\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"ce07dcf31ba09deb1fca1e419530c934c2107c38f2042d14fd8154c96fd53303\"" May 27 02:49:13.591748 containerd[2001]: 
time="2025-05-27T02:49:13.591693804Z" level=info msg="StartContainer for \"ce07dcf31ba09deb1fca1e419530c934c2107c38f2042d14fd8154c96fd53303\"" May 27 02:49:13.595112 containerd[2001]: time="2025-05-27T02:49:13.595048860Z" level=info msg="connecting to shim ce07dcf31ba09deb1fca1e419530c934c2107c38f2042d14fd8154c96fd53303" address="unix:///run/containerd/s/aacb8463505f880757018a3f35d19253061847b8ded5bb08aeb8d3e973ab62b3" protocol=ttrpc version=3 May 27 02:49:13.656440 systemd[1]: Started cri-containerd-ce07dcf31ba09deb1fca1e419530c934c2107c38f2042d14fd8154c96fd53303.scope - libcontainer container ce07dcf31ba09deb1fca1e419530c934c2107c38f2042d14fd8154c96fd53303. May 27 02:49:13.738358 containerd[2001]: time="2025-05-27T02:49:13.738198937Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 27 02:49:13.747113 containerd[2001]: time="2025-05-27T02:49:13.747052333Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.0: active requests=0, bytes read=86" May 27 02:49:13.752021 containerd[2001]: time="2025-05-27T02:49:13.747144901Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" May 27 02:49:13.752021 containerd[2001]: time="2025-05-27T02:49:13.751175233Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\"" May 27 02:49:13.752189 kubelet[3467]: E0527 02:49:13.747462 3467 log.go:32] "PullImage from image service failed" 
err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 27 02:49:13.752189 kubelet[3467]: E0527 02:49:13.747543 3467 kuberuntime_image.go:42] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 27 02:49:13.752189 kubelet[3467]: E0527 02:49:13.747749 3467 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.0,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:6721adc515ad48b18fccbca3f868b1ac,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-fck8c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7f8f586ffd-2th8f_calico-system(caa565e1-9a46-42db-9b86-2b08e49a62f0): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 27 02:49:13.767893 containerd[2001]: 
time="2025-05-27T02:49:13.767649505Z" level=info msg="StartContainer for \"ce07dcf31ba09deb1fca1e419530c934c2107c38f2042d14fd8154c96fd53303\" returns successfully" May 27 02:49:13.935386 containerd[2001]: time="2025-05-27T02:49:13.935249438Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 27 02:49:13.937597 containerd[2001]: time="2025-05-27T02:49:13.937531694Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" May 27 02:49:13.937922 containerd[2001]: time="2025-05-27T02:49:13.937552070Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.0: active requests=0, bytes read=86" May 27 02:49:13.938009 kubelet[3467]: E0527 02:49:13.937886 3467 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 27 02:49:13.938009 kubelet[3467]: E0527 02:49:13.937952 3467 kuberuntime_image.go:42] "Failed to pull image" err="failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 27 02:49:13.938191 kubelet[3467]: E0527 02:49:13.938110 3467 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fck8c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMou
nt:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7f8f586ffd-2th8f_calico-system(caa565e1-9a46-42db-9b86-2b08e49a62f0): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 27 02:49:13.940178 kubelet[3467]: E0527 02:49:13.939911 3467 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-7f8f586ffd-2th8f" podUID="caa565e1-9a46-42db-9b86-2b08e49a62f0" May 27 02:49:14.007189 kubelet[3467]: I0527 02:49:14.007112 3467 
csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 May 27 02:49:14.007718 kubelet[3467]: I0527 02:49:14.007379 3467 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock May 27 02:49:14.487463 kubelet[3467]: I0527 02:49:14.487233 3467 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-sx96z" podStartSLOduration=27.636927419 podStartE2EDuration="41.487153608s" podCreationTimestamp="2025-05-27 02:48:33 +0000 UTC" firstStartedPulling="2025-05-27 02:48:59.679460999 +0000 UTC m=+54.139999194" lastFinishedPulling="2025-05-27 02:49:13.529687104 +0000 UTC m=+67.990225383" observedRunningTime="2025-05-27 02:49:14.484539708 +0000 UTC m=+68.945078011" watchObservedRunningTime="2025-05-27 02:49:14.487153608 +0000 UTC m=+68.947691815" May 27 02:49:14.737293 systemd[1]: Started sshd@14-172.31.27.90:22-139.178.68.195:57464.service - OpenSSH per-connection server daemon (139.178.68.195:57464). May 27 02:49:14.960591 sshd[5799]: Accepted publickey for core from 139.178.68.195 port 57464 ssh2: RSA SHA256:wB7DXbDl54cvXXypqfSM11xNUGSlmUqWSyH8J9Yllv0 May 27 02:49:14.964803 sshd-session[5799]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 02:49:14.978260 systemd-logind[1977]: New session 15 of user core. May 27 02:49:14.984448 systemd[1]: Started session-15.scope - Session 15 of User core. May 27 02:49:15.324036 sshd[5801]: Connection closed by 139.178.68.195 port 57464 May 27 02:49:15.325069 sshd-session[5799]: pam_unix(sshd:session): session closed for user core May 27 02:49:15.330249 systemd[1]: sshd@14-172.31.27.90:22-139.178.68.195:57464.service: Deactivated successfully. May 27 02:49:15.334632 systemd[1]: session-15.scope: Deactivated successfully. 
May 27 02:49:15.339777 systemd-logind[1977]: Session 15 logged out. Waiting for processes to exit. May 27 02:49:15.342292 systemd-logind[1977]: Removed session 15. May 27 02:49:17.818661 containerd[2001]: time="2025-05-27T02:49:17.818514821Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\"" May 27 02:49:18.021865 containerd[2001]: time="2025-05-27T02:49:18.021763634Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 27 02:49:18.024146 containerd[2001]: time="2025-05-27T02:49:18.024083006Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" May 27 02:49:18.024553 containerd[2001]: time="2025-05-27T02:49:18.024209426Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.0: active requests=0, bytes read=86" May 27 02:49:18.024674 kubelet[3467]: E0527 02:49:18.024432 3467 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 27 02:49:18.024674 kubelet[3467]: E0527 02:49:18.024498 3467 
kuberuntime_image.go:42] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 27 02:49:18.025678 kubelet[3467]: E0527 02:49:18.024693 3467 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sdldh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveR
eadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-78d55f7ddc-4qxmb_calico-system(be3e00a4-5c9b-4f45-8c5d-bd5a86ce209c): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 27 02:49:18.026269 kubelet[3467]: E0527 02:49:18.026153 3467 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-4qxmb" podUID="be3e00a4-5c9b-4f45-8c5d-bd5a86ce209c" May 27 02:49:20.365091 systemd[1]: Started sshd@15-172.31.27.90:22-139.178.68.195:57468.service - OpenSSH per-connection server daemon (139.178.68.195:57468). May 27 02:49:20.566389 sshd[5822]: Accepted publickey for core from 139.178.68.195 port 57468 ssh2: RSA SHA256:wB7DXbDl54cvXXypqfSM11xNUGSlmUqWSyH8J9Yllv0 May 27 02:49:20.568188 sshd-session[5822]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 02:49:20.577956 systemd-logind[1977]: New session 16 of user core. May 27 02:49:20.586141 systemd[1]: Started session-16.scope - Session 16 of User core. May 27 02:49:20.861997 sshd[5824]: Connection closed by 139.178.68.195 port 57468 May 27 02:49:20.863035 sshd-session[5822]: pam_unix(sshd:session): session closed for user core May 27 02:49:20.869595 systemd-logind[1977]: Session 16 logged out. Waiting for processes to exit. May 27 02:49:20.869759 systemd[1]: sshd@15-172.31.27.90:22-139.178.68.195:57468.service: Deactivated successfully. May 27 02:49:20.875441 systemd[1]: session-16.scope: Deactivated successfully. May 27 02:49:20.881191 systemd-logind[1977]: Removed session 16. May 27 02:49:23.311357 containerd[2001]: time="2025-05-27T02:49:23.311292548Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b30f334e9a6e9bbe68914e4c49e3f3292efd8d9e0745907c879d462a5c9a130b\" id:\"3268dd9294846d02813bab00fed66223c66bee86f74d893264ca2167c16f1208\" pid:5850 exited_at:{seconds:1748314163 nanos:308569100}" May 27 02:49:25.900560 systemd[1]: Started sshd@16-172.31.27.90:22-139.178.68.195:60414.service - OpenSSH per-connection server daemon (139.178.68.195:60414). 
May 27 02:49:26.104067 sshd[5864]: Accepted publickey for core from 139.178.68.195 port 60414 ssh2: RSA SHA256:wB7DXbDl54cvXXypqfSM11xNUGSlmUqWSyH8J9Yllv0 May 27 02:49:26.106666 sshd-session[5864]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 02:49:26.116221 systemd-logind[1977]: New session 17 of user core. May 27 02:49:26.122136 systemd[1]: Started session-17.scope - Session 17 of User core. May 27 02:49:26.383576 sshd[5866]: Connection closed by 139.178.68.195 port 60414 May 27 02:49:26.385389 sshd-session[5864]: pam_unix(sshd:session): session closed for user core May 27 02:49:26.392001 systemd[1]: sshd@16-172.31.27.90:22-139.178.68.195:60414.service: Deactivated successfully. May 27 02:49:26.396762 systemd[1]: session-17.scope: Deactivated successfully. May 27 02:49:26.399171 systemd-logind[1977]: Session 17 logged out. Waiting for processes to exit. May 27 02:49:26.402546 systemd-logind[1977]: Removed session 17. May 27 02:49:26.819369 kubelet[3467]: E0527 02:49:26.819239 3467 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous 
token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-7f8f586ffd-2th8f" podUID="caa565e1-9a46-42db-9b86-2b08e49a62f0" May 27 02:49:31.425133 systemd[1]: Started sshd@17-172.31.27.90:22-139.178.68.195:60430.service - OpenSSH per-connection server daemon (139.178.68.195:60430). May 27 02:49:31.640541 sshd[5883]: Accepted publickey for core from 139.178.68.195 port 60430 ssh2: RSA SHA256:wB7DXbDl54cvXXypqfSM11xNUGSlmUqWSyH8J9Yllv0 May 27 02:49:31.643217 sshd-session[5883]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 02:49:31.651489 systemd-logind[1977]: New session 18 of user core. May 27 02:49:31.661109 systemd[1]: Started session-18.scope - Session 18 of User core. May 27 02:49:31.923364 sshd[5885]: Connection closed by 139.178.68.195 port 60430 May 27 02:49:31.924432 sshd-session[5883]: pam_unix(sshd:session): session closed for user core May 27 02:49:31.931533 systemd[1]: sshd@17-172.31.27.90:22-139.178.68.195:60430.service: Deactivated successfully. May 27 02:49:31.937517 systemd[1]: session-18.scope: Deactivated successfully. May 27 02:49:31.939502 systemd-logind[1977]: Session 18 logged out. Waiting for processes to exit. May 27 02:49:31.943818 systemd-logind[1977]: Removed session 18. May 27 02:49:31.962183 systemd[1]: Started sshd@18-172.31.27.90:22-139.178.68.195:60436.service - OpenSSH per-connection server daemon (139.178.68.195:60436). May 27 02:49:32.161256 sshd[5897]: Accepted publickey for core from 139.178.68.195 port 60436 ssh2: RSA SHA256:wB7DXbDl54cvXXypqfSM11xNUGSlmUqWSyH8J9Yllv0 May 27 02:49:32.163775 sshd-session[5897]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 02:49:32.173190 systemd-logind[1977]: New session 19 of user core. May 27 02:49:32.182127 systemd[1]: Started session-19.scope - Session 19 of User core. 
May 27 02:49:32.744170 sshd[5899]: Connection closed by 139.178.68.195 port 60436 May 27 02:49:32.745143 sshd-session[5897]: pam_unix(sshd:session): session closed for user core May 27 02:49:32.752550 systemd[1]: sshd@18-172.31.27.90:22-139.178.68.195:60436.service: Deactivated successfully. May 27 02:49:32.757661 systemd[1]: session-19.scope: Deactivated successfully. May 27 02:49:32.763112 systemd-logind[1977]: Session 19 logged out. Waiting for processes to exit. May 27 02:49:32.781467 systemd[1]: Started sshd@19-172.31.27.90:22-139.178.68.195:60444.service - OpenSSH per-connection server daemon (139.178.68.195:60444). May 27 02:49:32.784988 systemd-logind[1977]: Removed session 19. May 27 02:49:32.817911 kubelet[3467]: E0527 02:49:32.817464 3467 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-4qxmb" podUID="be3e00a4-5c9b-4f45-8c5d-bd5a86ce209c" May 27 02:49:32.989171 sshd[5909]: Accepted publickey for core from 139.178.68.195 port 60444 ssh2: RSA SHA256:wB7DXbDl54cvXXypqfSM11xNUGSlmUqWSyH8J9Yllv0 May 27 02:49:32.991935 sshd-session[5909]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 02:49:33.000727 systemd-logind[1977]: New session 20 of user core. May 27 02:49:33.009114 systemd[1]: Started session-20.scope - Session 20 of User core. 
May 27 02:49:34.527164 sshd[5911]: Connection closed by 139.178.68.195 port 60444 May 27 02:49:34.528983 sshd-session[5909]: pam_unix(sshd:session): session closed for user core May 27 02:49:34.536716 systemd-logind[1977]: Session 20 logged out. Waiting for processes to exit. May 27 02:49:34.541245 systemd[1]: sshd@19-172.31.27.90:22-139.178.68.195:60444.service: Deactivated successfully. May 27 02:49:34.549748 systemd[1]: session-20.scope: Deactivated successfully. May 27 02:49:34.574991 systemd[1]: Started sshd@20-172.31.27.90:22-139.178.68.195:45124.service - OpenSSH per-connection server daemon (139.178.68.195:45124). May 27 02:49:34.580546 systemd-logind[1977]: Removed session 20. May 27 02:49:34.792044 sshd[5926]: Accepted publickey for core from 139.178.68.195 port 45124 ssh2: RSA SHA256:wB7DXbDl54cvXXypqfSM11xNUGSlmUqWSyH8J9Yllv0 May 27 02:49:34.798411 sshd-session[5926]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 02:49:34.810351 systemd-logind[1977]: New session 21 of user core. May 27 02:49:34.818187 systemd[1]: Started session-21.scope - Session 21 of User core. May 27 02:49:35.474116 sshd[5931]: Connection closed by 139.178.68.195 port 45124 May 27 02:49:35.474968 sshd-session[5926]: pam_unix(sshd:session): session closed for user core May 27 02:49:35.486262 systemd[1]: session-21.scope: Deactivated successfully. May 27 02:49:35.489731 systemd[1]: sshd@20-172.31.27.90:22-139.178.68.195:45124.service: Deactivated successfully. May 27 02:49:35.498968 systemd-logind[1977]: Session 21 logged out. Waiting for processes to exit. May 27 02:49:35.523954 systemd[1]: Started sshd@21-172.31.27.90:22-139.178.68.195:45138.service - OpenSSH per-connection server daemon (139.178.68.195:45138). May 27 02:49:35.529643 systemd-logind[1977]: Removed session 21. 
May 27 02:49:35.729232 sshd[5947]: Accepted publickey for core from 139.178.68.195 port 45138 ssh2: RSA SHA256:wB7DXbDl54cvXXypqfSM11xNUGSlmUqWSyH8J9Yllv0 May 27 02:49:35.733512 sshd-session[5947]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 02:49:35.751947 systemd-logind[1977]: New session 22 of user core. May 27 02:49:35.758488 systemd[1]: Started session-22.scope - Session 22 of User core. May 27 02:49:36.128955 sshd[5949]: Connection closed by 139.178.68.195 port 45138 May 27 02:49:36.128889 sshd-session[5947]: pam_unix(sshd:session): session closed for user core May 27 02:49:36.143061 systemd[1]: sshd@21-172.31.27.90:22-139.178.68.195:45138.service: Deactivated successfully. May 27 02:49:36.149387 systemd[1]: session-22.scope: Deactivated successfully. May 27 02:49:36.154094 systemd-logind[1977]: Session 22 logged out. Waiting for processes to exit. May 27 02:49:36.160115 systemd-logind[1977]: Removed session 22. May 27 02:49:40.819862 containerd[2001]: time="2025-05-27T02:49:40.819214179Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\"" May 27 02:49:41.040773 containerd[2001]: time="2025-05-27T02:49:41.040694868Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 27 02:49:41.043141 containerd[2001]: time="2025-05-27T02:49:41.043077972Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" May 27 
02:49:41.043277 containerd[2001]: time="2025-05-27T02:49:41.043226424Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.0: active requests=0, bytes read=86" May 27 02:49:41.043611 kubelet[3467]: E0527 02:49:41.043520 3467 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 27 02:49:41.044241 kubelet[3467]: E0527 02:49:41.043612 3467 kuberuntime_image.go:42] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 27 02:49:41.044241 kubelet[3467]: E0527 02:49:41.043779 3467 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.0,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:6721adc515ad48b18fccbca3f868b1ac,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-fck8c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7f8f586ffd-2th8f_calico-system(caa565e1-9a46-42db-9b86-2b08e49a62f0): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 27 02:49:41.047894 containerd[2001]: 
time="2025-05-27T02:49:41.047716068Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\"" May 27 02:49:41.170731 systemd[1]: Started sshd@22-172.31.27.90:22-139.178.68.195:45144.service - OpenSSH per-connection server daemon (139.178.68.195:45144). May 27 02:49:41.240955 containerd[2001]: time="2025-05-27T02:49:41.240884017Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 27 02:49:41.243643 containerd[2001]: time="2025-05-27T02:49:41.243422617Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" May 27 02:49:41.243643 containerd[2001]: time="2025-05-27T02:49:41.243585721Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.0: active requests=0, bytes read=86" May 27 02:49:41.244509 kubelet[3467]: E0527 02:49:41.244425 3467 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 27 02:49:41.244774 kubelet[3467]: 
E0527 02:49:41.244693 3467 kuberuntime_image.go:42] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 27 02:49:41.246021 kubelet[3467]: E0527 02:49:41.245926 3467 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fck8c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUse
r:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7f8f586ffd-2th8f_calico-system(caa565e1-9a46-42db-9b86-2b08e49a62f0): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 27 02:49:41.247732 kubelet[3467]: E0527 02:49:41.247455 3467 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-7f8f586ffd-2th8f" 
podUID="caa565e1-9a46-42db-9b86-2b08e49a62f0" May 27 02:49:41.384534 sshd[5964]: Accepted publickey for core from 139.178.68.195 port 45144 ssh2: RSA SHA256:wB7DXbDl54cvXXypqfSM11xNUGSlmUqWSyH8J9Yllv0 May 27 02:49:41.387103 sshd-session[5964]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 02:49:41.395731 systemd-logind[1977]: New session 23 of user core. May 27 02:49:41.403141 systemd[1]: Started session-23.scope - Session 23 of User core. May 27 02:49:41.661981 sshd[5968]: Connection closed by 139.178.68.195 port 45144 May 27 02:49:41.661680 sshd-session[5964]: pam_unix(sshd:session): session closed for user core May 27 02:49:41.670058 systemd[1]: sshd@22-172.31.27.90:22-139.178.68.195:45144.service: Deactivated successfully. May 27 02:49:41.673515 systemd[1]: session-23.scope: Deactivated successfully. May 27 02:49:41.675907 systemd-logind[1977]: Session 23 logged out. Waiting for processes to exit. May 27 02:49:41.679694 systemd-logind[1977]: Removed session 23. May 27 02:49:42.514477 containerd[2001]: time="2025-05-27T02:49:42.514106788Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c38ce32c91dc9da74599ce377e4eb3d547e11a9d9f76c99ddb95b762ba1d8ec3\" id:\"30300e77121b590f9759c1362a4ff1729261adc80e6358a339dbb064c17ff24d\" pid:5993 exited_at:{seconds:1748314182 nanos:513171567}" May 27 02:49:46.704243 systemd[1]: Started sshd@23-172.31.27.90:22-139.178.68.195:50362.service - OpenSSH per-connection server daemon (139.178.68.195:50362). May 27 02:49:46.914283 sshd[6004]: Accepted publickey for core from 139.178.68.195 port 50362 ssh2: RSA SHA256:wB7DXbDl54cvXXypqfSM11xNUGSlmUqWSyH8J9Yllv0 May 27 02:49:46.916999 sshd-session[6004]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 02:49:46.925985 systemd-logind[1977]: New session 24 of user core. May 27 02:49:46.931183 systemd[1]: Started session-24.scope - Session 24 of User core. 
May 27 02:49:47.296481 sshd[6006]: Connection closed by 139.178.68.195 port 50362 May 27 02:49:47.296356 sshd-session[6004]: pam_unix(sshd:session): session closed for user core May 27 02:49:47.306440 systemd[1]: sshd@23-172.31.27.90:22-139.178.68.195:50362.service: Deactivated successfully. May 27 02:49:47.312722 systemd[1]: session-24.scope: Deactivated successfully. May 27 02:49:47.322104 systemd-logind[1977]: Session 24 logged out. Waiting for processes to exit. May 27 02:49:47.325601 systemd-logind[1977]: Removed session 24. May 27 02:49:47.822871 containerd[2001]: time="2025-05-27T02:49:47.822629998Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\"" May 27 02:49:48.016364 containerd[2001]: time="2025-05-27T02:49:48.016294855Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 27 02:49:48.018668 containerd[2001]: time="2025-05-27T02:49:48.018563275Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" May 27 02:49:48.018942 containerd[2001]: time="2025-05-27T02:49:48.018580327Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.0: active requests=0, bytes read=86" May 27 02:49:48.019072 kubelet[3467]: E0527 02:49:48.018995 3467 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve 
reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 27 02:49:48.019648 kubelet[3467]: E0527 02:49:48.019088 3467 kuberuntime_image.go:42] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 27 02:49:48.019648 kubelet[3467]: E0527 02:49:48.019414 3467 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPat
h:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sdldh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-78d55f7ddc-4qxmb_calico-system(be3e00a4-5c9b-4f45-8c5d-bd5a86ce209c): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 
Forbidden" logger="UnhandledError" May 27 02:49:48.021188 kubelet[3467]: E0527 02:49:48.021105 3467 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-4qxmb" podUID="be3e00a4-5c9b-4f45-8c5d-bd5a86ce209c" May 27 02:49:49.808039 containerd[2001]: time="2025-05-27T02:49:49.807964368Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c38ce32c91dc9da74599ce377e4eb3d547e11a9d9f76c99ddb95b762ba1d8ec3\" id:\"0da0e870632e03ab9b24a536d609e0d376eeb113a0de17d7dca68c65f13fbf15\" pid:6029 exited_at:{seconds:1748314189 nanos:806354316}" May 27 02:49:52.342342 systemd[1]: Started sshd@24-172.31.27.90:22-139.178.68.195:50372.service - OpenSSH per-connection server daemon (139.178.68.195:50372). May 27 02:49:52.568872 sshd[6040]: Accepted publickey for core from 139.178.68.195 port 50372 ssh2: RSA SHA256:wB7DXbDl54cvXXypqfSM11xNUGSlmUqWSyH8J9Yllv0 May 27 02:49:52.573912 sshd-session[6040]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 02:49:52.589179 systemd-logind[1977]: New session 25 of user core. May 27 02:49:52.596267 systemd[1]: Started session-25.scope - Session 25 of User core. May 27 02:49:52.917876 sshd[6042]: Connection closed by 139.178.68.195 port 50372 May 27 02:49:52.918367 sshd-session[6040]: pam_unix(sshd:session): session closed for user core May 27 02:49:52.928368 systemd[1]: sshd@24-172.31.27.90:22-139.178.68.195:50372.service: Deactivated successfully. May 27 02:49:52.935204 systemd[1]: session-25.scope: Deactivated successfully. 
May 27 02:49:52.940944 systemd-logind[1977]: Session 25 logged out. Waiting for processes to exit. May 27 02:49:52.948666 systemd-logind[1977]: Removed session 25. May 27 02:49:53.357889 containerd[2001]: time="2025-05-27T02:49:53.356826853Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b30f334e9a6e9bbe68914e4c49e3f3292efd8d9e0745907c879d462a5c9a130b\" id:\"833df964bb3e98ab3c904f346de00ac07d530e7b79759bb05ecdac98605eef85\" pid:6066 exited_at:{seconds:1748314193 nanos:356458345}" May 27 02:49:53.823997 kubelet[3467]: E0527 02:49:53.823310 3467 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-7f8f586ffd-2th8f" podUID="caa565e1-9a46-42db-9b86-2b08e49a62f0" May 27 02:49:57.959284 systemd[1]: Started sshd@25-172.31.27.90:22-139.178.68.195:56358.service - OpenSSH per-connection server daemon (139.178.68.195:56358). 
May 27 02:49:58.165673 sshd[6079]: Accepted publickey for core from 139.178.68.195 port 56358 ssh2: RSA SHA256:wB7DXbDl54cvXXypqfSM11xNUGSlmUqWSyH8J9Yllv0 May 27 02:49:58.169819 sshd-session[6079]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 02:49:58.180753 systemd-logind[1977]: New session 26 of user core. May 27 02:49:58.189134 systemd[1]: Started session-26.scope - Session 26 of User core. May 27 02:49:58.538766 sshd[6081]: Connection closed by 139.178.68.195 port 56358 May 27 02:49:58.539251 sshd-session[6079]: pam_unix(sshd:session): session closed for user core May 27 02:49:58.549407 systemd[1]: session-26.scope: Deactivated successfully. May 27 02:49:58.552895 systemd[1]: sshd@25-172.31.27.90:22-139.178.68.195:56358.service: Deactivated successfully. May 27 02:49:58.563959 systemd-logind[1977]: Session 26 logged out. Waiting for processes to exit. May 27 02:49:58.569586 systemd-logind[1977]: Removed session 26. May 27 02:50:02.817935 kubelet[3467]: E0527 02:50:02.817850 3467 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-4qxmb" podUID="be3e00a4-5c9b-4f45-8c5d-bd5a86ce209c" May 27 02:50:03.583709 systemd[1]: Started sshd@26-172.31.27.90:22-139.178.68.195:53494.service - OpenSSH per-connection server daemon (139.178.68.195:53494). 
May 27 02:50:03.792649 sshd[6094]: Accepted publickey for core from 139.178.68.195 port 53494 ssh2: RSA SHA256:wB7DXbDl54cvXXypqfSM11xNUGSlmUqWSyH8J9Yllv0 May 27 02:50:03.795606 sshd-session[6094]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 02:50:03.806257 systemd-logind[1977]: New session 27 of user core. May 27 02:50:03.813556 systemd[1]: Started session-27.scope - Session 27 of User core. May 27 02:50:04.112938 sshd[6096]: Connection closed by 139.178.68.195 port 53494 May 27 02:50:04.113601 sshd-session[6094]: pam_unix(sshd:session): session closed for user core May 27 02:50:04.121118 systemd[1]: sshd@26-172.31.27.90:22-139.178.68.195:53494.service: Deactivated successfully. May 27 02:50:04.127497 systemd[1]: session-27.scope: Deactivated successfully. May 27 02:50:04.133799 systemd-logind[1977]: Session 27 logged out. Waiting for processes to exit. May 27 02:50:04.141940 systemd-logind[1977]: Removed session 27. May 27 02:50:07.822342 kubelet[3467]: E0527 02:50:07.822205 3467 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous 
token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-7f8f586ffd-2th8f" podUID="caa565e1-9a46-42db-9b86-2b08e49a62f0" May 27 02:50:09.153260 systemd[1]: Started sshd@27-172.31.27.90:22-139.178.68.195:53498.service - OpenSSH per-connection server daemon (139.178.68.195:53498). May 27 02:50:09.369496 sshd[6111]: Accepted publickey for core from 139.178.68.195 port 53498 ssh2: RSA SHA256:wB7DXbDl54cvXXypqfSM11xNUGSlmUqWSyH8J9Yllv0 May 27 02:50:09.373593 sshd-session[6111]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 02:50:09.383883 systemd-logind[1977]: New session 28 of user core. May 27 02:50:09.394393 systemd[1]: Started session-28.scope - Session 28 of User core. May 27 02:50:09.716596 sshd[6113]: Connection closed by 139.178.68.195 port 53498 May 27 02:50:09.717512 sshd-session[6111]: pam_unix(sshd:session): session closed for user core May 27 02:50:09.726985 systemd[1]: session-28.scope: Deactivated successfully. May 27 02:50:09.733430 systemd[1]: sshd@27-172.31.27.90:22-139.178.68.195:53498.service: Deactivated successfully. May 27 02:50:09.735560 systemd-logind[1977]: Session 28 logged out. Waiting for processes to exit. May 27 02:50:09.745728 systemd-logind[1977]: Removed session 28. 
May 27 02:50:12.518691 containerd[2001]: time="2025-05-27T02:50:12.518507877Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c38ce32c91dc9da74599ce377e4eb3d547e11a9d9f76c99ddb95b762ba1d8ec3\" id:\"ade71f751b8998889cd994f3a38d7d4d3775bbd2072146af49813e7fecc63d43\" pid:6138 exited_at:{seconds:1748314212 nanos:518090529}" May 27 02:50:17.816590 kubelet[3467]: E0527 02:50:17.816500 3467 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-4qxmb" podUID="be3e00a4-5c9b-4f45-8c5d-bd5a86ce209c" May 27 02:50:22.816550 containerd[2001]: time="2025-05-27T02:50:22.816495656Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\"" May 27 02:50:23.027459 containerd[2001]: time="2025-05-27T02:50:23.027375929Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 27 02:50:23.029695 containerd[2001]: time="2025-05-27T02:50:23.029625329Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to 
https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" May 27 02:50:23.029852 containerd[2001]: time="2025-05-27T02:50:23.029752973Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.0: active requests=0, bytes read=86" May 27 02:50:23.030106 kubelet[3467]: E0527 02:50:23.030043 3467 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 27 02:50:23.030910 kubelet[3467]: E0527 02:50:23.030118 3467 kuberuntime_image.go:42] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 27 02:50:23.030910 kubelet[3467]: E0527 02:50:23.030311 3467 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.0,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:6721adc515ad48b18fccbca3f868b1ac,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-fck8c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7f8f586ffd-2th8f_calico-system(caa565e1-9a46-42db-9b86-2b08e49a62f0): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 27 02:50:23.033126 containerd[2001]: 
time="2025-05-27T02:50:23.033043553Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\"" May 27 02:50:23.233509 containerd[2001]: time="2025-05-27T02:50:23.233436810Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 27 02:50:23.236215 containerd[2001]: time="2025-05-27T02:50:23.236063262Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" May 27 02:50:23.236733 containerd[2001]: time="2025-05-27T02:50:23.236150970Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.0: active requests=0, bytes read=86" May 27 02:50:23.237340 kubelet[3467]: E0527 02:50:23.237115 3467 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 27 02:50:23.237340 kubelet[3467]: E0527 02:50:23.237285 3467 kuberuntime_image.go:42] "Failed to pull image" err="failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 27 02:50:23.237696 kubelet[3467]: E0527 02:50:23.237579 3467 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fck8c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMou
nt:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7f8f586ffd-2th8f_calico-system(caa565e1-9a46-42db-9b86-2b08e49a62f0): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 27 02:50:23.238998 kubelet[3467]: E0527 02:50:23.238895 3467 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-7f8f586ffd-2th8f" podUID="caa565e1-9a46-42db-9b86-2b08e49a62f0" May 27 02:50:23.303768 containerd[2001]: 
time="2025-05-27T02:50:23.303690690Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b30f334e9a6e9bbe68914e4c49e3f3292efd8d9e0745907c879d462a5c9a130b\" id:\"3339fb65fc5c0b2f8cd97adb0775c8b0c35eaa998e5d12adf2ac66e00b6c698d\" pid:6168 exited_at:{seconds:1748314223 nanos:302553942}" May 27 02:50:23.724493 systemd[1]: cri-containerd-21654bbe386a508627d109c6b4cbeb2d735f6b97bbd0afc3fa0a23fc411e7e57.scope: Deactivated successfully. May 27 02:50:23.725123 systemd[1]: cri-containerd-21654bbe386a508627d109c6b4cbeb2d735f6b97bbd0afc3fa0a23fc411e7e57.scope: Consumed 5.702s CPU time, 64.8M memory peak, 64K read from disk. May 27 02:50:23.739293 containerd[2001]: time="2025-05-27T02:50:23.739096400Z" level=info msg="received exit event container_id:\"21654bbe386a508627d109c6b4cbeb2d735f6b97bbd0afc3fa0a23fc411e7e57\" id:\"21654bbe386a508627d109c6b4cbeb2d735f6b97bbd0afc3fa0a23fc411e7e57\" pid:3127 exit_status:1 exited_at:{seconds:1748314223 nanos:737677508}" May 27 02:50:23.740173 containerd[2001]: time="2025-05-27T02:50:23.739988288Z" level=info msg="TaskExit event in podsandbox handler container_id:\"21654bbe386a508627d109c6b4cbeb2d735f6b97bbd0afc3fa0a23fc411e7e57\" id:\"21654bbe386a508627d109c6b4cbeb2d735f6b97bbd0afc3fa0a23fc411e7e57\" pid:3127 exit_status:1 exited_at:{seconds:1748314223 nanos:737677508}" May 27 02:50:23.787130 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-21654bbe386a508627d109c6b4cbeb2d735f6b97bbd0afc3fa0a23fc411e7e57-rootfs.mount: Deactivated successfully. 
May 27 02:50:24.726850 kubelet[3467]: I0527 02:50:24.726553 3467 scope.go:117] "RemoveContainer" containerID="21654bbe386a508627d109c6b4cbeb2d735f6b97bbd0afc3fa0a23fc411e7e57" May 27 02:50:24.731884 containerd[2001]: time="2025-05-27T02:50:24.731117673Z" level=info msg="CreateContainer within sandbox \"0da6ee294ba46c9abe74c31d2938c09adb0f2efa489a7a86ac632d01c084857e\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" May 27 02:50:24.750410 containerd[2001]: time="2025-05-27T02:50:24.750338337Z" level=info msg="Container 47caf0acb063f7887afab26303bef198515b07d148fea7ea083079539365a9cd: CDI devices from CRI Config.CDIDevices: []" May 27 02:50:24.771913 containerd[2001]: time="2025-05-27T02:50:24.771769209Z" level=info msg="CreateContainer within sandbox \"0da6ee294ba46c9abe74c31d2938c09adb0f2efa489a7a86ac632d01c084857e\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"47caf0acb063f7887afab26303bef198515b07d148fea7ea083079539365a9cd\"" May 27 02:50:24.772861 containerd[2001]: time="2025-05-27T02:50:24.772763313Z" level=info msg="StartContainer for \"47caf0acb063f7887afab26303bef198515b07d148fea7ea083079539365a9cd\"" May 27 02:50:24.775754 containerd[2001]: time="2025-05-27T02:50:24.775672173Z" level=info msg="connecting to shim 47caf0acb063f7887afab26303bef198515b07d148fea7ea083079539365a9cd" address="unix:///run/containerd/s/ba456ecbb3a039f4ee54aca62e497413e4f8d6fa2ce46dda3be8afb4effcaaad" protocol=ttrpc version=3 May 27 02:50:24.817191 systemd[1]: Started cri-containerd-47caf0acb063f7887afab26303bef198515b07d148fea7ea083079539365a9cd.scope - libcontainer container 47caf0acb063f7887afab26303bef198515b07d148fea7ea083079539365a9cd. May 27 02:50:24.883605 systemd[1]: cri-containerd-ee63982a68f1fc2b5adbf865ca249ffd0e829f98e2f0fbbb1abb7ea208e60c8d.scope: Deactivated successfully. 
May 27 02:50:24.884389 systemd[1]: cri-containerd-ee63982a68f1fc2b5adbf865ca249ffd0e829f98e2f0fbbb1abb7ea208e60c8d.scope: Consumed 22.890s CPU time, 89.4M memory peak. May 27 02:50:24.894135 containerd[2001]: time="2025-05-27T02:50:24.894069010Z" level=info msg="received exit event container_id:\"ee63982a68f1fc2b5adbf865ca249ffd0e829f98e2f0fbbb1abb7ea208e60c8d\" id:\"ee63982a68f1fc2b5adbf865ca249ffd0e829f98e2f0fbbb1abb7ea208e60c8d\" pid:3790 exit_status:1 exited_at:{seconds:1748314224 nanos:890734990}" May 27 02:50:24.896599 containerd[2001]: time="2025-05-27T02:50:24.896536546Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ee63982a68f1fc2b5adbf865ca249ffd0e829f98e2f0fbbb1abb7ea208e60c8d\" id:\"ee63982a68f1fc2b5adbf865ca249ffd0e829f98e2f0fbbb1abb7ea208e60c8d\" pid:3790 exit_status:1 exited_at:{seconds:1748314224 nanos:890734990}" May 27 02:50:24.938870 containerd[2001]: time="2025-05-27T02:50:24.938790118Z" level=info msg="StartContainer for \"47caf0acb063f7887afab26303bef198515b07d148fea7ea083079539365a9cd\" returns successfully" May 27 02:50:24.962102 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ee63982a68f1fc2b5adbf865ca249ffd0e829f98e2f0fbbb1abb7ea208e60c8d-rootfs.mount: Deactivated successfully. 
May 27 02:50:25.739142 kubelet[3467]: I0527 02:50:25.739089 3467 scope.go:117] "RemoveContainer" containerID="ee63982a68f1fc2b5adbf865ca249ffd0e829f98e2f0fbbb1abb7ea208e60c8d" May 27 02:50:25.742900 containerd[2001]: time="2025-05-27T02:50:25.742562902Z" level=info msg="CreateContainer within sandbox \"4f62bb13d215bf0003910aee0a0b609716e187ff06216de0083c69af08dc0a5e\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}" May 27 02:50:25.762892 containerd[2001]: time="2025-05-27T02:50:25.761362282Z" level=info msg="Container 36e399402f0db3fb0d5c141ed136b3e5b49e22d88ef64321c61a4695d1d2cf1e: CDI devices from CRI Config.CDIDevices: []" May 27 02:50:25.780900 containerd[2001]: time="2025-05-27T02:50:25.780796990Z" level=info msg="CreateContainer within sandbox \"4f62bb13d215bf0003910aee0a0b609716e187ff06216de0083c69af08dc0a5e\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"36e399402f0db3fb0d5c141ed136b3e5b49e22d88ef64321c61a4695d1d2cf1e\"" May 27 02:50:25.782576 containerd[2001]: time="2025-05-27T02:50:25.782529202Z" level=info msg="StartContainer for \"36e399402f0db3fb0d5c141ed136b3e5b49e22d88ef64321c61a4695d1d2cf1e\"" May 27 02:50:25.785177 containerd[2001]: time="2025-05-27T02:50:25.785098618Z" level=info msg="connecting to shim 36e399402f0db3fb0d5c141ed136b3e5b49e22d88ef64321c61a4695d1d2cf1e" address="unix:///run/containerd/s/9eb22858d27a83f571d963a28854bb658ffe8a1903f640d7dd5e66cf5903b114" protocol=ttrpc version=3 May 27 02:50:25.839217 systemd[1]: Started cri-containerd-36e399402f0db3fb0d5c141ed136b3e5b49e22d88ef64321c61a4695d1d2cf1e.scope - libcontainer container 36e399402f0db3fb0d5c141ed136b3e5b49e22d88ef64321c61a4695d1d2cf1e. 
May 27 02:50:25.918124 containerd[2001]: time="2025-05-27T02:50:25.918039503Z" level=info msg="StartContainer for \"36e399402f0db3fb0d5c141ed136b3e5b49e22d88ef64321c61a4695d1d2cf1e\" returns successfully"
May 27 02:50:28.229872 kubelet[3467]: E0527 02:50:28.229360 3467 controller.go:195] "Failed to update lease" err="Put \"https://172.31.27.90:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-27-90?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
May 27 02:50:29.661726 systemd[1]: cri-containerd-ddf3c31bef3691df2bc144f26dbd4212728ce25f8ac67affd5924b372ea48b26.scope: Deactivated successfully.
May 27 02:50:29.662296 systemd[1]: cri-containerd-ddf3c31bef3691df2bc144f26dbd4212728ce25f8ac67affd5924b372ea48b26.scope: Consumed 5.652s CPU time, 22.4M memory peak, 88K read from disk.
May 27 02:50:29.666816 containerd[2001]: time="2025-05-27T02:50:29.666625838Z" level=info msg="received exit event container_id:\"ddf3c31bef3691df2bc144f26dbd4212728ce25f8ac67affd5924b372ea48b26\" id:\"ddf3c31bef3691df2bc144f26dbd4212728ce25f8ac67affd5924b372ea48b26\" pid:3196 exit_status:1 exited_at:{seconds:1748314229 nanos:666069050}"
May 27 02:50:29.666816 containerd[2001]: time="2025-05-27T02:50:29.666737138Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ddf3c31bef3691df2bc144f26dbd4212728ce25f8ac67affd5924b372ea48b26\" id:\"ddf3c31bef3691df2bc144f26dbd4212728ce25f8ac67affd5924b372ea48b26\" pid:3196 exit_status:1 exited_at:{seconds:1748314229 nanos:666069050}"
May 27 02:50:29.709574 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ddf3c31bef3691df2bc144f26dbd4212728ce25f8ac67affd5924b372ea48b26-rootfs.mount: Deactivated successfully.
May 27 02:50:29.760607 kubelet[3467]: I0527 02:50:29.760553 3467 scope.go:117] "RemoveContainer" containerID="ddf3c31bef3691df2bc144f26dbd4212728ce25f8ac67affd5924b372ea48b26"
May 27 02:50:29.769216 containerd[2001]: time="2025-05-27T02:50:29.769110134Z" level=info msg="CreateContainer within sandbox \"098418c46eab4ce4ff60284c0d6b650194fe305f55fb36a6b443df1f9d5a6904\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
May 27 02:50:29.793502 containerd[2001]: time="2025-05-27T02:50:29.793439006Z" level=info msg="Container 18cf45c9e95b13327586d2d586e246178282de05d89121ed7c050c6740d49d92: CDI devices from CRI Config.CDIDevices: []"
May 27 02:50:29.802705 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2683886371.mount: Deactivated successfully.
May 27 02:50:29.814520 containerd[2001]: time="2025-05-27T02:50:29.814461542Z" level=info msg="CreateContainer within sandbox \"098418c46eab4ce4ff60284c0d6b650194fe305f55fb36a6b443df1f9d5a6904\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"18cf45c9e95b13327586d2d586e246178282de05d89121ed7c050c6740d49d92\""
May 27 02:50:29.816877 containerd[2001]: time="2025-05-27T02:50:29.815351810Z" level=info msg="StartContainer for \"18cf45c9e95b13327586d2d586e246178282de05d89121ed7c050c6740d49d92\""
May 27 02:50:29.817795 containerd[2001]: time="2025-05-27T02:50:29.817735394Z" level=info msg="connecting to shim 18cf45c9e95b13327586d2d586e246178282de05d89121ed7c050c6740d49d92" address="unix:///run/containerd/s/bdd260172242c4b9902cf38e0de3c5ba22aa077aa82f11b85f2390f8de7c5368" protocol=ttrpc version=3
May 27 02:50:29.864441 systemd[1]: Started cri-containerd-18cf45c9e95b13327586d2d586e246178282de05d89121ed7c050c6740d49d92.scope - libcontainer container 18cf45c9e95b13327586d2d586e246178282de05d89121ed7c050c6740d49d92.
May 27 02:50:29.949569 containerd[2001]: time="2025-05-27T02:50:29.949193247Z" level=info msg="StartContainer for \"18cf45c9e95b13327586d2d586e246178282de05d89121ed7c050c6740d49d92\" returns successfully"
May 27 02:50:32.816702 containerd[2001]: time="2025-05-27T02:50:32.816431693Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\""
May 27 02:50:33.002272 containerd[2001]: time="2025-05-27T02:50:33.002168018Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io
May 27 02:50:33.004447 containerd[2001]: time="2025-05-27T02:50:33.004387586Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden"
May 27 02:50:33.004655 containerd[2001]: time="2025-05-27T02:50:33.004536194Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.0: active requests=0, bytes read=86"
May 27 02:50:33.004755 kubelet[3467]: E0527 02:50:33.004704 3467 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0"
May 27 02:50:33.005293 kubelet[3467]: E0527 02:50:33.004766 3467 kuberuntime_image.go:42] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0"
May 27 02:50:33.005293 kubelet[3467]: E0527 02:50:33.004979 3467 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sdldh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-78d55f7ddc-4qxmb_calico-system(be3e00a4-5c9b-4f45-8c5d-bd5a86ce209c): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError"
May 27 02:50:33.006277 kubelet[3467]: E0527 02:50:33.006219 3467 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-4qxmb" podUID="be3e00a4-5c9b-4f45-8c5d-bd5a86ce209c"