Sep 9 05:10:42.128456 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083]
Sep 9 05:10:42.128501 kernel: Linux version 6.12.45-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT Tue Sep 9 03:38:34 -00 2025
Sep 9 05:10:42.128526 kernel: KASLR disabled due to lack of seed
Sep 9 05:10:42.128542 kernel: efi: EFI v2.7 by EDK II
Sep 9 05:10:42.128557 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7a731a98 MEMRESERVE=0x78557598
Sep 9 05:10:42.128572 kernel: secureboot: Secure boot disabled
Sep 9 05:10:42.128588 kernel: ACPI: Early table checksum verification disabled
Sep 9 05:10:42.129054 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON)
Sep 9 05:10:42.129084 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013)
Sep 9 05:10:42.129100 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001)
Sep 9 05:10:42.129117 kernel: ACPI: DSDT 0x0000000078640000 00159D (v02 AMAZON AMZNDSDT 00000001 INTL 20160527)
Sep 9 05:10:42.129139 kernel: ACPI: FACS 0x0000000078630000 000040
Sep 9 05:10:42.129154 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Sep 9 05:10:42.129170 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001)
Sep 9 05:10:42.129283 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001)
Sep 9 05:10:42.129793 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001)
Sep 9 05:10:42.129837 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Sep 9 05:10:42.129854 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001)
Sep 9 05:10:42.129870 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001)
Sep 9 05:10:42.129886 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200
Sep 9 05:10:42.129902 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200')
Sep 9 05:10:42.129918 kernel: printk: legacy bootconsole [uart0] enabled
Sep 9 05:10:42.129934 kernel: ACPI: Use ACPI SPCR as default console: No
Sep 9 05:10:42.129950 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff]
Sep 9 05:10:42.129966 kernel: NODE_DATA(0) allocated [mem 0x4b584ca00-0x4b5853fff]
Sep 9 05:10:42.129982 kernel: Zone ranges:
Sep 9 05:10:42.129997 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Sep 9 05:10:42.130017 kernel: DMA32 empty
Sep 9 05:10:42.130032 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff]
Sep 9 05:10:42.130047 kernel: Device empty
Sep 9 05:10:42.130063 kernel: Movable zone start for each node
Sep 9 05:10:42.130078 kernel: Early memory node ranges
Sep 9 05:10:42.130094 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff]
Sep 9 05:10:42.130109 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff]
Sep 9 05:10:42.130125 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff]
Sep 9 05:10:42.130140 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff]
Sep 9 05:10:42.130155 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff]
Sep 9 05:10:42.130171 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff]
Sep 9 05:10:42.130240 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff]
Sep 9 05:10:42.130264 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff]
Sep 9 05:10:42.130287 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff]
Sep 9 05:10:42.130305 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges
Sep 9 05:10:42.130322 kernel: cma: Reserved 16 MiB at 0x000000007f000000 on node -1
Sep 9 05:10:42.130339 kernel: psci: probing for conduit method from ACPI.
Sep 9 05:10:42.130361 kernel: psci: PSCIv1.0 detected in firmware.
Sep 9 05:10:42.130378 kernel: psci: Using standard PSCI v0.2 function IDs
Sep 9 05:10:42.130395 kernel: psci: Trusted OS migration not required
Sep 9 05:10:42.130411 kernel: psci: SMC Calling Convention v1.1
Sep 9 05:10:42.130428 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000001)
Sep 9 05:10:42.130444 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168
Sep 9 05:10:42.130461 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096
Sep 9 05:10:42.130478 kernel: pcpu-alloc: [0] 0 [0] 1
Sep 9 05:10:42.130494 kernel: Detected PIPT I-cache on CPU0
Sep 9 05:10:42.130511 kernel: CPU features: detected: GIC system register CPU interface
Sep 9 05:10:42.130527 kernel: CPU features: detected: Spectre-v2
Sep 9 05:10:42.130548 kernel: CPU features: detected: Spectre-v3a
Sep 9 05:10:42.130564 kernel: CPU features: detected: Spectre-BHB
Sep 9 05:10:42.130581 kernel: CPU features: detected: ARM erratum 1742098
Sep 9 05:10:42.130597 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923
Sep 9 05:10:42.130613 kernel: alternatives: applying boot alternatives
Sep 9 05:10:42.130632 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=1e9320fd787e27d01e3b8a1acb67e0c640346112c469b7a652e9dcfc9271bf90
Sep 9 05:10:42.130651 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 9 05:10:42.130668 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 9 05:10:42.130684 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 9 05:10:42.130700 kernel: Fallback order for Node 0: 0
Sep 9 05:10:42.130723 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1007616
Sep 9 05:10:42.130740 kernel: Policy zone: Normal
Sep 9 05:10:42.130756 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 9 05:10:42.130772 kernel: software IO TLB: area num 2.
Sep 9 05:10:42.130788 kernel: software IO TLB: mapped [mem 0x000000006c5f0000-0x00000000705f0000] (64MB)
Sep 9 05:10:42.130805 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Sep 9 05:10:42.130821 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 9 05:10:42.130839 kernel: rcu: RCU event tracing is enabled.
Sep 9 05:10:42.130855 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Sep 9 05:10:42.130872 kernel: Trampoline variant of Tasks RCU enabled.
Sep 9 05:10:42.130889 kernel: Tracing variant of Tasks RCU enabled.
Sep 9 05:10:42.130906 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 9 05:10:42.130926 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Sep 9 05:10:42.130943 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Sep 9 05:10:42.130960 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Sep 9 05:10:42.130976 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Sep 9 05:10:42.130992 kernel: GICv3: 96 SPIs implemented
Sep 9 05:10:42.131009 kernel: GICv3: 0 Extended SPIs implemented
Sep 9 05:10:42.131025 kernel: Root IRQ handler: gic_handle_irq
Sep 9 05:10:42.131041 kernel: GICv3: GICv3 features: 16 PPIs
Sep 9 05:10:42.131057 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0
Sep 9 05:10:42.131074 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000
Sep 9 05:10:42.131090 kernel: ITS [mem 0x10080000-0x1009ffff]
Sep 9 05:10:42.131107 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000f0000 (indirect, esz 8, psz 64K, shr 1)
Sep 9 05:10:42.131128 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @400100000 (flat, esz 8, psz 64K, shr 1)
Sep 9 05:10:42.131144 kernel: GICv3: using LPI property table @0x0000000400110000
Sep 9 05:10:42.131160 kernel: ITS: Using hypervisor restricted LPI range [128]
Sep 9 05:10:42.131209 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000400120000
Sep 9 05:10:42.131230 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 9 05:10:42.131246 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt).
Sep 9 05:10:42.131263 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns
Sep 9 05:10:42.131280 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns
Sep 9 05:10:42.131297 kernel: Console: colour dummy device 80x25
Sep 9 05:10:42.131314 kernel: printk: legacy console [tty1] enabled
Sep 9 05:10:42.131331 kernel: ACPI: Core revision 20240827
Sep 9 05:10:42.131354 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333)
Sep 9 05:10:42.131371 kernel: pid_max: default: 32768 minimum: 301
Sep 9 05:10:42.131388 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Sep 9 05:10:42.131404 kernel: landlock: Up and running.
Sep 9 05:10:42.131421 kernel: SELinux: Initializing.
Sep 9 05:10:42.131437 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 9 05:10:42.131454 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 9 05:10:42.131471 kernel: rcu: Hierarchical SRCU implementation.
Sep 9 05:10:42.131488 kernel: rcu: Max phase no-delay instances is 400.
Sep 9 05:10:42.131509 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Sep 9 05:10:42.131525 kernel: Remapping and enabling EFI services.
Sep 9 05:10:42.131542 kernel: smp: Bringing up secondary CPUs ...
Sep 9 05:10:42.131558 kernel: Detected PIPT I-cache on CPU1
Sep 9 05:10:42.131575 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000
Sep 9 05:10:42.131592 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000400130000
Sep 9 05:10:42.131609 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083]
Sep 9 05:10:42.131626 kernel: smp: Brought up 1 node, 2 CPUs
Sep 9 05:10:42.131643 kernel: SMP: Total of 2 processors activated.
Sep 9 05:10:42.131671 kernel: CPU: All CPU(s) started at EL1
Sep 9 05:10:42.131689 kernel: CPU features: detected: 32-bit EL0 Support
Sep 9 05:10:42.131711 kernel: CPU features: detected: 32-bit EL1 Support
Sep 9 05:10:42.131728 kernel: CPU features: detected: CRC32 instructions
Sep 9 05:10:42.131745 kernel: alternatives: applying system-wide alternatives
Sep 9 05:10:42.131763 kernel: Memory: 3797032K/4030464K available (11136K kernel code, 2436K rwdata, 9060K rodata, 38976K init, 1038K bss, 212088K reserved, 16384K cma-reserved)
Sep 9 05:10:42.131782 kernel: devtmpfs: initialized
Sep 9 05:10:42.131803 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 9 05:10:42.131821 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Sep 9 05:10:42.131839 kernel: 17040 pages in range for non-PLT usage
Sep 9 05:10:42.131856 kernel: 508560 pages in range for PLT usage
Sep 9 05:10:42.131873 kernel: pinctrl core: initialized pinctrl subsystem
Sep 9 05:10:42.131890 kernel: SMBIOS 3.0.0 present.
Sep 9 05:10:42.131908 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018
Sep 9 05:10:42.131925 kernel: DMI: Memory slots populated: 0/0
Sep 9 05:10:42.131942 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 9 05:10:42.131964 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Sep 9 05:10:42.131982 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Sep 9 05:10:42.131999 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Sep 9 05:10:42.132017 kernel: audit: initializing netlink subsys (disabled)
Sep 9 05:10:42.132034 kernel: audit: type=2000 audit(0.227:1): state=initialized audit_enabled=0 res=1
Sep 9 05:10:42.132051 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 9 05:10:42.132069 kernel: cpuidle: using governor menu
Sep 9 05:10:42.132086 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Sep 9 05:10:42.132104 kernel: ASID allocator initialised with 65536 entries
Sep 9 05:10:42.132125 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 9 05:10:42.132143 kernel: Serial: AMBA PL011 UART driver
Sep 9 05:10:42.132160 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 9 05:10:42.132205 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Sep 9 05:10:42.132226 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Sep 9 05:10:42.132244 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Sep 9 05:10:42.132262 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 9 05:10:42.132280 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Sep 9 05:10:42.132298 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Sep 9 05:10:42.132321 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Sep 9 05:10:42.132339 kernel: ACPI: Added _OSI(Module Device)
Sep 9 05:10:42.132356 kernel: ACPI: Added _OSI(Processor Device)
Sep 9 05:10:42.132373 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 9 05:10:42.132391 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 9 05:10:42.132408 kernel: ACPI: Interpreter enabled
Sep 9 05:10:42.132426 kernel: ACPI: Using GIC for interrupt routing
Sep 9 05:10:42.132444 kernel: ACPI: MCFG table detected, 1 entries
Sep 9 05:10:42.132462 kernel: ACPI: CPU0 has been hot-added
Sep 9 05:10:42.132484 kernel: ACPI: CPU1 has been hot-added
Sep 9 05:10:42.132502 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-0f])
Sep 9 05:10:42.132786 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Sep 9 05:10:42.132969 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Sep 9 05:10:42.133147 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Sep 9 05:10:42.133793 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x20ffffff] reserved by PNP0C02:00
Sep 9 05:10:42.133984 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x20ffffff] for [bus 00-0f]
Sep 9 05:10:42.134018 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window]
Sep 9 05:10:42.134037 kernel: acpiphp: Slot [1] registered
Sep 9 05:10:42.134055 kernel: acpiphp: Slot [2] registered
Sep 9 05:10:42.134073 kernel: acpiphp: Slot [3] registered
Sep 9 05:10:42.134091 kernel: acpiphp: Slot [4] registered
Sep 9 05:10:42.134108 kernel: acpiphp: Slot [5] registered
Sep 9 05:10:42.134126 kernel: acpiphp: Slot [6] registered
Sep 9 05:10:42.134143 kernel: acpiphp: Slot [7] registered
Sep 9 05:10:42.134161 kernel: acpiphp: Slot [8] registered
Sep 9 05:10:42.135931 kernel: acpiphp: Slot [9] registered
Sep 9 05:10:42.135984 kernel: acpiphp: Slot [10] registered
Sep 9 05:10:42.136003 kernel: acpiphp: Slot [11] registered
Sep 9 05:10:42.136020 kernel: acpiphp: Slot [12] registered
Sep 9 05:10:42.136038 kernel: acpiphp: Slot [13] registered
Sep 9 05:10:42.136056 kernel: acpiphp: Slot [14] registered
Sep 9 05:10:42.136074 kernel: acpiphp: Slot [15] registered
Sep 9 05:10:42.136092 kernel: acpiphp: Slot [16] registered
Sep 9 05:10:42.136109 kernel: acpiphp: Slot [17] registered
Sep 9 05:10:42.136126 kernel: acpiphp: Slot [18] registered
Sep 9 05:10:42.136149 kernel: acpiphp: Slot [19] registered
Sep 9 05:10:42.136166 kernel: acpiphp: Slot [20] registered
Sep 9 05:10:42.136212 kernel: acpiphp: Slot [21] registered
Sep 9 05:10:42.136231 kernel: acpiphp: Slot [22] registered
Sep 9 05:10:42.136248 kernel: acpiphp: Slot [23] registered
Sep 9 05:10:42.136265 kernel: acpiphp: Slot [24] registered
Sep 9 05:10:42.136283 kernel: acpiphp: Slot [25] registered
Sep 9 05:10:42.136300 kernel: acpiphp: Slot [26] registered
Sep 9 05:10:42.136317 kernel: acpiphp: Slot [27] registered
Sep 9 05:10:42.136334 kernel: acpiphp: Slot [28] registered
Sep 9 05:10:42.136358 kernel: acpiphp: Slot [29] registered
Sep 9 05:10:42.136375 kernel: acpiphp: Slot [30] registered
Sep 9 05:10:42.136392 kernel: acpiphp: Slot [31] registered
Sep 9 05:10:42.136409 kernel: PCI host bridge to bus 0000:00
Sep 9 05:10:42.136633 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window]
Sep 9 05:10:42.136799 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Sep 9 05:10:42.136964 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window]
Sep 9 05:10:42.137131 kernel: pci_bus 0000:00: root bus resource [bus 00-0f]
Sep 9 05:10:42.142252 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000 conventional PCI endpoint
Sep 9 05:10:42.142506 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003 conventional PCI endpoint
Sep 9 05:10:42.142699 kernel: pci 0000:00:01.0: BAR 0 [mem 0x80118000-0x80118fff]
Sep 9 05:10:42.142898 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 PCIe Root Complex Integrated Endpoint
Sep 9 05:10:42.143086 kernel: pci 0000:00:04.0: BAR 0 [mem 0x80114000-0x80117fff]
Sep 9 05:10:42.143315 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold
Sep 9 05:10:42.143534 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 PCIe Root Complex Integrated Endpoint
Sep 9 05:10:42.143723 kernel: pci 0000:00:05.0: BAR 0 [mem 0x80110000-0x80113fff]
Sep 9 05:10:42.143908 kernel: pci 0000:00:05.0: BAR 2 [mem 0x80000000-0x800fffff pref]
Sep 9 05:10:42.144093 kernel: pci 0000:00:05.0: BAR 4 [mem 0x80100000-0x8010ffff]
Sep 9 05:10:42.147289 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold
Sep 9 05:10:42.147540 kernel: pci 0000:00:05.0: BAR 2 [mem 0x80000000-0x800fffff pref]: assigned
Sep 9 05:10:42.147734 kernel: pci 0000:00:05.0: BAR 4 [mem 0x80100000-0x8010ffff]: assigned
Sep 9 05:10:42.147936 kernel: pci 0000:00:04.0: BAR 0 [mem 0x80110000-0x80113fff]: assigned
Sep 9 05:10:42.148124 kernel: pci 0000:00:05.0: BAR 0 [mem 0x80114000-0x80117fff]: assigned
Sep 9 05:10:42.148396 kernel: pci 0000:00:01.0: BAR 0 [mem 0x80118000-0x80118fff]: assigned
Sep 9 05:10:42.148575 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window]
Sep 9 05:10:42.148747 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Sep 9 05:10:42.148918 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window]
Sep 9 05:10:42.148945 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Sep 9 05:10:42.148974 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Sep 9 05:10:42.148994 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Sep 9 05:10:42.149013 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Sep 9 05:10:42.149032 kernel: iommu: Default domain type: Translated
Sep 9 05:10:42.149050 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Sep 9 05:10:42.149069 kernel: efivars: Registered efivars operations
Sep 9 05:10:42.149087 kernel: vgaarb: loaded
Sep 9 05:10:42.149105 kernel: clocksource: Switched to clocksource arch_sys_counter
Sep 9 05:10:42.149124 kernel: VFS: Disk quotas dquot_6.6.0
Sep 9 05:10:42.149146 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 9 05:10:42.149169 kernel: pnp: PnP ACPI init
Sep 9 05:10:42.150897 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved
Sep 9 05:10:42.151031 kernel: pnp: PnP ACPI: found 1 devices
Sep 9 05:10:42.151399 kernel: NET: Registered PF_INET protocol family
Sep 9 05:10:42.151555 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 9 05:10:42.151577 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Sep 9 05:10:42.151595 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 9 05:10:42.151622 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 9 05:10:42.151640 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Sep 9 05:10:42.151658 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Sep 9 05:10:42.151675 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 9 05:10:42.151693 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 9 05:10:42.151710 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 9 05:10:42.151728 kernel: PCI: CLS 0 bytes, default 64
Sep 9 05:10:42.151745 kernel: kvm [1]: HYP mode not available
Sep 9 05:10:42.151762 kernel: Initialise system trusted keyrings
Sep 9 05:10:42.151784 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Sep 9 05:10:42.151802 kernel: Key type asymmetric registered
Sep 9 05:10:42.151820 kernel: Asymmetric key parser 'x509' registered
Sep 9 05:10:42.151838 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Sep 9 05:10:42.151856 kernel: io scheduler mq-deadline registered
Sep 9 05:10:42.151874 kernel: io scheduler kyber registered
Sep 9 05:10:42.151892 kernel: io scheduler bfq registered
Sep 9 05:10:42.152113 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered
Sep 9 05:10:42.152144 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Sep 9 05:10:42.152163 kernel: ACPI: button: Power Button [PWRB]
Sep 9 05:10:42.152201 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1
Sep 9 05:10:42.152222 kernel: ACPI: button: Sleep Button [SLPB]
Sep 9 05:10:42.152240 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 9 05:10:42.152259 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Sep 9 05:10:42.152451 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012)
Sep 9 05:10:42.152477 kernel: printk: legacy console [ttyS0] disabled
Sep 9 05:10:42.152495 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A
Sep 9 05:10:42.152519 kernel: printk: legacy console [ttyS0] enabled
Sep 9 05:10:42.152537 kernel: printk: legacy bootconsole [uart0] disabled
Sep 9 05:10:42.152555 kernel: thunder_xcv, ver 1.0
Sep 9 05:10:42.152572 kernel: thunder_bgx, ver 1.0
Sep 9 05:10:42.152590 kernel: nicpf, ver 1.0
Sep 9 05:10:42.152607 kernel: nicvf, ver 1.0
Sep 9 05:10:42.152794 kernel: rtc-efi rtc-efi.0: registered as rtc0
Sep 9 05:10:42.152967 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-09-09T05:10:41 UTC (1757394641)
Sep 9 05:10:42.152996 kernel: hid: raw HID events driver (C) Jiri Kosina
Sep 9 05:10:42.153014 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 (0,80000003) counters available
Sep 9 05:10:42.153032 kernel: watchdog: NMI not fully supported
Sep 9 05:10:42.153049 kernel: NET: Registered PF_INET6 protocol family
Sep 9 05:10:42.153067 kernel: watchdog: Hard watchdog permanently disabled
Sep 9 05:10:42.153084 kernel: Segment Routing with IPv6
Sep 9 05:10:42.153101 kernel: In-situ OAM (IOAM) with IPv6
Sep 9 05:10:42.153119 kernel: NET: Registered PF_PACKET protocol family
Sep 9 05:10:42.153136 kernel: Key type dns_resolver registered
Sep 9 05:10:42.153157 kernel: registered taskstats version 1
Sep 9 05:10:42.153217 kernel: Loading compiled-in X.509 certificates
Sep 9 05:10:42.153241 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.45-flatcar: 44d1e8b5c5ffbaa3cedd99c03d41580671fabec5'
Sep 9 05:10:42.153258 kernel: Demotion targets for Node 0: null
Sep 9 05:10:42.153276 kernel: Key type .fscrypt registered
Sep 9 05:10:42.153293 kernel: Key type fscrypt-provisioning registered
Sep 9 05:10:42.153310 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 9 05:10:42.153327 kernel: ima: Allocated hash algorithm: sha1
Sep 9 05:10:42.153345 kernel: ima: No architecture policies found
Sep 9 05:10:42.153368 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Sep 9 05:10:42.153386 kernel: clk: Disabling unused clocks
Sep 9 05:10:42.153403 kernel: PM: genpd: Disabling unused power domains
Sep 9 05:10:42.153420 kernel: Warning: unable to open an initial console.
Sep 9 05:10:42.153438 kernel: Freeing unused kernel memory: 38976K
Sep 9 05:10:42.153456 kernel: Run /init as init process
Sep 9 05:10:42.153473 kernel: with arguments:
Sep 9 05:10:42.153490 kernel: /init
Sep 9 05:10:42.153507 kernel: with environment:
Sep 9 05:10:42.153524 kernel: HOME=/
Sep 9 05:10:42.153545 kernel: TERM=linux
Sep 9 05:10:42.153562 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 9 05:10:42.153581 systemd[1]: Successfully made /usr/ read-only.
Sep 9 05:10:42.153605 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 9 05:10:42.153625 systemd[1]: Detected virtualization amazon.
Sep 9 05:10:42.153644 systemd[1]: Detected architecture arm64.
Sep 9 05:10:42.153663 systemd[1]: Running in initrd.
Sep 9 05:10:42.153686 systemd[1]: No hostname configured, using default hostname.
Sep 9 05:10:42.153706 systemd[1]: Hostname set to .
Sep 9 05:10:42.153725 systemd[1]: Initializing machine ID from VM UUID.
Sep 9 05:10:42.153743 systemd[1]: Queued start job for default target initrd.target.
Sep 9 05:10:42.153763 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 9 05:10:42.153782 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 9 05:10:42.153802 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep 9 05:10:42.153822 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 9 05:10:42.153846 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 9 05:10:42.153867 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep 9 05:10:42.153888 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 9 05:10:42.153908 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep 9 05:10:42.153927 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 9 05:10:42.153947 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 9 05:10:42.153966 systemd[1]: Reached target paths.target - Path Units.
Sep 9 05:10:42.153990 systemd[1]: Reached target slices.target - Slice Units.
Sep 9 05:10:42.154009 systemd[1]: Reached target swap.target - Swaps.
Sep 9 05:10:42.154028 systemd[1]: Reached target timers.target - Timer Units.
Sep 9 05:10:42.154047 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 9 05:10:42.154067 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 9 05:10:42.154086 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 9 05:10:42.154105 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Sep 9 05:10:42.154124 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 9 05:10:42.154148 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 9 05:10:42.154167 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 9 05:10:42.154207 systemd[1]: Reached target sockets.target - Socket Units.
Sep 9 05:10:42.154227 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Sep 9 05:10:42.154247 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 9 05:10:42.154267 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Sep 9 05:10:42.154286 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Sep 9 05:10:42.154306 systemd[1]: Starting systemd-fsck-usr.service...
Sep 9 05:10:42.154325 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 9 05:10:42.154350 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 9 05:10:42.154370 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 9 05:10:42.154389 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Sep 9 05:10:42.154410 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 9 05:10:42.154433 systemd[1]: Finished systemd-fsck-usr.service.
Sep 9 05:10:42.154453 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 9 05:10:42.154508 systemd-journald[257]: Collecting audit messages is disabled.
Sep 9 05:10:42.154550 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 9 05:10:42.154574 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 9 05:10:42.154607 kernel: Bridge firewalling registered
Sep 9 05:10:42.154631 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 9 05:10:42.154651 systemd-journald[257]: Journal started
Sep 9 05:10:42.154691 systemd-journald[257]: Runtime Journal (/run/log/journal/ec251c600aefb4a79316bab665077163) is 8M, max 75.3M, 67.3M free.
Sep 9 05:10:42.086737 systemd-modules-load[259]: Inserted module 'overlay'
Sep 9 05:10:42.159109 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 9 05:10:42.138955 systemd-modules-load[259]: Inserted module 'br_netfilter'
Sep 9 05:10:42.167042 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 9 05:10:42.174790 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 9 05:10:42.192436 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 9 05:10:42.201846 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 9 05:10:42.203999 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 9 05:10:42.236587 systemd-tmpfiles[283]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Sep 9 05:10:42.239602 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 9 05:10:42.257125 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 9 05:10:42.261902 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 9 05:10:42.272599 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 9 05:10:42.278832 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Sep 9 05:10:42.298385 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 9 05:10:42.332144 dracut-cmdline[299]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=1e9320fd787e27d01e3b8a1acb67e0c640346112c469b7a652e9dcfc9271bf90
Sep 9 05:10:42.394673 systemd-resolved[300]: Positive Trust Anchors:
Sep 9 05:10:42.394708 systemd-resolved[300]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 9 05:10:42.394770 systemd-resolved[300]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 9 05:10:42.494215 kernel: SCSI subsystem initialized
Sep 9 05:10:42.501220 kernel: Loading iSCSI transport class v2.0-870.
Sep 9 05:10:42.514576 kernel: iscsi: registered transport (tcp) Sep 9 05:10:42.536421 kernel: iscsi: registered transport (qla4xxx) Sep 9 05:10:42.536495 kernel: QLogic iSCSI HBA Driver Sep 9 05:10:42.572409 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 9 05:10:42.610545 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 9 05:10:42.619798 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 9 05:10:42.682204 kernel: random: crng init done Sep 9 05:10:42.682510 systemd-resolved[300]: Defaulting to hostname 'linux'. Sep 9 05:10:42.691096 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 9 05:10:42.694974 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 9 05:10:42.720280 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Sep 9 05:10:42.726628 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Sep 9 05:10:42.816225 kernel: raid6: neonx8 gen() 6411 MB/s Sep 9 05:10:42.833218 kernel: raid6: neonx4 gen() 6448 MB/s Sep 9 05:10:42.850210 kernel: raid6: neonx2 gen() 5342 MB/s Sep 9 05:10:42.867210 kernel: raid6: neonx1 gen() 3911 MB/s Sep 9 05:10:42.884209 kernel: raid6: int64x8 gen() 3610 MB/s Sep 9 05:10:42.901215 kernel: raid6: int64x4 gen() 3659 MB/s Sep 9 05:10:42.918208 kernel: raid6: int64x2 gen() 3537 MB/s Sep 9 05:10:42.936168 kernel: raid6: int64x1 gen() 2756 MB/s Sep 9 05:10:42.936224 kernel: raid6: using algorithm neonx4 gen() 6448 MB/s Sep 9 05:10:42.954232 kernel: raid6: .... 
xor() 4887 MB/s, rmw enabled Sep 9 05:10:42.954309 kernel: raid6: using neon recovery algorithm Sep 9 05:10:42.962226 kernel: xor: measuring software checksum speed Sep 9 05:10:42.964604 kernel: 8regs : 11810 MB/sec Sep 9 05:10:42.964680 kernel: 32regs : 12168 MB/sec Sep 9 05:10:42.965989 kernel: arm64_neon : 8938 MB/sec Sep 9 05:10:42.966042 kernel: xor: using function: 32regs (12168 MB/sec) Sep 9 05:10:43.060222 kernel: Btrfs loaded, zoned=no, fsverity=no Sep 9 05:10:43.073254 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Sep 9 05:10:43.076858 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 9 05:10:43.136928 systemd-udevd[508]: Using default interface naming scheme 'v255'. Sep 9 05:10:43.146876 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 9 05:10:43.165746 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Sep 9 05:10:43.210148 dracut-pre-trigger[520]: rd.md=0: removing MD RAID activation Sep 9 05:10:43.257289 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Sep 9 05:10:43.267481 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 9 05:10:43.419885 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 9 05:10:43.428614 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Sep 9 05:10:43.600725 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Sep 9 05:10:43.600796 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012) Sep 9 05:10:43.601502 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 9 05:10:43.614553 kernel: ena 0000:00:05.0: ENA device version: 0.10 Sep 9 05:10:43.614845 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1 Sep 9 05:10:43.602552 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. 
Sep 9 05:10:43.614422 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Sep 9 05:10:43.627986 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80114000, mac addr 06:f3:69:82:bd:99 Sep 9 05:10:43.630562 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 9 05:10:43.630575 (udev-worker)[567]: Network interface NamePolicy= disabled on kernel command line. Sep 9 05:10:43.633965 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Sep 9 05:10:43.655884 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Sep 9 05:10:43.655930 kernel: nvme nvme0: pci function 0000:00:04.0 Sep 9 05:10:43.667232 kernel: nvme nvme0: 2/0/0 default/read/poll queues Sep 9 05:10:43.676317 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Sep 9 05:10:43.676385 kernel: GPT:9289727 != 16777215 Sep 9 05:10:43.676409 kernel: GPT:Alternate GPT header not at the end of the disk. Sep 9 05:10:43.677538 kernel: GPT:9289727 != 16777215 Sep 9 05:10:43.678611 kernel: GPT: Use GNU Parted to correct GPT errors. Sep 9 05:10:43.680221 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Sep 9 05:10:43.682860 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 9 05:10:43.751225 kernel: nvme nvme0: using unchecked data buffer Sep 9 05:10:43.848857 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM. Sep 9 05:10:43.912598 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A. Sep 9 05:10:43.919371 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A. Sep 9 05:10:43.947588 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Sep 9 05:10:43.984264 disk-uuid[676]: Primary Header is updated. 
Sep 9 05:10:43.984264 disk-uuid[676]: Secondary Entries is updated. Sep 9 05:10:43.984264 disk-uuid[676]: Secondary Header is updated. Sep 9 05:10:44.039565 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Sep 9 05:10:44.072776 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT. Sep 9 05:10:44.365663 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Sep 9 05:10:44.379656 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Sep 9 05:10:44.385612 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 9 05:10:44.385753 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 9 05:10:44.395879 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Sep 9 05:10:44.434242 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Sep 9 05:10:45.022441 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Sep 9 05:10:45.024766 disk-uuid[678]: The operation has completed successfully. Sep 9 05:10:45.207953 systemd[1]: disk-uuid.service: Deactivated successfully. Sep 9 05:10:45.208212 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Sep 9 05:10:45.292582 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Sep 9 05:10:45.311625 sh[958]: Success Sep 9 05:10:45.341047 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Sep 9 05:10:45.341121 kernel: device-mapper: uevent: version 1.0.3 Sep 9 05:10:45.341212 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Sep 9 05:10:45.355225 kernel: device-mapper: verity: sha256 using shash "sha256-ce" Sep 9 05:10:45.495094 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Sep 9 05:10:45.510984 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... 
Sep 9 05:10:45.521273 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Sep 9 05:10:45.559243 kernel: BTRFS: device fsid 72a0ff35-b4e8-4772-9a8d-d0e90c3fb364 devid 1 transid 37 /dev/mapper/usr (254:0) scanned by mount (981) Sep 9 05:10:45.563544 kernel: BTRFS info (device dm-0): first mount of filesystem 72a0ff35-b4e8-4772-9a8d-d0e90c3fb364 Sep 9 05:10:45.563612 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Sep 9 05:10:45.702434 kernel: BTRFS info (device dm-0): enabling ssd optimizations Sep 9 05:10:45.702503 kernel: BTRFS info (device dm-0): disabling log replay at mount time Sep 9 05:10:45.702529 kernel: BTRFS info (device dm-0): enabling free space tree Sep 9 05:10:45.771308 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Sep 9 05:10:45.771735 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Sep 9 05:10:45.780287 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Sep 9 05:10:45.781448 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Sep 9 05:10:45.795911 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
Sep 9 05:10:45.855223 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1014) Sep 9 05:10:45.860004 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem ea68277c-dabb-41e9-9258-b2fe475f0ae6 Sep 9 05:10:45.860084 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Sep 9 05:10:45.902789 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Sep 9 05:10:45.902863 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Sep 9 05:10:45.913233 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem ea68277c-dabb-41e9-9258-b2fe475f0ae6 Sep 9 05:10:45.914152 systemd[1]: Finished ignition-setup.service - Ignition (setup). Sep 9 05:10:45.924400 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Sep 9 05:10:45.987850 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 9 05:10:45.998340 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 9 05:10:46.081073 systemd-networkd[1150]: lo: Link UP Sep 9 05:10:46.081624 systemd-networkd[1150]: lo: Gained carrier Sep 9 05:10:46.084584 systemd-networkd[1150]: Enumeration completed Sep 9 05:10:46.086627 systemd-networkd[1150]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 9 05:10:46.086634 systemd-networkd[1150]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 9 05:10:46.088372 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 9 05:10:46.099998 systemd[1]: Reached target network.target - Network. 
Sep 9 05:10:46.114590 systemd-networkd[1150]: eth0: Link UP Sep 9 05:10:46.114602 systemd-networkd[1150]: eth0: Gained carrier Sep 9 05:10:46.114624 systemd-networkd[1150]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 9 05:10:46.140253 systemd-networkd[1150]: eth0: DHCPv4 address 172.31.30.120/20, gateway 172.31.16.1 acquired from 172.31.16.1 Sep 9 05:10:46.611275 ignition[1106]: Ignition 2.22.0 Sep 9 05:10:46.611305 ignition[1106]: Stage: fetch-offline Sep 9 05:10:46.615274 ignition[1106]: no configs at "/usr/lib/ignition/base.d" Sep 9 05:10:46.615310 ignition[1106]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Sep 9 05:10:46.617770 ignition[1106]: Ignition finished successfully Sep 9 05:10:46.620996 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Sep 9 05:10:46.633673 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Sep 9 05:10:46.682373 ignition[1161]: Ignition 2.22.0 Sep 9 05:10:46.682411 ignition[1161]: Stage: fetch Sep 9 05:10:46.683018 ignition[1161]: no configs at "/usr/lib/ignition/base.d" Sep 9 05:10:46.683042 ignition[1161]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Sep 9 05:10:46.683234 ignition[1161]: PUT http://169.254.169.254/latest/api/token: attempt #1 Sep 9 05:10:46.697559 ignition[1161]: PUT result: OK Sep 9 05:10:46.704322 ignition[1161]: parsed url from cmdline: "" Sep 9 05:10:46.704338 ignition[1161]: no config URL provided Sep 9 05:10:46.704352 ignition[1161]: reading system config file "/usr/lib/ignition/user.ign" Sep 9 05:10:46.704376 ignition[1161]: no config at "/usr/lib/ignition/user.ign" Sep 9 05:10:46.704406 ignition[1161]: PUT http://169.254.169.254/latest/api/token: attempt #1 Sep 9 05:10:46.706485 ignition[1161]: PUT result: OK Sep 9 05:10:46.706579 ignition[1161]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1 Sep 9 05:10:46.716348 ignition[1161]: GET result: 
OK Sep 9 05:10:46.716536 ignition[1161]: parsing config with SHA512: 660e44d13f3bc34a38145fb6803d015733a099aefb51991d9b307af629f7cd822b62c2ab1ef947d29d83bbf68724d74b96d2d5e61fb461a9bc4a82568037060d Sep 9 05:10:46.731950 unknown[1161]: fetched base config from "system" Sep 9 05:10:46.733039 ignition[1161]: fetch: fetch complete Sep 9 05:10:46.731983 unknown[1161]: fetched base config from "system" Sep 9 05:10:46.733061 ignition[1161]: fetch: fetch passed Sep 9 05:10:46.731998 unknown[1161]: fetched user config from "aws" Sep 9 05:10:46.734401 ignition[1161]: Ignition finished successfully Sep 9 05:10:46.748254 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Sep 9 05:10:46.757540 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Sep 9 05:10:46.823312 ignition[1167]: Ignition 2.22.0 Sep 9 05:10:46.823803 ignition[1167]: Stage: kargs Sep 9 05:10:46.824351 ignition[1167]: no configs at "/usr/lib/ignition/base.d" Sep 9 05:10:46.824374 ignition[1167]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Sep 9 05:10:46.824504 ignition[1167]: PUT http://169.254.169.254/latest/api/token: attempt #1 Sep 9 05:10:46.837948 ignition[1167]: PUT result: OK Sep 9 05:10:46.848294 ignition[1167]: kargs: kargs passed Sep 9 05:10:46.848390 ignition[1167]: Ignition finished successfully Sep 9 05:10:46.854510 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Sep 9 05:10:46.861673 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Sep 9 05:10:46.910691 ignition[1173]: Ignition 2.22.0 Sep 9 05:10:46.910723 ignition[1173]: Stage: disks Sep 9 05:10:46.911285 ignition[1173]: no configs at "/usr/lib/ignition/base.d" Sep 9 05:10:46.911309 ignition[1173]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Sep 9 05:10:46.911453 ignition[1173]: PUT http://169.254.169.254/latest/api/token: attempt #1 Sep 9 05:10:46.913798 ignition[1173]: PUT result: OK Sep 9 05:10:46.928480 ignition[1173]: disks: disks passed Sep 9 05:10:46.928573 ignition[1173]: Ignition finished successfully Sep 9 05:10:46.933640 systemd[1]: Finished ignition-disks.service - Ignition (disks). Sep 9 05:10:46.939676 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Sep 9 05:10:46.942936 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Sep 9 05:10:46.951521 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 9 05:10:46.954317 systemd[1]: Reached target sysinit.target - System Initialization. Sep 9 05:10:46.961900 systemd[1]: Reached target basic.target - Basic System. Sep 9 05:10:46.966479 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Sep 9 05:10:47.087836 systemd-fsck[1181]: ROOT: clean, 15/553520 files, 52789/553472 blocks Sep 9 05:10:47.093322 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Sep 9 05:10:47.102823 systemd[1]: Mounting sysroot.mount - /sysroot... Sep 9 05:10:47.273209 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 88574756-967d-44b3-be66-46689c8baf27 r/w with ordered data mode. Quota mode: none. Sep 9 05:10:47.274785 systemd[1]: Mounted sysroot.mount - /sysroot. Sep 9 05:10:47.279787 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Sep 9 05:10:47.287089 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 9 05:10:47.297447 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... 
Sep 9 05:10:47.303883 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Sep 9 05:10:47.303983 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Sep 9 05:10:47.304030 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Sep 9 05:10:47.329879 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Sep 9 05:10:47.332994 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Sep 9 05:10:47.356223 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1200) Sep 9 05:10:47.360730 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem ea68277c-dabb-41e9-9258-b2fe475f0ae6 Sep 9 05:10:47.360790 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Sep 9 05:10:47.367920 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Sep 9 05:10:47.367984 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Sep 9 05:10:47.370473 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Sep 9 05:10:47.805608 initrd-setup-root[1225]: cut: /sysroot/etc/passwd: No such file or directory Sep 9 05:10:47.833683 initrd-setup-root[1232]: cut: /sysroot/etc/group: No such file or directory Sep 9 05:10:47.843526 initrd-setup-root[1239]: cut: /sysroot/etc/shadow: No such file or directory Sep 9 05:10:47.852486 initrd-setup-root[1246]: cut: /sysroot/etc/gshadow: No such file or directory Sep 9 05:10:47.914367 systemd-networkd[1150]: eth0: Gained IPv6LL Sep 9 05:10:48.187358 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Sep 9 05:10:48.192982 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Sep 9 05:10:48.204391 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... 
Sep 9 05:10:48.224625 systemd[1]: sysroot-oem.mount: Deactivated successfully. Sep 9 05:10:48.231253 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem ea68277c-dabb-41e9-9258-b2fe475f0ae6 Sep 9 05:10:48.266595 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Sep 9 05:10:48.285930 ignition[1313]: INFO : Ignition 2.22.0 Sep 9 05:10:48.288157 ignition[1313]: INFO : Stage: mount Sep 9 05:10:48.288157 ignition[1313]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 9 05:10:48.288157 ignition[1313]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Sep 9 05:10:48.288157 ignition[1313]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Sep 9 05:10:48.299156 ignition[1313]: INFO : PUT result: OK Sep 9 05:10:48.307420 ignition[1313]: INFO : mount: mount passed Sep 9 05:10:48.309750 ignition[1313]: INFO : Ignition finished successfully Sep 9 05:10:48.314524 systemd[1]: Finished ignition-mount.service - Ignition (mount). Sep 9 05:10:48.321926 systemd[1]: Starting ignition-files.service - Ignition (files)... Sep 9 05:10:48.362544 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 9 05:10:48.408224 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1326) Sep 9 05:10:48.413205 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem ea68277c-dabb-41e9-9258-b2fe475f0ae6 Sep 9 05:10:48.413270 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Sep 9 05:10:48.419932 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Sep 9 05:10:48.420009 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Sep 9 05:10:48.423974 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Sep 9 05:10:48.489805 ignition[1343]: INFO : Ignition 2.22.0 Sep 9 05:10:48.489805 ignition[1343]: INFO : Stage: files Sep 9 05:10:48.495220 ignition[1343]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 9 05:10:48.495220 ignition[1343]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Sep 9 05:10:48.495220 ignition[1343]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Sep 9 05:10:48.495220 ignition[1343]: INFO : PUT result: OK Sep 9 05:10:48.509469 ignition[1343]: DEBUG : files: compiled without relabeling support, skipping Sep 9 05:10:48.509469 ignition[1343]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Sep 9 05:10:48.509469 ignition[1343]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Sep 9 05:10:48.520337 ignition[1343]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Sep 9 05:10:48.520337 ignition[1343]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Sep 9 05:10:48.520337 ignition[1343]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Sep 9 05:10:48.512861 unknown[1343]: wrote ssh authorized keys file for user: core Sep 9 05:10:48.533133 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" Sep 9 05:10:48.533133 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1 Sep 9 05:10:48.632116 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Sep 9 05:10:49.288162 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" Sep 9 05:10:49.294927 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Sep 9 
05:10:49.299464 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Sep 9 05:10:49.303892 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Sep 9 05:10:49.308326 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Sep 9 05:10:49.308326 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 9 05:10:49.317004 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 9 05:10:49.317004 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 9 05:10:49.326141 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 9 05:10:49.335065 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Sep 9 05:10:49.339754 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Sep 9 05:10:49.344325 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Sep 9 05:10:49.353008 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Sep 9 05:10:49.359684 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" 
Sep 9 05:10:49.359684 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw: attempt #1 Sep 9 05:10:49.885399 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Sep 9 05:10:50.261633 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Sep 9 05:10:50.261633 ignition[1343]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Sep 9 05:10:50.272459 ignition[1343]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 9 05:10:50.277703 ignition[1343]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 9 05:10:50.282634 ignition[1343]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Sep 9 05:10:50.282634 ignition[1343]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Sep 9 05:10:50.282634 ignition[1343]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Sep 9 05:10:50.282634 ignition[1343]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Sep 9 05:10:50.282634 ignition[1343]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Sep 9 05:10:50.282634 ignition[1343]: INFO : files: files passed Sep 9 05:10:50.282634 ignition[1343]: INFO : Ignition finished successfully Sep 9 05:10:50.309267 systemd[1]: Finished ignition-files.service - Ignition (files). Sep 9 05:10:50.316717 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... 
Sep 9 05:10:50.328754 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Sep 9 05:10:50.341935 systemd[1]: ignition-quench.service: Deactivated successfully. Sep 9 05:10:50.344883 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Sep 9 05:10:50.366535 initrd-setup-root-after-ignition[1373]: grep: Sep 9 05:10:50.366535 initrd-setup-root-after-ignition[1377]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 9 05:10:50.372724 initrd-setup-root-after-ignition[1373]: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 9 05:10:50.372724 initrd-setup-root-after-ignition[1373]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Sep 9 05:10:50.382353 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 9 05:10:50.389975 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Sep 9 05:10:50.397028 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Sep 9 05:10:50.478901 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Sep 9 05:10:50.479209 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Sep 9 05:10:50.491973 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Sep 9 05:10:50.496758 systemd[1]: Reached target initrd.target - Initrd Default Target. Sep 9 05:10:50.502323 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Sep 9 05:10:50.507761 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Sep 9 05:10:50.561476 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 9 05:10:50.570382 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... 
Sep 9 05:10:50.609690 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Sep 9 05:10:50.615779 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 9 05:10:50.619160 systemd[1]: Stopped target timers.target - Timer Units. Sep 9 05:10:50.626625 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Sep 9 05:10:50.627059 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 9 05:10:50.635056 systemd[1]: Stopped target initrd.target - Initrd Default Target. Sep 9 05:10:50.640507 systemd[1]: Stopped target basic.target - Basic System. Sep 9 05:10:50.644696 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Sep 9 05:10:50.650355 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Sep 9 05:10:50.653301 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Sep 9 05:10:50.661904 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Sep 9 05:10:50.665575 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Sep 9 05:10:50.671483 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Sep 9 05:10:50.674951 systemd[1]: Stopped target sysinit.target - System Initialization. Sep 9 05:10:50.683273 systemd[1]: Stopped target local-fs.target - Local File Systems. Sep 9 05:10:50.690769 systemd[1]: Stopped target swap.target - Swaps. Sep 9 05:10:50.695079 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Sep 9 05:10:50.695455 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Sep 9 05:10:50.703025 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Sep 9 05:10:50.708653 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 9 05:10:50.711568 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. 
Sep 9 05:10:50.714688 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 9 05:10:50.718929 systemd[1]: dracut-initqueue.service: Deactivated successfully. Sep 9 05:10:50.719256 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Sep 9 05:10:50.724094 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Sep 9 05:10:50.724780 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 9 05:10:50.730399 systemd[1]: ignition-files.service: Deactivated successfully. Sep 9 05:10:50.730602 systemd[1]: Stopped ignition-files.service - Ignition (files). Sep 9 05:10:50.738606 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Sep 9 05:10:50.747067 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Sep 9 05:10:50.747346 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Sep 9 05:10:50.767538 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Sep 9 05:10:50.773301 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Sep 9 05:10:50.776367 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Sep 9 05:10:50.786487 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 9 05:10:50.789484 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Sep 9 05:10:50.805638 systemd[1]: initrd-cleanup.service: Deactivated successfully. Sep 9 05:10:50.808727 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. 
Sep 9 05:10:50.831671 ignition[1397]: INFO : Ignition 2.22.0 Sep 9 05:10:50.831671 ignition[1397]: INFO : Stage: umount Sep 9 05:10:50.843715 ignition[1397]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 9 05:10:50.843715 ignition[1397]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Sep 9 05:10:50.843715 ignition[1397]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Sep 9 05:10:50.843715 ignition[1397]: INFO : PUT result: OK Sep 9 05:10:50.840246 systemd[1]: sysroot-boot.mount: Deactivated successfully. Sep 9 05:10:50.860446 ignition[1397]: INFO : umount: umount passed Sep 9 05:10:50.860446 ignition[1397]: INFO : Ignition finished successfully Sep 9 05:10:50.858341 systemd[1]: ignition-mount.service: Deactivated successfully. Sep 9 05:10:50.858530 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Sep 9 05:10:50.863702 systemd[1]: ignition-disks.service: Deactivated successfully. Sep 9 05:10:50.863865 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Sep 9 05:10:50.869467 systemd[1]: ignition-kargs.service: Deactivated successfully. Sep 9 05:10:50.869577 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Sep 9 05:10:50.872154 systemd[1]: ignition-fetch.service: Deactivated successfully. Sep 9 05:10:50.872292 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Sep 9 05:10:50.875988 systemd[1]: Stopped target network.target - Network. Sep 9 05:10:50.882804 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Sep 9 05:10:50.882923 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Sep 9 05:10:50.888517 systemd[1]: Stopped target paths.target - Path Units. Sep 9 05:10:50.890944 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Sep 9 05:10:50.893400 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
Sep 9 05:10:50.893647 systemd[1]: Stopped target slices.target - Slice Units.
Sep 9 05:10:50.900985 systemd[1]: Stopped target sockets.target - Socket Units.
Sep 9 05:10:50.904282 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 9 05:10:50.904362 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Sep 9 05:10:50.914231 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 9 05:10:50.914309 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 9 05:10:50.918588 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 9 05:10:50.918699 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Sep 9 05:10:50.923555 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Sep 9 05:10:50.923642 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Sep 9 05:10:50.926681 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Sep 9 05:10:50.931366 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Sep 9 05:10:50.956525 systemd[1]: systemd-resolved.service: Deactivated successfully.
Sep 9 05:10:50.956747 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Sep 9 05:10:51.026368 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Sep 9 05:10:51.032498 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 9 05:10:51.035601 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Sep 9 05:10:51.044080 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Sep 9 05:10:51.048138 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Sep 9 05:10:51.051109 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep 9 05:10:51.051208 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Sep 9 05:10:51.065336 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Sep 9 05:10:51.068001 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep 9 05:10:51.068139 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 9 05:10:51.073774 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 9 05:10:51.073869 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 9 05:10:51.083359 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep 9 05:10:51.086391 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Sep 9 05:10:51.088147 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Sep 9 05:10:51.088335 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 9 05:10:51.099510 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 9 05:10:51.103436 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Sep 9 05:10:51.103599 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Sep 9 05:10:51.140097 systemd[1]: sysroot-boot.service: Deactivated successfully.
Sep 9 05:10:51.145911 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Sep 9 05:10:51.156915 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Sep 9 05:10:51.157559 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Sep 9 05:10:51.163894 systemd[1]: systemd-udevd.service: Deactivated successfully.
Sep 9 05:10:51.164198 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 9 05:10:51.170664 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Sep 9 05:10:51.170823 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Sep 9 05:10:51.177158 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Sep 9 05:10:51.177266 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 9 05:10:51.180598 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Sep 9 05:10:51.180696 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Sep 9 05:10:51.189663 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Sep 9 05:10:51.189772 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Sep 9 05:10:51.202937 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 9 05:10:51.203048 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 9 05:10:51.222124 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Sep 9 05:10:51.228499 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Sep 9 05:10:51.228650 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Sep 9 05:10:51.245397 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Sep 9 05:10:51.245514 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 9 05:10:51.252388 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 9 05:10:51.252492 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 9 05:10:51.261061 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Sep 9 05:10:51.261464 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Sep 9 05:10:51.261563 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Sep 9 05:10:51.262351 systemd[1]: network-cleanup.service: Deactivated successfully.
Sep 9 05:10:51.264145 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Sep 9 05:10:51.290459 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Sep 9 05:10:51.292365 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Sep 9 05:10:51.295440 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Sep 9 05:10:51.296933 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Sep 9 05:10:51.332151 systemd[1]: Switching root.
Sep 9 05:10:51.376844 systemd-journald[257]: Journal stopped
Sep 9 05:10:53.824687 systemd-journald[257]: Received SIGTERM from PID 1 (systemd).
Sep 9 05:10:53.824817 kernel: SELinux: policy capability network_peer_controls=1
Sep 9 05:10:53.824857 kernel: SELinux: policy capability open_perms=1
Sep 9 05:10:53.824886 kernel: SELinux: policy capability extended_socket_class=1
Sep 9 05:10:53.824914 kernel: SELinux: policy capability always_check_network=0
Sep 9 05:10:53.824942 kernel: SELinux: policy capability cgroup_seclabel=1
Sep 9 05:10:53.824970 kernel: SELinux: policy capability nnp_nosuid_transition=1
Sep 9 05:10:53.825006 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Sep 9 05:10:53.825036 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Sep 9 05:10:53.825067 kernel: SELinux: policy capability userspace_initial_context=0
Sep 9 05:10:53.825096 kernel: audit: type=1403 audit(1757394651.806:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Sep 9 05:10:53.825157 systemd[1]: Successfully loaded SELinux policy in 97.559ms.
Sep 9 05:10:53.825230 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 14.947ms.
Sep 9 05:10:53.825267 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 9 05:10:53.825297 systemd[1]: Detected virtualization amazon.
Sep 9 05:10:53.825327 systemd[1]: Detected architecture arm64.
Sep 9 05:10:53.825355 systemd[1]: Detected first boot.
Sep 9 05:10:53.825383 systemd[1]: Initializing machine ID from VM UUID.
Sep 9 05:10:53.825417 zram_generator::config[1456]: No configuration found.
Sep 9 05:10:53.825449 kernel: NET: Registered PF_VSOCK protocol family
Sep 9 05:10:53.825481 systemd[1]: Populated /etc with preset unit settings.
Sep 9 05:10:53.825512 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Sep 9 05:10:53.825543 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Sep 9 05:10:53.825574 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Sep 9 05:10:53.825604 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Sep 9 05:10:53.825635 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Sep 9 05:10:53.825669 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Sep 9 05:10:53.825701 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Sep 9 05:10:53.825731 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Sep 9 05:10:53.825763 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Sep 9 05:10:53.825794 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Sep 9 05:10:53.825823 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Sep 9 05:10:53.825856 systemd[1]: Created slice user.slice - User and Session Slice.
Sep 9 05:10:53.825884 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 9 05:10:53.825911 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 9 05:10:53.825943 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Sep 9 05:10:53.825971 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Sep 9 05:10:53.826002 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Sep 9 05:10:53.826032 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 9 05:10:53.826061 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Sep 9 05:10:53.826090 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 9 05:10:53.826121 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 9 05:10:53.826152 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Sep 9 05:10:53.830284 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Sep 9 05:10:53.830340 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Sep 9 05:10:53.830382 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Sep 9 05:10:53.830415 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 9 05:10:53.830446 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 9 05:10:53.830476 systemd[1]: Reached target slices.target - Slice Units.
Sep 9 05:10:53.830507 systemd[1]: Reached target swap.target - Swaps.
Sep 9 05:10:53.830537 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Sep 9 05:10:53.830566 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Sep 9 05:10:53.830600 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Sep 9 05:10:53.830631 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 9 05:10:53.830659 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 9 05:10:53.830688 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 9 05:10:53.830716 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Sep 9 05:10:53.830744 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Sep 9 05:10:53.830773 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Sep 9 05:10:53.830802 systemd[1]: Mounting media.mount - External Media Directory...
Sep 9 05:10:53.830830 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Sep 9 05:10:53.830861 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Sep 9 05:10:53.830889 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Sep 9 05:10:53.830921 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Sep 9 05:10:53.830951 systemd[1]: Reached target machines.target - Containers.
Sep 9 05:10:53.830981 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Sep 9 05:10:53.831009 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 9 05:10:53.831039 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 9 05:10:53.831067 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Sep 9 05:10:53.831099 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 9 05:10:53.831127 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 9 05:10:53.831154 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 9 05:10:53.831204 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Sep 9 05:10:53.831238 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 9 05:10:53.831267 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Sep 9 05:10:53.831295 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Sep 9 05:10:53.831324 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Sep 9 05:10:53.831357 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Sep 9 05:10:53.831388 systemd[1]: Stopped systemd-fsck-usr.service.
Sep 9 05:10:53.831417 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 9 05:10:53.831445 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 9 05:10:53.831473 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 9 05:10:53.831500 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 9 05:10:53.831528 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Sep 9 05:10:53.831558 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Sep 9 05:10:53.831587 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 9 05:10:53.831619 kernel: loop: module loaded
Sep 9 05:10:53.831647 systemd[1]: verity-setup.service: Deactivated successfully.
Sep 9 05:10:53.831674 systemd[1]: Stopped verity-setup.service.
Sep 9 05:10:53.831704 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Sep 9 05:10:53.831735 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Sep 9 05:10:53.831768 systemd[1]: Mounted media.mount - External Media Directory.
Sep 9 05:10:53.831796 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Sep 9 05:10:53.831827 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Sep 9 05:10:53.831855 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Sep 9 05:10:53.831885 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 9 05:10:53.831912 kernel: fuse: init (API version 7.41)
Sep 9 05:10:53.831942 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Sep 9 05:10:53.831972 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Sep 9 05:10:53.832000 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 9 05:10:53.832030 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 9 05:10:53.832060 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 9 05:10:53.832088 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 9 05:10:53.832115 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Sep 9 05:10:53.832143 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Sep 9 05:10:53.837209 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 9 05:10:53.837274 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 9 05:10:53.837358 systemd-journald[1546]: Collecting audit messages is disabled.
Sep 9 05:10:53.837415 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Sep 9 05:10:53.837446 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 9 05:10:53.837475 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 9 05:10:53.837503 systemd-journald[1546]: Journal started
Sep 9 05:10:53.837555 systemd-journald[1546]: Runtime Journal (/run/log/journal/ec251c600aefb4a79316bab665077163) is 8M, max 75.3M, 67.3M free.
Sep 9 05:10:53.201159 systemd[1]: Queued start job for default target multi-user.target.
Sep 9 05:10:53.213033 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
Sep 9 05:10:53.213872 systemd[1]: systemd-journald.service: Deactivated successfully.
Sep 9 05:10:53.846279 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 9 05:10:53.845439 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Sep 9 05:10:53.851801 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Sep 9 05:10:53.869713 kernel: ACPI: bus type drm_connector registered
Sep 9 05:10:53.870261 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 9 05:10:53.870726 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 9 05:10:53.889695 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 9 05:10:53.898349 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Sep 9 05:10:53.907629 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Sep 9 05:10:53.914641 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Sep 9 05:10:53.914718 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 9 05:10:53.924377 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Sep 9 05:10:53.939534 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Sep 9 05:10:53.944688 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 9 05:10:53.948457 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Sep 9 05:10:53.959016 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Sep 9 05:10:53.963561 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 9 05:10:53.965374 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Sep 9 05:10:53.969704 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 9 05:10:53.972116 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 9 05:10:53.985347 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Sep 9 05:10:53.993606 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Sep 9 05:10:54.005921 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Sep 9 05:10:54.011945 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Sep 9 05:10:54.033710 systemd-journald[1546]: Time spent on flushing to /var/log/journal/ec251c600aefb4a79316bab665077163 is 74.283ms for 931 entries.
Sep 9 05:10:54.033710 systemd-journald[1546]: System Journal (/var/log/journal/ec251c600aefb4a79316bab665077163) is 8M, max 195.6M, 187.6M free.
Sep 9 05:10:54.122026 systemd-journald[1546]: Received client request to flush runtime journal.
Sep 9 05:10:54.122120 kernel: loop0: detected capacity change from 0 to 119368
Sep 9 05:10:54.075353 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Sep 9 05:10:54.081310 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Sep 9 05:10:54.091590 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Sep 9 05:10:54.099289 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 9 05:10:54.127023 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Sep 9 05:10:54.161719 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 9 05:10:54.190313 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Sep 9 05:10:54.214246 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Sep 9 05:10:54.219415 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Sep 9 05:10:54.225568 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 9 05:10:54.279222 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Sep 9 05:10:54.291332 systemd-tmpfiles[1605]: ACLs are not supported, ignoring.
Sep 9 05:10:54.291372 systemd-tmpfiles[1605]: ACLs are not supported, ignoring.
Sep 9 05:10:54.298921 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 9 05:10:54.303615 kernel: loop1: detected capacity change from 0 to 207008
Sep 9 05:10:54.611216 kernel: loop2: detected capacity change from 0 to 61264
Sep 9 05:10:55.084034 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Sep 9 05:10:55.091603 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 9 05:10:55.127218 kernel: loop3: detected capacity change from 0 to 100632
Sep 9 05:10:55.150420 systemd-udevd[1612]: Using default interface naming scheme 'v255'.
Sep 9 05:10:55.205723 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 9 05:10:55.217409 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 9 05:10:55.247134 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Sep 9 05:10:55.364292 kernel: loop4: detected capacity change from 0 to 119368
Sep 9 05:10:55.403253 kernel: loop5: detected capacity change from 0 to 207008
Sep 9 05:10:55.438335 kernel: loop6: detected capacity change from 0 to 61264
Sep 9 05:10:55.441566 (udev-worker)[1642]: Network interface NamePolicy= disabled on kernel command line.
Sep 9 05:10:55.466499 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Sep 9 05:10:55.478492 kernel: loop7: detected capacity change from 0 to 100632
Sep 9 05:10:55.507464 (sd-merge)[1646]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'.
Sep 9 05:10:55.518249 (sd-merge)[1646]: Merged extensions into '/usr'.
Sep 9 05:10:55.530732 systemd[1]: Reload requested from client PID 1590 ('systemd-sysext') (unit systemd-sysext.service)...
Sep 9 05:10:55.530932 systemd[1]: Reloading...
Sep 9 05:10:55.737197 ldconfig[1585]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Sep 9 05:10:55.822226 zram_generator::config[1690]: No configuration found.
Sep 9 05:10:55.986082 systemd-networkd[1620]: lo: Link UP
Sep 9 05:10:55.986108 systemd-networkd[1620]: lo: Gained carrier
Sep 9 05:10:55.990014 systemd-networkd[1620]: Enumeration completed
Sep 9 05:10:55.991030 systemd-networkd[1620]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 9 05:10:55.991038 systemd-networkd[1620]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 9 05:10:56.016963 systemd-networkd[1620]: eth0: Link UP
Sep 9 05:10:56.017831 systemd-networkd[1620]: eth0: Gained carrier
Sep 9 05:10:56.017884 systemd-networkd[1620]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 9 05:10:56.040313 systemd-networkd[1620]: eth0: DHCPv4 address 172.31.30.120/20, gateway 172.31.16.1 acquired from 172.31.16.1
Sep 9 05:10:56.507527 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Sep 9 05:10:56.511996 systemd[1]: Reloading finished in 980 ms.
Sep 9 05:10:56.550725 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Sep 9 05:10:56.553986 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 9 05:10:56.559222 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Sep 9 05:10:56.562567 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Sep 9 05:10:56.642303 systemd[1]: Starting ensure-sysext.service...
Sep 9 05:10:56.649611 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Sep 9 05:10:56.661643 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Sep 9 05:10:56.668409 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Sep 9 05:10:56.671146 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 9 05:10:56.687637 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 9 05:10:56.724055 systemd[1]: Reload requested from client PID 1836 ('systemctl') (unit ensure-sysext.service)...
Sep 9 05:10:56.724244 systemd[1]: Reloading...
Sep 9 05:10:56.747443 systemd-tmpfiles[1840]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Sep 9 05:10:56.747538 systemd-tmpfiles[1840]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Sep 9 05:10:56.748378 systemd-tmpfiles[1840]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Sep 9 05:10:56.748870 systemd-tmpfiles[1840]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Sep 9 05:10:56.754350 systemd-tmpfiles[1840]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Sep 9 05:10:56.754958 systemd-tmpfiles[1840]: ACLs are not supported, ignoring.
Sep 9 05:10:56.755088 systemd-tmpfiles[1840]: ACLs are not supported, ignoring.
Sep 9 05:10:56.771220 systemd-tmpfiles[1840]: Detected autofs mount point /boot during canonicalization of boot.
Sep 9 05:10:56.771247 systemd-tmpfiles[1840]: Skipping /boot
Sep 9 05:10:56.792357 systemd-tmpfiles[1840]: Detected autofs mount point /boot during canonicalization of boot.
Sep 9 05:10:56.792383 systemd-tmpfiles[1840]: Skipping /boot
Sep 9 05:10:56.906217 zram_generator::config[1881]: No configuration found.
Sep 9 05:10:57.327918 systemd[1]: Reloading finished in 603 ms.
Sep 9 05:10:57.382647 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Sep 9 05:10:57.834361 systemd-networkd[1620]: eth0: Gained IPv6LL
Sep 9 05:10:57.899363 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Sep 9 05:10:57.903153 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Sep 9 05:10:57.907404 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 9 05:10:57.911267 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 9 05:10:57.930357 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Sep 9 05:10:57.937767 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Sep 9 05:10:57.945908 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Sep 9 05:10:57.962419 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 9 05:10:57.968874 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Sep 9 05:10:57.979902 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 9 05:10:57.985603 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 9 05:10:57.994739 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 9 05:10:58.003695 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 9 05:10:58.006811 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 9 05:10:58.007037 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 9 05:10:58.012870 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 9 05:10:58.013911 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 9 05:10:58.014330 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 9 05:10:58.023630 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 9 05:10:58.031734 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 9 05:10:58.034846 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 9 05:10:58.035069 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 9 05:10:58.035389 systemd[1]: Reached target time-set.target - System Time Set.
Sep 9 05:10:58.054308 systemd[1]: Finished ensure-sysext.service.
Sep 9 05:10:58.058700 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Sep 9 05:10:58.069859 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Sep 9 05:10:58.098422 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 9 05:10:58.100353 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 9 05:10:58.104512 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 9 05:10:58.111765 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 9 05:10:58.120106 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 9 05:10:58.121399 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 9 05:10:58.135075 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 9 05:10:58.137305 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 9 05:10:58.141660 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 9 05:10:58.142088 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 9 05:10:58.152979 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Sep 9 05:10:58.166045 systemd[1]: Finished systemd-update-done.service - Update is Completed. Sep 9 05:10:58.209799 augenrules[1973]: No rules Sep 9 05:10:58.214320 systemd[1]: audit-rules.service: Deactivated successfully. Sep 9 05:10:58.214761 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 9 05:10:58.218538 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Sep 9 05:10:58.222842 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 9 05:10:58.274281 systemd-resolved[1938]: Positive Trust Anchors: Sep 9 05:10:58.274311 systemd-resolved[1938]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 9 05:10:58.274372 systemd-resolved[1938]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 9 05:10:58.288074 systemd-resolved[1938]: Defaulting to hostname 'linux'. Sep 9 05:10:58.290964 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 9 05:10:58.293831 systemd[1]: Reached target network.target - Network. Sep 9 05:10:58.295841 systemd[1]: Reached target network-online.target - Network is Online. Sep 9 05:10:58.298713 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. 
Sep 9 05:10:58.301620 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 9 05:10:58.305040 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Sep 9 05:10:58.307967 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Sep 9 05:10:58.311705 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Sep 9 05:10:58.314774 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Sep 9 05:10:58.317804 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Sep 9 05:10:58.320709 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Sep 9 05:10:58.320753 systemd[1]: Reached target paths.target - Path Units.
Sep 9 05:10:58.323197 systemd[1]: Reached target timers.target - Timer Units.
Sep 9 05:10:58.326846 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Sep 9 05:10:58.332223 systemd[1]: Starting docker.socket - Docker Socket for the API...
Sep 9 05:10:58.340221 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Sep 9 05:10:58.343613 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Sep 9 05:10:58.347076 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Sep 9 05:10:58.353307 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Sep 9 05:10:58.356646 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Sep 9 05:10:58.360984 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Sep 9 05:10:58.363983 systemd[1]: Reached target sockets.target - Socket Units.
Sep 9 05:10:58.366375 systemd[1]: Reached target basic.target - Basic System.
Sep 9 05:10:58.368654 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Sep 9 05:10:58.368820 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Sep 9 05:10:58.370675 systemd[1]: Starting containerd.service - containerd container runtime...
Sep 9 05:10:58.378467 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Sep 9 05:10:58.387465 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Sep 9 05:10:58.397485 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Sep 9 05:10:58.402795 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Sep 9 05:10:58.410518 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Sep 9 05:10:58.413707 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Sep 9 05:10:58.423489 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 9 05:10:58.429918 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Sep 9 05:10:58.436550 systemd[1]: Started ntpd.service - Network Time Service.
Sep 9 05:10:58.453312 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Sep 9 05:10:58.479058 jq[1986]: false
Sep 9 05:10:58.469518 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Sep 9 05:10:58.480478 systemd[1]: Starting setup-oem.service - Setup OEM...
Sep 9 05:10:58.490584 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Sep 9 05:10:58.496519 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Sep 9 05:10:58.504700 systemd[1]: Starting systemd-logind.service - User Login Management...
Sep 9 05:10:58.509581 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Sep 9 05:10:58.511509 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Sep 9 05:10:58.518055 systemd[1]: Starting update-engine.service - Update Engine...
Sep 9 05:10:58.531061 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Sep 9 05:10:58.552714 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Sep 9 05:10:58.556514 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Sep 9 05:10:58.556950 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Sep 9 05:10:58.627422 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Sep 9 05:10:58.629297 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Sep 9 05:10:58.648984 extend-filesystems[1987]: Found /dev/nvme0n1p6
Sep 9 05:10:58.682944 jq[2001]: true
Sep 9 05:10:58.666842 (ntainerd)[2018]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Sep 9 05:10:58.683961 dbus-daemon[1984]: [system] SELinux support is enabled
Sep 9 05:10:58.684260 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Sep 9 05:10:58.694821 systemd[1]: motdgen.service: Deactivated successfully.
Sep 9 05:10:58.695452 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Sep 9 05:10:58.701945 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Sep 9 05:10:58.702058 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Sep 9 05:10:58.707300 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Sep 9 05:10:58.707350 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Sep 9 05:10:58.716485 extend-filesystems[1987]: Found /dev/nvme0n1p9
Sep 9 05:10:58.729952 dbus-daemon[1984]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1620 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Sep 9 05:10:58.740028 extend-filesystems[1987]: Checking size of /dev/nvme0n1p9
Sep 9 05:10:58.747275 dbus-daemon[1984]: [system] Successfully activated service 'org.freedesktop.systemd1'
Sep 9 05:10:58.756586 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
Sep 9 05:10:58.774888 tar[2024]: linux-arm64/LICENSE
Sep 9 05:10:58.783208 tar[2024]: linux-arm64/helm
Sep 9 05:10:58.790642 jq[2030]: true
Sep 9 05:10:58.795678 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Sep 9 05:10:58.828026 ntpd[1990]: ntpd 4.2.8p17@1.4004-o Tue Sep 9 03:10:00 UTC 2025 (1): Starting
Sep 9 05:10:58.828089 ntpd[1990]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Sep 9 05:10:58.828634 ntpd[1990]: 9 Sep 05:10:58 ntpd[1990]: ntpd 4.2.8p17@1.4004-o Tue Sep 9 03:10:00 UTC 2025 (1): Starting
Sep 9 05:10:58.828634 ntpd[1990]: 9 Sep 05:10:58 ntpd[1990]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Sep 9 05:10:58.828634 ntpd[1990]: 9 Sep 05:10:58 ntpd[1990]: ----------------------------------------------------
Sep 9 05:10:58.828634 ntpd[1990]: 9 Sep 05:10:58 ntpd[1990]: ntp-4 is maintained by Network Time Foundation,
Sep 9 05:10:58.828634 ntpd[1990]: 9 Sep 05:10:58 ntpd[1990]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Sep 9 05:10:58.828634 ntpd[1990]: 9 Sep 05:10:58 ntpd[1990]: corporation. Support and training for ntp-4 are
Sep 9 05:10:58.828108 ntpd[1990]: ----------------------------------------------------
Sep 9 05:10:58.828125 ntpd[1990]: ntp-4 is maintained by Network Time Foundation,
Sep 9 05:10:58.832663 ntpd[1990]: 9 Sep 05:10:58 ntpd[1990]: available at https://www.nwtime.org/support
Sep 9 05:10:58.832663 ntpd[1990]: 9 Sep 05:10:58 ntpd[1990]: ----------------------------------------------------
Sep 9 05:10:58.828141 ntpd[1990]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Sep 9 05:10:58.828158 ntpd[1990]: corporation. Support and training for ntp-4 are
Sep 9 05:10:58.832511 ntpd[1990]: available at https://www.nwtime.org/support
Sep 9 05:10:58.832554 ntpd[1990]: ----------------------------------------------------
Sep 9 05:10:58.841813 ntpd[1990]: 9 Sep 05:10:58 ntpd[1990]: proto: precision = 0.096 usec (-23)
Sep 9 05:10:58.841424 ntpd[1990]: proto: precision = 0.096 usec (-23)
Sep 9 05:10:58.846809 ntpd[1990]: basedate set to 2025-08-28
Sep 9 05:10:58.847377 ntpd[1990]: 9 Sep 05:10:58 ntpd[1990]: basedate set to 2025-08-28
Sep 9 05:10:58.847377 ntpd[1990]: 9 Sep 05:10:58 ntpd[1990]: gps base set to 2025-08-31 (week 2382)
Sep 9 05:10:58.846845 ntpd[1990]: gps base set to 2025-08-31 (week 2382)
Sep 9 05:10:58.855254 update_engine[1999]: I20250909 05:10:58.852747 1999 main.cc:92] Flatcar Update Engine starting
Sep 9 05:10:58.864350 ntpd[1990]: Listen and drop on 0 v6wildcard [::]:123
Sep 9 05:10:58.864450 ntpd[1990]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Sep 9 05:10:58.864596 ntpd[1990]: 9 Sep 05:10:58 ntpd[1990]: Listen and drop on 0 v6wildcard [::]:123
Sep 9 05:10:58.864596 ntpd[1990]: 9 Sep 05:10:58 ntpd[1990]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Sep 9 05:10:58.864704 ntpd[1990]: Listen normally on 2 lo 127.0.0.1:123
Sep 9 05:10:58.864781 ntpd[1990]: Listen normally on 3 eth0 172.31.30.120:123
Sep 9 05:10:58.864841 ntpd[1990]: 9 Sep 05:10:58 ntpd[1990]: Listen normally on 2 lo 127.0.0.1:123
Sep 9 05:10:58.864841 ntpd[1990]: 9 Sep 05:10:58 ntpd[1990]: Listen normally on 3 eth0 172.31.30.120:123
Sep 9 05:10:58.864920 ntpd[1990]: 9 Sep 05:10:58 ntpd[1990]: Listen normally on 4 lo [::1]:123
Sep 9 05:10:58.864920 ntpd[1990]: 9 Sep 05:10:58 ntpd[1990]: Listen normally on 5 eth0 [fe80::4f3:69ff:fe82:bd99%2]:123
Sep 9 05:10:58.864844 ntpd[1990]: Listen normally on 4 lo [::1]:123
Sep 9 05:10:58.865082 ntpd[1990]: 9 Sep 05:10:58 ntpd[1990]: Listening on routing socket on fd #22 for interface updates
Sep 9 05:10:58.864910 ntpd[1990]: Listen normally on 5 eth0 [fe80::4f3:69ff:fe82:bd99%2]:123
Sep 9 05:10:58.864965 ntpd[1990]: Listening on routing socket on fd #22 for interface updates
Sep 9 05:10:58.882956 extend-filesystems[1987]: Resized partition /dev/nvme0n1p9
Sep 9 05:10:58.895960 systemd[1]: Started update-engine.service - Update Engine.
Sep 9 05:10:58.906252 update_engine[1999]: I20250909 05:10:58.904552 1999 update_check_scheduler.cc:74] Next update check in 4m50s
Sep 9 05:10:58.909576 extend-filesystems[2053]: resize2fs 1.47.3 (8-Jul-2025)
Sep 9 05:10:58.954224 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks
Sep 9 05:10:58.959055 ntpd[1990]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Sep 9 05:10:58.959125 ntpd[1990]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Sep 9 05:10:58.959326 ntpd[1990]: 9 Sep 05:10:58 ntpd[1990]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Sep 9 05:10:58.959326 ntpd[1990]: 9 Sep 05:10:58 ntpd[1990]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Sep 9 05:10:58.959753 coreos-metadata[1983]: Sep 09 05:10:58.959 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Sep 9 05:10:58.960427 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Sep 9 05:10:58.966980 systemd[1]: Finished setup-oem.service - Setup OEM.
Sep 9 05:10:58.980191 coreos-metadata[1983]: Sep 09 05:10:58.980 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1
Sep 9 05:10:58.982821 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent.
Sep 9 05:10:58.994220 coreos-metadata[1983]: Sep 09 05:10:58.992 INFO Fetch successful
Sep 9 05:10:58.994220 coreos-metadata[1983]: Sep 09 05:10:58.992 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1
Sep 9 05:10:58.995305 coreos-metadata[1983]: Sep 09 05:10:58.994 INFO Fetch successful
Sep 9 05:10:58.995305 coreos-metadata[1983]: Sep 09 05:10:58.994 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1
Sep 9 05:10:58.996000 coreos-metadata[1983]: Sep 09 05:10:58.995 INFO Fetch successful
Sep 9 05:10:58.996282 coreos-metadata[1983]: Sep 09 05:10:58.996 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1
Sep 9 05:10:58.997354 coreos-metadata[1983]: Sep 09 05:10:58.997 INFO Fetch successful
Sep 9 05:10:58.997354 coreos-metadata[1983]: Sep 09 05:10:58.997 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1
Sep 9 05:10:59.002796 coreos-metadata[1983]: Sep 09 05:10:59.002 INFO Fetch failed with 404: resource not found
Sep 9 05:10:59.002796 coreos-metadata[1983]: Sep 09 05:10:59.002 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1
Sep 9 05:10:59.009168 coreos-metadata[1983]: Sep 09 05:10:59.009 INFO Fetch successful
Sep 9 05:10:59.009168 coreos-metadata[1983]: Sep 09 05:10:59.009 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1
Sep 9 05:10:59.016285 coreos-metadata[1983]: Sep 09 05:10:59.016 INFO Fetch successful
Sep 9 05:10:59.016407 coreos-metadata[1983]: Sep 09 05:10:59.016 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1
Sep 9 05:10:59.021353 coreos-metadata[1983]: Sep 09 05:10:59.021 INFO Fetch successful
Sep 9 05:10:59.021353 coreos-metadata[1983]: Sep 09 05:10:59.021 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1
Sep 9 05:10:59.022523 coreos-metadata[1983]: Sep 09 05:10:59.022 INFO Fetch successful
Sep 9 05:10:59.022604 coreos-metadata[1983]: Sep 09 05:10:59.022 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1
Sep 9 05:10:59.028767 coreos-metadata[1983]: Sep 09 05:10:59.028 INFO Fetch successful
Sep 9 05:10:59.086231 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915
Sep 9 05:10:59.111597 extend-filesystems[2053]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required
Sep 9 05:10:59.111597 extend-filesystems[2053]: old_desc_blocks = 1, new_desc_blocks = 1
Sep 9 05:10:59.111597 extend-filesystems[2053]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long.
Sep 9 05:10:59.147922 extend-filesystems[1987]: Resized filesystem in /dev/nvme0n1p9
Sep 9 05:10:59.119751 systemd[1]: extend-filesystems.service: Deactivated successfully.
Sep 9 05:10:59.121288 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Sep 9 05:10:59.175792 bash[2072]: Updated "/home/core/.ssh/authorized_keys"
Sep 9 05:10:59.181265 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Sep 9 05:10:59.196564 systemd[1]: Starting sshkeys.service...
Sep 9 05:10:59.283257 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Sep 9 05:10:59.290850 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Sep 9 05:10:59.310484 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Sep 9 05:10:59.319647 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Sep 9 05:10:59.355211 systemd-logind[1998]: Watching system buttons on /dev/input/event0 (Power Button)
Sep 9 05:10:59.355253 systemd-logind[1998]: Watching system buttons on /dev/input/event1 (Sleep Button)
Sep 9 05:10:59.362540 systemd-logind[1998]: New seat seat0.
Sep 9 05:10:59.382823 systemd[1]: Started systemd-logind.service - User Login Management.
Sep 9 05:10:59.460752 locksmithd[2054]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Sep 9 05:10:59.609699 coreos-metadata[2087]: Sep 09 05:10:59.604 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Sep 9 05:10:59.609699 coreos-metadata[2087]: Sep 09 05:10:59.607 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1
Sep 9 05:10:59.611075 coreos-metadata[2087]: Sep 09 05:10:59.610 INFO Fetch successful
Sep 9 05:10:59.612721 coreos-metadata[2087]: Sep 09 05:10:59.611 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1
Sep 9 05:10:59.618407 coreos-metadata[2087]: Sep 09 05:10:59.617 INFO Fetch successful
Sep 9 05:10:59.624378 unknown[2087]: wrote ssh authorized keys file for user: core
Sep 9 05:10:59.633503 systemd[1]: Started systemd-hostnamed.service - Hostname Service.
Sep 9 05:10:59.638004 amazon-ssm-agent[2057]: Initializing new seelog logger
Sep 9 05:10:59.639245 dbus-daemon[1984]: [system] Successfully activated service 'org.freedesktop.hostname1'
Sep 9 05:10:59.642354 amazon-ssm-agent[2057]: New Seelog Logger Creation Complete
Sep 9 05:10:59.642354 amazon-ssm-agent[2057]: 2025/09/09 05:10:59 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Sep 9 05:10:59.642354 amazon-ssm-agent[2057]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Sep 9 05:10:59.650204 amazon-ssm-agent[2057]: 2025/09/09 05:10:59 processing appconfig overrides
Sep 9 05:10:59.648487 dbus-daemon[1984]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.7' (uid=0 pid=2040 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
Sep 9 05:10:59.670217 amazon-ssm-agent[2057]: 2025/09/09 05:10:59 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Sep 9 05:10:59.670217 amazon-ssm-agent[2057]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Sep 9 05:10:59.670217 amazon-ssm-agent[2057]: 2025/09/09 05:10:59 processing appconfig overrides
Sep 9 05:10:59.670217 amazon-ssm-agent[2057]: 2025/09/09 05:10:59 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Sep 9 05:10:59.670217 amazon-ssm-agent[2057]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Sep 9 05:10:59.670217 amazon-ssm-agent[2057]: 2025/09/09 05:10:59 processing appconfig overrides
Sep 9 05:10:59.670891 systemd[1]: Starting polkit.service - Authorization Manager...
Sep 9 05:10:59.674012 amazon-ssm-agent[2057]: 2025-09-09 05:10:59.6609 INFO Proxy environment variables:
Sep 9 05:10:59.694805 amazon-ssm-agent[2057]: 2025/09/09 05:10:59 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Sep 9 05:10:59.694805 amazon-ssm-agent[2057]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Sep 9 05:10:59.694805 amazon-ssm-agent[2057]: 2025/09/09 05:10:59 processing appconfig overrides
Sep 9 05:10:59.753843 update-ssh-keys[2114]: Updated "/home/core/.ssh/authorized_keys"
Sep 9 05:10:59.758448 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Sep 9 05:10:59.767526 systemd[1]: Finished sshkeys.service.
Sep 9 05:10:59.778895 amazon-ssm-agent[2057]: 2025-09-09 05:10:59.6610 INFO https_proxy:
Sep 9 05:10:59.881219 amazon-ssm-agent[2057]: 2025-09-09 05:10:59.6610 INFO http_proxy:
Sep 9 05:10:59.932030 containerd[2018]: time="2025-09-09T05:10:59Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Sep 9 05:10:59.940210 containerd[2018]: time="2025-09-09T05:10:59.936778047Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5
Sep 9 05:10:59.982341 amazon-ssm-agent[2057]: 2025-09-09 05:10:59.6610 INFO no_proxy:
Sep 9 05:11:00.030290 containerd[2018]: time="2025-09-09T05:11:00.030165936Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="13.92µs"
Sep 9 05:11:00.030290 containerd[2018]: time="2025-09-09T05:11:00.030272484Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Sep 9 05:11:00.030479 containerd[2018]: time="2025-09-09T05:11:00.030316608Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Sep 9 05:11:00.032752 containerd[2018]: time="2025-09-09T05:11:00.030606612Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Sep 9 05:11:00.032752 containerd[2018]: time="2025-09-09T05:11:00.030651384Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Sep 9 05:11:00.032752 containerd[2018]: time="2025-09-09T05:11:00.030709896Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Sep 9 05:11:00.032752 containerd[2018]: time="2025-09-09T05:11:00.030821472Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Sep 9 05:11:00.032752 containerd[2018]: time="2025-09-09T05:11:00.030849696Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Sep 9 05:11:00.038101 containerd[2018]: time="2025-09-09T05:11:00.037335096Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Sep 9 05:11:00.038101 containerd[2018]: time="2025-09-09T05:11:00.037400640Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Sep 9 05:11:00.038101 containerd[2018]: time="2025-09-09T05:11:00.037436472Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Sep 9 05:11:00.038101 containerd[2018]: time="2025-09-09T05:11:00.037459956Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Sep 9 05:11:00.038101 containerd[2018]: time="2025-09-09T05:11:00.037677732Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Sep 9 05:11:00.038101 containerd[2018]: time="2025-09-09T05:11:00.038075004Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Sep 9 05:11:00.040530 containerd[2018]: time="2025-09-09T05:11:00.038140116Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Sep 9 05:11:00.047118 containerd[2018]: time="2025-09-09T05:11:00.038165712Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Sep 9 05:11:00.047118 containerd[2018]: time="2025-09-09T05:11:00.045421776Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Sep 9 05:11:00.047118 containerd[2018]: time="2025-09-09T05:11:00.045842892Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Sep 9 05:11:00.047118 containerd[2018]: time="2025-09-09T05:11:00.045994452Z" level=info msg="metadata content store policy set" policy=shared
Sep 9 05:11:00.059415 containerd[2018]: time="2025-09-09T05:11:00.057943572Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Sep 9 05:11:00.059415 containerd[2018]: time="2025-09-09T05:11:00.058078848Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Sep 9 05:11:00.059415 containerd[2018]: time="2025-09-09T05:11:00.058116804Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Sep 9 05:11:00.059415 containerd[2018]: time="2025-09-09T05:11:00.058148364Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Sep 9 05:11:00.059415 containerd[2018]: time="2025-09-09T05:11:00.058259604Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Sep 9 05:11:00.059415 containerd[2018]: time="2025-09-09T05:11:00.058318008Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Sep 9 05:11:00.059415 containerd[2018]: time="2025-09-09T05:11:00.058351428Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Sep 9 05:11:00.059415 containerd[2018]: time="2025-09-09T05:11:00.058403328Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Sep 9 05:11:00.059415 containerd[2018]: time="2025-09-09T05:11:00.058437444Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Sep 9 05:11:00.059415 containerd[2018]: time="2025-09-09T05:11:00.058465068Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Sep 9 05:11:00.059415 containerd[2018]: time="2025-09-09T05:11:00.058515960Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Sep 9 05:11:00.059415 containerd[2018]: time="2025-09-09T05:11:00.058547652Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Sep 9 05:11:00.059415 containerd[2018]: time="2025-09-09T05:11:00.058875708Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Sep 9 05:11:00.059415 containerd[2018]: time="2025-09-09T05:11:00.058952712Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Sep 9 05:11:00.061263 containerd[2018]: time="2025-09-09T05:11:00.058992396Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Sep 9 05:11:00.061353 containerd[2018]: time="2025-09-09T05:11:00.061297332Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Sep 9 05:11:00.061400 containerd[2018]: time="2025-09-09T05:11:00.061346604Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Sep 9 05:11:00.061471 containerd[2018]: time="2025-09-09T05:11:00.061400868Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Sep 9 05:11:00.061471 containerd[2018]: time="2025-09-09T05:11:00.061430496Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Sep 9 05:11:00.061565 containerd[2018]: time="2025-09-09T05:11:00.061483272Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Sep 9 05:11:00.061565 containerd[2018]: time="2025-09-09T05:11:00.061514208Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Sep 9 05:11:00.061656 containerd[2018]: time="2025-09-09T05:11:00.061565604Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Sep 9 05:11:00.061656 containerd[2018]: time="2025-09-09T05:11:00.061596624Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Sep 9 05:11:00.064652 containerd[2018]: time="2025-09-09T05:11:00.063678708Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Sep 9 05:11:00.064652 containerd[2018]: time="2025-09-09T05:11:00.063766716Z" level=info msg="Start snapshots syncer"
Sep 9 05:11:00.064652 containerd[2018]: time="2025-09-09T05:11:00.063850824Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Sep 9 05:11:00.070635 containerd[2018]: time="2025-09-09T05:11:00.068514552Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Sep 9 05:11:00.070635 containerd[2018]: time="2025-09-09T05:11:00.070223544Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Sep 9 05:11:00.070935 containerd[2018]: time="2025-09-09T05:11:00.070504356Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Sep 9 05:11:00.077422 containerd[2018]: time="2025-09-09T05:11:00.073602648Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Sep 9 05:11:00.077422 containerd[2018]: time="2025-09-09T05:11:00.073769424Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Sep 9 05:11:00.077422 containerd[2018]: time="2025-09-09T05:11:00.073872396Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Sep 9 05:11:00.077422 containerd[2018]: time="2025-09-09T05:11:00.073947804Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Sep 9 05:11:00.077422 containerd[2018]: time="2025-09-09T05:11:00.073980816Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Sep 9 05:11:00.077422 containerd[2018]: time="2025-09-09T05:11:00.074034540Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Sep 9 05:11:00.077422 containerd[2018]: time="2025-09-09T05:11:00.074062872Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Sep 9 05:11:00.077422 containerd[2018]: time="2025-09-09T05:11:00.074155560Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Sep 9 05:11:00.077422 containerd[2018]: time="2025-09-09T05:11:00.074217672Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Sep 9 05:11:00.077422 containerd[2018]: time="2025-09-09T05:11:00.074250720Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Sep 9 05:11:00.077422 containerd[2018]: time="2025-09-09T05:11:00.076939920Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Sep 9 05:11:00.077422 containerd[2018]: time="2025-09-09T05:11:00.077027736Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Sep 9 05:11:00.077422 containerd[2018]: time="2025-09-09T05:11:00.077092632Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Sep 9 05:11:00.078040 containerd[2018]: time="2025-09-09T05:11:00.077134956Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Sep 9 05:11:00.078040 containerd[2018]: time="2025-09-09T05:11:00.077158248Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Sep 9 05:11:00.078040 containerd[2018]: time="2025-09-09T05:11:00.077235588Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Sep 9 05:11:00.078040 containerd[2018]: time="2025-09-09T05:11:00.077266416Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
Sep 9 05:11:00.081360 containerd[2018]: time="2025-09-09T05:11:00.078220824Z" level=info msg="runtime interface created"
Sep 9 05:11:00.081360 containerd[2018]: time="2025-09-09T05:11:00.078260256Z" level=info msg="created NRI interface"
Sep 9 05:11:00.081360 containerd[2018]: time="2025-09-09T05:11:00.078287016Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Sep 9 05:11:00.081360 containerd[2018]: time="2025-09-09T05:11:00.078321960Z" level=info msg="Connect containerd service"
Sep 9 05:11:00.081360 containerd[2018]: time="2025-09-09T05:11:00.078410064Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Sep 9 05:11:00.083200 amazon-ssm-agent[2057]:
2025-09-09 05:10:59.6635 INFO Checking if agent identity type OnPrem can be assumed Sep 9 05:11:00.089205 containerd[2018]: time="2025-09-09T05:11:00.087472848Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 9 05:11:00.185207 amazon-ssm-agent[2057]: 2025-09-09 05:10:59.6636 INFO Checking if agent identity type EC2 can be assumed Sep 9 05:11:00.282390 amazon-ssm-agent[2057]: 2025-09-09 05:11:00.0700 INFO Agent will take identity from EC2 Sep 9 05:11:00.346849 polkitd[2116]: Started polkitd version 126 Sep 9 05:11:00.383832 polkitd[2116]: Loading rules from directory /etc/polkit-1/rules.d Sep 9 05:11:00.385377 amazon-ssm-agent[2057]: 2025-09-09 05:11:00.0824 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.3.0.0 Sep 9 05:11:00.388119 polkitd[2116]: Loading rules from directory /run/polkit-1/rules.d Sep 9 05:11:00.392477 polkitd[2116]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Sep 9 05:11:00.394223 polkitd[2116]: Loading rules from directory /usr/local/share/polkit-1/rules.d Sep 9 05:11:00.396487 polkitd[2116]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Sep 9 05:11:00.396711 polkitd[2116]: Loading rules from directory /usr/share/polkit-1/rules.d Sep 9 05:11:00.401628 polkitd[2116]: Finished loading, compiling and executing 2 rules Sep 9 05:11:00.405691 systemd[1]: Started polkit.service - Authorization Manager. 
Sep 9 05:11:00.408150 dbus-daemon[1984]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
Sep 9 05:11:00.409957 polkitd[2116]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Sep 9 05:11:00.484647 amazon-ssm-agent[2057]: 2025-09-09 05:11:00.0824 INFO [amazon-ssm-agent] OS: linux, Arch: arm64
Sep 9 05:11:00.548469 systemd-hostnamed[2040]: Hostname set to (transient)
Sep 9 05:11:00.549121 systemd-resolved[1938]: System hostname changed to 'ip-172-31-30-120'.
Sep 9 05:11:00.586356 amazon-ssm-agent[2057]: 2025-09-09 05:11:00.0824 INFO [amazon-ssm-agent] Starting Core Agent
Sep 9 05:11:00.639635 containerd[2018]: time="2025-09-09T05:11:00.638852775Z" level=info msg="Start subscribing containerd event"
Sep 9 05:11:00.639635 containerd[2018]: time="2025-09-09T05:11:00.638982927Z" level=info msg="Start recovering state"
Sep 9 05:11:00.639635 containerd[2018]: time="2025-09-09T05:11:00.639228639Z" level=info msg="Start event monitor"
Sep 9 05:11:00.639635 containerd[2018]: time="2025-09-09T05:11:00.639255171Z" level=info msg="Start cni network conf syncer for default"
Sep 9 05:11:00.639635 containerd[2018]: time="2025-09-09T05:11:00.639300711Z" level=info msg="Start streaming server"
Sep 9 05:11:00.639635 containerd[2018]: time="2025-09-09T05:11:00.639334155Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
Sep 9 05:11:00.639635 containerd[2018]: time="2025-09-09T05:11:00.639375567Z" level=info msg="runtime interface starting up..."
Sep 9 05:11:00.639635 containerd[2018]: time="2025-09-09T05:11:00.639405135Z" level=info msg="starting plugins..."
Sep 9 05:11:00.639635 containerd[2018]: time="2025-09-09T05:11:00.639459111Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
Sep 9 05:11:00.639635 containerd[2018]: time="2025-09-09T05:11:00.639379119Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Sep 9 05:11:00.639635 containerd[2018]: time="2025-09-09T05:11:00.639638355Z" level=info msg=serving... address=/run/containerd/containerd.sock
Sep 9 05:11:00.640827 systemd[1]: Started containerd.service - containerd container runtime.
Sep 9 05:11:00.642135 containerd[2018]: time="2025-09-09T05:11:00.640911927Z" level=info msg="containerd successfully booted in 0.713984s"
Sep 9 05:11:00.688197 amazon-ssm-agent[2057]: 2025-09-09 05:11:00.0824 INFO [amazon-ssm-agent] Registrar detected. Attempting registration
Sep 9 05:11:00.790198 amazon-ssm-agent[2057]: 2025-09-09 05:11:00.0825 INFO [Registrar] Starting registrar module
Sep 9 05:11:00.889836 amazon-ssm-agent[2057]: 2025-09-09 05:11:00.0922 INFO [EC2Identity] Checking disk for registration info
Sep 9 05:11:00.958002 tar[2024]: linux-arm64/README.md
Sep 9 05:11:00.991248 amazon-ssm-agent[2057]: 2025-09-09 05:11:00.0923 INFO [EC2Identity] No registration info found for ec2 instance, attempting registration
Sep 9 05:11:00.995117 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Sep 9 05:11:01.091697 amazon-ssm-agent[2057]: 2025-09-09 05:11:00.0923 INFO [EC2Identity] Generating registration keypair
Sep 9 05:11:01.367167 sshd_keygen[2032]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Sep 9 05:11:01.419834 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Sep 9 05:11:01.427666 systemd[1]: Starting issuegen.service - Generate /run/issue...
Sep 9 05:11:01.456726 systemd[1]: issuegen.service: Deactivated successfully.
Sep 9 05:11:01.457224 systemd[1]: Finished issuegen.service - Generate /run/issue.
Sep 9 05:11:01.469637 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Sep 9 05:11:01.513057 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Sep 9 05:11:01.525221 systemd[1]: Started getty@tty1.service - Getty on tty1.
Sep 9 05:11:01.533706 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Sep 9 05:11:01.538172 systemd[1]: Reached target getty.target - Login Prompts.
Sep 9 05:11:01.548314 amazon-ssm-agent[2057]: 2025-09-09 05:11:01.5470 INFO [EC2Identity] Checking write access before registering
Sep 9 05:11:01.606204 amazon-ssm-agent[2057]: 2025/09/09 05:11:01 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Sep 9 05:11:01.606346 amazon-ssm-agent[2057]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Sep 9 05:11:01.608107 amazon-ssm-agent[2057]: 2025/09/09 05:11:01 processing appconfig overrides
Sep 9 05:11:01.636005 amazon-ssm-agent[2057]: 2025-09-09 05:11:01.5477 INFO [EC2Identity] Registering EC2 instance with Systems Manager
Sep 9 05:11:01.636273 amazon-ssm-agent[2057]: 2025-09-09 05:11:01.6057 INFO [EC2Identity] EC2 registration was successful.
Sep 9 05:11:01.636368 amazon-ssm-agent[2057]: 2025-09-09 05:11:01.6058 INFO [amazon-ssm-agent] Registration attempted. Resuming core agent startup.
Sep 9 05:11:01.636521 amazon-ssm-agent[2057]: 2025-09-09 05:11:01.6059 INFO [CredentialRefresher] credentialRefresher has started
Sep 9 05:11:01.636521 amazon-ssm-agent[2057]: 2025-09-09 05:11:01.6060 INFO [CredentialRefresher] Starting credentials refresher loop
Sep 9 05:11:01.636521 amazon-ssm-agent[2057]: 2025-09-09 05:11:01.6356 INFO EC2RoleProvider Successfully connected with instance profile role credentials
Sep 9 05:11:01.636521 amazon-ssm-agent[2057]: 2025-09-09 05:11:01.6359 INFO [CredentialRefresher] Credentials ready
Sep 9 05:11:01.648803 amazon-ssm-agent[2057]: 2025-09-09 05:11:01.6367 INFO [CredentialRefresher] Next credential rotation will be in 29.9999818021 minutes
Sep 9 05:11:02.662022 amazon-ssm-agent[2057]: 2025-09-09 05:11:02.6618 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process
Sep 9 05:11:02.762970 amazon-ssm-agent[2057]: 2025-09-09 05:11:02.6652 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2254) started
Sep 9 05:11:02.863384 amazon-ssm-agent[2057]: 2025-09-09 05:11:02.6653 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds
Sep 9 05:11:04.034871 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 9 05:11:04.039334 systemd[1]: Reached target multi-user.target - Multi-User System.
Sep 9 05:11:04.044338 systemd[1]: Startup finished in 3.654s (kernel) + 10.091s (initrd) + 12.334s (userspace) = 26.081s.
Sep 9 05:11:04.051313 (kubelet)[2271]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 9 05:11:06.260569 systemd-resolved[1938]: Clock change detected. Flushing caches.
Sep 9 05:11:06.513859 kubelet[2271]: E0909 05:11:06.513701    2271 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 9 05:11:06.518263 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 9 05:11:06.518573 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 9 05:11:06.519223 systemd[1]: kubelet.service: Consumed 1.359s CPU time, 255.5M memory peak.
Sep 9 05:11:07.366327 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Sep 9 05:11:07.368825 systemd[1]: Started sshd@0-172.31.30.120:22-147.75.109.163:59962.service - OpenSSH per-connection server daemon (147.75.109.163:59962).
Sep 9 05:11:07.842816 sshd[2283]: Accepted publickey for core from 147.75.109.163 port 59962 ssh2: RSA SHA256:2zofKxj4FUHLlH333Y3QwKckI2YUnW4mC4hWB8zCARI
Sep 9 05:11:07.847260 sshd-session[2283]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 05:11:07.860682 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Sep 9 05:11:07.862726 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Sep 9 05:11:07.881712 systemd-logind[1998]: New session 1 of user core.
Sep 9 05:11:07.901800 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Sep 9 05:11:07.906665 systemd[1]: Starting user@500.service - User Manager for UID 500...
Sep 9 05:11:07.927271 (systemd)[2288]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Sep 9 05:11:07.931814 systemd-logind[1998]: New session c1 of user core.
Sep 9 05:11:08.222039 systemd[2288]: Queued start job for default target default.target.
Sep 9 05:11:08.230119 systemd[2288]: Created slice app.slice - User Application Slice.
Sep 9 05:11:08.230369 systemd[2288]: Reached target paths.target - Paths.
Sep 9 05:11:08.230579 systemd[2288]: Reached target timers.target - Timers.
Sep 9 05:11:08.233061 systemd[2288]: Starting dbus.socket - D-Bus User Message Bus Socket...
Sep 9 05:11:08.262647 systemd[2288]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Sep 9 05:11:08.262865 systemd[2288]: Reached target sockets.target - Sockets.
Sep 9 05:11:08.262943 systemd[2288]: Reached target basic.target - Basic System.
Sep 9 05:11:08.263207 systemd[1]: Started user@500.service - User Manager for UID 500.
Sep 9 05:11:08.265081 systemd[2288]: Reached target default.target - Main User Target.
Sep 9 05:11:08.265202 systemd[2288]: Startup finished in 321ms.
Sep 9 05:11:08.275312 systemd[1]: Started session-1.scope - Session 1 of User core.
Sep 9 05:11:08.428597 systemd[1]: Started sshd@1-172.31.30.120:22-147.75.109.163:59970.service - OpenSSH per-connection server daemon (147.75.109.163:59970).
Sep 9 05:11:08.620908 sshd[2299]: Accepted publickey for core from 147.75.109.163 port 59970 ssh2: RSA SHA256:2zofKxj4FUHLlH333Y3QwKckI2YUnW4mC4hWB8zCARI
Sep 9 05:11:08.623366 sshd-session[2299]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 05:11:08.631224 systemd-logind[1998]: New session 2 of user core.
Sep 9 05:11:08.651280 systemd[1]: Started session-2.scope - Session 2 of User core.
Sep 9 05:11:08.777327 sshd[2302]: Connection closed by 147.75.109.163 port 59970
Sep 9 05:11:08.778135 sshd-session[2299]: pam_unix(sshd:session): session closed for user core
Sep 9 05:11:08.784715 systemd[1]: sshd@1-172.31.30.120:22-147.75.109.163:59970.service: Deactivated successfully.
Sep 9 05:11:08.787759 systemd[1]: session-2.scope: Deactivated successfully.
Sep 9 05:11:08.789501 systemd-logind[1998]: Session 2 logged out. Waiting for processes to exit.
Sep 9 05:11:08.792351 systemd-logind[1998]: Removed session 2.
Sep 9 05:11:08.817500 systemd[1]: Started sshd@2-172.31.30.120:22-147.75.109.163:59978.service - OpenSSH per-connection server daemon (147.75.109.163:59978).
Sep 9 05:11:09.015250 sshd[2308]: Accepted publickey for core from 147.75.109.163 port 59978 ssh2: RSA SHA256:2zofKxj4FUHLlH333Y3QwKckI2YUnW4mC4hWB8zCARI
Sep 9 05:11:09.017585 sshd-session[2308]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 05:11:09.026885 systemd-logind[1998]: New session 3 of user core.
Sep 9 05:11:09.036293 systemd[1]: Started session-3.scope - Session 3 of User core.
Sep 9 05:11:09.154042 sshd[2311]: Connection closed by 147.75.109.163 port 59978
Sep 9 05:11:09.154827 sshd-session[2308]: pam_unix(sshd:session): session closed for user core
Sep 9 05:11:09.161681 systemd[1]: sshd@2-172.31.30.120:22-147.75.109.163:59978.service: Deactivated successfully.
Sep 9 05:11:09.161682 systemd-logind[1998]: Session 3 logged out. Waiting for processes to exit.
Sep 9 05:11:09.165369 systemd[1]: session-3.scope: Deactivated successfully.
Sep 9 05:11:09.167791 systemd-logind[1998]: Removed session 3.
Sep 9 05:11:09.187978 systemd[1]: Started sshd@3-172.31.30.120:22-147.75.109.163:59990.service - OpenSSH per-connection server daemon (147.75.109.163:59990).
Sep 9 05:11:09.378405 sshd[2317]: Accepted publickey for core from 147.75.109.163 port 59990 ssh2: RSA SHA256:2zofKxj4FUHLlH333Y3QwKckI2YUnW4mC4hWB8zCARI
Sep 9 05:11:09.381087 sshd-session[2317]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 05:11:09.390106 systemd-logind[1998]: New session 4 of user core.
Sep 9 05:11:09.396283 systemd[1]: Started session-4.scope - Session 4 of User core.
Sep 9 05:11:09.520099 sshd[2320]: Connection closed by 147.75.109.163 port 59990
Sep 9 05:11:09.520894 sshd-session[2317]: pam_unix(sshd:session): session closed for user core
Sep 9 05:11:09.527518 systemd[1]: sshd@3-172.31.30.120:22-147.75.109.163:59990.service: Deactivated successfully.
Sep 9 05:11:09.530994 systemd[1]: session-4.scope: Deactivated successfully.
Sep 9 05:11:09.535511 systemd-logind[1998]: Session 4 logged out. Waiting for processes to exit.
Sep 9 05:11:09.538717 systemd-logind[1998]: Removed session 4.
Sep 9 05:11:09.556421 systemd[1]: Started sshd@4-172.31.30.120:22-147.75.109.163:60006.service - OpenSSH per-connection server daemon (147.75.109.163:60006).
Sep 9 05:11:09.765313 sshd[2326]: Accepted publickey for core from 147.75.109.163 port 60006 ssh2: RSA SHA256:2zofKxj4FUHLlH333Y3QwKckI2YUnW4mC4hWB8zCARI
Sep 9 05:11:09.767762 sshd-session[2326]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 05:11:09.778125 systemd-logind[1998]: New session 5 of user core.
Sep 9 05:11:09.784322 systemd[1]: Started session-5.scope - Session 5 of User core.
Sep 9 05:11:10.064760 sudo[2330]:     core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Sep 9 05:11:10.065430 sudo[2330]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 9 05:11:10.080923 sudo[2330]: pam_unix(sudo:session): session closed for user root
Sep 9 05:11:10.104653 sshd[2329]: Connection closed by 147.75.109.163 port 60006
Sep 9 05:11:10.105697 sshd-session[2326]: pam_unix(sshd:session): session closed for user core
Sep 9 05:11:10.113737 systemd-logind[1998]: Session 5 logged out. Waiting for processes to exit.
Sep 9 05:11:10.114913 systemd[1]: sshd@4-172.31.30.120:22-147.75.109.163:60006.service: Deactivated successfully.
Sep 9 05:11:10.119614 systemd[1]: session-5.scope: Deactivated successfully.
Sep 9 05:11:10.124480 systemd-logind[1998]: Removed session 5.
Sep 9 05:11:10.138906 systemd[1]: Started sshd@5-172.31.30.120:22-147.75.109.163:56320.service - OpenSSH per-connection server daemon (147.75.109.163:56320).
Sep 9 05:11:10.340507 sshd[2336]: Accepted publickey for core from 147.75.109.163 port 56320 ssh2: RSA SHA256:2zofKxj4FUHLlH333Y3QwKckI2YUnW4mC4hWB8zCARI
Sep 9 05:11:10.343164 sshd-session[2336]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 05:11:10.352122 systemd-logind[1998]: New session 6 of user core.
Sep 9 05:11:10.361264 systemd[1]: Started session-6.scope - Session 6 of User core.
Sep 9 05:11:10.465428 sudo[2341]:     core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Sep 9 05:11:10.466002 sudo[2341]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 9 05:11:10.475431 sudo[2341]: pam_unix(sudo:session): session closed for user root
Sep 9 05:11:10.484891 sudo[2340]:     core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Sep 9 05:11:10.485579 sudo[2340]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 9 05:11:10.501990 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Sep 9 05:11:10.628685 augenrules[2363]: No rules
Sep 9 05:11:10.631441 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 9 05:11:10.631979 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Sep 9 05:11:10.634596 sudo[2340]: pam_unix(sudo:session): session closed for user root
Sep 9 05:11:10.658083 sshd[2339]: Connection closed by 147.75.109.163 port 56320
Sep 9 05:11:10.658822 sshd-session[2336]: pam_unix(sshd:session): session closed for user core
Sep 9 05:11:10.665469 systemd-logind[1998]: Session 6 logged out. Waiting for processes to exit.
Sep 9 05:11:10.665965 systemd[1]: sshd@5-172.31.30.120:22-147.75.109.163:56320.service: Deactivated successfully.
Sep 9 05:11:10.668912 systemd[1]: session-6.scope: Deactivated successfully.
Sep 9 05:11:10.672933 systemd-logind[1998]: Removed session 6.
Sep 9 05:11:10.692965 systemd[1]: Started sshd@6-172.31.30.120:22-147.75.109.163:56328.service - OpenSSH per-connection server daemon (147.75.109.163:56328).
Sep 9 05:11:10.881863 sshd[2372]: Accepted publickey for core from 147.75.109.163 port 56328 ssh2: RSA SHA256:2zofKxj4FUHLlH333Y3QwKckI2YUnW4mC4hWB8zCARI
Sep 9 05:11:10.884798 sshd-session[2372]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 05:11:10.895168 systemd-logind[1998]: New session 7 of user core.
Sep 9 05:11:10.898364 systemd[1]: Started session-7.scope - Session 7 of User core.
Sep 9 05:11:11.001849 sudo[2376]:     core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Sep 9 05:11:11.002553 sudo[2376]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 9 05:11:12.235912 systemd[1]: Starting docker.service - Docker Application Container Engine...
Sep 9 05:11:12.262789 (dockerd)[2394]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Sep 9 05:11:12.997086 dockerd[2394]: time="2025-09-09T05:11:12.996864585Z" level=info msg="Starting up"
Sep 9 05:11:12.998235 dockerd[2394]: time="2025-09-09T05:11:12.998186637Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Sep 9 05:11:13.018803 dockerd[2394]: time="2025-09-09T05:11:13.018714377Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s
Sep 9 05:11:13.109819 dockerd[2394]: time="2025-09-09T05:11:13.109555338Z" level=info msg="Loading containers: start."
Sep 9 05:11:13.142061 kernel: Initializing XFRM netlink socket
Sep 9 05:11:13.583062 (udev-worker)[2417]: Network interface NamePolicy= disabled on kernel command line.
Sep 9 05:11:13.658125 systemd-networkd[1620]: docker0: Link UP
Sep 9 05:11:13.668321 dockerd[2394]: time="2025-09-09T05:11:13.668167412Z" level=info msg="Loading containers: done."
Sep 9 05:11:13.693482 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2639627708-merged.mount: Deactivated successfully.
Sep 9 05:11:13.698525 dockerd[2394]: time="2025-09-09T05:11:13.697964852Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Sep 9 05:11:13.698525 dockerd[2394]: time="2025-09-09T05:11:13.698118656Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4
Sep 9 05:11:13.698525 dockerd[2394]: time="2025-09-09T05:11:13.698263028Z" level=info msg="Initializing buildkit"
Sep 9 05:11:13.749787 dockerd[2394]: time="2025-09-09T05:11:13.749739693Z" level=info msg="Completed buildkit initialization"
Sep 9 05:11:13.765070 dockerd[2394]: time="2025-09-09T05:11:13.764964213Z" level=info msg="Daemon has completed initialization"
Sep 9 05:11:13.765333 systemd[1]: Started docker.service - Docker Application Container Engine.
Sep 9 05:11:13.766483 dockerd[2394]: time="2025-09-09T05:11:13.766166121Z" level=info msg="API listen on /run/docker.sock"
Sep 9 05:11:15.889062 containerd[2018]: time="2025-09-09T05:11:15.888985775Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.8\""
Sep 9 05:11:16.525321 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3120873718.mount: Deactivated successfully.
Sep 9 05:11:16.528001 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Sep 9 05:11:16.531098 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 9 05:11:16.920818 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 9 05:11:16.934947 (kubelet)[2629]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 9 05:11:17.041854 kubelet[2629]: E0909 05:11:17.041767    2629 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 9 05:11:17.049727 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 9 05:11:17.050809 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 9 05:11:17.053176 systemd[1]: kubelet.service: Consumed 323ms CPU time, 107.3M memory peak.
Sep 9 05:11:18.594742 containerd[2018]: time="2025-09-09T05:11:18.594648133Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 05:11:18.597520 containerd[2018]: time="2025-09-09T05:11:18.597427969Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.8: active requests=0, bytes read=26328357"
Sep 9 05:11:18.600109 containerd[2018]: time="2025-09-09T05:11:18.600010813Z" level=info msg="ImageCreate event name:\"sha256:61d628eec7e2101b908b4476f1e8e620490a9e8754184860c8eed25183acaa8a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 05:11:18.605648 containerd[2018]: time="2025-09-09T05:11:18.605547673Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:6e1a2f9b24f69ee77d0c0edaf32b31fdbb5e1a613f4476272197e6e1e239050b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 05:11:18.608501 containerd[2018]: time="2025-09-09T05:11:18.607345837Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.8\" with image id \"sha256:61d628eec7e2101b908b4476f1e8e620490a9e8754184860c8eed25183acaa8a\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.8\", repo digest \"registry.k8s.io/kube-apiserver@sha256:6e1a2f9b24f69ee77d0c0edaf32b31fdbb5e1a613f4476272197e6e1e239050b\", size \"26325157\" in 2.718175334s"
Sep 9 05:11:18.608501 containerd[2018]: time="2025-09-09T05:11:18.607407241Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.8\" returns image reference \"sha256:61d628eec7e2101b908b4476f1e8e620490a9e8754184860c8eed25183acaa8a\""
Sep 9 05:11:18.609121 containerd[2018]: time="2025-09-09T05:11:18.609008725Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.8\""
Sep 9 05:11:20.272636 containerd[2018]: time="2025-09-09T05:11:20.272555125Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 05:11:20.274571 containerd[2018]: time="2025-09-09T05:11:20.274338925Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.8: active requests=0, bytes read=22528552"
Sep 9 05:11:20.275641 containerd[2018]: time="2025-09-09T05:11:20.275589373Z" level=info msg="ImageCreate event name:\"sha256:f17de36e40fc7cc372be0021b2c58ad61f05d3ebe4d430551bc5e4cd9ed2a061\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 05:11:20.280260 containerd[2018]: time="2025-09-09T05:11:20.280196545Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:8788ccd28ceed9e2e5f8fc31375ef5771df8ea6e518b362c9a06f3cc709cd6c7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 05:11:20.282378 containerd[2018]: time="2025-09-09T05:11:20.282326329Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.8\" with image id \"sha256:f17de36e40fc7cc372be0021b2c58ad61f05d3ebe4d430551bc5e4cd9ed2a061\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.8\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:8788ccd28ceed9e2e5f8fc31375ef5771df8ea6e518b362c9a06f3cc709cd6c7\", size \"24065666\" in 1.673212292s"
Sep 9 05:11:20.282690 containerd[2018]: time="2025-09-09T05:11:20.282518761Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.8\" returns image reference \"sha256:f17de36e40fc7cc372be0021b2c58ad61f05d3ebe4d430551bc5e4cd9ed2a061\""
Sep 9 05:11:20.283407 containerd[2018]: time="2025-09-09T05:11:20.283258309Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.8\""
Sep 9 05:11:21.947468 containerd[2018]: time="2025-09-09T05:11:21.946059293Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 05:11:21.949095 containerd[2018]: time="2025-09-09T05:11:21.949037069Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.8: active requests=0, bytes read=17483527"
Sep 9 05:11:21.950987 containerd[2018]: time="2025-09-09T05:11:21.950931377Z" level=info msg="ImageCreate event name:\"sha256:fe86d26bce3ccd5f0c4057c205b63fde1c8c752778025aea4605ffc3b0f80211\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 05:11:21.956934 containerd[2018]: time="2025-09-09T05:11:21.956868425Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:43c58bcbd1c7812dd19f8bfa5ae11093ebefd28699453ce86fc710869e155cd4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 05:11:21.958933 containerd[2018]: time="2025-09-09T05:11:21.958888241Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.8\" with image id \"sha256:fe86d26bce3ccd5f0c4057c205b63fde1c8c752778025aea4605ffc3b0f80211\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.8\", repo digest \"registry.k8s.io/kube-scheduler@sha256:43c58bcbd1c7812dd19f8bfa5ae11093ebefd28699453ce86fc710869e155cd4\", size \"19020659\" in 1.675330628s"
Sep 9 05:11:21.959126 containerd[2018]: time="2025-09-09T05:11:21.959096993Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.8\" returns image reference \"sha256:fe86d26bce3ccd5f0c4057c205b63fde1c8c752778025aea4605ffc3b0f80211\""
Sep 9 05:11:21.960288 containerd[2018]: time="2025-09-09T05:11:21.960238409Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.8\""
Sep 9 05:11:23.300608 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1690071610.mount: Deactivated successfully.
Sep 9 05:11:23.847362 containerd[2018]: time="2025-09-09T05:11:23.847302451Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 05:11:23.849200 containerd[2018]: time="2025-09-09T05:11:23.849160171Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.8: active requests=0, bytes read=27376724"
Sep 9 05:11:23.849797 containerd[2018]: time="2025-09-09T05:11:23.849759511Z" level=info msg="ImageCreate event name:\"sha256:2cf30e39f99f8f4ee1a736a4f3175cc2d8d3f58936d8fa83ec5523658fdc7b8b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 05:11:23.852801 containerd[2018]: time="2025-09-09T05:11:23.852736927Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:adc1335b480ddd833aac3b0bd20f68ff0f3c3cf7a0bd337933b006d9f5cec40a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 05:11:23.854227 containerd[2018]: time="2025-09-09T05:11:23.854185363Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.8\" with image id \"sha256:2cf30e39f99f8f4ee1a736a4f3175cc2d8d3f58936d8fa83ec5523658fdc7b8b\", repo tag \"registry.k8s.io/kube-proxy:v1.32.8\", repo digest \"registry.k8s.io/kube-proxy@sha256:adc1335b480ddd833aac3b0bd20f68ff0f3c3cf7a0bd337933b006d9f5cec40a\", size \"27375743\" in 1.893891514s"
Sep 9 05:11:23.854360 containerd[2018]: time="2025-09-09T05:11:23.854333071Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.8\" returns image reference \"sha256:2cf30e39f99f8f4ee1a736a4f3175cc2d8d3f58936d8fa83ec5523658fdc7b8b\""
Sep 9 05:11:23.855048 containerd[2018]: time="2025-09-09T05:11:23.854929603Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Sep 9 05:11:24.350447 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3367826463.mount: Deactivated successfully.
Sep 9 05:11:25.640903 containerd[2018]: time="2025-09-09T05:11:25.640808528Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 05:11:25.647988 containerd[2018]: time="2025-09-09T05:11:25.647919320Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951622"
Sep 9 05:11:25.652167 containerd[2018]: time="2025-09-09T05:11:25.652111700Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 05:11:25.663332 containerd[2018]: time="2025-09-09T05:11:25.663258644Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 05:11:25.665683 containerd[2018]: time="2025-09-09T05:11:25.665630780Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.810303125s"
Sep 9 05:11:25.665878 containerd[2018]: time="2025-09-09T05:11:25.665847896Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\""
Sep 9 05:11:25.666827 containerd[2018]: time="2025-09-09T05:11:25.666454712Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Sep 9 05:11:26.144716 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3408656696.mount: Deactivated successfully.
Sep 9 05:11:26.158169 containerd[2018]: time="2025-09-09T05:11:26.158093370Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 9 05:11:26.161741 containerd[2018]: time="2025-09-09T05:11:26.161676954Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703"
Sep 9 05:11:26.163888 containerd[2018]: time="2025-09-09T05:11:26.163827006Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 9 05:11:26.169731 containerd[2018]: time="2025-09-09T05:11:26.169647570Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 9 05:11:26.171807 containerd[2018]: time="2025-09-09T05:11:26.171724014Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 505.220942ms"
Sep 9 05:11:26.172192 containerd[2018]: time="2025-09-09T05:11:26.171973626Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image
reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Sep 9 05:11:26.173274 containerd[2018]: time="2025-09-09T05:11:26.173214390Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Sep 9 05:11:26.758208 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1496547274.mount: Deactivated successfully. Sep 9 05:11:27.185817 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Sep 9 05:11:27.189718 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 05:11:27.575375 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 05:11:27.590826 (kubelet)[2805]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 9 05:11:27.669510 kubelet[2805]: E0909 05:11:27.669424 2805 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 9 05:11:27.673769 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 9 05:11:27.674114 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 9 05:11:27.675119 systemd[1]: kubelet.service: Consumed 306ms CPU time, 105.8M memory peak. 
Sep 9 05:11:29.596061 containerd[2018]: time="2025-09-09T05:11:29.595710923Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 05:11:29.599167 containerd[2018]: time="2025-09-09T05:11:29.599095943Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67943165"
Sep 9 05:11:29.601477 containerd[2018]: time="2025-09-09T05:11:29.601381055Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 05:11:29.609048 containerd[2018]: time="2025-09-09T05:11:29.607009691Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 05:11:29.609248 containerd[2018]: time="2025-09-09T05:11:29.609207479Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 3.435755633s"
Sep 9 05:11:29.609383 containerd[2018]: time="2025-09-09T05:11:29.609352943Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\""
Sep 9 05:11:31.011327 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Sep 9 05:11:36.988628 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 9 05:11:36.989141 systemd[1]: kubelet.service: Consumed 306ms CPU time, 105.8M memory peak.
Sep 9 05:11:36.993311 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 9 05:11:37.046527 systemd[1]: Reload requested from client PID 2848 ('systemctl') (unit session-7.scope)...
Sep 9 05:11:37.046559 systemd[1]: Reloading...
Sep 9 05:11:37.309079 zram_generator::config[2894]: No configuration found.
Sep 9 05:11:37.762520 systemd[1]: Reloading finished in 715 ms.
Sep 9 05:11:37.853683 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Sep 9 05:11:37.853856 systemd[1]: kubelet.service: Failed with result 'signal'.
Sep 9 05:11:37.855149 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 9 05:11:37.855233 systemd[1]: kubelet.service: Consumed 219ms CPU time, 95M memory peak.
Sep 9 05:11:37.859450 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 9 05:11:38.169601 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 9 05:11:38.182553 (kubelet)[2956]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Sep 9 05:11:38.257893 kubelet[2956]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 9 05:11:38.257893 kubelet[2956]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Sep 9 05:11:38.257893 kubelet[2956]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 9 05:11:38.258483 kubelet[2956]: I0909 05:11:38.258049 2956 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Sep 9 05:11:39.835258 kubelet[2956]: I0909 05:11:39.835207 2956 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
Sep 9 05:11:39.837059 kubelet[2956]: I0909 05:11:39.835939 2956 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Sep 9 05:11:39.837059 kubelet[2956]: I0909 05:11:39.836518 2956 server.go:954] "Client rotation is on, will bootstrap in background"
Sep 9 05:11:39.893401 kubelet[2956]: E0909 05:11:39.893351 2956 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.31.30.120:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.30.120:6443: connect: connection refused" logger="UnhandledError"
Sep 9 05:11:39.896479 kubelet[2956]: I0909 05:11:39.896437 2956 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 9 05:11:39.910131 kubelet[2956]: I0909 05:11:39.909991 2956 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Sep 9 05:11:39.917571 kubelet[2956]: I0909 05:11:39.917535 2956 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Sep 9 05:11:39.919856 kubelet[2956]: I0909 05:11:39.919799 2956 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep 9 05:11:39.920295 kubelet[2956]: I0909 05:11:39.919992 2956 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-30-120","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Sep 9 05:11:39.920660 kubelet[2956]: I0909 05:11:39.920639 2956 topology_manager.go:138] "Creating topology manager with none policy"
Sep 9 05:11:39.920758 kubelet[2956]: I0909 05:11:39.920741 2956 container_manager_linux.go:304] "Creating device plugin manager"
Sep 9 05:11:39.921212 kubelet[2956]: I0909 05:11:39.921193 2956 state_mem.go:36] "Initialized new in-memory state store"
Sep 9 05:11:39.927793 kubelet[2956]: I0909 05:11:39.927594 2956 kubelet.go:446] "Attempting to sync node with API server"
Sep 9 05:11:39.927793 kubelet[2956]: I0909 05:11:39.927640 2956 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep 9 05:11:39.927793 kubelet[2956]: I0909 05:11:39.927684 2956 kubelet.go:352] "Adding apiserver pod source"
Sep 9 05:11:39.927793 kubelet[2956]: I0909 05:11:39.927704 2956 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep 9 05:11:39.936395 kubelet[2956]: W0909 05:11:39.935483 2956 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.30.120:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-30-120&limit=500&resourceVersion=0": dial tcp 172.31.30.120:6443: connect: connection refused
Sep 9 05:11:39.936395 kubelet[2956]: E0909 05:11:39.935602 2956 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.30.120:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-30-120&limit=500&resourceVersion=0\": dial tcp 172.31.30.120:6443: connect: connection refused" logger="UnhandledError"
Sep 9 05:11:39.937741 kubelet[2956]: W0909 05:11:39.937676 2956 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.30.120:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.30.120:6443: connect: connection refused
Sep 9 05:11:39.937942 kubelet[2956]: E0909 05:11:39.937914 2956 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.30.120:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.30.120:6443: connect: connection refused" logger="UnhandledError"
Sep 9 05:11:39.938188 kubelet[2956]: I0909 05:11:39.938163 2956 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1"
Sep 9 05:11:39.939313 kubelet[2956]: I0909 05:11:39.939287 2956 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Sep 9 05:11:39.939649 kubelet[2956]: W0909 05:11:39.939630 2956 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Sep 9 05:11:39.945060 kubelet[2956]: I0909 05:11:39.944428 2956 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Sep 9 05:11:39.945207 kubelet[2956]: I0909 05:11:39.945108 2956 server.go:1287] "Started kubelet"
Sep 9 05:11:39.961448 kubelet[2956]: E0909 05:11:39.960366 2956 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.30.120:6443/api/v1/namespaces/default/events\": dial tcp 172.31.30.120:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-30-120.1863852b2457fef7 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-30-120,UID:ip-172-31-30-120,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-30-120,},FirstTimestamp:2025-09-09 05:11:39.944460023 +0000 UTC m=+1.755999022,LastTimestamp:2025-09-09 05:11:39.944460023 +0000 UTC m=+1.755999022,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-30-120,}"
Sep 9 05:11:39.962217 kubelet[2956]: I0909 05:11:39.962173 2956 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Sep 9 05:11:39.965442 kubelet[2956]: I0909 05:11:39.965390 2956 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Sep 9 05:11:39.968533 kubelet[2956]: I0909 05:11:39.968500 2956 server.go:479] "Adding debug handlers to kubelet server"
Sep 9 05:11:39.971348 kubelet[2956]: I0909 05:11:39.971299 2956 volume_manager.go:297] "Starting Kubelet Volume Manager"
Sep 9 05:11:39.971954 kubelet[2956]: E0909 05:11:39.971897 2956 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-30-120\" not found"
Sep 9 05:11:39.973331 kubelet[2956]: I0909 05:11:39.973269 2956 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Sep 9 05:11:39.973504 kubelet[2956]: I0909 05:11:39.973392 2956 reconciler.go:26] "Reconciler: start to sync state"
Sep 9 05:11:39.973703 kubelet[2956]: I0909 05:11:39.973300 2956 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Sep 9 05:11:39.975383 kubelet[2956]: I0909 05:11:39.975121 2956 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Sep 9 05:11:39.979738 kubelet[2956]: E0909 05:11:39.976444 2956 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.30.120:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-30-120?timeout=10s\": dial tcp 172.31.30.120:6443: connect: connection refused" interval="200ms"
Sep 9 05:11:39.979738 kubelet[2956]: I0909 05:11:39.976886 2956 factory.go:221] Registration of the systemd container factory successfully
Sep 9 05:11:39.979738 kubelet[2956]: I0909 05:11:39.977044 2956 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Sep 9 05:11:39.979738 kubelet[2956]: I0909 05:11:39.977687 2956 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Sep 9 05:11:39.981240 kubelet[2956]: W0909 05:11:39.981156 2956 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.30.120:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.30.120:6443: connect: connection refused
Sep 9 05:11:39.981379 kubelet[2956]: E0909 05:11:39.981258 2956 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.30.120:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.30.120:6443: connect: connection refused" logger="UnhandledError"
Sep 9 05:11:39.981743 kubelet[2956]: E0909 05:11:39.981693 2956 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Sep 9 05:11:39.982771 kubelet[2956]: I0909 05:11:39.982724 2956 factory.go:221] Registration of the containerd container factory successfully
Sep 9 05:11:40.013592 kubelet[2956]: I0909 05:11:40.013547 2956 cpu_manager.go:221] "Starting CPU manager" policy="none"
Sep 9 05:11:40.013592 kubelet[2956]: I0909 05:11:40.013580 2956 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Sep 9 05:11:40.013776 kubelet[2956]: I0909 05:11:40.013610 2956 state_mem.go:36] "Initialized new in-memory state store"
Sep 9 05:11:40.017912 kubelet[2956]: I0909 05:11:40.017860 2956 policy_none.go:49] "None policy: Start"
Sep 9 05:11:40.017912 kubelet[2956]: I0909 05:11:40.017911 2956 memory_manager.go:186] "Starting memorymanager" policy="None"
Sep 9 05:11:40.018126 kubelet[2956]: I0909 05:11:40.017936 2956 state_mem.go:35] "Initializing new in-memory state store"
Sep 9 05:11:40.019394 kubelet[2956]: I0909 05:11:40.019328 2956 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Sep 9 05:11:40.024643 kubelet[2956]: I0909 05:11:40.024582 2956 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Sep 9 05:11:40.024643 kubelet[2956]: I0909 05:11:40.024631 2956 status_manager.go:227] "Starting to sync pod status with apiserver"
Sep 9 05:11:40.025109 kubelet[2956]: I0909 05:11:40.024664 2956 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Sep 9 05:11:40.025109 kubelet[2956]: I0909 05:11:40.024678 2956 kubelet.go:2382] "Starting kubelet main sync loop"
Sep 9 05:11:40.025109 kubelet[2956]: E0909 05:11:40.024745 2956 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Sep 9 05:11:40.026861 kubelet[2956]: W0909 05:11:40.026805 2956 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.30.120:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.30.120:6443: connect: connection refused
Sep 9 05:11:40.026861 kubelet[2956]: E0909 05:11:40.026875 2956 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.30.120:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.30.120:6443: connect: connection refused" logger="UnhandledError"
Sep 9 05:11:40.035179 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Sep 9 05:11:40.051604 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Sep 9 05:11:40.060126 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Sep 9 05:11:40.069052 kubelet[2956]: I0909 05:11:40.068891 2956 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Sep 9 05:11:40.069400 kubelet[2956]: I0909 05:11:40.069379 2956 eviction_manager.go:189] "Eviction manager: starting control loop"
Sep 9 05:11:40.074334 kubelet[2956]: E0909 05:11:40.073013 2956 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-30-120\" not found"
Sep 9 05:11:40.074334 kubelet[2956]: I0909 05:11:40.073989 2956 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Sep 9 05:11:40.074553 kubelet[2956]: I0909 05:11:40.074378 2956 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Sep 9 05:11:40.077170 kubelet[2956]: E0909 05:11:40.077071 2956 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Sep 9 05:11:40.077170 kubelet[2956]: E0909 05:11:40.077141 2956 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-30-120\" not found"
Sep 9 05:11:40.144713 systemd[1]: Created slice kubepods-burstable-pod75fa037c1679acf9a65749a95fce9c8f.slice - libcontainer container kubepods-burstable-pod75fa037c1679acf9a65749a95fce9c8f.slice.
Sep 9 05:11:40.169139 kubelet[2956]: E0909 05:11:40.169048 2956 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-30-120\" not found" node="ip-172-31-30-120"
Sep 9 05:11:40.177434 systemd[1]: Created slice kubepods-burstable-pode1d4dffe39202dbf299d050179cf0520.slice - libcontainer container kubepods-burstable-pode1d4dffe39202dbf299d050179cf0520.slice.
Sep 9 05:11:40.178693 kubelet[2956]: E0909 05:11:40.177935 2956 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.30.120:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-30-120?timeout=10s\": dial tcp 172.31.30.120:6443: connect: connection refused" interval="400ms"
Sep 9 05:11:40.180724 kubelet[2956]: I0909 05:11:40.180659 2956 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-30-120"
Sep 9 05:11:40.182438 kubelet[2956]: E0909 05:11:40.182368 2956 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.30.120:6443/api/v1/nodes\": dial tcp 172.31.30.120:6443: connect: connection refused" node="ip-172-31-30-120"
Sep 9 05:11:40.186132 kubelet[2956]: E0909 05:11:40.186075 2956 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-30-120\" not found" node="ip-172-31-30-120"
Sep 9 05:11:40.189429 systemd[1]: Created slice kubepods-burstable-pod6498e2f9f31f70a1fac01f264c0ee188.slice - libcontainer container kubepods-burstable-pod6498e2f9f31f70a1fac01f264c0ee188.slice.
Sep 9 05:11:40.193588 kubelet[2956]: E0909 05:11:40.193535 2956 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-30-120\" not found" node="ip-172-31-30-120"
Sep 9 05:11:40.274194 kubelet[2956]: I0909 05:11:40.274145 2956 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/75fa037c1679acf9a65749a95fce9c8f-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-30-120\" (UID: \"75fa037c1679acf9a65749a95fce9c8f\") " pod="kube-system/kube-apiserver-ip-172-31-30-120"
Sep 9 05:11:40.274359 kubelet[2956]: I0909 05:11:40.274213 2956 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e1d4dffe39202dbf299d050179cf0520-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-30-120\" (UID: \"e1d4dffe39202dbf299d050179cf0520\") " pod="kube-system/kube-controller-manager-ip-172-31-30-120"
Sep 9 05:11:40.274359 kubelet[2956]: I0909 05:11:40.274257 2956 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6498e2f9f31f70a1fac01f264c0ee188-kubeconfig\") pod \"kube-scheduler-ip-172-31-30-120\" (UID: \"6498e2f9f31f70a1fac01f264c0ee188\") " pod="kube-system/kube-scheduler-ip-172-31-30-120"
Sep 9 05:11:40.274359 kubelet[2956]: I0909 05:11:40.274295 2956 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e1d4dffe39202dbf299d050179cf0520-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-30-120\" (UID: \"e1d4dffe39202dbf299d050179cf0520\") " pod="kube-system/kube-controller-manager-ip-172-31-30-120"
Sep 9 05:11:40.274359 kubelet[2956]: I0909 05:11:40.274330 2956 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e1d4dffe39202dbf299d050179cf0520-k8s-certs\") pod \"kube-controller-manager-ip-172-31-30-120\" (UID: \"e1d4dffe39202dbf299d050179cf0520\") " pod="kube-system/kube-controller-manager-ip-172-31-30-120"
Sep 9 05:11:40.274569 kubelet[2956]: I0909 05:11:40.274365 2956 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e1d4dffe39202dbf299d050179cf0520-kubeconfig\") pod \"kube-controller-manager-ip-172-31-30-120\" (UID: \"e1d4dffe39202dbf299d050179cf0520\") " pod="kube-system/kube-controller-manager-ip-172-31-30-120"
Sep 9 05:11:40.274569 kubelet[2956]: I0909 05:11:40.274404 2956 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/75fa037c1679acf9a65749a95fce9c8f-ca-certs\") pod \"kube-apiserver-ip-172-31-30-120\" (UID: \"75fa037c1679acf9a65749a95fce9c8f\") " pod="kube-system/kube-apiserver-ip-172-31-30-120"
Sep 9 05:11:40.274569 kubelet[2956]: I0909 05:11:40.274451 2956 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/75fa037c1679acf9a65749a95fce9c8f-k8s-certs\") pod \"kube-apiserver-ip-172-31-30-120\" (UID: \"75fa037c1679acf9a65749a95fce9c8f\") " pod="kube-system/kube-apiserver-ip-172-31-30-120"
Sep 9 05:11:40.274569 kubelet[2956]: I0909 05:11:40.274488 2956 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e1d4dffe39202dbf299d050179cf0520-ca-certs\") pod \"kube-controller-manager-ip-172-31-30-120\" (UID: \"e1d4dffe39202dbf299d050179cf0520\") " pod="kube-system/kube-controller-manager-ip-172-31-30-120"
Sep 9 05:11:40.385076 kubelet[2956]: I0909 05:11:40.384858 2956 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-30-120"
Sep 9 05:11:40.385547 kubelet[2956]: E0909 05:11:40.385491 2956 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.30.120:6443/api/v1/nodes\": dial tcp 172.31.30.120:6443: connect: connection refused" node="ip-172-31-30-120"
Sep 9 05:11:40.471322 containerd[2018]: time="2025-09-09T05:11:40.471248781Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-30-120,Uid:75fa037c1679acf9a65749a95fce9c8f,Namespace:kube-system,Attempt:0,}"
Sep 9 05:11:40.487732 containerd[2018]: time="2025-09-09T05:11:40.487658338Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-30-120,Uid:e1d4dffe39202dbf299d050179cf0520,Namespace:kube-system,Attempt:0,}"
Sep 9 05:11:40.497056 containerd[2018]: time="2025-09-09T05:11:40.496973602Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-30-120,Uid:6498e2f9f31f70a1fac01f264c0ee188,Namespace:kube-system,Attempt:0,}"
Sep 9 05:11:40.527579 containerd[2018]: time="2025-09-09T05:11:40.527485342Z" level=info msg="connecting to shim 9704f616c460e94630286fe451dea052b4c4dc99cd42af783c170ab8c7a70727" address="unix:///run/containerd/s/939b29323597e65a13d49006fa67291f1e20a040621f5d8250ff1145bdceceab" namespace=k8s.io protocol=ttrpc version=3
Sep 9 05:11:40.579714 kubelet[2956]: E0909 05:11:40.579652 2956 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.30.120:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-30-120?timeout=10s\": dial tcp 172.31.30.120:6443: connect: connection refused" interval="800ms"
Sep 9 05:11:40.605058 containerd[2018]: time="2025-09-09T05:11:40.603359554Z" level=info msg="connecting to shim c5d8d9848216335d24c6948ac6ecaba515f298270cc6b28ff5441bf2740a1342" address="unix:///run/containerd/s/60a3a7c173521e9962c762fd773054e34f53e1d9a5e71be0554e4f68583f3b71" namespace=k8s.io protocol=ttrpc version=3
Sep 9 05:11:40.606378 systemd[1]: Started cri-containerd-9704f616c460e94630286fe451dea052b4c4dc99cd42af783c170ab8c7a70727.scope - libcontainer container 9704f616c460e94630286fe451dea052b4c4dc99cd42af783c170ab8c7a70727.
Sep 9 05:11:40.612365 containerd[2018]: time="2025-09-09T05:11:40.612296962Z" level=info msg="connecting to shim b70dc7fe218895bfe86985dd5a2526ee0d704be4ed7e7e9a6dc8beaad6ad1905" address="unix:///run/containerd/s/b4ca5c7fca1c5d159240496ae899a55e6011e3e0266402015e2edf16871d0421" namespace=k8s.io protocol=ttrpc version=3
Sep 9 05:11:40.683418 systemd[1]: Started cri-containerd-b70dc7fe218895bfe86985dd5a2526ee0d704be4ed7e7e9a6dc8beaad6ad1905.scope - libcontainer container b70dc7fe218895bfe86985dd5a2526ee0d704be4ed7e7e9a6dc8beaad6ad1905.
Sep 9 05:11:40.696895 systemd[1]: Started cri-containerd-c5d8d9848216335d24c6948ac6ecaba515f298270cc6b28ff5441bf2740a1342.scope - libcontainer container c5d8d9848216335d24c6948ac6ecaba515f298270cc6b28ff5441bf2740a1342.
Sep 9 05:11:40.741426 containerd[2018]: time="2025-09-09T05:11:40.740811299Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-30-120,Uid:75fa037c1679acf9a65749a95fce9c8f,Namespace:kube-system,Attempt:0,} returns sandbox id \"9704f616c460e94630286fe451dea052b4c4dc99cd42af783c170ab8c7a70727\""
Sep 9 05:11:40.754322 containerd[2018]: time="2025-09-09T05:11:40.754242779Z" level=info msg="CreateContainer within sandbox \"9704f616c460e94630286fe451dea052b4c4dc99cd42af783c170ab8c7a70727\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Sep 9 05:11:40.775794 containerd[2018]: time="2025-09-09T05:11:40.775738283Z" level=info msg="Container 0f7c2ffa3c3cd9851542ec5ea72476cd8837cf0f9bdc6a3fa129ce4f936c77d7: CDI devices from CRI Config.CDIDevices: []"
Sep 9 05:11:40.790131 kubelet[2956]: I0909 05:11:40.790083 2956 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-30-120"
Sep 9 05:11:40.790622 kubelet[2956]: E0909 05:11:40.790574 2956 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.30.120:6443/api/v1/nodes\": dial tcp 172.31.30.120:6443: connect: connection refused" node="ip-172-31-30-120"
Sep 9 05:11:40.798602 containerd[2018]: time="2025-09-09T05:11:40.798282719Z" level=info msg="CreateContainer within sandbox \"9704f616c460e94630286fe451dea052b4c4dc99cd42af783c170ab8c7a70727\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"0f7c2ffa3c3cd9851542ec5ea72476cd8837cf0f9bdc6a3fa129ce4f936c77d7\""
Sep 9 05:11:40.800453 containerd[2018]: time="2025-09-09T05:11:40.800405303Z" level=info msg="StartContainer for \"0f7c2ffa3c3cd9851542ec5ea72476cd8837cf0f9bdc6a3fa129ce4f936c77d7\""
Sep 9 05:11:40.813255 containerd[2018]: time="2025-09-09T05:11:40.813129323Z" level=info msg="connecting to shim 0f7c2ffa3c3cd9851542ec5ea72476cd8837cf0f9bdc6a3fa129ce4f936c77d7" address="unix:///run/containerd/s/939b29323597e65a13d49006fa67291f1e20a040621f5d8250ff1145bdceceab" protocol=ttrpc version=3
Sep 9 05:11:40.848401 containerd[2018]: time="2025-09-09T05:11:40.848351039Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-30-120,Uid:6498e2f9f31f70a1fac01f264c0ee188,Namespace:kube-system,Attempt:0,} returns sandbox id \"b70dc7fe218895bfe86985dd5a2526ee0d704be4ed7e7e9a6dc8beaad6ad1905\""
Sep 9 05:11:40.855771 containerd[2018]: time="2025-09-09T05:11:40.855712595Z" level=info msg="CreateContainer within sandbox \"b70dc7fe218895bfe86985dd5a2526ee0d704be4ed7e7e9a6dc8beaad6ad1905\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Sep 9 05:11:40.865893 containerd[2018]: time="2025-09-09T05:11:40.865775195Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-30-120,Uid:e1d4dffe39202dbf299d050179cf0520,Namespace:kube-system,Attempt:0,} returns sandbox id \"c5d8d9848216335d24c6948ac6ecaba515f298270cc6b28ff5441bf2740a1342\""
Sep 9 05:11:40.889334 systemd[1]: Started cri-containerd-0f7c2ffa3c3cd9851542ec5ea72476cd8837cf0f9bdc6a3fa129ce4f936c77d7.scope - libcontainer container 0f7c2ffa3c3cd9851542ec5ea72476cd8837cf0f9bdc6a3fa129ce4f936c77d7.
Sep 9 05:11:40.900905 containerd[2018]: time="2025-09-09T05:11:40.900689004Z" level=info msg="CreateContainer within sandbox \"c5d8d9848216335d24c6948ac6ecaba515f298270cc6b28ff5441bf2740a1342\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Sep 9 05:11:40.912234 containerd[2018]: time="2025-09-09T05:11:40.910368672Z" level=info msg="Container 06dc8c3315b2c96fa813a749e9cf0164a1c558f33e7e7427249efdbee4e61b44: CDI devices from CRI Config.CDIDevices: []"
Sep 9 05:11:40.961561 containerd[2018]: time="2025-09-09T05:11:40.961497132Z" level=info msg="CreateContainer within sandbox \"b70dc7fe218895bfe86985dd5a2526ee0d704be4ed7e7e9a6dc8beaad6ad1905\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"06dc8c3315b2c96fa813a749e9cf0164a1c558f33e7e7427249efdbee4e61b44\""
Sep 9 05:11:40.964394 containerd[2018]: time="2025-09-09T05:11:40.964331664Z" level=info msg="StartContainer for \"06dc8c3315b2c96fa813a749e9cf0164a1c558f33e7e7427249efdbee4e61b44\""
Sep 9 05:11:40.966359 containerd[2018]: time="2025-09-09T05:11:40.966284472Z" level=info msg="Container abce1a60209547f4fb30d0e89fd884827a5f368d99f2493c1a1197919c8d0d89: CDI devices from CRI Config.CDIDevices: []"
Sep 9 05:11:40.968058 containerd[2018]: time="2025-09-09T05:11:40.967668144Z" level=info msg="connecting to shim 06dc8c3315b2c96fa813a749e9cf0164a1c558f33e7e7427249efdbee4e61b44" address="unix:///run/containerd/s/b4ca5c7fca1c5d159240496ae899a55e6011e3e0266402015e2edf16871d0421" protocol=ttrpc version=3
Sep 9 05:11:40.985339 containerd[2018]: time="2025-09-09T05:11:40.985263468Z" level=info msg="CreateContainer within sandbox \"c5d8d9848216335d24c6948ac6ecaba515f298270cc6b28ff5441bf2740a1342\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"abce1a60209547f4fb30d0e89fd884827a5f368d99f2493c1a1197919c8d0d89\""
Sep 9 05:11:40.987693 containerd[2018]: time="2025-09-09T05:11:40.987611580Z" level=info msg="StartContainer for \"abce1a60209547f4fb30d0e89fd884827a5f368d99f2493c1a1197919c8d0d89\""
Sep 9 05:11:40.990935 containerd[2018]: time="2025-09-09T05:11:40.990625572Z" level=info msg="connecting to shim abce1a60209547f4fb30d0e89fd884827a5f368d99f2493c1a1197919c8d0d89" address="unix:///run/containerd/s/60a3a7c173521e9962c762fd773054e34f53e1d9a5e71be0554e4f68583f3b71" protocol=ttrpc version=3
Sep 9 05:11:41.015506 systemd[1]: Started cri-containerd-06dc8c3315b2c96fa813a749e9cf0164a1c558f33e7e7427249efdbee4e61b44.scope - libcontainer container 06dc8c3315b2c96fa813a749e9cf0164a1c558f33e7e7427249efdbee4e61b44.
Sep 9 05:11:41.075398 systemd[1]: Started cri-containerd-abce1a60209547f4fb30d0e89fd884827a5f368d99f2493c1a1197919c8d0d89.scope - libcontainer container abce1a60209547f4fb30d0e89fd884827a5f368d99f2493c1a1197919c8d0d89.
Sep 9 05:11:41.091509 containerd[2018]: time="2025-09-09T05:11:41.091165641Z" level=info msg="StartContainer for \"0f7c2ffa3c3cd9851542ec5ea72476cd8837cf0f9bdc6a3fa129ce4f936c77d7\" returns successfully"
Sep 9 05:11:41.133697 kubelet[2956]: W0909 05:11:41.133591 2956 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.30.120:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-30-120&limit=500&resourceVersion=0": dial tcp 172.31.30.120:6443: connect: connection refused
Sep 9 05:11:41.134822 kubelet[2956]: E0909 05:11:41.133693 2956 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.30.120:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-30-120&limit=500&resourceVersion=0\": dial tcp 172.31.30.120:6443: connect: connection refused" logger="UnhandledError"
Sep 9 05:11:41.177256 kubelet[2956]: W0909 05:11:41.177140 2956 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.30.120:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.30.120:6443: connect: connection refused
Sep 9 05:11:41.177256 kubelet[2956]: E0909 05:11:41.177245 2956 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.30.120:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.30.120:6443: connect: connection refused" logger="UnhandledError"
Sep 9 05:11:41.181292 containerd[2018]: time="2025-09-09T05:11:41.179989161Z" level=info msg="StartContainer for \"06dc8c3315b2c96fa813a749e9cf0164a1c558f33e7e7427249efdbee4e61b44\" returns successfully"
Sep 9 05:11:41.243260 containerd[2018]: time="2025-09-09T05:11:41.243199209Z" level=info msg="StartContainer for \"abce1a60209547f4fb30d0e89fd884827a5f368d99f2493c1a1197919c8d0d89\" returns successfully"
Sep 9 05:11:41.592858 kubelet[2956]: I0909 05:11:41.592800 2956 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-30-120"
Sep 9 05:11:42.083578 kubelet[2956]: E0909 05:11:42.083508 2956 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-30-120\" not found" node="ip-172-31-30-120"
Sep 9 05:11:42.090774 kubelet[2956]: E0909 05:11:42.090727 2956 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-30-120\" not found" node="ip-172-31-30-120"
Sep 9 05:11:42.099776 kubelet[2956]: E0909 05:11:42.099731 2956 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-30-120\" not found" node="ip-172-31-30-120"
Sep 9 05:11:43.104388 kubelet[2956]: E0909 05:11:43.104329 2956 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-30-120\" not found" node="ip-172-31-30-120"
Sep 9 05:11:43.104969 kubelet[2956]: E0909 05:11:43.104920 2956 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-30-120\" not found" node="ip-172-31-30-120"
Sep 9 05:11:43.107271 kubelet[2956]: E0909 05:11:43.107100 2956 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-30-120\" not found" node="ip-172-31-30-120"
Sep 9 05:11:44.106574 kubelet[2956]: E0909 05:11:44.106511 2956 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-30-120\" not found" node="ip-172-31-30-120"
Sep 9 05:11:44.113054 kubelet[2956]: E0909 05:11:44.112966 2956 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-30-120\" not found" node="ip-172-31-30-120"
Sep 9 05:11:44.393247 update_engine[1999]: I20250909 05:11:44.393066 1999 update_attempter.cc:509] Updating boot flags...
Sep 9 05:11:45.637688 kubelet[2956]: E0909 05:11:45.637621 2956 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-30-120\" not found" node="ip-172-31-30-120"
Sep 9 05:11:45.707665 kubelet[2956]: I0909 05:11:45.707288 2956 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-30-120"
Sep 9 05:11:45.773306 kubelet[2956]: I0909 05:11:45.773261 2956 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-30-120"
Sep 9 05:11:45.789157 kubelet[2956]: E0909 05:11:45.789113 2956 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-30-120\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ip-172-31-30-120"
Sep 9 05:11:45.789535 kubelet[2956]: I0909 05:11:45.789338 2956 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-30-120"
Sep 9 05:11:45.792777 kubelet[2956]: E0909 05:11:45.792734 2956 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-30-120\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ip-172-31-30-120"
Sep 9 05:11:45.794284 kubelet[2956]: I0909 05:11:45.794071 2956 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-30-120"
Sep 9 05:11:45.800642 kubelet[2956]: E0909 05:11:45.800590 2956 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-30-120\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ip-172-31-30-120"
Sep 9 05:11:45.938843 kubelet[2956]: I0909 05:11:45.938520 2956 apiserver.go:52] "Watching apiserver"
Sep 9 05:11:45.974232 kubelet[2956]: I0909 05:11:45.974179 2956 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Sep 9 05:11:46.722052 kubelet[2956]: I0909 05:11:46.721987 2956 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-30-120"
Sep 9 05:11:47.407210 kubelet[2956]: I0909 05:11:47.407162 2956 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-30-120"
Sep 9 05:11:47.669131 systemd[1]: Reload requested from client PID 3409 ('systemctl') (unit session-7.scope)...
Sep 9 05:11:47.669914 systemd[1]: Reloading...
Sep 9 05:11:47.880116 zram_generator::config[3468]: No configuration found.
Sep 9 05:11:48.344369 systemd[1]: Reloading finished in 673 ms.
Sep 9 05:11:48.404050 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 9 05:11:48.419106 systemd[1]: kubelet.service: Deactivated successfully.
Sep 9 05:11:48.419608 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 9 05:11:48.419707 systemd[1]: kubelet.service: Consumed 2.493s CPU time, 127.8M memory peak.
Sep 9 05:11:48.422945 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 9 05:11:48.780367 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 9 05:11:48.796417 (kubelet)[3513]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Sep 9 05:11:48.904314 kubelet[3513]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 9 05:11:48.904314 kubelet[3513]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Sep 9 05:11:48.904314 kubelet[3513]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 9 05:11:48.904877 kubelet[3513]: I0909 05:11:48.904454 3513 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Sep 9 05:11:48.918381 kubelet[3513]: I0909 05:11:48.918330 3513 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
Sep 9 05:11:48.919046 kubelet[3513]: I0909 05:11:48.918554 3513 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Sep 9 05:11:48.919475 kubelet[3513]: I0909 05:11:48.919433 3513 server.go:954] "Client rotation is on, will bootstrap in background"
Sep 9 05:11:48.923489 kubelet[3513]: I0909 05:11:48.922458 3513 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Sep 9 05:11:48.927319 kubelet[3513]: I0909 05:11:48.927263 3513 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 9 05:11:48.947187 kubelet[3513]: I0909 05:11:48.947130 3513 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Sep 9 05:11:48.954852 kubelet[3513]: I0909 05:11:48.953370 3513 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Sep 9 05:11:48.955074 kubelet[3513]: I0909 05:11:48.955007 3513 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep 9 05:11:48.955487 kubelet[3513]: I0909 05:11:48.955166 3513 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-30-120","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Sep 9 05:11:48.955712 kubelet[3513]: I0909 05:11:48.955690 3513 topology_manager.go:138] "Creating topology manager with none policy"
Sep 9 05:11:48.955803 kubelet[3513]: I0909 05:11:48.955787 3513 container_manager_linux.go:304] "Creating device plugin manager"
Sep 9 05:11:48.955957 kubelet[3513]: I0909 05:11:48.955940 3513 state_mem.go:36] "Initialized new in-memory state store"
Sep 9 05:11:48.956347 kubelet[3513]: I0909 05:11:48.956325 3513 kubelet.go:446] "Attempting to sync node with API server"
Sep 9 05:11:48.956474 kubelet[3513]: I0909 05:11:48.956455 3513 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep 9 05:11:48.957134 kubelet[3513]: I0909 05:11:48.956615 3513 kubelet.go:352] "Adding apiserver pod source"
Sep 9 05:11:48.957987 kubelet[3513]: I0909 05:11:48.957958 3513 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep 9 05:11:48.965816 kubelet[3513]: I0909 05:11:48.965351 3513 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1"
Sep 9 05:11:48.966219 kubelet[3513]: I0909 05:11:48.966124 3513 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Sep 9 05:11:48.969092 kubelet[3513]: I0909 05:11:48.966833 3513 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Sep 9 05:11:48.969092 kubelet[3513]: I0909 05:11:48.966917 3513 server.go:1287] "Started kubelet"
Sep 9 05:11:48.970314 kubelet[3513]: I0909 05:11:48.970268 3513 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Sep 9 05:11:48.980283 kubelet[3513]: I0909 05:11:48.980155 3513 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Sep 9 05:11:48.982174 kubelet[3513]: I0909 05:11:48.981705 3513 server.go:479] "Adding debug handlers to kubelet server"
Sep 9 05:11:48.984663 kubelet[3513]: I0909 05:11:48.984565 3513 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Sep 9 05:11:48.984944 kubelet[3513]: I0909 05:11:48.984901 3513 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Sep 9 05:11:48.986065 kubelet[3513]: I0909 05:11:48.985338 3513 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Sep 9 05:11:48.989203 kubelet[3513]: I0909 05:11:48.989160 3513 volume_manager.go:297] "Starting Kubelet Volume Manager"
Sep 9 05:11:48.991735 kubelet[3513]: E0909 05:11:48.989525 3513 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-30-120\" not found"
Sep 9 05:11:48.993089 kubelet[3513]: I0909 05:11:48.993042 3513 factory.go:221] Registration of the systemd container factory successfully
Sep 9 05:11:48.993243 kubelet[3513]: I0909 05:11:48.993221 3513 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Sep 9 05:11:48.995141 kubelet[3513]: I0909 05:11:48.995090 3513 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Sep 9 05:11:48.995364 kubelet[3513]: I0909 05:11:48.995319 3513 reconciler.go:26] "Reconciler: start to sync state"
Sep 9 05:11:49.000935 kubelet[3513]: I0909 05:11:49.000809 3513 factory.go:221] Registration of the containerd container factory successfully
Sep 9 05:11:49.011118 kubelet[3513]: I0909 05:11:49.010178 3513 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Sep 9 05:11:49.013057 kubelet[3513]: E0909 05:11:49.011717 3513 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Sep 9 05:11:49.025158 kubelet[3513]: I0909 05:11:49.025102 3513 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Sep 9 05:11:49.031190 kubelet[3513]: I0909 05:11:49.031142 3513 status_manager.go:227] "Starting to sync pod status with apiserver"
Sep 9 05:11:49.031190 kubelet[3513]: I0909 05:11:49.031199 3513 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Sep 9 05:11:49.031400 kubelet[3513]: I0909 05:11:49.031217 3513 kubelet.go:2382] "Starting kubelet main sync loop"
Sep 9 05:11:49.031400 kubelet[3513]: E0909 05:11:49.031294 3513 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Sep 9 05:11:49.100953 kubelet[3513]: E0909 05:11:49.100816 3513 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-30-120\" not found"
Sep 9 05:11:49.133642 kubelet[3513]: E0909 05:11:49.133586 3513 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Sep 9 05:11:49.198559 kubelet[3513]: I0909 05:11:49.198513 3513 cpu_manager.go:221] "Starting CPU manager" policy="none"
Sep 9 05:11:49.198559 kubelet[3513]: I0909 05:11:49.198547 3513 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Sep 9 05:11:49.198723 kubelet[3513]: I0909 05:11:49.198583 3513 state_mem.go:36] "Initialized new in-memory state store"
Sep 9 05:11:49.199974 kubelet[3513]: I0909 05:11:49.198849 3513 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Sep 9 05:11:49.199974 kubelet[3513]: I0909 05:11:49.198880 3513 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Sep 9 05:11:49.199974 kubelet[3513]: I0909 05:11:49.198914 3513 policy_none.go:49] "None policy: Start"
Sep 9 05:11:49.199974 kubelet[3513]: I0909 05:11:49.198942 3513 memory_manager.go:186] "Starting memorymanager" policy="None"
Sep 9 05:11:49.199974 kubelet[3513]: I0909 05:11:49.198962 3513 state_mem.go:35] "Initializing new in-memory state store"
Sep 9 05:11:49.199974 kubelet[3513]: I0909 05:11:49.199169 3513 state_mem.go:75] "Updated machine memory state"
Sep 9 05:11:49.210163 kubelet[3513]: I0909 05:11:49.209989 3513 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Sep 9 05:11:49.216203 kubelet[3513]: I0909 05:11:49.215358 3513 eviction_manager.go:189] "Eviction manager: starting control loop"
Sep 9 05:11:49.216203 kubelet[3513]: I0909 05:11:49.215418 3513 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Sep 9 05:11:49.217682 kubelet[3513]: I0909 05:11:49.217620 3513 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Sep 9 05:11:49.221637 kubelet[3513]: E0909 05:11:49.221520 3513 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Sep 9 05:11:49.334807 kubelet[3513]: I0909 05:11:49.334720 3513 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-30-120"
Sep 9 05:11:49.334807 kubelet[3513]: I0909 05:11:49.334747 3513 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-30-120"
Sep 9 05:11:49.336369 kubelet[3513]: I0909 05:11:49.335397 3513 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-30-120"
Sep 9 05:11:49.345729 kubelet[3513]: E0909 05:11:49.345611 3513 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-30-120\" already exists" pod="kube-system/kube-scheduler-ip-172-31-30-120"
Sep 9 05:11:49.349141 kubelet[3513]: I0909 05:11:49.345889 3513 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-30-120"
Sep 9 05:11:49.350398 kubelet[3513]: E0909 05:11:49.350334 3513 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-30-120\" already exists" pod="kube-system/kube-apiserver-ip-172-31-30-120"
Sep 9 05:11:49.357409 kubelet[3513]: I0909 05:11:49.357263 3513 kubelet_node_status.go:124] "Node was previously registered" node="ip-172-31-30-120"
Sep 9 05:11:49.357524 kubelet[3513]: I0909 05:11:49.357424 3513 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-30-120"
Sep 9 05:11:49.399009 kubelet[3513]: I0909 05:11:49.398934 3513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e1d4dffe39202dbf299d050179cf0520-ca-certs\") pod \"kube-controller-manager-ip-172-31-30-120\" (UID: \"e1d4dffe39202dbf299d050179cf0520\") " pod="kube-system/kube-controller-manager-ip-172-31-30-120"
Sep 9 05:11:49.399009 kubelet[3513]: I0909 05:11:49.399008 3513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e1d4dffe39202dbf299d050179cf0520-k8s-certs\") pod \"kube-controller-manager-ip-172-31-30-120\" (UID: \"e1d4dffe39202dbf299d050179cf0520\") " pod="kube-system/kube-controller-manager-ip-172-31-30-120"
Sep 9 05:11:49.399247 kubelet[3513]: I0909 05:11:49.399075 3513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e1d4dffe39202dbf299d050179cf0520-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-30-120\" (UID: \"e1d4dffe39202dbf299d050179cf0520\") " pod="kube-system/kube-controller-manager-ip-172-31-30-120"
Sep 9 05:11:49.399247 kubelet[3513]: I0909 05:11:49.399119 3513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6498e2f9f31f70a1fac01f264c0ee188-kubeconfig\") pod \"kube-scheduler-ip-172-31-30-120\" (UID: \"6498e2f9f31f70a1fac01f264c0ee188\") " pod="kube-system/kube-scheduler-ip-172-31-30-120"
Sep 9 05:11:49.399247 kubelet[3513]: I0909 05:11:49.399156 3513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/75fa037c1679acf9a65749a95fce9c8f-ca-certs\") pod \"kube-apiserver-ip-172-31-30-120\" (UID: \"75fa037c1679acf9a65749a95fce9c8f\") " pod="kube-system/kube-apiserver-ip-172-31-30-120"
Sep 9 05:11:49.399247 kubelet[3513]: I0909 05:11:49.399191 3513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/75fa037c1679acf9a65749a95fce9c8f-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-30-120\" (UID: \"75fa037c1679acf9a65749a95fce9c8f\") " pod="kube-system/kube-apiserver-ip-172-31-30-120"
Sep 9 05:11:49.399247 kubelet[3513]: I0909 05:11:49.399225 3513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e1d4dffe39202dbf299d050179cf0520-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-30-120\" (UID: \"e1d4dffe39202dbf299d050179cf0520\") " pod="kube-system/kube-controller-manager-ip-172-31-30-120"
Sep 9 05:11:49.399599 kubelet[3513]: I0909 05:11:49.399259 3513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e1d4dffe39202dbf299d050179cf0520-kubeconfig\") pod \"kube-controller-manager-ip-172-31-30-120\" (UID: \"e1d4dffe39202dbf299d050179cf0520\") " pod="kube-system/kube-controller-manager-ip-172-31-30-120"
Sep 9 05:11:49.399599 kubelet[3513]: I0909 05:11:49.399295 3513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/75fa037c1679acf9a65749a95fce9c8f-k8s-certs\") pod \"kube-apiserver-ip-172-31-30-120\" (UID: \"75fa037c1679acf9a65749a95fce9c8f\") " pod="kube-system/kube-apiserver-ip-172-31-30-120"
Sep 9 05:11:49.961527 kubelet[3513]: I0909 05:11:49.961409 3513 apiserver.go:52] "Watching apiserver"
Sep 9 05:11:49.996573 kubelet[3513]: I0909 05:11:49.996322 3513 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Sep 9 05:11:50.161069 kubelet[3513]: I0909 05:11:50.157572 3513 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-30-120"
Sep 9 05:11:50.161450 kubelet[3513]: I0909 05:11:50.161402 3513 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-30-120"
Sep 9 05:11:50.176278 kubelet[3513]: E0909 05:11:50.176222 3513 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-30-120\" already exists" pod="kube-system/kube-apiserver-ip-172-31-30-120"
Sep 9 05:11:50.180921 kubelet[3513]: E0909 05:11:50.180861 3513 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-30-120\" already exists" pod="kube-system/kube-scheduler-ip-172-31-30-120"
Sep 9 05:11:50.212125 kubelet[3513]: I0909 05:11:50.211913 3513 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-30-120" podStartSLOduration=3.211889982 podStartE2EDuration="3.211889982s" podCreationTimestamp="2025-09-09 05:11:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 05:11:50.210084246 +0000 UTC m=+1.403301524" watchObservedRunningTime="2025-09-09 05:11:50.211889982 +0000 UTC m=+1.405107236"
Sep 9 05:11:50.240248 kubelet[3513]: I0909 05:11:50.240038 3513 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-30-120" podStartSLOduration=1.23999793 podStartE2EDuration="1.23999793s" podCreationTimestamp="2025-09-09 05:11:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 05:11:50.23884491 +0000 UTC m=+1.432062188" watchObservedRunningTime="2025-09-09 05:11:50.23999793 +0000 UTC m=+1.433215208"
Sep 9 05:11:50.286070 kubelet[3513]: I0909 05:11:50.285622 3513 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-30-120" podStartSLOduration=4.285602622 podStartE2EDuration="4.285602622s" podCreationTimestamp="2025-09-09 05:11:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 05:11:50.257083734 +0000 UTC m=+1.450301012" watchObservedRunningTime="2025-09-09 05:11:50.285602622 +0000 UTC m=+1.478819888"
Sep 9 05:11:54.517537 kubelet[3513]: I0909 05:11:54.517381 3513 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Sep 9 05:11:54.518937 containerd[2018]: time="2025-09-09T05:11:54.518256803Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Sep 9 05:11:54.521587 kubelet[3513]: I0909 05:11:54.519490 3513 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Sep 9 05:11:55.307307 systemd[1]: Created slice kubepods-besteffort-pod2fd5ca94_f89f_48fc_b25f_0fc758f68fb0.slice - libcontainer container kubepods-besteffort-pod2fd5ca94_f89f_48fc_b25f_0fc758f68fb0.slice.
Sep 9 05:11:55.345518 kubelet[3513]: I0909 05:11:55.345337 3513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/2fd5ca94-f89f-48fc-b25f-0fc758f68fb0-var-lib-calico\") pod \"tigera-operator-755d956888-jswz5\" (UID: \"2fd5ca94-f89f-48fc-b25f-0fc758f68fb0\") " pod="tigera-operator/tigera-operator-755d956888-jswz5"
Sep 9 05:11:55.345518 kubelet[3513]: I0909 05:11:55.345443 3513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p58sv\" (UniqueName: \"kubernetes.io/projected/2fd5ca94-f89f-48fc-b25f-0fc758f68fb0-kube-api-access-p58sv\") pod \"tigera-operator-755d956888-jswz5\" (UID: \"2fd5ca94-f89f-48fc-b25f-0fc758f68fb0\") " pod="tigera-operator/tigera-operator-755d956888-jswz5"
Sep 9 05:11:55.620796 systemd[1]: Created slice kubepods-besteffort-poddf6156a5_70c4_4e01_b395_ddc593fcf560.slice - libcontainer container kubepods-besteffort-poddf6156a5_70c4_4e01_b395_ddc593fcf560.slice.
Sep 9 05:11:55.626419 containerd[2018]: time="2025-09-09T05:11:55.625321465Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-jswz5,Uid:2fd5ca94-f89f-48fc-b25f-0fc758f68fb0,Namespace:tigera-operator,Attempt:0,}"
Sep 9 05:11:55.649146 kubelet[3513]: I0909 05:11:55.648898 3513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/df6156a5-70c4-4e01-b395-ddc593fcf560-kube-proxy\") pod \"kube-proxy-728f7\" (UID: \"df6156a5-70c4-4e01-b395-ddc593fcf560\") " pod="kube-system/kube-proxy-728f7"
Sep 9 05:11:55.649146 kubelet[3513]: I0909 05:11:55.649000 3513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jbv8d\" (UniqueName: \"kubernetes.io/projected/df6156a5-70c4-4e01-b395-ddc593fcf560-kube-api-access-jbv8d\") pod \"kube-proxy-728f7\" (UID: \"df6156a5-70c4-4e01-b395-ddc593fcf560\") " pod="kube-system/kube-proxy-728f7"
Sep 9 05:11:55.649146 kubelet[3513]: I0909 05:11:55.649078 3513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/df6156a5-70c4-4e01-b395-ddc593fcf560-xtables-lock\") pod \"kube-proxy-728f7\" (UID: \"df6156a5-70c4-4e01-b395-ddc593fcf560\") " pod="kube-system/kube-proxy-728f7"
Sep 9 05:11:55.649146 kubelet[3513]: I0909 05:11:55.649118 3513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/df6156a5-70c4-4e01-b395-ddc593fcf560-lib-modules\") pod \"kube-proxy-728f7\" (UID: \"df6156a5-70c4-4e01-b395-ddc593fcf560\") " pod="kube-system/kube-proxy-728f7"
Sep 9 05:11:55.679832 containerd[2018]: time="2025-09-09T05:11:55.677987305Z" level=info msg="connecting to shim ee7b9e4c53a1e2dc64149d5a109d9c4cc43aa90ad04c0683d981a7f28bfe631d" address="unix:///run/containerd/s/370e936ddafab9915225570a7b4b13ce6afe24f07bd917fe5e0e7a3f66dc75cc" namespace=k8s.io protocol=ttrpc version=3
Sep 9 05:11:55.728367 systemd[1]: Started cri-containerd-ee7b9e4c53a1e2dc64149d5a109d9c4cc43aa90ad04c0683d981a7f28bfe631d.scope - libcontainer container ee7b9e4c53a1e2dc64149d5a109d9c4cc43aa90ad04c0683d981a7f28bfe631d.
Sep 9 05:11:55.823665 containerd[2018]: time="2025-09-09T05:11:55.823604990Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-jswz5,Uid:2fd5ca94-f89f-48fc-b25f-0fc758f68fb0,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"ee7b9e4c53a1e2dc64149d5a109d9c4cc43aa90ad04c0683d981a7f28bfe631d\""
Sep 9 05:11:55.827342 containerd[2018]: time="2025-09-09T05:11:55.826484606Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\""
Sep 9 05:11:55.932477 containerd[2018]: time="2025-09-09T05:11:55.932349998Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-728f7,Uid:df6156a5-70c4-4e01-b395-ddc593fcf560,Namespace:kube-system,Attempt:0,}"
Sep 9 05:11:55.986104 containerd[2018]: time="2025-09-09T05:11:55.985233314Z" level=info msg="connecting to shim 5b85c42c7963c1c4459d4044f4b47539c0630e7526ad475126b83433f7226e8f" address="unix:///run/containerd/s/b7d5686c8b61291405e6f570a3a8cc4914ae53f2d9dd23a8e8ee234250736998" namespace=k8s.io protocol=ttrpc version=3
Sep 9 05:11:56.036459 systemd[1]: Started cri-containerd-5b85c42c7963c1c4459d4044f4b47539c0630e7526ad475126b83433f7226e8f.scope - libcontainer container 5b85c42c7963c1c4459d4044f4b47539c0630e7526ad475126b83433f7226e8f.
Sep 9 05:11:56.103848 containerd[2018]: time="2025-09-09T05:11:56.103791995Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-728f7,Uid:df6156a5-70c4-4e01-b395-ddc593fcf560,Namespace:kube-system,Attempt:0,} returns sandbox id \"5b85c42c7963c1c4459d4044f4b47539c0630e7526ad475126b83433f7226e8f\""
Sep 9 05:11:56.111190 containerd[2018]: time="2025-09-09T05:11:56.111110819Z" level=info msg="CreateContainer within sandbox \"5b85c42c7963c1c4459d4044f4b47539c0630e7526ad475126b83433f7226e8f\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Sep 9 05:11:56.131291 containerd[2018]: time="2025-09-09T05:11:56.131241707Z" level=info msg="Container db93b7b1b80af900bc9756e5b331cfd647d2b555189a4ebb3286c34acd076482: CDI devices from CRI Config.CDIDevices: []"
Sep 9 05:11:56.147203 containerd[2018]: time="2025-09-09T05:11:56.147153443Z" level=info msg="CreateContainer within sandbox \"5b85c42c7963c1c4459d4044f4b47539c0630e7526ad475126b83433f7226e8f\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"db93b7b1b80af900bc9756e5b331cfd647d2b555189a4ebb3286c34acd076482\""
Sep 9 05:11:56.148942 containerd[2018]: time="2025-09-09T05:11:56.148849643Z" level=info msg="StartContainer for \"db93b7b1b80af900bc9756e5b331cfd647d2b555189a4ebb3286c34acd076482\""
Sep 9 05:11:56.153630 containerd[2018]: time="2025-09-09T05:11:56.153517235Z" level=info msg="connecting to shim db93b7b1b80af900bc9756e5b331cfd647d2b555189a4ebb3286c34acd076482" address="unix:///run/containerd/s/b7d5686c8b61291405e6f570a3a8cc4914ae53f2d9dd23a8e8ee234250736998" protocol=ttrpc version=3
Sep 9 05:11:56.202397 systemd[1]: Started cri-containerd-db93b7b1b80af900bc9756e5b331cfd647d2b555189a4ebb3286c34acd076482.scope - libcontainer container db93b7b1b80af900bc9756e5b331cfd647d2b555189a4ebb3286c34acd076482.
Sep 9 05:11:56.291163 containerd[2018]: time="2025-09-09T05:11:56.291048024Z" level=info msg="StartContainer for \"db93b7b1b80af900bc9756e5b331cfd647d2b555189a4ebb3286c34acd076482\" returns successfully"
Sep 9 05:11:57.164177 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2766262688.mount: Deactivated successfully.
Sep 9 05:11:58.136721 containerd[2018]: time="2025-09-09T05:11:58.136650793Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 05:11:58.138318 containerd[2018]: time="2025-09-09T05:11:58.138000349Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.6: active requests=0, bytes read=22152365"
Sep 9 05:11:58.139379 containerd[2018]: time="2025-09-09T05:11:58.139287901Z" level=info msg="ImageCreate event name:\"sha256:dd2e197838b00861b08ae5f480dfbfb9a519722e35ced99346315722309cbe9f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 05:11:58.143061 containerd[2018]: time="2025-09-09T05:11:58.142981753Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 05:11:58.145081 containerd[2018]: time="2025-09-09T05:11:58.144317401Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.6\" with image id \"sha256:dd2e197838b00861b08ae5f480dfbfb9a519722e35ced99346315722309cbe9f\", repo tag \"quay.io/tigera/operator:v1.38.6\", repo digest \"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\", size \"22148360\" in 2.316594155s"
Sep 9 05:11:58.145081 containerd[2018]: time="2025-09-09T05:11:58.144373537Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\" returns image reference \"sha256:dd2e197838b00861b08ae5f480dfbfb9a519722e35ced99346315722309cbe9f\""
Sep 9 05:11:58.150949 containerd[2018]: time="2025-09-09T05:11:58.150192229Z" level=info msg="CreateContainer within sandbox \"ee7b9e4c53a1e2dc64149d5a109d9c4cc43aa90ad04c0683d981a7f28bfe631d\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Sep 9 05:11:58.163918 containerd[2018]: time="2025-09-09T05:11:58.163864453Z" level=info msg="Container 87a87a0147d98029093c7aa9d67537beed71013487574bc212d781bd524f712f: CDI devices from CRI Config.CDIDevices: []"
Sep 9 05:11:58.180324 containerd[2018]: time="2025-09-09T05:11:58.180263065Z" level=info msg="CreateContainer within sandbox \"ee7b9e4c53a1e2dc64149d5a109d9c4cc43aa90ad04c0683d981a7f28bfe631d\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"87a87a0147d98029093c7aa9d67537beed71013487574bc212d781bd524f712f\""
Sep 9 05:11:58.181656 containerd[2018]: time="2025-09-09T05:11:58.181522357Z" level=info msg="StartContainer for \"87a87a0147d98029093c7aa9d67537beed71013487574bc212d781bd524f712f\""
Sep 9 05:11:58.184097 containerd[2018]: time="2025-09-09T05:11:58.184003213Z" level=info msg="connecting to shim 87a87a0147d98029093c7aa9d67537beed71013487574bc212d781bd524f712f" address="unix:///run/containerd/s/370e936ddafab9915225570a7b4b13ce6afe24f07bd917fe5e0e7a3f66dc75cc" protocol=ttrpc version=3
Sep 9 05:11:58.243679 systemd[1]: Started cri-containerd-87a87a0147d98029093c7aa9d67537beed71013487574bc212d781bd524f712f.scope - libcontainer container 87a87a0147d98029093c7aa9d67537beed71013487574bc212d781bd524f712f.
Sep 9 05:11:58.302680 containerd[2018]: time="2025-09-09T05:11:58.302597654Z" level=info msg="StartContainer for \"87a87a0147d98029093c7aa9d67537beed71013487574bc212d781bd524f712f\" returns successfully"
Sep 9 05:11:59.226864 kubelet[3513]: I0909 05:11:59.226743 3513 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-728f7" podStartSLOduration=4.226722063 podStartE2EDuration="4.226722063s" podCreationTimestamp="2025-09-09 05:11:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 05:11:57.213747709 +0000 UTC m=+8.406964975" watchObservedRunningTime="2025-09-09 05:11:59.226722063 +0000 UTC m=+10.419939329"
Sep 9 05:12:05.193127 sudo[2376]: pam_unix(sudo:session): session closed for user root
Sep 9 05:12:05.220470 sshd[2375]: Connection closed by 147.75.109.163 port 56328
Sep 9 05:12:05.220957 sshd-session[2372]: pam_unix(sshd:session): session closed for user core
Sep 9 05:12:05.234873 systemd[1]: sshd@6-172.31.30.120:22-147.75.109.163:56328.service: Deactivated successfully.
Sep 9 05:12:05.236095 systemd-logind[1998]: Session 7 logged out. Waiting for processes to exit.
Sep 9 05:12:05.245231 systemd[1]: session-7.scope: Deactivated successfully.
Sep 9 05:12:05.247204 systemd[1]: session-7.scope: Consumed 10.951s CPU time, 223.3M memory peak.
Sep 9 05:12:05.255385 systemd-logind[1998]: Removed session 7.
Sep 9 05:12:17.681822 kubelet[3513]: I0909 05:12:17.681230 3513 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-755d956888-jswz5" podStartSLOduration=20.360628891 podStartE2EDuration="22.68119267s" podCreationTimestamp="2025-09-09 05:11:55 +0000 UTC" firstStartedPulling="2025-09-09 05:11:55.825743834 +0000 UTC m=+7.018961100" lastFinishedPulling="2025-09-09 05:11:58.146307613 +0000 UTC m=+9.339524879" observedRunningTime="2025-09-09 05:11:59.228174747 +0000 UTC m=+10.421392013" watchObservedRunningTime="2025-09-09 05:12:17.68119267 +0000 UTC m=+28.874409960"
Sep 9 05:12:17.703134 systemd[1]: Created slice kubepods-besteffort-pod3619f919_e316_4524_a3c1_4e2d67d37e5c.slice - libcontainer container kubepods-besteffort-pod3619f919_e316_4524_a3c1_4e2d67d37e5c.slice.
Sep 9 05:12:17.806439 kubelet[3513]: I0909 05:12:17.806222 3513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/3619f919-e316-4524-a3c1-4e2d67d37e5c-typha-certs\") pod \"calico-typha-7487954698-cnjq4\" (UID: \"3619f919-e316-4524-a3c1-4e2d67d37e5c\") " pod="calico-system/calico-typha-7487954698-cnjq4"
Sep 9 05:12:17.806439 kubelet[3513]: I0909 05:12:17.806292 3513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3619f919-e316-4524-a3c1-4e2d67d37e5c-tigera-ca-bundle\") pod \"calico-typha-7487954698-cnjq4\" (UID: \"3619f919-e316-4524-a3c1-4e2d67d37e5c\") " pod="calico-system/calico-typha-7487954698-cnjq4"
Sep 9 05:12:17.806439 kubelet[3513]: I0909 05:12:17.806333 3513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gbxzj\" (UniqueName: \"kubernetes.io/projected/3619f919-e316-4524-a3c1-4e2d67d37e5c-kube-api-access-gbxzj\") pod \"calico-typha-7487954698-cnjq4\" (UID: \"3619f919-e316-4524-a3c1-4e2d67d37e5c\") " pod="calico-system/calico-typha-7487954698-cnjq4"
Sep 9 05:12:17.955772 systemd[1]: Created slice kubepods-besteffort-podb80ce344_32ff_4d1e_a39d_f9362b1c0166.slice - libcontainer container kubepods-besteffort-podb80ce344_32ff_4d1e_a39d_f9362b1c0166.slice.
Sep 9 05:12:18.007356 kubelet[3513]: I0909 05:12:18.007297 3513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b80ce344-32ff-4d1e-a39d-f9362b1c0166-tigera-ca-bundle\") pod \"calico-node-bvhl9\" (UID: \"b80ce344-32ff-4d1e-a39d-f9362b1c0166\") " pod="calico-system/calico-node-bvhl9"
Sep 9 05:12:18.007835 kubelet[3513]: I0909 05:12:18.007807 3513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b80ce344-32ff-4d1e-a39d-f9362b1c0166-xtables-lock\") pod \"calico-node-bvhl9\" (UID: \"b80ce344-32ff-4d1e-a39d-f9362b1c0166\") " pod="calico-system/calico-node-bvhl9"
Sep 9 05:12:18.008074 kubelet[3513]: I0909 05:12:18.008048 3513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/b80ce344-32ff-4d1e-a39d-f9362b1c0166-cni-log-dir\") pod \"calico-node-bvhl9\" (UID: \"b80ce344-32ff-4d1e-a39d-f9362b1c0166\") " pod="calico-system/calico-node-bvhl9"
Sep 9 05:12:18.008244 kubelet[3513]: I0909 05:12:18.008220 3513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/b80ce344-32ff-4d1e-a39d-f9362b1c0166-var-run-calico\") pod \"calico-node-bvhl9\" (UID: \"b80ce344-32ff-4d1e-a39d-f9362b1c0166\") " pod="calico-system/calico-node-bvhl9"
Sep 9 05:12:18.008401 kubelet[3513]: I0909 05:12:18.008377 3513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6kfrm\" (UniqueName: \"kubernetes.io/projected/b80ce344-32ff-4d1e-a39d-f9362b1c0166-kube-api-access-6kfrm\") pod \"calico-node-bvhl9\" (UID: \"b80ce344-32ff-4d1e-a39d-f9362b1c0166\") " pod="calico-system/calico-node-bvhl9"
Sep 9 05:12:18.008606 kubelet[3513]: I0909 05:12:18.008531 3513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/b80ce344-32ff-4d1e-a39d-f9362b1c0166-var-lib-calico\") pod \"calico-node-bvhl9\" (UID: \"b80ce344-32ff-4d1e-a39d-f9362b1c0166\") " pod="calico-system/calico-node-bvhl9"
Sep 9 05:12:18.009511 kubelet[3513]: I0909 05:12:18.008886 3513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/b80ce344-32ff-4d1e-a39d-f9362b1c0166-cni-net-dir\") pod \"calico-node-bvhl9\" (UID: \"b80ce344-32ff-4d1e-a39d-f9362b1c0166\") " pod="calico-system/calico-node-bvhl9"
Sep 9 05:12:18.009511 kubelet[3513]: I0909 05:12:18.008938 3513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/b80ce344-32ff-4d1e-a39d-f9362b1c0166-node-certs\") pod \"calico-node-bvhl9\" (UID: \"b80ce344-32ff-4d1e-a39d-f9362b1c0166\") " pod="calico-system/calico-node-bvhl9"
Sep 9 05:12:18.009511 kubelet[3513]: I0909 05:12:18.008976 3513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/b80ce344-32ff-4d1e-a39d-f9362b1c0166-policysync\") pod \"calico-node-bvhl9\" (UID: \"b80ce344-32ff-4d1e-a39d-f9362b1c0166\") " pod="calico-system/calico-node-bvhl9"
Sep 9 05:12:18.009511 kubelet[3513]: I0909 05:12:18.009197 3513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/b80ce344-32ff-4d1e-a39d-f9362b1c0166-cni-bin-dir\") pod \"calico-node-bvhl9\" (UID: \"b80ce344-32ff-4d1e-a39d-f9362b1c0166\") " pod="calico-system/calico-node-bvhl9"
Sep 9 05:12:18.009511 kubelet[3513]: I0909 05:12:18.009278 3513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/b80ce344-32ff-4d1e-a39d-f9362b1c0166-flexvol-driver-host\") pod \"calico-node-bvhl9\" (UID: \"b80ce344-32ff-4d1e-a39d-f9362b1c0166\") " pod="calico-system/calico-node-bvhl9"
Sep 9 05:12:18.009838 kubelet[3513]: I0909 05:12:18.009346 3513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b80ce344-32ff-4d1e-a39d-f9362b1c0166-lib-modules\") pod \"calico-node-bvhl9\" (UID: \"b80ce344-32ff-4d1e-a39d-f9362b1c0166\") " pod="calico-system/calico-node-bvhl9"
Sep 9 05:12:18.014781 containerd[2018]: time="2025-09-09T05:12:18.013940084Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7487954698-cnjq4,Uid:3619f919-e316-4524-a3c1-4e2d67d37e5c,Namespace:calico-system,Attempt:0,}"
Sep 9 05:12:18.079766 containerd[2018]: time="2025-09-09T05:12:18.079186256Z" level=info msg="connecting to shim a5aaafe1b702b6441f05cd6ff2e061f81b2721b042907bdb0b4ce640298d70bd" address="unix:///run/containerd/s/0bff0c953aa99c035412bf08cc0821cbe76efbe809939a1fe98f58367770cc00" namespace=k8s.io protocol=ttrpc version=3
Sep 9 05:12:18.117748 kubelet[3513]: E0909 05:12:18.117692 3513 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 05:12:18.117748 kubelet[3513]: W0909 05:12:18.117736 3513 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 05:12:18.117969 kubelet[3513]: E0909 05:12:18.117774 3513 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 05:12:18.120560 kubelet[3513]: E0909 05:12:18.120391 3513 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 05:12:18.120560 kubelet[3513]: W0909 05:12:18.120452 3513 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 05:12:18.120560 kubelet[3513]: E0909 05:12:18.120485 3513 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 05:12:18.122301 kubelet[3513]: E0909 05:12:18.122113 3513 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 05:12:18.122301 kubelet[3513]: W0909 05:12:18.122290 3513 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 05:12:18.123100 kubelet[3513]: E0909 05:12:18.122751 3513 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 05:12:18.124626 kubelet[3513]: E0909 05:12:18.124345 3513 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 05:12:18.124772 kubelet[3513]: W0909 05:12:18.124617 3513 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 05:12:18.125211 kubelet[3513]: E0909 05:12:18.125001 3513 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 05:12:18.127742 kubelet[3513]: E0909 05:12:18.127552 3513 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 05:12:18.127742 kubelet[3513]: W0909 05:12:18.127702 3513 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 05:12:18.128611 kubelet[3513]: E0909 05:12:18.128342 3513 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 05:12:18.133117 kubelet[3513]: E0909 05:12:18.130907 3513 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 05:12:18.133117 kubelet[3513]: W0909 05:12:18.131499 3513 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 05:12:18.133117 kubelet[3513]: E0909 05:12:18.131717 3513 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 05:12:18.133881 kubelet[3513]: E0909 05:12:18.133791 3513 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 05:12:18.133881 kubelet[3513]: W0909 05:12:18.133830 3513 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 05:12:18.133881 kubelet[3513]: E0909 05:12:18.133877 3513 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 05:12:18.139922 kubelet[3513]: E0909 05:12:18.139504 3513 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 05:12:18.139922 kubelet[3513]: W0909 05:12:18.139695 3513 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 05:12:18.140568 kubelet[3513]: E0909 05:12:18.140208 3513 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 05:12:18.149515 kubelet[3513]: E0909 05:12:18.147364 3513 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 05:12:18.149515 kubelet[3513]: W0909 05:12:18.149166 3513 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 05:12:18.149515 kubelet[3513]: E0909 05:12:18.149247 3513 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 05:12:18.153355 kubelet[3513]: E0909 05:12:18.152738 3513 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 05:12:18.153355 kubelet[3513]: W0909 05:12:18.152796 3513 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 05:12:18.153355 kubelet[3513]: E0909 05:12:18.152829 3513 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 05:12:18.178757 kubelet[3513]: E0909 05:12:18.178551 3513 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 05:12:18.178757 kubelet[3513]: W0909 05:12:18.178585 3513 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 05:12:18.178757 kubelet[3513]: E0909 05:12:18.178615 3513 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 05:12:18.179655 systemd[1]: Started cri-containerd-a5aaafe1b702b6441f05cd6ff2e061f81b2721b042907bdb0b4ce640298d70bd.scope - libcontainer container a5aaafe1b702b6441f05cd6ff2e061f81b2721b042907bdb0b4ce640298d70bd.
Sep 9 05:12:18.224891 kubelet[3513]: E0909 05:12:18.224441 3513 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vxj2z" podUID="76a55f58-17dd-4feb-9f04-7ed611b8fe4a"
Sep 9 05:12:18.266778 containerd[2018]: time="2025-09-09T05:12:18.266576001Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-bvhl9,Uid:b80ce344-32ff-4d1e-a39d-f9362b1c0166,Namespace:calico-system,Attempt:0,}"
Sep 9 05:12:18.279842 kubelet[3513]: E0909 05:12:18.279793 3513 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 05:12:18.280176 kubelet[3513]: W0909 05:12:18.280013 3513 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 05:12:18.280176 kubelet[3513]: E0909 05:12:18.280080 3513 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 05:12:18.283147 kubelet[3513]: E0909 05:12:18.280802 3513 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 05:12:18.283147 kubelet[3513]: W0909 05:12:18.280827 3513 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 05:12:18.283147 kubelet[3513]: E0909 05:12:18.280927 3513 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 05:12:18.283147 kubelet[3513]: E0909 05:12:18.281460 3513 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 05:12:18.283147 kubelet[3513]: W0909 05:12:18.281482 3513 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 05:12:18.283147 kubelet[3513]: E0909 05:12:18.281536 3513 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 05:12:18.283147 kubelet[3513]: E0909 05:12:18.281972 3513 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 05:12:18.283147 kubelet[3513]: W0909 05:12:18.281992 3513 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 05:12:18.283147 kubelet[3513]: E0909 05:12:18.282014 3513 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 05:12:18.283147 kubelet[3513]: E0909 05:12:18.282442 3513 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 05:12:18.283693 kubelet[3513]: W0909 05:12:18.282464 3513 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 05:12:18.283693 kubelet[3513]: E0909 05:12:18.282487 3513 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 05:12:18.283693 kubelet[3513]: E0909 05:12:18.282826 3513 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 05:12:18.283693 kubelet[3513]: W0909 05:12:18.282845 3513 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 05:12:18.283693 kubelet[3513]: E0909 05:12:18.282882 3513 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 05:12:18.284933 kubelet[3513]: E0909 05:12:18.284253 3513 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 05:12:18.284933 kubelet[3513]: W0909 05:12:18.284282 3513 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 05:12:18.284933 kubelet[3513]: E0909 05:12:18.284310 3513 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 05:12:18.284933 kubelet[3513]: E0909 05:12:18.284701 3513 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 05:12:18.284933 kubelet[3513]: W0909 05:12:18.284722 3513 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 05:12:18.284933 kubelet[3513]: E0909 05:12:18.284744 3513 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 05:12:18.286211 kubelet[3513]: E0909 05:12:18.285677 3513 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 05:12:18.286211 kubelet[3513]: W0909 05:12:18.286069 3513 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 05:12:18.286211 kubelet[3513]: E0909 05:12:18.286110 3513 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 05:12:18.287469 kubelet[3513]: E0909 05:12:18.287183 3513 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 05:12:18.287469 kubelet[3513]: W0909 05:12:18.287228 3513 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 05:12:18.287469 kubelet[3513]: E0909 05:12:18.287263 3513 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 05:12:18.288255 kubelet[3513]: E0909 05:12:18.288222 3513 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 05:12:18.288537 kubelet[3513]: W0909 05:12:18.288405 3513 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 05:12:18.288537 kubelet[3513]: E0909 05:12:18.288443 3513 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 05:12:18.289696 kubelet[3513]: E0909 05:12:18.289417 3513 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 05:12:18.289696 kubelet[3513]: W0909 05:12:18.289450 3513 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 05:12:18.289696 kubelet[3513]: E0909 05:12:18.289485 3513 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Sep 9 05:12:18.290130 kubelet[3513]: E0909 05:12:18.290106 3513 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:12:18.290426 kubelet[3513]: W0909 05:12:18.290228 3513 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:12:18.290426 kubelet[3513]: E0909 05:12:18.290304 3513 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 05:12:18.291041 kubelet[3513]: E0909 05:12:18.290876 3513 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:12:18.291041 kubelet[3513]: W0909 05:12:18.290905 3513 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:12:18.291041 kubelet[3513]: E0909 05:12:18.290933 3513 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 05:12:18.292247 kubelet[3513]: E0909 05:12:18.291626 3513 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:12:18.292247 kubelet[3513]: W0909 05:12:18.291650 3513 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:12:18.292247 kubelet[3513]: E0909 05:12:18.291677 3513 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 05:12:18.293201 kubelet[3513]: E0909 05:12:18.293103 3513 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:12:18.293866 kubelet[3513]: W0909 05:12:18.293349 3513 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:12:18.293866 kubelet[3513]: E0909 05:12:18.293388 3513 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 05:12:18.295879 kubelet[3513]: E0909 05:12:18.295527 3513 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:12:18.295879 kubelet[3513]: W0909 05:12:18.295566 3513 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:12:18.295879 kubelet[3513]: E0909 05:12:18.295598 3513 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 05:12:18.297408 kubelet[3513]: E0909 05:12:18.297245 3513 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:12:18.297408 kubelet[3513]: W0909 05:12:18.297279 3513 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:12:18.297408 kubelet[3513]: E0909 05:12:18.297310 3513 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 05:12:18.298569 kubelet[3513]: E0909 05:12:18.298520 3513 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:12:18.298909 kubelet[3513]: W0909 05:12:18.298766 3513 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:12:18.298909 kubelet[3513]: E0909 05:12:18.298808 3513 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 05:12:18.299848 kubelet[3513]: E0909 05:12:18.299666 3513 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:12:18.299848 kubelet[3513]: W0909 05:12:18.299702 3513 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:12:18.299848 kubelet[3513]: E0909 05:12:18.299732 3513 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 05:12:18.316507 kubelet[3513]: E0909 05:12:18.316281 3513 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:12:18.316507 kubelet[3513]: W0909 05:12:18.316427 3513 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:12:18.316507 kubelet[3513]: E0909 05:12:18.316461 3513 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 05:12:18.318164 kubelet[3513]: I0909 05:12:18.316731 3513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/76a55f58-17dd-4feb-9f04-7ed611b8fe4a-registration-dir\") pod \"csi-node-driver-vxj2z\" (UID: \"76a55f58-17dd-4feb-9f04-7ed611b8fe4a\") " pod="calico-system/csi-node-driver-vxj2z" Sep 9 05:12:18.318850 kubelet[3513]: E0909 05:12:18.318706 3513 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:12:18.323048 kubelet[3513]: W0909 05:12:18.322538 3513 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:12:18.323048 kubelet[3513]: E0909 05:12:18.322689 3513 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 05:12:18.325113 kubelet[3513]: E0909 05:12:18.325067 3513 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:12:18.325485 kubelet[3513]: W0909 05:12:18.325295 3513 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:12:18.325485 kubelet[3513]: E0909 05:12:18.325376 3513 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 05:12:18.326947 kubelet[3513]: E0909 05:12:18.326818 3513 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:12:18.327478 kubelet[3513]: W0909 05:12:18.327326 3513 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:12:18.328549 kubelet[3513]: E0909 05:12:18.328169 3513 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 05:12:18.328549 kubelet[3513]: I0909 05:12:18.328239 3513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/76a55f58-17dd-4feb-9f04-7ed611b8fe4a-kubelet-dir\") pod \"csi-node-driver-vxj2z\" (UID: \"76a55f58-17dd-4feb-9f04-7ed611b8fe4a\") " pod="calico-system/csi-node-driver-vxj2z" Sep 9 05:12:18.330515 kubelet[3513]: E0909 05:12:18.330446 3513 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:12:18.330515 kubelet[3513]: W0909 05:12:18.330488 3513 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:12:18.330816 kubelet[3513]: E0909 05:12:18.330566 3513 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 05:12:18.333187 kubelet[3513]: E0909 05:12:18.333137 3513 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:12:18.333187 kubelet[3513]: W0909 05:12:18.333175 3513 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:12:18.333496 kubelet[3513]: E0909 05:12:18.333250 3513 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 05:12:18.334273 kubelet[3513]: E0909 05:12:18.334013 3513 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:12:18.334273 kubelet[3513]: W0909 05:12:18.334266 3513 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:12:18.334757 kubelet[3513]: E0909 05:12:18.334301 3513 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 05:12:18.334757 kubelet[3513]: I0909 05:12:18.334358 3513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qlw5b\" (UniqueName: \"kubernetes.io/projected/76a55f58-17dd-4feb-9f04-7ed611b8fe4a-kube-api-access-qlw5b\") pod \"csi-node-driver-vxj2z\" (UID: \"76a55f58-17dd-4feb-9f04-7ed611b8fe4a\") " pod="calico-system/csi-node-driver-vxj2z" Sep 9 05:12:18.335623 kubelet[3513]: E0909 05:12:18.335581 3513 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:12:18.336535 kubelet[3513]: W0909 05:12:18.335618 3513 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:12:18.336535 kubelet[3513]: E0909 05:12:18.335666 3513 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 05:12:18.336535 kubelet[3513]: I0909 05:12:18.335714 3513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/76a55f58-17dd-4feb-9f04-7ed611b8fe4a-varrun\") pod \"csi-node-driver-vxj2z\" (UID: \"76a55f58-17dd-4feb-9f04-7ed611b8fe4a\") " pod="calico-system/csi-node-driver-vxj2z" Sep 9 05:12:18.338921 containerd[2018]: time="2025-09-09T05:12:18.338844466Z" level=info msg="connecting to shim daea705f30b3d432d9837d84a4438b57dd84a1e3084e6f7199d5e872fdc291a5" address="unix:///run/containerd/s/a0f6bce7479ab4ce7e68611a9f89f2b01ab0143a918e8521916ea79c8422e2b4" namespace=k8s.io protocol=ttrpc version=3 Sep 9 05:12:18.339873 kubelet[3513]: E0909 05:12:18.339244 3513 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:12:18.339873 kubelet[3513]: W0909 05:12:18.339275 3513 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:12:18.339873 kubelet[3513]: E0909 05:12:18.339322 3513 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 05:12:18.341842 kubelet[3513]: E0909 05:12:18.341474 3513 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:12:18.341842 kubelet[3513]: W0909 05:12:18.341508 3513 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:12:18.341842 kubelet[3513]: E0909 05:12:18.341586 3513 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 05:12:18.342808 kubelet[3513]: E0909 05:12:18.342627 3513 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:12:18.342808 kubelet[3513]: W0909 05:12:18.342658 3513 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:12:18.342808 kubelet[3513]: E0909 05:12:18.342730 3513 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 05:12:18.342808 kubelet[3513]: I0909 05:12:18.342783 3513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/76a55f58-17dd-4feb-9f04-7ed611b8fe4a-socket-dir\") pod \"csi-node-driver-vxj2z\" (UID: \"76a55f58-17dd-4feb-9f04-7ed611b8fe4a\") " pod="calico-system/csi-node-driver-vxj2z" Sep 9 05:12:18.344393 kubelet[3513]: E0909 05:12:18.344112 3513 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:12:18.344393 kubelet[3513]: W0909 05:12:18.344244 3513 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:12:18.344393 kubelet[3513]: E0909 05:12:18.344294 3513 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 05:12:18.345312 kubelet[3513]: E0909 05:12:18.345162 3513 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:12:18.345702 kubelet[3513]: W0909 05:12:18.345474 3513 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:12:18.346167 kubelet[3513]: E0909 05:12:18.346087 3513 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 05:12:18.346531 kubelet[3513]: E0909 05:12:18.346494 3513 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:12:18.347090 kubelet[3513]: W0909 05:12:18.346724 3513 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:12:18.347090 kubelet[3513]: E0909 05:12:18.346758 3513 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 05:12:18.347831 kubelet[3513]: E0909 05:12:18.347693 3513 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:12:18.348231 kubelet[3513]: W0909 05:12:18.348012 3513 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:12:18.348231 kubelet[3513]: E0909 05:12:18.348183 3513 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 05:12:18.418263 systemd[1]: Started cri-containerd-daea705f30b3d432d9837d84a4438b57dd84a1e3084e6f7199d5e872fdc291a5.scope - libcontainer container daea705f30b3d432d9837d84a4438b57dd84a1e3084e6f7199d5e872fdc291a5. 
Sep 9 05:12:18.444408 kubelet[3513]: E0909 05:12:18.444370 3513 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:12:18.444743 kubelet[3513]: W0909 05:12:18.444555 3513 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:12:18.444743 kubelet[3513]: E0909 05:12:18.444595 3513 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 05:12:18.445880 kubelet[3513]: E0909 05:12:18.445493 3513 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:12:18.445880 kubelet[3513]: W0909 05:12:18.445670 3513 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:12:18.445880 kubelet[3513]: E0909 05:12:18.445720 3513 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 05:12:18.447113 kubelet[3513]: E0909 05:12:18.446852 3513 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:12:18.447113 kubelet[3513]: W0909 05:12:18.446883 3513 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:12:18.447113 kubelet[3513]: E0909 05:12:18.446947 3513 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 05:12:18.448291 kubelet[3513]: E0909 05:12:18.447900 3513 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:12:18.448291 kubelet[3513]: W0909 05:12:18.447931 3513 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:12:18.448291 kubelet[3513]: E0909 05:12:18.448129 3513 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 05:12:18.449040 kubelet[3513]: E0909 05:12:18.448799 3513 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:12:18.449040 kubelet[3513]: W0909 05:12:18.448827 3513 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:12:18.449522 kubelet[3513]: E0909 05:12:18.449276 3513 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 05:12:18.449890 kubelet[3513]: E0909 05:12:18.449808 3513 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:12:18.450204 kubelet[3513]: W0909 05:12:18.450067 3513 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:12:18.450491 kubelet[3513]: E0909 05:12:18.450352 3513 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 05:12:18.451050 kubelet[3513]: E0909 05:12:18.450987 3513 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:12:18.451050 kubelet[3513]: W0909 05:12:18.451039 3513 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:12:18.451511 kubelet[3513]: E0909 05:12:18.451082 3513 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 05:12:18.452987 kubelet[3513]: E0909 05:12:18.452939 3513 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:12:18.452987 kubelet[3513]: W0909 05:12:18.452977 3513 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:12:18.453409 kubelet[3513]: E0909 05:12:18.453348 3513 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 05:12:18.454413 kubelet[3513]: E0909 05:12:18.454366 3513 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:12:18.454413 kubelet[3513]: W0909 05:12:18.454403 3513 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:12:18.454683 kubelet[3513]: E0909 05:12:18.454482 3513 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 05:12:18.455497 kubelet[3513]: E0909 05:12:18.455453 3513 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:12:18.455497 kubelet[3513]: W0909 05:12:18.455489 3513 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:12:18.455883 kubelet[3513]: E0909 05:12:18.455623 3513 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 05:12:18.456198 kubelet[3513]: E0909 05:12:18.456160 3513 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:12:18.456198 kubelet[3513]: W0909 05:12:18.456192 3513 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:12:18.456577 kubelet[3513]: E0909 05:12:18.456264 3513 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 05:12:18.457488 kubelet[3513]: E0909 05:12:18.457441 3513 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:12:18.457488 kubelet[3513]: W0909 05:12:18.457477 3513 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:12:18.457871 kubelet[3513]: E0909 05:12:18.457563 3513 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 05:12:18.458981 kubelet[3513]: E0909 05:12:18.458930 3513 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:12:18.458981 kubelet[3513]: W0909 05:12:18.458980 3513 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:12:18.459429 kubelet[3513]: E0909 05:12:18.459244 3513 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Sep 9 05:12:18.459678 kubelet[3513]: E0909 05:12:18.459641 3513 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 05:12:18.459761 kubelet[3513]: W0909 05:12:18.459677 3513 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 05:12:18.461064 kubelet[3513]: E0909 05:12:18.460042 3513 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 05:12:18.462205 kubelet[3513]: E0909 05:12:18.462156 3513 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 05:12:18.462205 kubelet[3513]: W0909 05:12:18.462202 3513 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 05:12:18.462502 kubelet[3513]: E0909 05:12:18.462345 3513 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 05:12:18.462757 kubelet[3513]: E0909 05:12:18.462721 3513 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 05:12:18.462757 kubelet[3513]: W0909 05:12:18.462752 3513 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 05:12:18.463002 kubelet[3513]: E0909 05:12:18.462866 3513 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 05:12:18.464211 kubelet[3513]: E0909 05:12:18.464165 3513 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 05:12:18.464211 kubelet[3513]: W0909 05:12:18.464202 3513 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 05:12:18.464611 kubelet[3513]: E0909 05:12:18.464380 3513 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 05:12:18.464839 kubelet[3513]: E0909 05:12:18.464805 3513 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 05:12:18.464920 kubelet[3513]: W0909 05:12:18.464835 3513 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 05:12:18.466284 kubelet[3513]: E0909 05:12:18.464984 3513 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 05:12:18.467412 kubelet[3513]: E0909 05:12:18.467363 3513 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 05:12:18.467412 kubelet[3513]: W0909 05:12:18.467402 3513 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 05:12:18.467685 kubelet[3513]: E0909 05:12:18.467511 3513 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 05:12:18.467835 kubelet[3513]: E0909 05:12:18.467798 3513 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 05:12:18.467835 kubelet[3513]: W0909 05:12:18.467828 3513 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 05:12:18.468148 kubelet[3513]: E0909 05:12:18.467880 3513 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 05:12:18.468416 kubelet[3513]: E0909 05:12:18.468380 3513 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 05:12:18.468416 kubelet[3513]: W0909 05:12:18.468412 3513 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 05:12:18.468742 kubelet[3513]: E0909 05:12:18.468560 3513 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 05:12:18.469235 kubelet[3513]: E0909 05:12:18.469205 3513 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 05:12:18.469515 kubelet[3513]: W0909 05:12:18.469350 3513 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 05:12:18.469515 kubelet[3513]: E0909 05:12:18.469407 3513 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 05:12:18.471885 kubelet[3513]: E0909 05:12:18.471817 3513 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 05:12:18.471885 kubelet[3513]: W0909 05:12:18.471855 3513 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 05:12:18.471885 kubelet[3513]: E0909 05:12:18.471902 3513 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 05:12:18.473248 kubelet[3513]: E0909 05:12:18.473005 3513 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 05:12:18.473248 kubelet[3513]: W0909 05:12:18.473063 3513 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 05:12:18.475940 kubelet[3513]: E0909 05:12:18.475328 3513 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 05:12:18.475940 kubelet[3513]: W0909 05:12:18.475367 3513 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 05:12:18.475940 kubelet[3513]: E0909 05:12:18.475451 3513 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 05:12:18.475940 kubelet[3513]: E0909 05:12:18.475503 3513 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 05:12:18.523849 kubelet[3513]: E0909 05:12:18.523798 3513 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 05:12:18.524151 kubelet[3513]: W0909 05:12:18.524042 3513 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 05:12:18.524151 kubelet[3513]: E0909 05:12:18.524084 3513 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 05:12:18.552060 containerd[2018]: time="2025-09-09T05:12:18.551914967Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-bvhl9,Uid:b80ce344-32ff-4d1e-a39d-f9362b1c0166,Namespace:calico-system,Attempt:0,} returns sandbox id \"daea705f30b3d432d9837d84a4438b57dd84a1e3084e6f7199d5e872fdc291a5\""
Sep 9 05:12:18.555381 containerd[2018]: time="2025-09-09T05:12:18.555038435Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\""
Sep 9 05:12:18.677973 containerd[2018]: time="2025-09-09T05:12:18.677908055Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7487954698-cnjq4,Uid:3619f919-e316-4524-a3c1-4e2d67d37e5c,Namespace:calico-system,Attempt:0,} returns sandbox id \"a5aaafe1b702b6441f05cd6ff2e061f81b2721b042907bdb0b4ce640298d70bd\""
Sep 9 05:12:20.033043 kubelet[3513]: E0909 05:12:20.032628 3513 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vxj2z" podUID="76a55f58-17dd-4feb-9f04-7ed611b8fe4a"
Sep 9 05:12:20.068673 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3803508245.mount: Deactivated successfully.
Sep 9 05:12:20.577092 containerd[2018]: time="2025-09-09T05:12:20.576999157Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 05:12:20.580196 containerd[2018]: time="2025-09-09T05:12:20.580087381Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3: active requests=0, bytes read=5636193"
Sep 9 05:12:20.583561 containerd[2018]: time="2025-09-09T05:12:20.583261105Z" level=info msg="ImageCreate event name:\"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 05:12:20.588863 containerd[2018]: time="2025-09-09T05:12:20.588775837Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 05:12:20.590979 containerd[2018]: time="2025-09-09T05:12:20.590193169Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" with image id \"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\", size \"5636015\" in 2.035091614s"
Sep 9 05:12:20.590979 containerd[2018]: time="2025-09-09T05:12:20.590252365Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" returns image reference \"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\""
Sep 9 05:12:20.594038 containerd[2018]: time="2025-09-09T05:12:20.593961373Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\""
Sep 9 05:12:20.599900 containerd[2018]: time="2025-09-09T05:12:20.599617117Z" level=info msg="CreateContainer within sandbox \"daea705f30b3d432d9837d84a4438b57dd84a1e3084e6f7199d5e872fdc291a5\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Sep 9 05:12:20.631611 containerd[2018]: time="2025-09-09T05:12:20.631560613Z" level=info msg="Container 1d2310d0bd42eaa7555406c759536d8a81d0aff3e30d3f1ae268126ff88ca9e8: CDI devices from CRI Config.CDIDevices: []"
Sep 9 05:12:20.658733 containerd[2018]: time="2025-09-09T05:12:20.658634053Z" level=info msg="CreateContainer within sandbox \"daea705f30b3d432d9837d84a4438b57dd84a1e3084e6f7199d5e872fdc291a5\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"1d2310d0bd42eaa7555406c759536d8a81d0aff3e30d3f1ae268126ff88ca9e8\""
Sep 9 05:12:20.660151 containerd[2018]: time="2025-09-09T05:12:20.659828941Z" level=info msg="StartContainer for \"1d2310d0bd42eaa7555406c759536d8a81d0aff3e30d3f1ae268126ff88ca9e8\""
Sep 9 05:12:20.664635 containerd[2018]: time="2025-09-09T05:12:20.664529461Z" level=info msg="connecting to shim 1d2310d0bd42eaa7555406c759536d8a81d0aff3e30d3f1ae268126ff88ca9e8" address="unix:///run/containerd/s/a0f6bce7479ab4ce7e68611a9f89f2b01ab0143a918e8521916ea79c8422e2b4" protocol=ttrpc version=3
Sep 9 05:12:20.715410 systemd[1]: Started cri-containerd-1d2310d0bd42eaa7555406c759536d8a81d0aff3e30d3f1ae268126ff88ca9e8.scope - libcontainer container 1d2310d0bd42eaa7555406c759536d8a81d0aff3e30d3f1ae268126ff88ca9e8.
Sep 9 05:12:20.837453 containerd[2018]: time="2025-09-09T05:12:20.837312554Z" level=info msg="StartContainer for \"1d2310d0bd42eaa7555406c759536d8a81d0aff3e30d3f1ae268126ff88ca9e8\" returns successfully"
Sep 9 05:12:20.898213 systemd[1]: cri-containerd-1d2310d0bd42eaa7555406c759536d8a81d0aff3e30d3f1ae268126ff88ca9e8.scope: Deactivated successfully.
Sep 9 05:12:20.908166 containerd[2018]: time="2025-09-09T05:12:20.907948238Z" level=info msg="received exit event container_id:\"1d2310d0bd42eaa7555406c759536d8a81d0aff3e30d3f1ae268126ff88ca9e8\" id:\"1d2310d0bd42eaa7555406c759536d8a81d0aff3e30d3f1ae268126ff88ca9e8\" pid:4118 exited_at:{seconds:1757394740 nanos:907339550}"
Sep 9 05:12:20.909108 containerd[2018]: time="2025-09-09T05:12:20.908684126Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1d2310d0bd42eaa7555406c759536d8a81d0aff3e30d3f1ae268126ff88ca9e8\" id:\"1d2310d0bd42eaa7555406c759536d8a81d0aff3e30d3f1ae268126ff88ca9e8\" pid:4118 exited_at:{seconds:1757394740 nanos:907339550}"
Sep 9 05:12:20.996981 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1d2310d0bd42eaa7555406c759536d8a81d0aff3e30d3f1ae268126ff88ca9e8-rootfs.mount: Deactivated successfully.
Sep 9 05:12:22.033438 kubelet[3513]: E0909 05:12:22.033210 3513 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vxj2z" podUID="76a55f58-17dd-4feb-9f04-7ed611b8fe4a"
Sep 9 05:12:23.247709 containerd[2018]: time="2025-09-09T05:12:23.246566726Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 05:12:23.248286 containerd[2018]: time="2025-09-09T05:12:23.248064098Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.3: active requests=0, bytes read=31736396"
Sep 9 05:12:23.249111 containerd[2018]: time="2025-09-09T05:12:23.249068270Z" level=info msg="ImageCreate event name:\"sha256:6a1496fdc48cc0b9ab3c10aef777497484efac5df9efbfbbdf9775e9583645cb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 05:12:23.254407 containerd[2018]: time="2025-09-09T05:12:23.254351174Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 05:12:23.255664 containerd[2018]: time="2025-09-09T05:12:23.255606314Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.3\" with image id \"sha256:6a1496fdc48cc0b9ab3c10aef777497484efac5df9efbfbbdf9775e9583645cb\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\", size \"33105629\" in 2.661370465s"
Sep 9 05:12:23.255795 containerd[2018]: time="2025-09-09T05:12:23.255661814Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\" returns image reference \"sha256:6a1496fdc48cc0b9ab3c10aef777497484efac5df9efbfbbdf9775e9583645cb\""
Sep 9 05:12:23.258686 containerd[2018]: time="2025-09-09T05:12:23.258621734Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\""
Sep 9 05:12:23.284008 containerd[2018]: time="2025-09-09T05:12:23.283872362Z" level=info msg="CreateContainer within sandbox \"a5aaafe1b702b6441f05cd6ff2e061f81b2721b042907bdb0b4ce640298d70bd\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Sep 9 05:12:23.302049 containerd[2018]: time="2025-09-09T05:12:23.300527114Z" level=info msg="Container d723d05efeed264f238143cd964ac2b928a449d3a76a751c9867d82550df97b5: CDI devices from CRI Config.CDIDevices: []"
Sep 9 05:12:23.314422 containerd[2018]: time="2025-09-09T05:12:23.314335550Z" level=info msg="CreateContainer within sandbox \"a5aaafe1b702b6441f05cd6ff2e061f81b2721b042907bdb0b4ce640298d70bd\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"d723d05efeed264f238143cd964ac2b928a449d3a76a751c9867d82550df97b5\""
Sep 9 05:12:23.315429 containerd[2018]: time="2025-09-09T05:12:23.315362906Z" level=info msg="StartContainer for \"d723d05efeed264f238143cd964ac2b928a449d3a76a751c9867d82550df97b5\""
Sep 9 05:12:23.320942 containerd[2018]: time="2025-09-09T05:12:23.320853134Z" level=info msg="connecting to shim d723d05efeed264f238143cd964ac2b928a449d3a76a751c9867d82550df97b5" address="unix:///run/containerd/s/0bff0c953aa99c035412bf08cc0821cbe76efbe809939a1fe98f58367770cc00" protocol=ttrpc version=3
Sep 9 05:12:23.360479 systemd[1]: Started cri-containerd-d723d05efeed264f238143cd964ac2b928a449d3a76a751c9867d82550df97b5.scope - libcontainer container d723d05efeed264f238143cd964ac2b928a449d3a76a751c9867d82550df97b5.
Sep 9 05:12:23.444265 containerd[2018]: time="2025-09-09T05:12:23.444102495Z" level=info msg="StartContainer for \"d723d05efeed264f238143cd964ac2b928a449d3a76a751c9867d82550df97b5\" returns successfully"
Sep 9 05:12:24.033989 kubelet[3513]: E0909 05:12:24.033467 3513 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vxj2z" podUID="76a55f58-17dd-4feb-9f04-7ed611b8fe4a"
Sep 9 05:12:24.337660 kubelet[3513]: I0909 05:12:24.337307 3513 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-7487954698-cnjq4" podStartSLOduration=2.761841492 podStartE2EDuration="7.337283043s" podCreationTimestamp="2025-09-09 05:12:17 +0000 UTC" firstStartedPulling="2025-09-09 05:12:18.681927971 +0000 UTC m=+29.875145225" lastFinishedPulling="2025-09-09 05:12:23.257369498 +0000 UTC m=+34.450586776" observedRunningTime="2025-09-09 05:12:24.335620755 +0000 UTC m=+35.528838021" watchObservedRunningTime="2025-09-09 05:12:24.337283043 +0000 UTC m=+35.530500297"
Sep 9 05:12:26.032283 kubelet[3513]: E0909 05:12:26.032218 3513 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vxj2z" podUID="76a55f58-17dd-4feb-9f04-7ed611b8fe4a"
Sep 9 05:12:27.024884 containerd[2018]: time="2025-09-09T05:12:27.024810389Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 05:12:27.027582 containerd[2018]: time="2025-09-09T05:12:27.027097589Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.3: active requests=0, bytes read=65913477"
Sep 9 05:12:27.028449 containerd[2018]: time="2025-09-09T05:12:27.028409957Z" level=info msg="ImageCreate event name:\"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 05:12:27.038678 containerd[2018]: time="2025-09-09T05:12:27.038612645Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 05:12:27.040957 containerd[2018]: time="2025-09-09T05:12:27.040902521Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.3\" with image id \"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\", size \"67282718\" in 3.782215375s"
Sep 9 05:12:27.041516 containerd[2018]: time="2025-09-09T05:12:27.041150921Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\" returns image reference \"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\""
Sep 9 05:12:27.047382 containerd[2018]: time="2025-09-09T05:12:27.047336201Z" level=info msg="CreateContainer within sandbox \"daea705f30b3d432d9837d84a4438b57dd84a1e3084e6f7199d5e872fdc291a5\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Sep 9 05:12:27.063728 containerd[2018]: time="2025-09-09T05:12:27.063325085Z" level=info msg="Container c88b30442378ab764bdb75410635adbc01df3484b6ae80619cb7d6a13382da45: CDI devices from CRI Config.CDIDevices: []"
Sep 9 05:12:27.077509 containerd[2018]: time="2025-09-09T05:12:27.077446385Z" level=info msg="CreateContainer within sandbox \"daea705f30b3d432d9837d84a4438b57dd84a1e3084e6f7199d5e872fdc291a5\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"c88b30442378ab764bdb75410635adbc01df3484b6ae80619cb7d6a13382da45\""
Sep 9 05:12:27.080744 containerd[2018]: time="2025-09-09T05:12:27.080642513Z" level=info msg="StartContainer for \"c88b30442378ab764bdb75410635adbc01df3484b6ae80619cb7d6a13382da45\""
Sep 9 05:12:27.084969 containerd[2018]: time="2025-09-09T05:12:27.084894653Z" level=info msg="connecting to shim c88b30442378ab764bdb75410635adbc01df3484b6ae80619cb7d6a13382da45" address="unix:///run/containerd/s/a0f6bce7479ab4ce7e68611a9f89f2b01ab0143a918e8521916ea79c8422e2b4" protocol=ttrpc version=3
Sep 9 05:12:27.125334 systemd[1]: Started cri-containerd-c88b30442378ab764bdb75410635adbc01df3484b6ae80619cb7d6a13382da45.scope - libcontainer container c88b30442378ab764bdb75410635adbc01df3484b6ae80619cb7d6a13382da45.
Sep 9 05:12:27.232219 containerd[2018]: time="2025-09-09T05:12:27.231982434Z" level=info msg="StartContainer for \"c88b30442378ab764bdb75410635adbc01df3484b6ae80619cb7d6a13382da45\" returns successfully"
Sep 9 05:12:28.033265 kubelet[3513]: E0909 05:12:28.031779 3513 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vxj2z" podUID="76a55f58-17dd-4feb-9f04-7ed611b8fe4a"
Sep 9 05:12:28.107795 containerd[2018]: time="2025-09-09T05:12:28.107725890Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Sep 9 05:12:28.112197 systemd[1]: cri-containerd-c88b30442378ab764bdb75410635adbc01df3484b6ae80619cb7d6a13382da45.scope: Deactivated successfully.
Sep 9 05:12:28.114143 systemd[1]: cri-containerd-c88b30442378ab764bdb75410635adbc01df3484b6ae80619cb7d6a13382da45.scope: Consumed 899ms CPU time, 186.7M memory peak, 165.8M written to disk.
Sep 9 05:12:28.119608 containerd[2018]: time="2025-09-09T05:12:28.119435850Z" level=info msg="received exit event container_id:\"c88b30442378ab764bdb75410635adbc01df3484b6ae80619cb7d6a13382da45\" id:\"c88b30442378ab764bdb75410635adbc01df3484b6ae80619cb7d6a13382da45\" pid:4224 exited_at:{seconds:1757394748 nanos:118820718}"
Sep 9 05:12:28.119608 containerd[2018]: time="2025-09-09T05:12:28.119516262Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c88b30442378ab764bdb75410635adbc01df3484b6ae80619cb7d6a13382da45\" id:\"c88b30442378ab764bdb75410635adbc01df3484b6ae80619cb7d6a13382da45\" pid:4224 exited_at:{seconds:1757394748 nanos:118820718}"
Sep 9 05:12:28.161155 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c88b30442378ab764bdb75410635adbc01df3484b6ae80619cb7d6a13382da45-rootfs.mount: Deactivated successfully.
Sep 9 05:12:28.203499 kubelet[3513]: I0909 05:12:28.203425 3513 kubelet_node_status.go:501] "Fast updating node status as it just became ready"
Sep 9 05:12:28.294228 systemd[1]: Created slice kubepods-burstable-pod8ece1b83_f963_4812_a6b3_c979112e2a60.slice - libcontainer container kubepods-burstable-pod8ece1b83_f963_4812_a6b3_c979112e2a60.slice.
Sep 9 05:12:28.317643 systemd[1]: Created slice kubepods-burstable-pod905712ae_cbe1_43ba_90ca_063ce5a5f097.slice - libcontainer container kubepods-burstable-pod905712ae_cbe1_43ba_90ca_063ce5a5f097.slice.
Sep 9 05:12:28.337647 systemd[1]: Created slice kubepods-besteffort-podffa7159d_e0e4_43bd_82bf_866f96c95848.slice - libcontainer container kubepods-besteffort-podffa7159d_e0e4_43bd_82bf_866f96c95848.slice.
Sep 9 05:12:28.346481 kubelet[3513]: I0909 05:12:28.346404 3513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cnr4v\" (UniqueName: \"kubernetes.io/projected/8ece1b83-f963-4812-a6b3-c979112e2a60-kube-api-access-cnr4v\") pod \"coredns-668d6bf9bc-zwrkc\" (UID: \"8ece1b83-f963-4812-a6b3-c979112e2a60\") " pod="kube-system/coredns-668d6bf9bc-zwrkc"
Sep 9 05:12:28.347877 kubelet[3513]: I0909 05:12:28.346487 3513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8ece1b83-f963-4812-a6b3-c979112e2a60-config-volume\") pod \"coredns-668d6bf9bc-zwrkc\" (UID: \"8ece1b83-f963-4812-a6b3-c979112e2a60\") " pod="kube-system/coredns-668d6bf9bc-zwrkc"
Sep 9 05:12:28.347877 kubelet[3513]: I0909 05:12:28.346541 3513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x7c27\" (UniqueName: \"kubernetes.io/projected/905712ae-cbe1-43ba-90ca-063ce5a5f097-kube-api-access-x7c27\") pod \"coredns-668d6bf9bc-9wd6t\" (UID: \"905712ae-cbe1-43ba-90ca-063ce5a5f097\") " pod="kube-system/coredns-668d6bf9bc-9wd6t"
Sep 9 05:12:28.347877 kubelet[3513]: I0909 05:12:28.346584 3513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/905712ae-cbe1-43ba-90ca-063ce5a5f097-config-volume\") pod \"coredns-668d6bf9bc-9wd6t\" (UID: \"905712ae-cbe1-43ba-90ca-063ce5a5f097\") " pod="kube-system/coredns-668d6bf9bc-9wd6t"
Sep 9 05:12:28.364367 kubelet[3513]: W0909 05:12:28.363979 3513 reflector.go:569] object-"calico-system"/"goldmane-key-pair": failed to list *v1.Secret: secrets "goldmane-key-pair" is forbidden: User "system:node:ip-172-31-30-120" cannot list resource "secrets" in API group "" in the namespace "calico-system": no relationship found between node 'ip-172-31-30-120' and this object
Sep 9 05:12:28.364367 kubelet[3513]: E0909 05:12:28.364064 3513 reflector.go:166] "Unhandled Error" err="object-\"calico-system\"/\"goldmane-key-pair\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"goldmane-key-pair\" is forbidden: User \"system:node:ip-172-31-30-120\" cannot list resource \"secrets\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ip-172-31-30-120' and this object" logger="UnhandledError"
Sep 9 05:12:28.364367 kubelet[3513]: W0909 05:12:28.364153 3513 reflector.go:569] object-"calico-system"/"goldmane-ca-bundle": failed to list *v1.ConfigMap: configmaps "goldmane-ca-bundle" is forbidden: User "system:node:ip-172-31-30-120" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'ip-172-31-30-120' and this object
Sep 9 05:12:28.364367 kubelet[3513]: E0909 05:12:28.364188 3513 reflector.go:166] "Unhandled Error" err="object-\"calico-system\"/\"goldmane-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"goldmane-ca-bundle\" is forbidden: User \"system:node:ip-172-31-30-120\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ip-172-31-30-120' and this object" logger="UnhandledError"
Sep 9 05:12:28.371635 systemd[1]: Created slice kubepods-besteffort-poda3835d68_456d_40aa_b365_a8f60a8cd66e.slice - libcontainer container kubepods-besteffort-poda3835d68_456d_40aa_b365_a8f60a8cd66e.slice.
Sep 9 05:12:28.399502 systemd[1]: Created slice kubepods-besteffort-pod07d7cd1a_3a9d_482d_8637_67812c0449b7.slice - libcontainer container kubepods-besteffort-pod07d7cd1a_3a9d_482d_8637_67812c0449b7.slice.
Sep 9 05:12:28.423640 systemd[1]: Created slice kubepods-besteffort-pod1164d331_715f_49ef_b981_2d35b8f156b6.slice - libcontainer container kubepods-besteffort-pod1164d331_715f_49ef_b981_2d35b8f156b6.slice.
Sep 9 05:12:28.439128 systemd[1]: Created slice kubepods-besteffort-pod4d67339e_bbba_44a4_b8d7_f30c051b16ee.slice - libcontainer container kubepods-besteffort-pod4d67339e_bbba_44a4_b8d7_f30c051b16ee.slice.
Sep 9 05:12:28.447249 kubelet[3513]: I0909 05:12:28.446999 3513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4d67339e-bbba-44a4-b8d7-f30c051b16ee-goldmane-ca-bundle\") pod \"goldmane-54d579b49d-fwmcq\" (UID: \"4d67339e-bbba-44a4-b8d7-f30c051b16ee\") " pod="calico-system/goldmane-54d579b49d-fwmcq"
Sep 9 05:12:28.448249 kubelet[3513]: I0909 05:12:28.447110 3513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8zb9k\" (UniqueName: \"kubernetes.io/projected/ffa7159d-e0e4-43bd-82bf-866f96c95848-kube-api-access-8zb9k\") pod \"whisker-84c44bcdf9-kg9sz\" (UID: \"ffa7159d-e0e4-43bd-82bf-866f96c95848\") " pod="calico-system/whisker-84c44bcdf9-kg9sz"
Sep 9 05:12:28.448249 kubelet[3513]: I0909 05:12:28.448216 3513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/a3835d68-456d-40aa-b365-a8f60a8cd66e-calico-apiserver-certs\") pod \"calico-apiserver-5fc8557bf8-kxfd4\" (UID: \"a3835d68-456d-40aa-b365-a8f60a8cd66e\") " pod="calico-apiserver/calico-apiserver-5fc8557bf8-kxfd4"
Sep 9 05:12:28.448534 kubelet[3513]: I0909 05:12:28.448381 3513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/07d7cd1a-3a9d-482d-8637-67812c0449b7-tigera-ca-bundle\") pod \"calico-kube-controllers-6db7d4b486-nw48g\" (UID: \"07d7cd1a-3a9d-482d-8637-67812c0449b7\") " pod="calico-system/calico-kube-controllers-6db7d4b486-nw48g"
Sep 9 05:12:28.448534 kubelet[3513]: I0909 05:12:28.448463 3513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/1164d331-715f-49ef-b981-2d35b8f156b6-calico-apiserver-certs\") pod \"calico-apiserver-5fc8557bf8-m59gl\" (UID: \"1164d331-715f-49ef-b981-2d35b8f156b6\") " pod="calico-apiserver/calico-apiserver-5fc8557bf8-m59gl"
Sep 9 05:12:28.448658 kubelet[3513]: I0909 05:12:28.448532 3513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4d67339e-bbba-44a4-b8d7-f30c051b16ee-config\") pod \"goldmane-54d579b49d-fwmcq\" (UID: \"4d67339e-bbba-44a4-b8d7-f30c051b16ee\") " pod="calico-system/goldmane-54d579b49d-fwmcq"
Sep 9 05:12:28.448658 kubelet[3513]: I0909 05:12:28.448643 3513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/ffa7159d-e0e4-43bd-82bf-866f96c95848-whisker-backend-key-pair\") pod \"whisker-84c44bcdf9-kg9sz\" (UID: \"ffa7159d-e0e4-43bd-82bf-866f96c95848\") " pod="calico-system/whisker-84c44bcdf9-kg9sz"
Sep 9 05:12:28.449360 kubelet[3513]: I0909 05:12:28.448750 3513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pm48b\" (UniqueName: \"kubernetes.io/projected/a3835d68-456d-40aa-b365-a8f60a8cd66e-kube-api-access-pm48b\") pod \"calico-apiserver-5fc8557bf8-kxfd4\" (UID: \"a3835d68-456d-40aa-b365-a8f60a8cd66e\") " pod="calico-apiserver/calico-apiserver-5fc8557bf8-kxfd4"
Sep 9 05:12:28.449360 kubelet[3513]: I0909 05:12:28.449046 3513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5z4mq\" (UniqueName: \"kubernetes.io/projected/4d67339e-bbba-44a4-b8d7-f30c051b16ee-kube-api-access-5z4mq\") pod \"goldmane-54d579b49d-fwmcq\" (UID: \"4d67339e-bbba-44a4-b8d7-f30c051b16ee\") " pod="calico-system/goldmane-54d579b49d-fwmcq"
Sep 9 05:12:28.449360 kubelet[3513]: I0909 05:12:28.449201 3513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8q4rb\" (UniqueName: \"kubernetes.io/projected/1164d331-715f-49ef-b981-2d35b8f156b6-kube-api-access-8q4rb\") pod \"calico-apiserver-5fc8557bf8-m59gl\" (UID: \"1164d331-715f-49ef-b981-2d35b8f156b6\") " pod="calico-apiserver/calico-apiserver-5fc8557bf8-m59gl"
Sep 9 05:12:28.449360 kubelet[3513]: I0909 05:12:28.449313 3513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hgm76\" (UniqueName: \"kubernetes.io/projected/07d7cd1a-3a9d-482d-8637-67812c0449b7-kube-api-access-hgm76\") pod \"calico-kube-controllers-6db7d4b486-nw48g\" (UID: \"07d7cd1a-3a9d-482d-8637-67812c0449b7\") " pod="calico-system/calico-kube-controllers-6db7d4b486-nw48g"
Sep 9 05:12:28.449780 kubelet[3513]: I0909 05:12:28.449552 3513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/4d67339e-bbba-44a4-b8d7-f30c051b16ee-goldmane-key-pair\") pod \"goldmane-54d579b49d-fwmcq\" (UID: \"4d67339e-bbba-44a4-b8d7-f30c051b16ee\") " pod="calico-system/goldmane-54d579b49d-fwmcq"
Sep 9 05:12:28.449780 kubelet[3513]: I0909 05:12:28.449624 3513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ffa7159d-e0e4-43bd-82bf-866f96c95848-whisker-ca-bundle\") pod \"whisker-84c44bcdf9-kg9sz\" (UID: \"ffa7159d-e0e4-43bd-82bf-866f96c95848\") " pod="calico-system/whisker-84c44bcdf9-kg9sz"
Sep 9 05:12:28.637588 containerd[2018]: time="2025-09-09T05:12:28.636459381Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-zwrkc,Uid:8ece1b83-f963-4812-a6b3-c979112e2a60,Namespace:kube-system,Attempt:0,}"
Sep 9 05:12:28.637588 containerd[2018]: time="2025-09-09T05:12:28.636461445Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-9wd6t,Uid:905712ae-cbe1-43ba-90ca-063ce5a5f097,Namespace:kube-system,Attempt:0,}"
Sep 9 05:12:28.650600 containerd[2018]: time="2025-09-09T05:12:28.650370717Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-84c44bcdf9-kg9sz,Uid:ffa7159d-e0e4-43bd-82bf-866f96c95848,Namespace:calico-system,Attempt:0,}"
Sep 9 05:12:28.688523 containerd[2018]: time="2025-09-09T05:12:28.688465329Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5fc8557bf8-kxfd4,Uid:a3835d68-456d-40aa-b365-a8f60a8cd66e,Namespace:calico-apiserver,Attempt:0,}"
Sep 9 05:12:28.711295 containerd[2018]: time="2025-09-09T05:12:28.711226449Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6db7d4b486-nw48g,Uid:07d7cd1a-3a9d-482d-8637-67812c0449b7,Namespace:calico-system,Attempt:0,}"
Sep 9 05:12:28.741387 containerd[2018]: time="2025-09-09T05:12:28.741319833Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5fc8557bf8-m59gl,Uid:1164d331-715f-49ef-b981-2d35b8f156b6,Namespace:calico-apiserver,Attempt:0,}"
Sep 9 05:12:29.040186 containerd[2018]: time="2025-09-09T05:12:29.040098931Z" level=error msg="Failed to destroy network for sandbox \"351cf2134427bd8772e3fd0402a88c39e008c09dd60d46cae080d02680abae7c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 9 05:12:29.301088 containerd[2018]: time="2025-09-09T05:12:29.300499172Z" level=error msg="Failed to destroy network for sandbox \"4d6da879e82da1fd38bec433514a5f7504329a98a3739723a5f5426c2a48c6b7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 9 05:12:29.306721
systemd[1]: run-netns-cni\x2de5df1a02\x2dbf28\x2d324a\x2df96f\x2dca93ac739740.mount: Deactivated successfully. Sep 9 05:12:29.387049 containerd[2018]: time="2025-09-09T05:12:29.386761820Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-zwrkc,Uid:8ece1b83-f963-4812-a6b3-c979112e2a60,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"351cf2134427bd8772e3fd0402a88c39e008c09dd60d46cae080d02680abae7c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 05:12:29.387841 kubelet[3513]: E0909 05:12:29.387768 3513 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"351cf2134427bd8772e3fd0402a88c39e008c09dd60d46cae080d02680abae7c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 05:12:29.388494 kubelet[3513]: E0909 05:12:29.387878 3513 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"351cf2134427bd8772e3fd0402a88c39e008c09dd60d46cae080d02680abae7c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-zwrkc" Sep 9 05:12:29.388494 kubelet[3513]: E0909 05:12:29.387915 3513 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"351cf2134427bd8772e3fd0402a88c39e008c09dd60d46cae080d02680abae7c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-zwrkc" Sep 9 05:12:29.388494 kubelet[3513]: E0909 05:12:29.387990 3513 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-zwrkc_kube-system(8ece1b83-f963-4812-a6b3-c979112e2a60)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-zwrkc_kube-system(8ece1b83-f963-4812-a6b3-c979112e2a60)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"351cf2134427bd8772e3fd0402a88c39e008c09dd60d46cae080d02680abae7c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-zwrkc" podUID="8ece1b83-f963-4812-a6b3-c979112e2a60" Sep 9 05:12:29.404868 containerd[2018]: time="2025-09-09T05:12:29.404549528Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-9wd6t,Uid:905712ae-cbe1-43ba-90ca-063ce5a5f097,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"4d6da879e82da1fd38bec433514a5f7504329a98a3739723a5f5426c2a48c6b7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 05:12:29.406332 kubelet[3513]: E0909 05:12:29.405134 3513 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4d6da879e82da1fd38bec433514a5f7504329a98a3739723a5f5426c2a48c6b7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 05:12:29.406332 kubelet[3513]: E0909 05:12:29.405214 3513 
kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4d6da879e82da1fd38bec433514a5f7504329a98a3739723a5f5426c2a48c6b7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-9wd6t" Sep 9 05:12:29.406332 kubelet[3513]: E0909 05:12:29.405247 3513 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4d6da879e82da1fd38bec433514a5f7504329a98a3739723a5f5426c2a48c6b7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-9wd6t" Sep 9 05:12:29.406567 kubelet[3513]: E0909 05:12:29.405341 3513 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-9wd6t_kube-system(905712ae-cbe1-43ba-90ca-063ce5a5f097)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-9wd6t_kube-system(905712ae-cbe1-43ba-90ca-063ce5a5f097)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4d6da879e82da1fd38bec433514a5f7504329a98a3739723a5f5426c2a48c6b7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-9wd6t" podUID="905712ae-cbe1-43ba-90ca-063ce5a5f097" Sep 9 05:12:29.557107 kubelet[3513]: E0909 05:12:29.555604 3513 configmap.go:193] Couldn't get configMap calico-system/goldmane-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Sep 9 05:12:29.557107 kubelet[3513]: E0909 05:12:29.555734 3513 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4d67339e-bbba-44a4-b8d7-f30c051b16ee-goldmane-ca-bundle podName:4d67339e-bbba-44a4-b8d7-f30c051b16ee nodeName:}" failed. No retries permitted until 2025-09-09 05:12:30.055704169 +0000 UTC m=+41.248921423 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "goldmane-ca-bundle" (UniqueName: "kubernetes.io/configmap/4d67339e-bbba-44a4-b8d7-f30c051b16ee-goldmane-ca-bundle") pod "goldmane-54d579b49d-fwmcq" (UID: "4d67339e-bbba-44a4-b8d7-f30c051b16ee") : failed to sync configmap cache: timed out waiting for the condition Sep 9 05:12:29.622532 containerd[2018]: time="2025-09-09T05:12:29.622449082Z" level=error msg="Failed to destroy network for sandbox \"d71b3fe922573bc82e1799c7eb554512f1cfb7876d91212cf271ed7500ffb207\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 05:12:29.627510 containerd[2018]: time="2025-09-09T05:12:29.627425566Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-84c44bcdf9-kg9sz,Uid:ffa7159d-e0e4-43bd-82bf-866f96c95848,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d71b3fe922573bc82e1799c7eb554512f1cfb7876d91212cf271ed7500ffb207\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 05:12:29.628474 kubelet[3513]: E0909 05:12:29.628258 3513 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d71b3fe922573bc82e1799c7eb554512f1cfb7876d91212cf271ed7500ffb207\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running 
and has mounted /var/lib/calico/" Sep 9 05:12:29.628474 kubelet[3513]: E0909 05:12:29.628359 3513 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d71b3fe922573bc82e1799c7eb554512f1cfb7876d91212cf271ed7500ffb207\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-84c44bcdf9-kg9sz" Sep 9 05:12:29.628474 kubelet[3513]: E0909 05:12:29.628394 3513 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d71b3fe922573bc82e1799c7eb554512f1cfb7876d91212cf271ed7500ffb207\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-84c44bcdf9-kg9sz" Sep 9 05:12:29.630349 kubelet[3513]: E0909 05:12:29.628474 3513 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-84c44bcdf9-kg9sz_calico-system(ffa7159d-e0e4-43bd-82bf-866f96c95848)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-84c44bcdf9-kg9sz_calico-system(ffa7159d-e0e4-43bd-82bf-866f96c95848)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d71b3fe922573bc82e1799c7eb554512f1cfb7876d91212cf271ed7500ffb207\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-84c44bcdf9-kg9sz" podUID="ffa7159d-e0e4-43bd-82bf-866f96c95848" Sep 9 05:12:29.628990 systemd[1]: run-netns-cni\x2d958dc0a3\x2dd986\x2db022\x2d9262\x2df26f8e6def79.mount: Deactivated successfully. 
Sep 9 05:12:29.663930 containerd[2018]: time="2025-09-09T05:12:29.663698434Z" level=error msg="Failed to destroy network for sandbox \"732375718f645b3d6e0f99a5da982e00734adb8372980e1a35a7cfd32f299b56\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 05:12:29.665677 containerd[2018]: time="2025-09-09T05:12:29.665621074Z" level=error msg="Failed to destroy network for sandbox \"2ca3acc3c9b43c662d745a975739a5babd65fa95d83a27eb8cd51c2e70dbe51a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 05:12:29.667977 containerd[2018]: time="2025-09-09T05:12:29.667628914Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6db7d4b486-nw48g,Uid:07d7cd1a-3a9d-482d-8637-67812c0449b7,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"732375718f645b3d6e0f99a5da982e00734adb8372980e1a35a7cfd32f299b56\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 05:12:29.668721 kubelet[3513]: E0909 05:12:29.668621 3513 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"732375718f645b3d6e0f99a5da982e00734adb8372980e1a35a7cfd32f299b56\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 05:12:29.668957 kubelet[3513]: E0909 05:12:29.668722 3513 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup 
network for sandbox \"732375718f645b3d6e0f99a5da982e00734adb8372980e1a35a7cfd32f299b56\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6db7d4b486-nw48g" Sep 9 05:12:29.668957 kubelet[3513]: E0909 05:12:29.668765 3513 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"732375718f645b3d6e0f99a5da982e00734adb8372980e1a35a7cfd32f299b56\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6db7d4b486-nw48g" Sep 9 05:12:29.668957 kubelet[3513]: E0909 05:12:29.668835 3513 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6db7d4b486-nw48g_calico-system(07d7cd1a-3a9d-482d-8637-67812c0449b7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6db7d4b486-nw48g_calico-system(07d7cd1a-3a9d-482d-8637-67812c0449b7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"732375718f645b3d6e0f99a5da982e00734adb8372980e1a35a7cfd32f299b56\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6db7d4b486-nw48g" podUID="07d7cd1a-3a9d-482d-8637-67812c0449b7" Sep 9 05:12:29.670974 containerd[2018]: time="2025-09-09T05:12:29.670706350Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5fc8557bf8-kxfd4,Uid:a3835d68-456d-40aa-b365-a8f60a8cd66e,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = 
failed to setup network for sandbox \"2ca3acc3c9b43c662d745a975739a5babd65fa95d83a27eb8cd51c2e70dbe51a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 05:12:29.671989 containerd[2018]: time="2025-09-09T05:12:29.671005582Z" level=error msg="Failed to destroy network for sandbox \"0e44f431cee2f9d0acc8814a9f7aa67dd4d79be9b441b150f090d62b8aa0405e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 05:12:29.672708 kubelet[3513]: E0909 05:12:29.672543 3513 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2ca3acc3c9b43c662d745a975739a5babd65fa95d83a27eb8cd51c2e70dbe51a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 05:12:29.672708 kubelet[3513]: E0909 05:12:29.672625 3513 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2ca3acc3c9b43c662d745a975739a5babd65fa95d83a27eb8cd51c2e70dbe51a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5fc8557bf8-kxfd4" Sep 9 05:12:29.672708 kubelet[3513]: E0909 05:12:29.672661 3513 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2ca3acc3c9b43c662d745a975739a5babd65fa95d83a27eb8cd51c2e70dbe51a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5fc8557bf8-kxfd4" Sep 9 05:12:29.672937 kubelet[3513]: E0909 05:12:29.672800 3513 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5fc8557bf8-kxfd4_calico-apiserver(a3835d68-456d-40aa-b365-a8f60a8cd66e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5fc8557bf8-kxfd4_calico-apiserver(a3835d68-456d-40aa-b365-a8f60a8cd66e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2ca3acc3c9b43c662d745a975739a5babd65fa95d83a27eb8cd51c2e70dbe51a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5fc8557bf8-kxfd4" podUID="a3835d68-456d-40aa-b365-a8f60a8cd66e" Sep 9 05:12:29.674806 containerd[2018]: time="2025-09-09T05:12:29.674727298Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5fc8557bf8-m59gl,Uid:1164d331-715f-49ef-b981-2d35b8f156b6,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"0e44f431cee2f9d0acc8814a9f7aa67dd4d79be9b441b150f090d62b8aa0405e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 05:12:29.675669 kubelet[3513]: E0909 05:12:29.675546 3513 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0e44f431cee2f9d0acc8814a9f7aa67dd4d79be9b441b150f090d62b8aa0405e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 
05:12:29.676121 kubelet[3513]: E0909 05:12:29.675815 3513 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0e44f431cee2f9d0acc8814a9f7aa67dd4d79be9b441b150f090d62b8aa0405e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5fc8557bf8-m59gl" Sep 9 05:12:29.676121 kubelet[3513]: E0909 05:12:29.675983 3513 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0e44f431cee2f9d0acc8814a9f7aa67dd4d79be9b441b150f090d62b8aa0405e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5fc8557bf8-m59gl" Sep 9 05:12:29.676700 kubelet[3513]: E0909 05:12:29.676489 3513 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5fc8557bf8-m59gl_calico-apiserver(1164d331-715f-49ef-b981-2d35b8f156b6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5fc8557bf8-m59gl_calico-apiserver(1164d331-715f-49ef-b981-2d35b8f156b6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0e44f431cee2f9d0acc8814a9f7aa67dd4d79be9b441b150f090d62b8aa0405e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5fc8557bf8-m59gl" podUID="1164d331-715f-49ef-b981-2d35b8f156b6" Sep 9 05:12:30.043943 systemd[1]: Created slice kubepods-besteffort-pod76a55f58_17dd_4feb_9f04_7ed611b8fe4a.slice - libcontainer container 
kubepods-besteffort-pod76a55f58_17dd_4feb_9f04_7ed611b8fe4a.slice. Sep 9 05:12:30.055763 containerd[2018]: time="2025-09-09T05:12:30.055400684Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-vxj2z,Uid:76a55f58-17dd-4feb-9f04-7ed611b8fe4a,Namespace:calico-system,Attempt:0,}" Sep 9 05:12:30.143810 containerd[2018]: time="2025-09-09T05:12:30.143741216Z" level=error msg="Failed to destroy network for sandbox \"5e3545c4f13d5d8774b171faad0e87c5cfb34585cc84a453947e06f2c43818f6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 05:12:30.146634 containerd[2018]: time="2025-09-09T05:12:30.146488112Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-vxj2z,Uid:76a55f58-17dd-4feb-9f04-7ed611b8fe4a,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"5e3545c4f13d5d8774b171faad0e87c5cfb34585cc84a453947e06f2c43818f6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 05:12:30.147217 kubelet[3513]: E0909 05:12:30.147004 3513 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5e3545c4f13d5d8774b171faad0e87c5cfb34585cc84a453947e06f2c43818f6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 05:12:30.147358 kubelet[3513]: E0909 05:12:30.147328 3513 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5e3545c4f13d5d8774b171faad0e87c5cfb34585cc84a453947e06f2c43818f6\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-vxj2z" Sep 9 05:12:30.147561 kubelet[3513]: E0909 05:12:30.147425 3513 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5e3545c4f13d5d8774b171faad0e87c5cfb34585cc84a453947e06f2c43818f6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-vxj2z" Sep 9 05:12:30.147746 kubelet[3513]: E0909 05:12:30.147682 3513 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-vxj2z_calico-system(76a55f58-17dd-4feb-9f04-7ed611b8fe4a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-vxj2z_calico-system(76a55f58-17dd-4feb-9f04-7ed611b8fe4a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5e3545c4f13d5d8774b171faad0e87c5cfb34585cc84a453947e06f2c43818f6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-vxj2z" podUID="76a55f58-17dd-4feb-9f04-7ed611b8fe4a" Sep 9 05:12:30.162405 systemd[1]: run-netns-cni\x2d1a1d322b\x2d7c32\x2de5c6\x2d5b92\x2db7fb0b0aa87e.mount: Deactivated successfully. Sep 9 05:12:30.162753 systemd[1]: run-netns-cni\x2d6fd4256d\x2d52b7\x2d5e46\x2d36d2\x2d30bb8c47f030.mount: Deactivated successfully. Sep 9 05:12:30.163065 systemd[1]: run-netns-cni\x2d27a801f1\x2d932e\x2d8075\x2d827e\x2d42aa43f9b0ec.mount: Deactivated successfully. 
Sep 9 05:12:30.249422 containerd[2018]: time="2025-09-09T05:12:30.249309813Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-fwmcq,Uid:4d67339e-bbba-44a4-b8d7-f30c051b16ee,Namespace:calico-system,Attempt:0,}" Sep 9 05:12:30.342787 containerd[2018]: time="2025-09-09T05:12:30.342599973Z" level=error msg="Failed to destroy network for sandbox \"6c1e4cebfa4170cf5da0ee9322e935ab2dc7b0a7a44698c30498bd3e60d5d266\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 05:12:30.348977 systemd[1]: run-netns-cni\x2d20eb4688\x2d2afd\x2dd0d0\x2d7ee8\x2dd57f92575a9a.mount: Deactivated successfully. Sep 9 05:12:30.349608 containerd[2018]: time="2025-09-09T05:12:30.349531785Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-fwmcq,Uid:4d67339e-bbba-44a4-b8d7-f30c051b16ee,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"6c1e4cebfa4170cf5da0ee9322e935ab2dc7b0a7a44698c30498bd3e60d5d266\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 05:12:30.350322 kubelet[3513]: E0909 05:12:30.350260 3513 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6c1e4cebfa4170cf5da0ee9322e935ab2dc7b0a7a44698c30498bd3e60d5d266\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 05:12:30.351338 kubelet[3513]: E0909 05:12:30.350450 3513 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"6c1e4cebfa4170cf5da0ee9322e935ab2dc7b0a7a44698c30498bd3e60d5d266\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-fwmcq" Sep 9 05:12:30.351338 kubelet[3513]: E0909 05:12:30.350616 3513 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6c1e4cebfa4170cf5da0ee9322e935ab2dc7b0a7a44698c30498bd3e60d5d266\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-fwmcq" Sep 9 05:12:30.351338 kubelet[3513]: E0909 05:12:30.351010 3513 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-54d579b49d-fwmcq_calico-system(4d67339e-bbba-44a4-b8d7-f30c051b16ee)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-54d579b49d-fwmcq_calico-system(4d67339e-bbba-44a4-b8d7-f30c051b16ee)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6c1e4cebfa4170cf5da0ee9322e935ab2dc7b0a7a44698c30498bd3e60d5d266\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-54d579b49d-fwmcq" podUID="4d67339e-bbba-44a4-b8d7-f30c051b16ee" Sep 9 05:12:30.365251 containerd[2018]: time="2025-09-09T05:12:30.365192241Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\"" Sep 9 05:12:38.449118 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount965116870.mount: Deactivated successfully. 
Sep 9 05:12:38.517655 containerd[2018]: time="2025-09-09T05:12:38.517561098Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:12:38.520088 containerd[2018]: time="2025-09-09T05:12:38.519593694Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.3: active requests=0, bytes read=151100457" Sep 9 05:12:38.522424 containerd[2018]: time="2025-09-09T05:12:38.522230526Z" level=info msg="ImageCreate event name:\"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:12:38.527243 containerd[2018]: time="2025-09-09T05:12:38.526576266Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:12:38.528228 containerd[2018]: time="2025-09-09T05:12:38.528177618Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.3\" with image id \"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\", size \"151100319\" in 8.162920589s" Sep 9 05:12:38.528402 containerd[2018]: time="2025-09-09T05:12:38.528374250Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\" returns image reference \"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\"" Sep 9 05:12:38.564248 containerd[2018]: time="2025-09-09T05:12:38.563988450Z" level=info msg="CreateContainer within sandbox \"daea705f30b3d432d9837d84a4438b57dd84a1e3084e6f7199d5e872fdc291a5\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Sep 9 05:12:38.595047 containerd[2018]: time="2025-09-09T05:12:38.593486178Z" level=info msg="Container 
8dbe4da0cef4ac8b7957d942dade62966441426ef4902926b133d71fb1249f27: CDI devices from CRI Config.CDIDevices: []" Sep 9 05:12:38.632111 containerd[2018]: time="2025-09-09T05:12:38.631678686Z" level=info msg="CreateContainer within sandbox \"daea705f30b3d432d9837d84a4438b57dd84a1e3084e6f7199d5e872fdc291a5\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"8dbe4da0cef4ac8b7957d942dade62966441426ef4902926b133d71fb1249f27\"" Sep 9 05:12:38.635153 containerd[2018]: time="2025-09-09T05:12:38.634095378Z" level=info msg="StartContainer for \"8dbe4da0cef4ac8b7957d942dade62966441426ef4902926b133d71fb1249f27\"" Sep 9 05:12:38.640500 containerd[2018]: time="2025-09-09T05:12:38.640431654Z" level=info msg="connecting to shim 8dbe4da0cef4ac8b7957d942dade62966441426ef4902926b133d71fb1249f27" address="unix:///run/containerd/s/a0f6bce7479ab4ce7e68611a9f89f2b01ab0143a918e8521916ea79c8422e2b4" protocol=ttrpc version=3 Sep 9 05:12:38.697420 systemd[1]: Started cri-containerd-8dbe4da0cef4ac8b7957d942dade62966441426ef4902926b133d71fb1249f27.scope - libcontainer container 8dbe4da0cef4ac8b7957d942dade62966441426ef4902926b133d71fb1249f27. Sep 9 05:12:38.844170 containerd[2018]: time="2025-09-09T05:12:38.843832303Z" level=info msg="StartContainer for \"8dbe4da0cef4ac8b7957d942dade62966441426ef4902926b133d71fb1249f27\" returns successfully" Sep 9 05:12:39.104550 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Sep 9 05:12:39.104714 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld. All Rights Reserved. 
Sep 9 05:12:39.438083 kubelet[3513]: I0909 05:12:39.437790 3513 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8zb9k\" (UniqueName: \"kubernetes.io/projected/ffa7159d-e0e4-43bd-82bf-866f96c95848-kube-api-access-8zb9k\") pod \"ffa7159d-e0e4-43bd-82bf-866f96c95848\" (UID: \"ffa7159d-e0e4-43bd-82bf-866f96c95848\") "
Sep 9 05:12:39.438083 kubelet[3513]: I0909 05:12:39.437953 3513 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/ffa7159d-e0e4-43bd-82bf-866f96c95848-whisker-backend-key-pair\") pod \"ffa7159d-e0e4-43bd-82bf-866f96c95848\" (UID: \"ffa7159d-e0e4-43bd-82bf-866f96c95848\") "
Sep 9 05:12:39.438809 kubelet[3513]: I0909 05:12:39.438139 3513 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ffa7159d-e0e4-43bd-82bf-866f96c95848-whisker-ca-bundle\") pod \"ffa7159d-e0e4-43bd-82bf-866f96c95848\" (UID: \"ffa7159d-e0e4-43bd-82bf-866f96c95848\") "
Sep 9 05:12:39.440048 kubelet[3513]: I0909 05:12:39.439720 3513 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ffa7159d-e0e4-43bd-82bf-866f96c95848-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "ffa7159d-e0e4-43bd-82bf-866f96c95848" (UID: "ffa7159d-e0e4-43bd-82bf-866f96c95848"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Sep 9 05:12:39.452597 kubelet[3513]: I0909 05:12:39.452139 3513 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ffa7159d-e0e4-43bd-82bf-866f96c95848-kube-api-access-8zb9k" (OuterVolumeSpecName: "kube-api-access-8zb9k") pod "ffa7159d-e0e4-43bd-82bf-866f96c95848" (UID: "ffa7159d-e0e4-43bd-82bf-866f96c95848"). InnerVolumeSpecName "kube-api-access-8zb9k". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Sep 9 05:12:39.458133 kubelet[3513]: I0909 05:12:39.457324 3513 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ffa7159d-e0e4-43bd-82bf-866f96c95848-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "ffa7159d-e0e4-43bd-82bf-866f96c95848" (UID: "ffa7159d-e0e4-43bd-82bf-866f96c95848"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Sep 9 05:12:39.463086 kubelet[3513]: I0909 05:12:39.461829 3513 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-bvhl9" podStartSLOduration=2.485030147 podStartE2EDuration="22.461806122s" podCreationTimestamp="2025-09-09 05:12:17 +0000 UTC" firstStartedPulling="2025-09-09 05:12:18.554343599 +0000 UTC m=+29.747560853" lastFinishedPulling="2025-09-09 05:12:38.531119562 +0000 UTC m=+49.724336828" observedRunningTime="2025-09-09 05:12:39.455842902 +0000 UTC m=+50.649060204" watchObservedRunningTime="2025-09-09 05:12:39.461806122 +0000 UTC m=+50.655023388"
Sep 9 05:12:39.466515 systemd[1]: var-lib-kubelet-pods-ffa7159d\x2de0e4\x2d43bd\x2d82bf\x2d866f96c95848-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d8zb9k.mount: Deactivated successfully.
Sep 9 05:12:39.466735 systemd[1]: var-lib-kubelet-pods-ffa7159d\x2de0e4\x2d43bd\x2d82bf\x2d866f96c95848-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully.
Sep 9 05:12:39.540091 kubelet[3513]: I0909 05:12:39.539354 3513 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8zb9k\" (UniqueName: \"kubernetes.io/projected/ffa7159d-e0e4-43bd-82bf-866f96c95848-kube-api-access-8zb9k\") on node \"ip-172-31-30-120\" DevicePath \"\""
Sep 9 05:12:39.540091 kubelet[3513]: I0909 05:12:39.539428 3513 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/ffa7159d-e0e4-43bd-82bf-866f96c95848-whisker-backend-key-pair\") on node \"ip-172-31-30-120\" DevicePath \"\""
Sep 9 05:12:39.540091 kubelet[3513]: I0909 05:12:39.539454 3513 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ffa7159d-e0e4-43bd-82bf-866f96c95848-whisker-ca-bundle\") on node \"ip-172-31-30-120\" DevicePath \"\""
Sep 9 05:12:39.727378 systemd[1]: Removed slice kubepods-besteffort-podffa7159d_e0e4_43bd_82bf_866f96c95848.slice - libcontainer container kubepods-besteffort-podffa7159d_e0e4_43bd_82bf_866f96c95848.slice.
Sep 9 05:12:39.890008 systemd[1]: Created slice kubepods-besteffort-pod4595b790_1a7b_4111_ac50_21eacc320fe8.slice - libcontainer container kubepods-besteffort-pod4595b790_1a7b_4111_ac50_21eacc320fe8.slice.
Sep 9 05:12:39.941500 kubelet[3513]: I0909 05:12:39.941433 3513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4595b790-1a7b-4111-ac50-21eacc320fe8-whisker-ca-bundle\") pod \"whisker-674f66d6bb-rkcxx\" (UID: \"4595b790-1a7b-4111-ac50-21eacc320fe8\") " pod="calico-system/whisker-674f66d6bb-rkcxx"
Sep 9 05:12:39.941735 kubelet[3513]: I0909 05:12:39.941515 3513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/4595b790-1a7b-4111-ac50-21eacc320fe8-whisker-backend-key-pair\") pod \"whisker-674f66d6bb-rkcxx\" (UID: \"4595b790-1a7b-4111-ac50-21eacc320fe8\") " pod="calico-system/whisker-674f66d6bb-rkcxx"
Sep 9 05:12:39.941735 kubelet[3513]: I0909 05:12:39.941583 3513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kdr82\" (UniqueName: \"kubernetes.io/projected/4595b790-1a7b-4111-ac50-21eacc320fe8-kube-api-access-kdr82\") pod \"whisker-674f66d6bb-rkcxx\" (UID: \"4595b790-1a7b-4111-ac50-21eacc320fe8\") " pod="calico-system/whisker-674f66d6bb-rkcxx"
Sep 9 05:12:40.202342 containerd[2018]: time="2025-09-09T05:12:40.202274958Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-674f66d6bb-rkcxx,Uid:4595b790-1a7b-4111-ac50-21eacc320fe8,Namespace:calico-system,Attempt:0,}"
Sep 9 05:12:40.714445 systemd-networkd[1620]: calia5aa73c9881: Link UP
Sep 9 05:12:40.716107 (udev-worker)[4515]: Network interface NamePolicy= disabled on kernel command line.
Sep 9 05:12:40.716194 systemd-networkd[1620]: calia5aa73c9881: Gained carrier
Sep 9 05:12:40.765560 containerd[2018]: 2025-09-09 05:12:40.314 [INFO][4600] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist
Sep 9 05:12:40.765560 containerd[2018]: 2025-09-09 05:12:40.410 [INFO][4600] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--30--120-k8s-whisker--674f66d6bb--rkcxx-eth0 whisker-674f66d6bb- calico-system 4595b790-1a7b-4111-ac50-21eacc320fe8 893 0 2025-09-09 05:12:39 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:674f66d6bb projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ip-172-31-30-120 whisker-674f66d6bb-rkcxx eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calia5aa73c9881 [] [] }} ContainerID="c162ea6156c610f76b6eb8b8078fd140f0d7314967b06ad3052eb61c821d82be" Namespace="calico-system" Pod="whisker-674f66d6bb-rkcxx" WorkloadEndpoint="ip--172--31--30--120-k8s-whisker--674f66d6bb--rkcxx-"
Sep 9 05:12:40.765560 containerd[2018]: 2025-09-09 05:12:40.411 [INFO][4600] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c162ea6156c610f76b6eb8b8078fd140f0d7314967b06ad3052eb61c821d82be" Namespace="calico-system" Pod="whisker-674f66d6bb-rkcxx" WorkloadEndpoint="ip--172--31--30--120-k8s-whisker--674f66d6bb--rkcxx-eth0"
Sep 9 05:12:40.765560 containerd[2018]: 2025-09-09 05:12:40.589 [INFO][4643] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c162ea6156c610f76b6eb8b8078fd140f0d7314967b06ad3052eb61c821d82be" HandleID="k8s-pod-network.c162ea6156c610f76b6eb8b8078fd140f0d7314967b06ad3052eb61c821d82be" Workload="ip--172--31--30--120-k8s-whisker--674f66d6bb--rkcxx-eth0"
Sep 9 05:12:40.765927 containerd[2018]: 2025-09-09 05:12:40.590 [INFO][4643] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c162ea6156c610f76b6eb8b8078fd140f0d7314967b06ad3052eb61c821d82be" HandleID="k8s-pod-network.c162ea6156c610f76b6eb8b8078fd140f0d7314967b06ad3052eb61c821d82be" Workload="ip--172--31--30--120-k8s-whisker--674f66d6bb--rkcxx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003b4620), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-30-120", "pod":"whisker-674f66d6bb-rkcxx", "timestamp":"2025-09-09 05:12:40.589895348 +0000 UTC"}, Hostname:"ip-172-31-30-120", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Sep 9 05:12:40.765927 containerd[2018]: 2025-09-09 05:12:40.590 [INFO][4643] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 9 05:12:40.765927 containerd[2018]: 2025-09-09 05:12:40.590 [INFO][4643] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 9 05:12:40.765927 containerd[2018]: 2025-09-09 05:12:40.593 [INFO][4643] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-30-120'
Sep 9 05:12:40.765927 containerd[2018]: 2025-09-09 05:12:40.617 [INFO][4643] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c162ea6156c610f76b6eb8b8078fd140f0d7314967b06ad3052eb61c821d82be" host="ip-172-31-30-120"
Sep 9 05:12:40.765927 containerd[2018]: 2025-09-09 05:12:40.638 [INFO][4643] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-30-120"
Sep 9 05:12:40.765927 containerd[2018]: 2025-09-09 05:12:40.651 [INFO][4643] ipam/ipam.go 511: Trying affinity for 192.168.74.128/26 host="ip-172-31-30-120"
Sep 9 05:12:40.765927 containerd[2018]: 2025-09-09 05:12:40.655 [INFO][4643] ipam/ipam.go 158: Attempting to load block cidr=192.168.74.128/26 host="ip-172-31-30-120"
Sep 9 05:12:40.765927 containerd[2018]: 2025-09-09 05:12:40.659 [INFO][4643] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.74.128/26 host="ip-172-31-30-120"
Sep 9 05:12:40.766471 containerd[2018]: 2025-09-09 05:12:40.659 [INFO][4643] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.74.128/26 handle="k8s-pod-network.c162ea6156c610f76b6eb8b8078fd140f0d7314967b06ad3052eb61c821d82be" host="ip-172-31-30-120"
Sep 9 05:12:40.766471 containerd[2018]: 2025-09-09 05:12:40.662 [INFO][4643] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.c162ea6156c610f76b6eb8b8078fd140f0d7314967b06ad3052eb61c821d82be
Sep 9 05:12:40.766471 containerd[2018]: 2025-09-09 05:12:40.670 [INFO][4643] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.74.128/26 handle="k8s-pod-network.c162ea6156c610f76b6eb8b8078fd140f0d7314967b06ad3052eb61c821d82be" host="ip-172-31-30-120"
Sep 9 05:12:40.766471 containerd[2018]: 2025-09-09 05:12:40.680 [INFO][4643] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.74.129/26] block=192.168.74.128/26 handle="k8s-pod-network.c162ea6156c610f76b6eb8b8078fd140f0d7314967b06ad3052eb61c821d82be" host="ip-172-31-30-120"
Sep 9 05:12:40.766471 containerd[2018]: 2025-09-09 05:12:40.680 [INFO][4643] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.74.129/26] handle="k8s-pod-network.c162ea6156c610f76b6eb8b8078fd140f0d7314967b06ad3052eb61c821d82be" host="ip-172-31-30-120"
Sep 9 05:12:40.766471 containerd[2018]: 2025-09-09 05:12:40.680 [INFO][4643] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 9 05:12:40.766471 containerd[2018]: 2025-09-09 05:12:40.680 [INFO][4643] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.74.129/26] IPv6=[] ContainerID="c162ea6156c610f76b6eb8b8078fd140f0d7314967b06ad3052eb61c821d82be" HandleID="k8s-pod-network.c162ea6156c610f76b6eb8b8078fd140f0d7314967b06ad3052eb61c821d82be" Workload="ip--172--31--30--120-k8s-whisker--674f66d6bb--rkcxx-eth0"
Sep 9 05:12:40.766781 containerd[2018]: 2025-09-09 05:12:40.688 [INFO][4600] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c162ea6156c610f76b6eb8b8078fd140f0d7314967b06ad3052eb61c821d82be" Namespace="calico-system" Pod="whisker-674f66d6bb-rkcxx" WorkloadEndpoint="ip--172--31--30--120-k8s-whisker--674f66d6bb--rkcxx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--120-k8s-whisker--674f66d6bb--rkcxx-eth0", GenerateName:"whisker-674f66d6bb-", Namespace:"calico-system", SelfLink:"", UID:"4595b790-1a7b-4111-ac50-21eacc320fe8", ResourceVersion:"893", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 5, 12, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"674f66d6bb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-120", ContainerID:"", Pod:"whisker-674f66d6bb-rkcxx", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.74.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calia5aa73c9881", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 9 05:12:40.766781 containerd[2018]: 2025-09-09 05:12:40.688 [INFO][4600] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.74.129/32] ContainerID="c162ea6156c610f76b6eb8b8078fd140f0d7314967b06ad3052eb61c821d82be" Namespace="calico-system" Pod="whisker-674f66d6bb-rkcxx" WorkloadEndpoint="ip--172--31--30--120-k8s-whisker--674f66d6bb--rkcxx-eth0"
Sep 9 05:12:40.766953 containerd[2018]: 2025-09-09 05:12:40.689 [INFO][4600] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia5aa73c9881 ContainerID="c162ea6156c610f76b6eb8b8078fd140f0d7314967b06ad3052eb61c821d82be" Namespace="calico-system" Pod="whisker-674f66d6bb-rkcxx" WorkloadEndpoint="ip--172--31--30--120-k8s-whisker--674f66d6bb--rkcxx-eth0"
Sep 9 05:12:40.766953 containerd[2018]: 2025-09-09 05:12:40.719 [INFO][4600] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c162ea6156c610f76b6eb8b8078fd140f0d7314967b06ad3052eb61c821d82be" Namespace="calico-system" Pod="whisker-674f66d6bb-rkcxx" WorkloadEndpoint="ip--172--31--30--120-k8s-whisker--674f66d6bb--rkcxx-eth0"
Sep 9 05:12:40.768302 containerd[2018]: 2025-09-09 05:12:40.723 [INFO][4600] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c162ea6156c610f76b6eb8b8078fd140f0d7314967b06ad3052eb61c821d82be" Namespace="calico-system" Pod="whisker-674f66d6bb-rkcxx" WorkloadEndpoint="ip--172--31--30--120-k8s-whisker--674f66d6bb--rkcxx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--120-k8s-whisker--674f66d6bb--rkcxx-eth0", GenerateName:"whisker-674f66d6bb-", Namespace:"calico-system", SelfLink:"", UID:"4595b790-1a7b-4111-ac50-21eacc320fe8", ResourceVersion:"893", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 5, 12, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"674f66d6bb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-120", ContainerID:"c162ea6156c610f76b6eb8b8078fd140f0d7314967b06ad3052eb61c821d82be", Pod:"whisker-674f66d6bb-rkcxx", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.74.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calia5aa73c9881", MAC:"5a:05:91:12:eb:6f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 9 05:12:40.769666 containerd[2018]: 2025-09-09 05:12:40.749 [INFO][4600] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c162ea6156c610f76b6eb8b8078fd140f0d7314967b06ad3052eb61c821d82be" Namespace="calico-system" Pod="whisker-674f66d6bb-rkcxx" WorkloadEndpoint="ip--172--31--30--120-k8s-whisker--674f66d6bb--rkcxx-eth0"
Sep 9 05:12:40.846429 containerd[2018]: time="2025-09-09T05:12:40.846283353Z" level=info msg="connecting to shim c162ea6156c610f76b6eb8b8078fd140f0d7314967b06ad3052eb61c821d82be" address="unix:///run/containerd/s/3c8f8f468d588d9e342a9b94b19cbd9920b790ed7399e65bafad9fcb3776c2b4" namespace=k8s.io protocol=ttrpc version=3
Sep 9 05:12:40.949395 systemd[1]: Started cri-containerd-c162ea6156c610f76b6eb8b8078fd140f0d7314967b06ad3052eb61c821d82be.scope - libcontainer container c162ea6156c610f76b6eb8b8078fd140f0d7314967b06ad3052eb61c821d82be.
Sep 9 05:12:41.034736 containerd[2018]: time="2025-09-09T05:12:41.034545450Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6db7d4b486-nw48g,Uid:07d7cd1a-3a9d-482d-8637-67812c0449b7,Namespace:calico-system,Attempt:0,}"
Sep 9 05:12:41.053498 kubelet[3513]: I0909 05:12:41.053335 3513 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ffa7159d-e0e4-43bd-82bf-866f96c95848" path="/var/lib/kubelet/pods/ffa7159d-e0e4-43bd-82bf-866f96c95848/volumes"
Sep 9 05:12:41.210682 containerd[2018]: time="2025-09-09T05:12:41.210468487Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-674f66d6bb-rkcxx,Uid:4595b790-1a7b-4111-ac50-21eacc320fe8,Namespace:calico-system,Attempt:0,} returns sandbox id \"c162ea6156c610f76b6eb8b8078fd140f0d7314967b06ad3052eb61c821d82be\""
Sep 9 05:12:41.219483 containerd[2018]: time="2025-09-09T05:12:41.218282239Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\""
Sep 9 05:12:41.422626 (udev-worker)[4516]: Network interface NamePolicy= disabled on kernel command line.
Sep 9 05:12:41.430154 systemd-networkd[1620]: cali6f8b812d575: Link UP
Sep 9 05:12:41.430603 systemd-networkd[1620]: cali6f8b812d575: Gained carrier
Sep 9 05:12:41.479978 containerd[2018]: 2025-09-09 05:12:41.151 [INFO][4701] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist
Sep 9 05:12:41.479978 containerd[2018]: 2025-09-09 05:12:41.201 [INFO][4701] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--30--120-k8s-calico--kube--controllers--6db7d4b486--nw48g-eth0 calico-kube-controllers-6db7d4b486- calico-system 07d7cd1a-3a9d-482d-8637-67812c0449b7 818 0 2025-09-09 05:12:18 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6db7d4b486 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ip-172-31-30-120 calico-kube-controllers-6db7d4b486-nw48g eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali6f8b812d575 [] [] }} ContainerID="e7c41eb5be49249434876e3b18d8302b7d9934a618552a324406c3c8fe751d45" Namespace="calico-system" Pod="calico-kube-controllers-6db7d4b486-nw48g" WorkloadEndpoint="ip--172--31--30--120-k8s-calico--kube--controllers--6db7d4b486--nw48g-"
Sep 9 05:12:41.479978 containerd[2018]: 2025-09-09 05:12:41.202 [INFO][4701] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e7c41eb5be49249434876e3b18d8302b7d9934a618552a324406c3c8fe751d45" Namespace="calico-system" Pod="calico-kube-controllers-6db7d4b486-nw48g" WorkloadEndpoint="ip--172--31--30--120-k8s-calico--kube--controllers--6db7d4b486--nw48g-eth0"
Sep 9 05:12:41.479978 containerd[2018]: 2025-09-09 05:12:41.301 [INFO][4734] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e7c41eb5be49249434876e3b18d8302b7d9934a618552a324406c3c8fe751d45" HandleID="k8s-pod-network.e7c41eb5be49249434876e3b18d8302b7d9934a618552a324406c3c8fe751d45" Workload="ip--172--31--30--120-k8s-calico--kube--controllers--6db7d4b486--nw48g-eth0"
Sep 9 05:12:41.480399 containerd[2018]: 2025-09-09 05:12:41.301 [INFO][4734] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e7c41eb5be49249434876e3b18d8302b7d9934a618552a324406c3c8fe751d45" HandleID="k8s-pod-network.e7c41eb5be49249434876e3b18d8302b7d9934a618552a324406c3c8fe751d45" Workload="ip--172--31--30--120-k8s-calico--kube--controllers--6db7d4b486--nw48g-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004dbb0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-30-120", "pod":"calico-kube-controllers-6db7d4b486-nw48g", "timestamp":"2025-09-09 05:12:41.301379336 +0000 UTC"}, Hostname:"ip-172-31-30-120", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Sep 9 05:12:41.480399 containerd[2018]: 2025-09-09 05:12:41.301 [INFO][4734] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 9 05:12:41.480399 containerd[2018]: 2025-09-09 05:12:41.301 [INFO][4734] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 9 05:12:41.480399 containerd[2018]: 2025-09-09 05:12:41.301 [INFO][4734] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-30-120'
Sep 9 05:12:41.480399 containerd[2018]: 2025-09-09 05:12:41.335 [INFO][4734] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e7c41eb5be49249434876e3b18d8302b7d9934a618552a324406c3c8fe751d45" host="ip-172-31-30-120"
Sep 9 05:12:41.480399 containerd[2018]: 2025-09-09 05:12:41.346 [INFO][4734] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-30-120"
Sep 9 05:12:41.480399 containerd[2018]: 2025-09-09 05:12:41.357 [INFO][4734] ipam/ipam.go 511: Trying affinity for 192.168.74.128/26 host="ip-172-31-30-120"
Sep 9 05:12:41.480399 containerd[2018]: 2025-09-09 05:12:41.364 [INFO][4734] ipam/ipam.go 158: Attempting to load block cidr=192.168.74.128/26 host="ip-172-31-30-120"
Sep 9 05:12:41.480399 containerd[2018]: 2025-09-09 05:12:41.369 [INFO][4734] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.74.128/26 host="ip-172-31-30-120"
Sep 9 05:12:41.482187 containerd[2018]: 2025-09-09 05:12:41.369 [INFO][4734] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.74.128/26 handle="k8s-pod-network.e7c41eb5be49249434876e3b18d8302b7d9934a618552a324406c3c8fe751d45" host="ip-172-31-30-120"
Sep 9 05:12:41.482187 containerd[2018]: 2025-09-09 05:12:41.373 [INFO][4734] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.e7c41eb5be49249434876e3b18d8302b7d9934a618552a324406c3c8fe751d45
Sep 9 05:12:41.482187 containerd[2018]: 2025-09-09 05:12:41.390 [INFO][4734] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.74.128/26 handle="k8s-pod-network.e7c41eb5be49249434876e3b18d8302b7d9934a618552a324406c3c8fe751d45" host="ip-172-31-30-120"
Sep 9 05:12:41.482187 containerd[2018]: 2025-09-09 05:12:41.408 [INFO][4734] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.74.130/26] block=192.168.74.128/26 handle="k8s-pod-network.e7c41eb5be49249434876e3b18d8302b7d9934a618552a324406c3c8fe751d45" host="ip-172-31-30-120"
Sep 9 05:12:41.482187 containerd[2018]: 2025-09-09 05:12:41.408 [INFO][4734] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.74.130/26] handle="k8s-pod-network.e7c41eb5be49249434876e3b18d8302b7d9934a618552a324406c3c8fe751d45" host="ip-172-31-30-120"
Sep 9 05:12:41.482187 containerd[2018]: 2025-09-09 05:12:41.409 [INFO][4734] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 9 05:12:41.482187 containerd[2018]: 2025-09-09 05:12:41.409 [INFO][4734] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.74.130/26] IPv6=[] ContainerID="e7c41eb5be49249434876e3b18d8302b7d9934a618552a324406c3c8fe751d45" HandleID="k8s-pod-network.e7c41eb5be49249434876e3b18d8302b7d9934a618552a324406c3c8fe751d45" Workload="ip--172--31--30--120-k8s-calico--kube--controllers--6db7d4b486--nw48g-eth0"
Sep 9 05:12:41.482569 containerd[2018]: 2025-09-09 05:12:41.414 [INFO][4701] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e7c41eb5be49249434876e3b18d8302b7d9934a618552a324406c3c8fe751d45" Namespace="calico-system" Pod="calico-kube-controllers-6db7d4b486-nw48g" WorkloadEndpoint="ip--172--31--30--120-k8s-calico--kube--controllers--6db7d4b486--nw48g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--120-k8s-calico--kube--controllers--6db7d4b486--nw48g-eth0", GenerateName:"calico-kube-controllers-6db7d4b486-", Namespace:"calico-system", SelfLink:"", UID:"07d7cd1a-3a9d-482d-8637-67812c0449b7", ResourceVersion:"818", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 5, 12, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6db7d4b486", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-120", ContainerID:"", Pod:"calico-kube-controllers-6db7d4b486-nw48g", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.74.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali6f8b812d575", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 9 05:12:41.482710 containerd[2018]: 2025-09-09 05:12:41.415 [INFO][4701] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.74.130/32] ContainerID="e7c41eb5be49249434876e3b18d8302b7d9934a618552a324406c3c8fe751d45" Namespace="calico-system" Pod="calico-kube-controllers-6db7d4b486-nw48g" WorkloadEndpoint="ip--172--31--30--120-k8s-calico--kube--controllers--6db7d4b486--nw48g-eth0"
Sep 9 05:12:41.482710 containerd[2018]: 2025-09-09 05:12:41.415 [INFO][4701] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6f8b812d575 ContainerID="e7c41eb5be49249434876e3b18d8302b7d9934a618552a324406c3c8fe751d45" Namespace="calico-system" Pod="calico-kube-controllers-6db7d4b486-nw48g" WorkloadEndpoint="ip--172--31--30--120-k8s-calico--kube--controllers--6db7d4b486--nw48g-eth0"
Sep 9 05:12:41.482710 containerd[2018]: 2025-09-09 05:12:41.432 [INFO][4701] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e7c41eb5be49249434876e3b18d8302b7d9934a618552a324406c3c8fe751d45" Namespace="calico-system" Pod="calico-kube-controllers-6db7d4b486-nw48g" WorkloadEndpoint="ip--172--31--30--120-k8s-calico--kube--controllers--6db7d4b486--nw48g-eth0"
Sep 9 05:12:41.482854 containerd[2018]: 2025-09-09 05:12:41.433 [INFO][4701] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e7c41eb5be49249434876e3b18d8302b7d9934a618552a324406c3c8fe751d45" Namespace="calico-system" Pod="calico-kube-controllers-6db7d4b486-nw48g" WorkloadEndpoint="ip--172--31--30--120-k8s-calico--kube--controllers--6db7d4b486--nw48g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--120-k8s-calico--kube--controllers--6db7d4b486--nw48g-eth0", GenerateName:"calico-kube-controllers-6db7d4b486-", Namespace:"calico-system", SelfLink:"", UID:"07d7cd1a-3a9d-482d-8637-67812c0449b7", ResourceVersion:"818", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 5, 12, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6db7d4b486", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-120", ContainerID:"e7c41eb5be49249434876e3b18d8302b7d9934a618552a324406c3c8fe751d45", Pod:"calico-kube-controllers-6db7d4b486-nw48g", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.74.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali6f8b812d575", MAC:"ae:00:a7:e2:8a:30", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 9 05:12:41.482970 containerd[2018]: 2025-09-09 05:12:41.473 [INFO][4701] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e7c41eb5be49249434876e3b18d8302b7d9934a618552a324406c3c8fe751d45" Namespace="calico-system" Pod="calico-kube-controllers-6db7d4b486-nw48g" WorkloadEndpoint="ip--172--31--30--120-k8s-calico--kube--controllers--6db7d4b486--nw48g-eth0"
Sep 9 05:12:41.552189 containerd[2018]: time="2025-09-09T05:12:41.552010773Z" level=info msg="connecting to shim e7c41eb5be49249434876e3b18d8302b7d9934a618552a324406c3c8fe751d45" address="unix:///run/containerd/s/39bb83980eb4ddcb694840b3efff8d883362717a64b2a9ee8cc166ce751b8b86" namespace=k8s.io protocol=ttrpc version=3
Sep 9 05:12:41.635070 systemd[1]: Started cri-containerd-e7c41eb5be49249434876e3b18d8302b7d9934a618552a324406c3c8fe751d45.scope - libcontainer container e7c41eb5be49249434876e3b18d8302b7d9934a618552a324406c3c8fe751d45.
Sep 9 05:12:41.772987 containerd[2018]: time="2025-09-09T05:12:41.772465342Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6db7d4b486-nw48g,Uid:07d7cd1a-3a9d-482d-8637-67812c0449b7,Namespace:calico-system,Attempt:0,} returns sandbox id \"e7c41eb5be49249434876e3b18d8302b7d9934a618552a324406c3c8fe751d45\""
Sep 9 05:12:42.034147 containerd[2018]: time="2025-09-09T05:12:42.032990347Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5fc8557bf8-m59gl,Uid:1164d331-715f-49ef-b981-2d35b8f156b6,Namespace:calico-apiserver,Attempt:0,}"
Sep 9 05:12:42.088807 systemd-networkd[1620]: vxlan.calico: Link UP
Sep 9 05:12:42.088835 systemd-networkd[1620]: vxlan.calico: Gained carrier
Sep 9 05:12:42.366406 systemd-networkd[1620]: cali79ca33745a9: Link UP
Sep 9 05:12:42.367527 systemd-networkd[1620]: cali79ca33745a9: Gained carrier
Sep 9 05:12:42.418861 containerd[2018]: 2025-09-09 05:12:42.176 [INFO][4830] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--30--120-k8s-calico--apiserver--5fc8557bf8--m59gl-eth0 calico-apiserver-5fc8557bf8- calico-apiserver 1164d331-715f-49ef-b981-2d35b8f156b6 822 0 2025-09-09 05:12:07 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5fc8557bf8 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-30-120 calico-apiserver-5fc8557bf8-m59gl eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali79ca33745a9 [] [] }} ContainerID="6a615b5def4e5e462ec032afd51c656116632ac1fb88b04ca0058f5f3bc5f44f" Namespace="calico-apiserver" Pod="calico-apiserver-5fc8557bf8-m59gl" WorkloadEndpoint="ip--172--31--30--120-k8s-calico--apiserver--5fc8557bf8--m59gl-"
Sep 9 05:12:42.418861 containerd[2018]: 2025-09-09 05:12:42.177 [INFO][4830] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6a615b5def4e5e462ec032afd51c656116632ac1fb88b04ca0058f5f3bc5f44f" Namespace="calico-apiserver" Pod="calico-apiserver-5fc8557bf8-m59gl" WorkloadEndpoint="ip--172--31--30--120-k8s-calico--apiserver--5fc8557bf8--m59gl-eth0"
Sep 9 05:12:42.418861 containerd[2018]: 2025-09-09 05:12:42.284 [INFO][4850] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6a615b5def4e5e462ec032afd51c656116632ac1fb88b04ca0058f5f3bc5f44f" HandleID="k8s-pod-network.6a615b5def4e5e462ec032afd51c656116632ac1fb88b04ca0058f5f3bc5f44f" Workload="ip--172--31--30--120-k8s-calico--apiserver--5fc8557bf8--m59gl-eth0"
Sep 9 05:12:42.422153 containerd[2018]: 2025-09-09 05:12:42.285 [INFO][4850] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="6a615b5def4e5e462ec032afd51c656116632ac1fb88b04ca0058f5f3bc5f44f" HandleID="k8s-pod-network.6a615b5def4e5e462ec032afd51c656116632ac1fb88b04ca0058f5f3bc5f44f" Workload="ip--172--31--30--120-k8s-calico--apiserver--5fc8557bf8--m59gl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b980), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-30-120", "pod":"calico-apiserver-5fc8557bf8-m59gl", "timestamp":"2025-09-09 05:12:42.28465682 +0000 UTC"}, Hostname:"ip-172-31-30-120", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Sep 9 05:12:42.422153 containerd[2018]: 2025-09-09 05:12:42.285 [INFO][4850] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 9 05:12:42.422153 containerd[2018]: 2025-09-09 05:12:42.285 [INFO][4850] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 9 05:12:42.422153 containerd[2018]: 2025-09-09 05:12:42.285 [INFO][4850] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-30-120' Sep 9 05:12:42.422153 containerd[2018]: 2025-09-09 05:12:42.303 [INFO][4850] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6a615b5def4e5e462ec032afd51c656116632ac1fb88b04ca0058f5f3bc5f44f" host="ip-172-31-30-120" Sep 9 05:12:42.422153 containerd[2018]: 2025-09-09 05:12:42.313 [INFO][4850] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-30-120" Sep 9 05:12:42.422153 containerd[2018]: 2025-09-09 05:12:42.321 [INFO][4850] ipam/ipam.go 511: Trying affinity for 192.168.74.128/26 host="ip-172-31-30-120" Sep 9 05:12:42.422153 containerd[2018]: 2025-09-09 05:12:42.325 [INFO][4850] ipam/ipam.go 158: Attempting to load block cidr=192.168.74.128/26 host="ip-172-31-30-120" Sep 9 05:12:42.422153 containerd[2018]: 2025-09-09 05:12:42.328 [INFO][4850] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.74.128/26 host="ip-172-31-30-120" Sep 9 05:12:42.422620 containerd[2018]: 2025-09-09 05:12:42.328 [INFO][4850] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.74.128/26 handle="k8s-pod-network.6a615b5def4e5e462ec032afd51c656116632ac1fb88b04ca0058f5f3bc5f44f" host="ip-172-31-30-120" Sep 9 05:12:42.422620 containerd[2018]: 2025-09-09 05:12:42.332 [INFO][4850] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.6a615b5def4e5e462ec032afd51c656116632ac1fb88b04ca0058f5f3bc5f44f Sep 9 05:12:42.422620 containerd[2018]: 2025-09-09 05:12:42.342 [INFO][4850] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.74.128/26 handle="k8s-pod-network.6a615b5def4e5e462ec032afd51c656116632ac1fb88b04ca0058f5f3bc5f44f" host="ip-172-31-30-120" Sep 9 05:12:42.422620 containerd[2018]: 2025-09-09 05:12:42.354 [INFO][4850] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.74.131/26] block=192.168.74.128/26 
handle="k8s-pod-network.6a615b5def4e5e462ec032afd51c656116632ac1fb88b04ca0058f5f3bc5f44f" host="ip-172-31-30-120" Sep 9 05:12:42.422620 containerd[2018]: 2025-09-09 05:12:42.354 [INFO][4850] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.74.131/26] handle="k8s-pod-network.6a615b5def4e5e462ec032afd51c656116632ac1fb88b04ca0058f5f3bc5f44f" host="ip-172-31-30-120" Sep 9 05:12:42.422620 containerd[2018]: 2025-09-09 05:12:42.354 [INFO][4850] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 05:12:42.422620 containerd[2018]: 2025-09-09 05:12:42.354 [INFO][4850] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.74.131/26] IPv6=[] ContainerID="6a615b5def4e5e462ec032afd51c656116632ac1fb88b04ca0058f5f3bc5f44f" HandleID="k8s-pod-network.6a615b5def4e5e462ec032afd51c656116632ac1fb88b04ca0058f5f3bc5f44f" Workload="ip--172--31--30--120-k8s-calico--apiserver--5fc8557bf8--m59gl-eth0" Sep 9 05:12:42.424450 containerd[2018]: 2025-09-09 05:12:42.358 [INFO][4830] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6a615b5def4e5e462ec032afd51c656116632ac1fb88b04ca0058f5f3bc5f44f" Namespace="calico-apiserver" Pod="calico-apiserver-5fc8557bf8-m59gl" WorkloadEndpoint="ip--172--31--30--120-k8s-calico--apiserver--5fc8557bf8--m59gl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--120-k8s-calico--apiserver--5fc8557bf8--m59gl-eth0", GenerateName:"calico-apiserver-5fc8557bf8-", Namespace:"calico-apiserver", SelfLink:"", UID:"1164d331-715f-49ef-b981-2d35b8f156b6", ResourceVersion:"822", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 5, 12, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5fc8557bf8", 
"projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-120", ContainerID:"", Pod:"calico-apiserver-5fc8557bf8-m59gl", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.74.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali79ca33745a9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 05:12:42.424633 containerd[2018]: 2025-09-09 05:12:42.358 [INFO][4830] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.74.131/32] ContainerID="6a615b5def4e5e462ec032afd51c656116632ac1fb88b04ca0058f5f3bc5f44f" Namespace="calico-apiserver" Pod="calico-apiserver-5fc8557bf8-m59gl" WorkloadEndpoint="ip--172--31--30--120-k8s-calico--apiserver--5fc8557bf8--m59gl-eth0" Sep 9 05:12:42.424633 containerd[2018]: 2025-09-09 05:12:42.358 [INFO][4830] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali79ca33745a9 ContainerID="6a615b5def4e5e462ec032afd51c656116632ac1fb88b04ca0058f5f3bc5f44f" Namespace="calico-apiserver" Pod="calico-apiserver-5fc8557bf8-m59gl" WorkloadEndpoint="ip--172--31--30--120-k8s-calico--apiserver--5fc8557bf8--m59gl-eth0" Sep 9 05:12:42.424633 containerd[2018]: 2025-09-09 05:12:42.370 [INFO][4830] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6a615b5def4e5e462ec032afd51c656116632ac1fb88b04ca0058f5f3bc5f44f" Namespace="calico-apiserver" Pod="calico-apiserver-5fc8557bf8-m59gl" WorkloadEndpoint="ip--172--31--30--120-k8s-calico--apiserver--5fc8557bf8--m59gl-eth0" Sep 9 
05:12:42.424784 containerd[2018]: 2025-09-09 05:12:42.371 [INFO][4830] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6a615b5def4e5e462ec032afd51c656116632ac1fb88b04ca0058f5f3bc5f44f" Namespace="calico-apiserver" Pod="calico-apiserver-5fc8557bf8-m59gl" WorkloadEndpoint="ip--172--31--30--120-k8s-calico--apiserver--5fc8557bf8--m59gl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--120-k8s-calico--apiserver--5fc8557bf8--m59gl-eth0", GenerateName:"calico-apiserver-5fc8557bf8-", Namespace:"calico-apiserver", SelfLink:"", UID:"1164d331-715f-49ef-b981-2d35b8f156b6", ResourceVersion:"822", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 5, 12, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5fc8557bf8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-120", ContainerID:"6a615b5def4e5e462ec032afd51c656116632ac1fb88b04ca0058f5f3bc5f44f", Pod:"calico-apiserver-5fc8557bf8-m59gl", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.74.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali79ca33745a9", MAC:"de:db:3d:ce:72:c7", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 
05:12:42.424908 containerd[2018]: 2025-09-09 05:12:42.410 [INFO][4830] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6a615b5def4e5e462ec032afd51c656116632ac1fb88b04ca0058f5f3bc5f44f" Namespace="calico-apiserver" Pod="calico-apiserver-5fc8557bf8-m59gl" WorkloadEndpoint="ip--172--31--30--120-k8s-calico--apiserver--5fc8557bf8--m59gl-eth0" Sep 9 05:12:42.481086 containerd[2018]: time="2025-09-09T05:12:42.480943653Z" level=info msg="connecting to shim 6a615b5def4e5e462ec032afd51c656116632ac1fb88b04ca0058f5f3bc5f44f" address="unix:///run/containerd/s/9331c1f97c99cabb98e298fba00afc0fe10885c6e32743cd7a2e41ced4a4adf5" namespace=k8s.io protocol=ttrpc version=3 Sep 9 05:12:42.540376 systemd[1]: Started cri-containerd-6a615b5def4e5e462ec032afd51c656116632ac1fb88b04ca0058f5f3bc5f44f.scope - libcontainer container 6a615b5def4e5e462ec032afd51c656116632ac1fb88b04ca0058f5f3bc5f44f. Sep 9 05:12:42.582370 systemd-networkd[1620]: calia5aa73c9881: Gained IPv6LL Sep 9 05:12:42.659135 containerd[2018]: time="2025-09-09T05:12:42.657806026Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5fc8557bf8-m59gl,Uid:1164d331-715f-49ef-b981-2d35b8f156b6,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"6a615b5def4e5e462ec032afd51c656116632ac1fb88b04ca0058f5f3bc5f44f\"" Sep 9 05:12:43.042096 containerd[2018]: time="2025-09-09T05:12:43.040964276Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-zwrkc,Uid:8ece1b83-f963-4812-a6b3-c979112e2a60,Namespace:kube-system,Attempt:0,}" Sep 9 05:12:43.158217 systemd-networkd[1620]: cali6f8b812d575: Gained IPv6LL Sep 9 05:12:43.290829 containerd[2018]: time="2025-09-09T05:12:43.290770977Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:12:43.295275 containerd[2018]: time="2025-09-09T05:12:43.294879153Z" level=info msg="stop pulling image 
ghcr.io/flatcar/calico/whisker:v3.30.3: active requests=0, bytes read=4605606" Sep 9 05:12:43.297234 containerd[2018]: time="2025-09-09T05:12:43.297141297Z" level=info msg="ImageCreate event name:\"sha256:270a0129ec34c3ad6ae6d56c0afce111eb0baa25dfdacb63722ec5887bafd3c5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:12:43.307411 containerd[2018]: time="2025-09-09T05:12:43.307303798Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:12:43.312062 containerd[2018]: time="2025-09-09T05:12:43.311765098Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.3\" with image id \"sha256:270a0129ec34c3ad6ae6d56c0afce111eb0baa25dfdacb63722ec5887bafd3c5\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\", size \"5974839\" in 2.093403311s" Sep 9 05:12:43.312062 containerd[2018]: time="2025-09-09T05:12:43.311919814Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\" returns image reference \"sha256:270a0129ec34c3ad6ae6d56c0afce111eb0baa25dfdacb63722ec5887bafd3c5\"" Sep 9 05:12:43.321805 containerd[2018]: time="2025-09-09T05:12:43.321719290Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\"" Sep 9 05:12:43.324080 containerd[2018]: time="2025-09-09T05:12:43.323544370Z" level=info msg="CreateContainer within sandbox \"c162ea6156c610f76b6eb8b8078fd140f0d7314967b06ad3052eb61c821d82be\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Sep 9 05:12:43.344050 containerd[2018]: time="2025-09-09T05:12:43.343585222Z" level=info msg="Container 9de153121560bd1e45e3aac071fc6342b44ff3ee5515f340a67684365f86c91a: CDI devices from CRI Config.CDIDevices: []" Sep 9 05:12:43.380404 containerd[2018]: 
time="2025-09-09T05:12:43.380328838Z" level=info msg="CreateContainer within sandbox \"c162ea6156c610f76b6eb8b8078fd140f0d7314967b06ad3052eb61c821d82be\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"9de153121560bd1e45e3aac071fc6342b44ff3ee5515f340a67684365f86c91a\"" Sep 9 05:12:43.383644 containerd[2018]: time="2025-09-09T05:12:43.383577766Z" level=info msg="StartContainer for \"9de153121560bd1e45e3aac071fc6342b44ff3ee5515f340a67684365f86c91a\"" Sep 9 05:12:43.387443 containerd[2018]: time="2025-09-09T05:12:43.387350998Z" level=info msg="connecting to shim 9de153121560bd1e45e3aac071fc6342b44ff3ee5515f340a67684365f86c91a" address="unix:///run/containerd/s/3c8f8f468d588d9e342a9b94b19cbd9920b790ed7399e65bafad9fcb3776c2b4" protocol=ttrpc version=3 Sep 9 05:12:43.415056 systemd-networkd[1620]: cali807a9020347: Link UP Sep 9 05:12:43.417773 systemd-networkd[1620]: cali807a9020347: Gained carrier Sep 9 05:12:43.464391 systemd[1]: Started cri-containerd-9de153121560bd1e45e3aac071fc6342b44ff3ee5515f340a67684365f86c91a.scope - libcontainer container 9de153121560bd1e45e3aac071fc6342b44ff3ee5515f340a67684365f86c91a. 
Sep 9 05:12:43.500247 containerd[2018]: 2025-09-09 05:12:43.207 [INFO][4957] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--30--120-k8s-coredns--668d6bf9bc--zwrkc-eth0 coredns-668d6bf9bc- kube-system 8ece1b83-f963-4812-a6b3-c979112e2a60 810 0 2025-09-09 05:11:55 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-30-120 coredns-668d6bf9bc-zwrkc eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali807a9020347 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="410bfefea9afd3b491b12992583b797add035ef0701517638f1b5c951fcee4f4" Namespace="kube-system" Pod="coredns-668d6bf9bc-zwrkc" WorkloadEndpoint="ip--172--31--30--120-k8s-coredns--668d6bf9bc--zwrkc-" Sep 9 05:12:43.500247 containerd[2018]: 2025-09-09 05:12:43.207 [INFO][4957] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="410bfefea9afd3b491b12992583b797add035ef0701517638f1b5c951fcee4f4" Namespace="kube-system" Pod="coredns-668d6bf9bc-zwrkc" WorkloadEndpoint="ip--172--31--30--120-k8s-coredns--668d6bf9bc--zwrkc-eth0" Sep 9 05:12:43.500247 containerd[2018]: 2025-09-09 05:12:43.284 [INFO][4975] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="410bfefea9afd3b491b12992583b797add035ef0701517638f1b5c951fcee4f4" HandleID="k8s-pod-network.410bfefea9afd3b491b12992583b797add035ef0701517638f1b5c951fcee4f4" Workload="ip--172--31--30--120-k8s-coredns--668d6bf9bc--zwrkc-eth0" Sep 9 05:12:43.500935 containerd[2018]: 2025-09-09 05:12:43.284 [INFO][4975] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="410bfefea9afd3b491b12992583b797add035ef0701517638f1b5c951fcee4f4" HandleID="k8s-pod-network.410bfefea9afd3b491b12992583b797add035ef0701517638f1b5c951fcee4f4" 
Workload="ip--172--31--30--120-k8s-coredns--668d6bf9bc--zwrkc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b6e0), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-30-120", "pod":"coredns-668d6bf9bc-zwrkc", "timestamp":"2025-09-09 05:12:43.284190093 +0000 UTC"}, Hostname:"ip-172-31-30-120", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 9 05:12:43.500935 containerd[2018]: 2025-09-09 05:12:43.284 [INFO][4975] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 05:12:43.500935 containerd[2018]: 2025-09-09 05:12:43.284 [INFO][4975] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 05:12:43.500935 containerd[2018]: 2025-09-09 05:12:43.284 [INFO][4975] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-30-120' Sep 9 05:12:43.500935 containerd[2018]: 2025-09-09 05:12:43.306 [INFO][4975] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.410bfefea9afd3b491b12992583b797add035ef0701517638f1b5c951fcee4f4" host="ip-172-31-30-120" Sep 9 05:12:43.500935 containerd[2018]: 2025-09-09 05:12:43.318 [INFO][4975] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-30-120" Sep 9 05:12:43.500935 containerd[2018]: 2025-09-09 05:12:43.334 [INFO][4975] ipam/ipam.go 511: Trying affinity for 192.168.74.128/26 host="ip-172-31-30-120" Sep 9 05:12:43.500935 containerd[2018]: 2025-09-09 05:12:43.337 [INFO][4975] ipam/ipam.go 158: Attempting to load block cidr=192.168.74.128/26 host="ip-172-31-30-120" Sep 9 05:12:43.500935 containerd[2018]: 2025-09-09 05:12:43.351 [INFO][4975] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.74.128/26 host="ip-172-31-30-120" Sep 9 05:12:43.501449 containerd[2018]: 2025-09-09 05:12:43.351 [INFO][4975] ipam/ipam.go 1220: 
Attempting to assign 1 addresses from block block=192.168.74.128/26 handle="k8s-pod-network.410bfefea9afd3b491b12992583b797add035ef0701517638f1b5c951fcee4f4" host="ip-172-31-30-120" Sep 9 05:12:43.501449 containerd[2018]: 2025-09-09 05:12:43.357 [INFO][4975] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.410bfefea9afd3b491b12992583b797add035ef0701517638f1b5c951fcee4f4 Sep 9 05:12:43.501449 containerd[2018]: 2025-09-09 05:12:43.369 [INFO][4975] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.74.128/26 handle="k8s-pod-network.410bfefea9afd3b491b12992583b797add035ef0701517638f1b5c951fcee4f4" host="ip-172-31-30-120" Sep 9 05:12:43.501449 containerd[2018]: 2025-09-09 05:12:43.391 [INFO][4975] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.74.132/26] block=192.168.74.128/26 handle="k8s-pod-network.410bfefea9afd3b491b12992583b797add035ef0701517638f1b5c951fcee4f4" host="ip-172-31-30-120" Sep 9 05:12:43.501449 containerd[2018]: 2025-09-09 05:12:43.391 [INFO][4975] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.74.132/26] handle="k8s-pod-network.410bfefea9afd3b491b12992583b797add035ef0701517638f1b5c951fcee4f4" host="ip-172-31-30-120" Sep 9 05:12:43.501449 containerd[2018]: 2025-09-09 05:12:43.392 [INFO][4975] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 9 05:12:43.501449 containerd[2018]: 2025-09-09 05:12:43.392 [INFO][4975] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.74.132/26] IPv6=[] ContainerID="410bfefea9afd3b491b12992583b797add035ef0701517638f1b5c951fcee4f4" HandleID="k8s-pod-network.410bfefea9afd3b491b12992583b797add035ef0701517638f1b5c951fcee4f4" Workload="ip--172--31--30--120-k8s-coredns--668d6bf9bc--zwrkc-eth0" Sep 9 05:12:43.501782 containerd[2018]: 2025-09-09 05:12:43.407 [INFO][4957] cni-plugin/k8s.go 418: Populated endpoint ContainerID="410bfefea9afd3b491b12992583b797add035ef0701517638f1b5c951fcee4f4" Namespace="kube-system" Pod="coredns-668d6bf9bc-zwrkc" WorkloadEndpoint="ip--172--31--30--120-k8s-coredns--668d6bf9bc--zwrkc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--120-k8s-coredns--668d6bf9bc--zwrkc-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"8ece1b83-f963-4812-a6b3-c979112e2a60", ResourceVersion:"810", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 5, 11, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-120", ContainerID:"", Pod:"coredns-668d6bf9bc-zwrkc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.74.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali807a9020347", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 05:12:43.501782 containerd[2018]: 2025-09-09 05:12:43.407 [INFO][4957] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.74.132/32] ContainerID="410bfefea9afd3b491b12992583b797add035ef0701517638f1b5c951fcee4f4" Namespace="kube-system" Pod="coredns-668d6bf9bc-zwrkc" WorkloadEndpoint="ip--172--31--30--120-k8s-coredns--668d6bf9bc--zwrkc-eth0" Sep 9 05:12:43.501782 containerd[2018]: 2025-09-09 05:12:43.407 [INFO][4957] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali807a9020347 ContainerID="410bfefea9afd3b491b12992583b797add035ef0701517638f1b5c951fcee4f4" Namespace="kube-system" Pod="coredns-668d6bf9bc-zwrkc" WorkloadEndpoint="ip--172--31--30--120-k8s-coredns--668d6bf9bc--zwrkc-eth0" Sep 9 05:12:43.501782 containerd[2018]: 2025-09-09 05:12:43.419 [INFO][4957] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="410bfefea9afd3b491b12992583b797add035ef0701517638f1b5c951fcee4f4" Namespace="kube-system" Pod="coredns-668d6bf9bc-zwrkc" WorkloadEndpoint="ip--172--31--30--120-k8s-coredns--668d6bf9bc--zwrkc-eth0" Sep 9 05:12:43.501782 containerd[2018]: 2025-09-09 05:12:43.423 [INFO][4957] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="410bfefea9afd3b491b12992583b797add035ef0701517638f1b5c951fcee4f4" Namespace="kube-system" Pod="coredns-668d6bf9bc-zwrkc" WorkloadEndpoint="ip--172--31--30--120-k8s-coredns--668d6bf9bc--zwrkc-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--120-k8s-coredns--668d6bf9bc--zwrkc-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"8ece1b83-f963-4812-a6b3-c979112e2a60", ResourceVersion:"810", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 5, 11, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-120", ContainerID:"410bfefea9afd3b491b12992583b797add035ef0701517638f1b5c951fcee4f4", Pod:"coredns-668d6bf9bc-zwrkc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.74.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali807a9020347", MAC:"3a:5a:3d:de:48:99", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 05:12:43.501782 containerd[2018]: 2025-09-09 05:12:43.482 [INFO][4957] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="410bfefea9afd3b491b12992583b797add035ef0701517638f1b5c951fcee4f4" Namespace="kube-system" Pod="coredns-668d6bf9bc-zwrkc" WorkloadEndpoint="ip--172--31--30--120-k8s-coredns--668d6bf9bc--zwrkc-eth0" Sep 9 05:12:43.579188 containerd[2018]: time="2025-09-09T05:12:43.578383595Z" level=info msg="connecting to shim 410bfefea9afd3b491b12992583b797add035ef0701517638f1b5c951fcee4f4" address="unix:///run/containerd/s/d65844509de91fdcde77afb25d16446f195dbf83edbf243407ac15b10b5f03aa" namespace=k8s.io protocol=ttrpc version=3 Sep 9 05:12:43.690338 systemd[1]: Started cri-containerd-410bfefea9afd3b491b12992583b797add035ef0701517638f1b5c951fcee4f4.scope - libcontainer container 410bfefea9afd3b491b12992583b797add035ef0701517638f1b5c951fcee4f4. Sep 9 05:12:43.734334 systemd-networkd[1620]: vxlan.calico: Gained IPv6LL Sep 9 05:12:43.797416 containerd[2018]: time="2025-09-09T05:12:43.797298912Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-zwrkc,Uid:8ece1b83-f963-4812-a6b3-c979112e2a60,Namespace:kube-system,Attempt:0,} returns sandbox id \"410bfefea9afd3b491b12992583b797add035ef0701517638f1b5c951fcee4f4\"" Sep 9 05:12:43.806553 containerd[2018]: time="2025-09-09T05:12:43.806483928Z" level=info msg="CreateContainer within sandbox \"410bfefea9afd3b491b12992583b797add035ef0701517638f1b5c951fcee4f4\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 9 05:12:43.835981 containerd[2018]: time="2025-09-09T05:12:43.835605060Z" level=info msg="Container da3e4b8da3bc0345f277d86011a450cde07e6a31eec79744ae0f19986fb96261: CDI devices from CRI Config.CDIDevices: []" Sep 9 05:12:43.855931 containerd[2018]: time="2025-09-09T05:12:43.855849924Z" level=info msg="CreateContainer within sandbox \"410bfefea9afd3b491b12992583b797add035ef0701517638f1b5c951fcee4f4\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"da3e4b8da3bc0345f277d86011a450cde07e6a31eec79744ae0f19986fb96261\"" Sep 9 05:12:43.858702 containerd[2018]: 
time="2025-09-09T05:12:43.858517980Z" level=info msg="StartContainer for \"da3e4b8da3bc0345f277d86011a450cde07e6a31eec79744ae0f19986fb96261\"" Sep 9 05:12:43.866323 containerd[2018]: time="2025-09-09T05:12:43.866097192Z" level=info msg="connecting to shim da3e4b8da3bc0345f277d86011a450cde07e6a31eec79744ae0f19986fb96261" address="unix:///run/containerd/s/d65844509de91fdcde77afb25d16446f195dbf83edbf243407ac15b10b5f03aa" protocol=ttrpc version=3 Sep 9 05:12:43.875424 containerd[2018]: time="2025-09-09T05:12:43.875371164Z" level=info msg="StartContainer for \"9de153121560bd1e45e3aac071fc6342b44ff3ee5515f340a67684365f86c91a\" returns successfully" Sep 9 05:12:43.930361 systemd[1]: Started cri-containerd-da3e4b8da3bc0345f277d86011a450cde07e6a31eec79744ae0f19986fb96261.scope - libcontainer container da3e4b8da3bc0345f277d86011a450cde07e6a31eec79744ae0f19986fb96261. Sep 9 05:12:44.029432 containerd[2018]: time="2025-09-09T05:12:44.029306757Z" level=info msg="StartContainer for \"da3e4b8da3bc0345f277d86011a450cde07e6a31eec79744ae0f19986fb96261\" returns successfully" Sep 9 05:12:44.033010 containerd[2018]: time="2025-09-09T05:12:44.032919669Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-9wd6t,Uid:905712ae-cbe1-43ba-90ca-063ce5a5f097,Namespace:kube-system,Attempt:0,}" Sep 9 05:12:44.036367 containerd[2018]: time="2025-09-09T05:12:44.036197733Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5fc8557bf8-kxfd4,Uid:a3835d68-456d-40aa-b365-a8f60a8cd66e,Namespace:calico-apiserver,Attempt:0,}" Sep 9 05:12:44.181436 systemd-networkd[1620]: cali79ca33745a9: Gained IPv6LL Sep 9 05:12:44.463926 systemd-networkd[1620]: cali1b4c0778deb: Link UP Sep 9 05:12:44.466703 systemd-networkd[1620]: cali1b4c0778deb: Gained carrier Sep 9 05:12:44.514593 containerd[2018]: 2025-09-09 05:12:44.196 [INFO][5105] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} 
{ip--172--31--30--120-k8s-calico--apiserver--5fc8557bf8--kxfd4-eth0 calico-apiserver-5fc8557bf8- calico-apiserver a3835d68-456d-40aa-b365-a8f60a8cd66e 820 0 2025-09-09 05:12:07 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5fc8557bf8 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-30-120 calico-apiserver-5fc8557bf8-kxfd4 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali1b4c0778deb [] [] }} ContainerID="c5b8d204186904fcac2000a9116b1cddf07a641920f3160bc44975abd5245140" Namespace="calico-apiserver" Pod="calico-apiserver-5fc8557bf8-kxfd4" WorkloadEndpoint="ip--172--31--30--120-k8s-calico--apiserver--5fc8557bf8--kxfd4-" Sep 9 05:12:44.514593 containerd[2018]: 2025-09-09 05:12:44.197 [INFO][5105] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c5b8d204186904fcac2000a9116b1cddf07a641920f3160bc44975abd5245140" Namespace="calico-apiserver" Pod="calico-apiserver-5fc8557bf8-kxfd4" WorkloadEndpoint="ip--172--31--30--120-k8s-calico--apiserver--5fc8557bf8--kxfd4-eth0" Sep 9 05:12:44.514593 containerd[2018]: 2025-09-09 05:12:44.300 [INFO][5121] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c5b8d204186904fcac2000a9116b1cddf07a641920f3160bc44975abd5245140" HandleID="k8s-pod-network.c5b8d204186904fcac2000a9116b1cddf07a641920f3160bc44975abd5245140" Workload="ip--172--31--30--120-k8s-calico--apiserver--5fc8557bf8--kxfd4-eth0" Sep 9 05:12:44.514593 containerd[2018]: 2025-09-09 05:12:44.300 [INFO][5121] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c5b8d204186904fcac2000a9116b1cddf07a641920f3160bc44975abd5245140" HandleID="k8s-pod-network.c5b8d204186904fcac2000a9116b1cddf07a641920f3160bc44975abd5245140" Workload="ip--172--31--30--120-k8s-calico--apiserver--5fc8557bf8--kxfd4-eth0" 
assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400032ae10), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-30-120", "pod":"calico-apiserver-5fc8557bf8-kxfd4", "timestamp":"2025-09-09 05:12:44.300149386 +0000 UTC"}, Hostname:"ip-172-31-30-120", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 9 05:12:44.514593 containerd[2018]: 2025-09-09 05:12:44.301 [INFO][5121] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 05:12:44.514593 containerd[2018]: 2025-09-09 05:12:44.301 [INFO][5121] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 05:12:44.514593 containerd[2018]: 2025-09-09 05:12:44.301 [INFO][5121] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-30-120' Sep 9 05:12:44.514593 containerd[2018]: 2025-09-09 05:12:44.343 [INFO][5121] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c5b8d204186904fcac2000a9116b1cddf07a641920f3160bc44975abd5245140" host="ip-172-31-30-120" Sep 9 05:12:44.514593 containerd[2018]: 2025-09-09 05:12:44.389 [INFO][5121] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-30-120" Sep 9 05:12:44.514593 containerd[2018]: 2025-09-09 05:12:44.403 [INFO][5121] ipam/ipam.go 511: Trying affinity for 192.168.74.128/26 host="ip-172-31-30-120" Sep 9 05:12:44.514593 containerd[2018]: 2025-09-09 05:12:44.408 [INFO][5121] ipam/ipam.go 158: Attempting to load block cidr=192.168.74.128/26 host="ip-172-31-30-120" Sep 9 05:12:44.514593 containerd[2018]: 2025-09-09 05:12:44.415 [INFO][5121] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.74.128/26 host="ip-172-31-30-120" Sep 9 05:12:44.514593 containerd[2018]: 2025-09-09 05:12:44.417 [INFO][5121] ipam/ipam.go 1220: Attempting to assign 1 addresses from block 
block=192.168.74.128/26 handle="k8s-pod-network.c5b8d204186904fcac2000a9116b1cddf07a641920f3160bc44975abd5245140" host="ip-172-31-30-120" Sep 9 05:12:44.514593 containerd[2018]: 2025-09-09 05:12:44.422 [INFO][5121] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.c5b8d204186904fcac2000a9116b1cddf07a641920f3160bc44975abd5245140 Sep 9 05:12:44.514593 containerd[2018]: 2025-09-09 05:12:44.434 [INFO][5121] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.74.128/26 handle="k8s-pod-network.c5b8d204186904fcac2000a9116b1cddf07a641920f3160bc44975abd5245140" host="ip-172-31-30-120" Sep 9 05:12:44.514593 containerd[2018]: 2025-09-09 05:12:44.449 [INFO][5121] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.74.133/26] block=192.168.74.128/26 handle="k8s-pod-network.c5b8d204186904fcac2000a9116b1cddf07a641920f3160bc44975abd5245140" host="ip-172-31-30-120" Sep 9 05:12:44.514593 containerd[2018]: 2025-09-09 05:12:44.449 [INFO][5121] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.74.133/26] handle="k8s-pod-network.c5b8d204186904fcac2000a9116b1cddf07a641920f3160bc44975abd5245140" host="ip-172-31-30-120" Sep 9 05:12:44.514593 containerd[2018]: 2025-09-09 05:12:44.450 [INFO][5121] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 9 05:12:44.514593 containerd[2018]: 2025-09-09 05:12:44.450 [INFO][5121] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.74.133/26] IPv6=[] ContainerID="c5b8d204186904fcac2000a9116b1cddf07a641920f3160bc44975abd5245140" HandleID="k8s-pod-network.c5b8d204186904fcac2000a9116b1cddf07a641920f3160bc44975abd5245140" Workload="ip--172--31--30--120-k8s-calico--apiserver--5fc8557bf8--kxfd4-eth0" Sep 9 05:12:44.517788 containerd[2018]: 2025-09-09 05:12:44.459 [INFO][5105] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c5b8d204186904fcac2000a9116b1cddf07a641920f3160bc44975abd5245140" Namespace="calico-apiserver" Pod="calico-apiserver-5fc8557bf8-kxfd4" WorkloadEndpoint="ip--172--31--30--120-k8s-calico--apiserver--5fc8557bf8--kxfd4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--120-k8s-calico--apiserver--5fc8557bf8--kxfd4-eth0", GenerateName:"calico-apiserver-5fc8557bf8-", Namespace:"calico-apiserver", SelfLink:"", UID:"a3835d68-456d-40aa-b365-a8f60a8cd66e", ResourceVersion:"820", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 5, 12, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5fc8557bf8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-120", ContainerID:"", Pod:"calico-apiserver-5fc8557bf8-kxfd4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.74.133/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1b4c0778deb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 05:12:44.517788 containerd[2018]: 2025-09-09 05:12:44.459 [INFO][5105] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.74.133/32] ContainerID="c5b8d204186904fcac2000a9116b1cddf07a641920f3160bc44975abd5245140" Namespace="calico-apiserver" Pod="calico-apiserver-5fc8557bf8-kxfd4" WorkloadEndpoint="ip--172--31--30--120-k8s-calico--apiserver--5fc8557bf8--kxfd4-eth0" Sep 9 05:12:44.517788 containerd[2018]: 2025-09-09 05:12:44.459 [INFO][5105] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1b4c0778deb ContainerID="c5b8d204186904fcac2000a9116b1cddf07a641920f3160bc44975abd5245140" Namespace="calico-apiserver" Pod="calico-apiserver-5fc8557bf8-kxfd4" WorkloadEndpoint="ip--172--31--30--120-k8s-calico--apiserver--5fc8557bf8--kxfd4-eth0" Sep 9 05:12:44.517788 containerd[2018]: 2025-09-09 05:12:44.470 [INFO][5105] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c5b8d204186904fcac2000a9116b1cddf07a641920f3160bc44975abd5245140" Namespace="calico-apiserver" Pod="calico-apiserver-5fc8557bf8-kxfd4" WorkloadEndpoint="ip--172--31--30--120-k8s-calico--apiserver--5fc8557bf8--kxfd4-eth0" Sep 9 05:12:44.517788 containerd[2018]: 2025-09-09 05:12:44.473 [INFO][5105] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c5b8d204186904fcac2000a9116b1cddf07a641920f3160bc44975abd5245140" Namespace="calico-apiserver" Pod="calico-apiserver-5fc8557bf8-kxfd4" WorkloadEndpoint="ip--172--31--30--120-k8s-calico--apiserver--5fc8557bf8--kxfd4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--120-k8s-calico--apiserver--5fc8557bf8--kxfd4-eth0", GenerateName:"calico-apiserver-5fc8557bf8-", Namespace:"calico-apiserver", SelfLink:"", UID:"a3835d68-456d-40aa-b365-a8f60a8cd66e", ResourceVersion:"820", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 5, 12, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5fc8557bf8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-120", ContainerID:"c5b8d204186904fcac2000a9116b1cddf07a641920f3160bc44975abd5245140", Pod:"calico-apiserver-5fc8557bf8-kxfd4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.74.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1b4c0778deb", MAC:"de:75:dc:d2:b1:a4", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 05:12:44.517788 containerd[2018]: 2025-09-09 05:12:44.507 [INFO][5105] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c5b8d204186904fcac2000a9116b1cddf07a641920f3160bc44975abd5245140" Namespace="calico-apiserver" Pod="calico-apiserver-5fc8557bf8-kxfd4" WorkloadEndpoint="ip--172--31--30--120-k8s-calico--apiserver--5fc8557bf8--kxfd4-eth0" Sep 9 05:12:44.604471 kubelet[3513]: I0909 05:12:44.604368 3513 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="kube-system/coredns-668d6bf9bc-zwrkc" podStartSLOduration=49.60434298 podStartE2EDuration="49.60434298s" podCreationTimestamp="2025-09-09 05:11:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 05:12:44.593539176 +0000 UTC m=+55.786756442" watchObservedRunningTime="2025-09-09 05:12:44.60434298 +0000 UTC m=+55.797560246" Sep 9 05:12:44.636347 containerd[2018]: time="2025-09-09T05:12:44.636285240Z" level=info msg="connecting to shim c5b8d204186904fcac2000a9116b1cddf07a641920f3160bc44975abd5245140" address="unix:///run/containerd/s/c6980ecc339e8eb1df350a084e37eb6f2fcc987fe2b03fdb1f9eb0a34cb3ace2" namespace=k8s.io protocol=ttrpc version=3 Sep 9 05:12:44.667526 systemd-networkd[1620]: cali807c23d02a4: Link UP Sep 9 05:12:44.667853 systemd-networkd[1620]: cali807c23d02a4: Gained carrier Sep 9 05:12:44.695532 systemd-networkd[1620]: cali807a9020347: Gained IPv6LL Sep 9 05:12:44.736282 containerd[2018]: 2025-09-09 05:12:44.289 [INFO][5098] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--30--120-k8s-coredns--668d6bf9bc--9wd6t-eth0 coredns-668d6bf9bc- kube-system 905712ae-cbe1-43ba-90ca-063ce5a5f097 821 0 2025-09-09 05:11:55 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-30-120 coredns-668d6bf9bc-9wd6t eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali807c23d02a4 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="a59f8f98c4fae5f98daeff36eef2fb2718bbab58dba876649108678c5bfb4f04" Namespace="kube-system" Pod="coredns-668d6bf9bc-9wd6t" WorkloadEndpoint="ip--172--31--30--120-k8s-coredns--668d6bf9bc--9wd6t-" Sep 9 05:12:44.736282 containerd[2018]: 2025-09-09 05:12:44.291 [INFO][5098] cni-plugin/k8s.go 74: 
Extracted identifiers for CmdAddK8s ContainerID="a59f8f98c4fae5f98daeff36eef2fb2718bbab58dba876649108678c5bfb4f04" Namespace="kube-system" Pod="coredns-668d6bf9bc-9wd6t" WorkloadEndpoint="ip--172--31--30--120-k8s-coredns--668d6bf9bc--9wd6t-eth0" Sep 9 05:12:44.736282 containerd[2018]: 2025-09-09 05:12:44.380 [INFO][5130] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a59f8f98c4fae5f98daeff36eef2fb2718bbab58dba876649108678c5bfb4f04" HandleID="k8s-pod-network.a59f8f98c4fae5f98daeff36eef2fb2718bbab58dba876649108678c5bfb4f04" Workload="ip--172--31--30--120-k8s-coredns--668d6bf9bc--9wd6t-eth0" Sep 9 05:12:44.736282 containerd[2018]: 2025-09-09 05:12:44.381 [INFO][5130] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a59f8f98c4fae5f98daeff36eef2fb2718bbab58dba876649108678c5bfb4f04" HandleID="k8s-pod-network.a59f8f98c4fae5f98daeff36eef2fb2718bbab58dba876649108678c5bfb4f04" Workload="ip--172--31--30--120-k8s-coredns--668d6bf9bc--9wd6t-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400039c0b0), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-30-120", "pod":"coredns-668d6bf9bc-9wd6t", "timestamp":"2025-09-09 05:12:44.380226371 +0000 UTC"}, Hostname:"ip-172-31-30-120", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 9 05:12:44.736282 containerd[2018]: 2025-09-09 05:12:44.381 [INFO][5130] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 05:12:44.736282 containerd[2018]: 2025-09-09 05:12:44.449 [INFO][5130] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 9 05:12:44.736282 containerd[2018]: 2025-09-09 05:12:44.451 [INFO][5130] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-30-120' Sep 9 05:12:44.736282 containerd[2018]: 2025-09-09 05:12:44.489 [INFO][5130] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a59f8f98c4fae5f98daeff36eef2fb2718bbab58dba876649108678c5bfb4f04" host="ip-172-31-30-120" Sep 9 05:12:44.736282 containerd[2018]: 2025-09-09 05:12:44.512 [INFO][5130] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-30-120" Sep 9 05:12:44.736282 containerd[2018]: 2025-09-09 05:12:44.540 [INFO][5130] ipam/ipam.go 511: Trying affinity for 192.168.74.128/26 host="ip-172-31-30-120" Sep 9 05:12:44.736282 containerd[2018]: 2025-09-09 05:12:44.553 [INFO][5130] ipam/ipam.go 158: Attempting to load block cidr=192.168.74.128/26 host="ip-172-31-30-120" Sep 9 05:12:44.736282 containerd[2018]: 2025-09-09 05:12:44.563 [INFO][5130] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.74.128/26 host="ip-172-31-30-120" Sep 9 05:12:44.736282 containerd[2018]: 2025-09-09 05:12:44.563 [INFO][5130] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.74.128/26 handle="k8s-pod-network.a59f8f98c4fae5f98daeff36eef2fb2718bbab58dba876649108678c5bfb4f04" host="ip-172-31-30-120" Sep 9 05:12:44.736282 containerd[2018]: 2025-09-09 05:12:44.569 [INFO][5130] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.a59f8f98c4fae5f98daeff36eef2fb2718bbab58dba876649108678c5bfb4f04 Sep 9 05:12:44.736282 containerd[2018]: 2025-09-09 05:12:44.583 [INFO][5130] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.74.128/26 handle="k8s-pod-network.a59f8f98c4fae5f98daeff36eef2fb2718bbab58dba876649108678c5bfb4f04" host="ip-172-31-30-120" Sep 9 05:12:44.736282 containerd[2018]: 2025-09-09 05:12:44.650 [INFO][5130] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.74.134/26] block=192.168.74.128/26 
handle="k8s-pod-network.a59f8f98c4fae5f98daeff36eef2fb2718bbab58dba876649108678c5bfb4f04" host="ip-172-31-30-120" Sep 9 05:12:44.736282 containerd[2018]: 2025-09-09 05:12:44.650 [INFO][5130] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.74.134/26] handle="k8s-pod-network.a59f8f98c4fae5f98daeff36eef2fb2718bbab58dba876649108678c5bfb4f04" host="ip-172-31-30-120" Sep 9 05:12:44.736282 containerd[2018]: 2025-09-09 05:12:44.650 [INFO][5130] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 05:12:44.736282 containerd[2018]: 2025-09-09 05:12:44.650 [INFO][5130] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.74.134/26] IPv6=[] ContainerID="a59f8f98c4fae5f98daeff36eef2fb2718bbab58dba876649108678c5bfb4f04" HandleID="k8s-pod-network.a59f8f98c4fae5f98daeff36eef2fb2718bbab58dba876649108678c5bfb4f04" Workload="ip--172--31--30--120-k8s-coredns--668d6bf9bc--9wd6t-eth0" Sep 9 05:12:44.739634 containerd[2018]: 2025-09-09 05:12:44.659 [INFO][5098] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a59f8f98c4fae5f98daeff36eef2fb2718bbab58dba876649108678c5bfb4f04" Namespace="kube-system" Pod="coredns-668d6bf9bc-9wd6t" WorkloadEndpoint="ip--172--31--30--120-k8s-coredns--668d6bf9bc--9wd6t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--120-k8s-coredns--668d6bf9bc--9wd6t-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"905712ae-cbe1-43ba-90ca-063ce5a5f097", ResourceVersion:"821", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 5, 11, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-120", ContainerID:"", Pod:"coredns-668d6bf9bc-9wd6t", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.74.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali807c23d02a4", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 05:12:44.739634 containerd[2018]: 2025-09-09 05:12:44.659 [INFO][5098] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.74.134/32] ContainerID="a59f8f98c4fae5f98daeff36eef2fb2718bbab58dba876649108678c5bfb4f04" Namespace="kube-system" Pod="coredns-668d6bf9bc-9wd6t" WorkloadEndpoint="ip--172--31--30--120-k8s-coredns--668d6bf9bc--9wd6t-eth0" Sep 9 05:12:44.739634 containerd[2018]: 2025-09-09 05:12:44.659 [INFO][5098] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali807c23d02a4 ContainerID="a59f8f98c4fae5f98daeff36eef2fb2718bbab58dba876649108678c5bfb4f04" Namespace="kube-system" Pod="coredns-668d6bf9bc-9wd6t" WorkloadEndpoint="ip--172--31--30--120-k8s-coredns--668d6bf9bc--9wd6t-eth0" Sep 9 05:12:44.739634 containerd[2018]: 2025-09-09 05:12:44.667 [INFO][5098] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a59f8f98c4fae5f98daeff36eef2fb2718bbab58dba876649108678c5bfb4f04" Namespace="kube-system" 
Pod="coredns-668d6bf9bc-9wd6t" WorkloadEndpoint="ip--172--31--30--120-k8s-coredns--668d6bf9bc--9wd6t-eth0" Sep 9 05:12:44.739634 containerd[2018]: 2025-09-09 05:12:44.668 [INFO][5098] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a59f8f98c4fae5f98daeff36eef2fb2718bbab58dba876649108678c5bfb4f04" Namespace="kube-system" Pod="coredns-668d6bf9bc-9wd6t" WorkloadEndpoint="ip--172--31--30--120-k8s-coredns--668d6bf9bc--9wd6t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--120-k8s-coredns--668d6bf9bc--9wd6t-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"905712ae-cbe1-43ba-90ca-063ce5a5f097", ResourceVersion:"821", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 5, 11, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-120", ContainerID:"a59f8f98c4fae5f98daeff36eef2fb2718bbab58dba876649108678c5bfb4f04", Pod:"coredns-668d6bf9bc-9wd6t", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.74.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali807c23d02a4", MAC:"a6:3c:ff:39:b1:a8", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 05:12:44.739634 containerd[2018]: 2025-09-09 05:12:44.723 [INFO][5098] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a59f8f98c4fae5f98daeff36eef2fb2718bbab58dba876649108678c5bfb4f04" Namespace="kube-system" Pod="coredns-668d6bf9bc-9wd6t" WorkloadEndpoint="ip--172--31--30--120-k8s-coredns--668d6bf9bc--9wd6t-eth0" Sep 9 05:12:44.780892 systemd[1]: Started cri-containerd-c5b8d204186904fcac2000a9116b1cddf07a641920f3160bc44975abd5245140.scope - libcontainer container c5b8d204186904fcac2000a9116b1cddf07a641920f3160bc44975abd5245140. Sep 9 05:12:44.854907 containerd[2018]: time="2025-09-09T05:12:44.854847625Z" level=info msg="connecting to shim a59f8f98c4fae5f98daeff36eef2fb2718bbab58dba876649108678c5bfb4f04" address="unix:///run/containerd/s/855fd99da2bfe485a4fd78b1e89bb042dc58ff8ff80e7abde09cd95a4fcaf7ab" namespace=k8s.io protocol=ttrpc version=3 Sep 9 05:12:44.936492 systemd[1]: Started cri-containerd-a59f8f98c4fae5f98daeff36eef2fb2718bbab58dba876649108678c5bfb4f04.scope - libcontainer container a59f8f98c4fae5f98daeff36eef2fb2718bbab58dba876649108678c5bfb4f04. 
Sep 9 05:12:45.044916 containerd[2018]: time="2025-09-09T05:12:45.044803882Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-fwmcq,Uid:4d67339e-bbba-44a4-b8d7-f30c051b16ee,Namespace:calico-system,Attempt:0,}" Sep 9 05:12:45.166594 containerd[2018]: time="2025-09-09T05:12:45.166521287Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-9wd6t,Uid:905712ae-cbe1-43ba-90ca-063ce5a5f097,Namespace:kube-system,Attempt:0,} returns sandbox id \"a59f8f98c4fae5f98daeff36eef2fb2718bbab58dba876649108678c5bfb4f04\"" Sep 9 05:12:45.204506 containerd[2018]: time="2025-09-09T05:12:45.180250463Z" level=info msg="CreateContainer within sandbox \"a59f8f98c4fae5f98daeff36eef2fb2718bbab58dba876649108678c5bfb4f04\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 9 05:12:45.248063 containerd[2018]: time="2025-09-09T05:12:45.247948655Z" level=info msg="Container 889f1b82b104cfe86f3f6e948cb6e1765b014f11fae609030f7e2a4bd97b5f76: CDI devices from CRI Config.CDIDevices: []" Sep 9 05:12:45.249999 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1518669214.mount: Deactivated successfully. 
Sep 9 05:12:45.288371 containerd[2018]: time="2025-09-09T05:12:45.288147707Z" level=info msg="CreateContainer within sandbox \"a59f8f98c4fae5f98daeff36eef2fb2718bbab58dba876649108678c5bfb4f04\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"889f1b82b104cfe86f3f6e948cb6e1765b014f11fae609030f7e2a4bd97b5f76\"" Sep 9 05:12:45.291661 containerd[2018]: time="2025-09-09T05:12:45.291378263Z" level=info msg="StartContainer for \"889f1b82b104cfe86f3f6e948cb6e1765b014f11fae609030f7e2a4bd97b5f76\"" Sep 9 05:12:45.307347 containerd[2018]: time="2025-09-09T05:12:45.306493691Z" level=info msg="connecting to shim 889f1b82b104cfe86f3f6e948cb6e1765b014f11fae609030f7e2a4bd97b5f76" address="unix:///run/containerd/s/855fd99da2bfe485a4fd78b1e89bb042dc58ff8ff80e7abde09cd95a4fcaf7ab" protocol=ttrpc version=3 Sep 9 05:12:45.356675 containerd[2018]: time="2025-09-09T05:12:45.356600460Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5fc8557bf8-kxfd4,Uid:a3835d68-456d-40aa-b365-a8f60a8cd66e,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"c5b8d204186904fcac2000a9116b1cddf07a641920f3160bc44975abd5245140\"" Sep 9 05:12:45.388536 systemd[1]: Started cri-containerd-889f1b82b104cfe86f3f6e948cb6e1765b014f11fae609030f7e2a4bd97b5f76.scope - libcontainer container 889f1b82b104cfe86f3f6e948cb6e1765b014f11fae609030f7e2a4bd97b5f76. 
Sep 9 05:12:45.525873 containerd[2018]: time="2025-09-09T05:12:45.524916025Z" level=info msg="StartContainer for \"889f1b82b104cfe86f3f6e948cb6e1765b014f11fae609030f7e2a4bd97b5f76\" returns successfully" Sep 9 05:12:45.623318 systemd-networkd[1620]: cali83367e61235: Link UP Sep 9 05:12:45.629794 systemd-networkd[1620]: cali83367e61235: Gained carrier Sep 9 05:12:45.701171 containerd[2018]: 2025-09-09 05:12:45.279 [INFO][5238] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--30--120-k8s-goldmane--54d579b49d--fwmcq-eth0 goldmane-54d579b49d- calico-system 4d67339e-bbba-44a4-b8d7-f30c051b16ee 817 0 2025-09-09 05:12:18 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:54d579b49d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ip-172-31-30-120 goldmane-54d579b49d-fwmcq eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali83367e61235 [] [] }} ContainerID="90f8d77f6350a0dea9d1558d3e67cf2b34b1c3ff369ad1076d5fabd192c97d5c" Namespace="calico-system" Pod="goldmane-54d579b49d-fwmcq" WorkloadEndpoint="ip--172--31--30--120-k8s-goldmane--54d579b49d--fwmcq-" Sep 9 05:12:45.701171 containerd[2018]: 2025-09-09 05:12:45.279 [INFO][5238] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="90f8d77f6350a0dea9d1558d3e67cf2b34b1c3ff369ad1076d5fabd192c97d5c" Namespace="calico-system" Pod="goldmane-54d579b49d-fwmcq" WorkloadEndpoint="ip--172--31--30--120-k8s-goldmane--54d579b49d--fwmcq-eth0" Sep 9 05:12:45.701171 containerd[2018]: 2025-09-09 05:12:45.451 [INFO][5268] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="90f8d77f6350a0dea9d1558d3e67cf2b34b1c3ff369ad1076d5fabd192c97d5c" HandleID="k8s-pod-network.90f8d77f6350a0dea9d1558d3e67cf2b34b1c3ff369ad1076d5fabd192c97d5c" 
Workload="ip--172--31--30--120-k8s-goldmane--54d579b49d--fwmcq-eth0" Sep 9 05:12:45.701171 containerd[2018]: 2025-09-09 05:12:45.451 [INFO][5268] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="90f8d77f6350a0dea9d1558d3e67cf2b34b1c3ff369ad1076d5fabd192c97d5c" HandleID="k8s-pod-network.90f8d77f6350a0dea9d1558d3e67cf2b34b1c3ff369ad1076d5fabd192c97d5c" Workload="ip--172--31--30--120-k8s-goldmane--54d579b49d--fwmcq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002cba60), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-30-120", "pod":"goldmane-54d579b49d-fwmcq", "timestamp":"2025-09-09 05:12:45.451338492 +0000 UTC"}, Hostname:"ip-172-31-30-120", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 9 05:12:45.701171 containerd[2018]: 2025-09-09 05:12:45.453 [INFO][5268] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 05:12:45.701171 containerd[2018]: 2025-09-09 05:12:45.454 [INFO][5268] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 9 05:12:45.701171 containerd[2018]: 2025-09-09 05:12:45.454 [INFO][5268] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-30-120' Sep 9 05:12:45.701171 containerd[2018]: 2025-09-09 05:12:45.492 [INFO][5268] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.90f8d77f6350a0dea9d1558d3e67cf2b34b1c3ff369ad1076d5fabd192c97d5c" host="ip-172-31-30-120" Sep 9 05:12:45.701171 containerd[2018]: 2025-09-09 05:12:45.506 [INFO][5268] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-30-120" Sep 9 05:12:45.701171 containerd[2018]: 2025-09-09 05:12:45.519 [INFO][5268] ipam/ipam.go 511: Trying affinity for 192.168.74.128/26 host="ip-172-31-30-120" Sep 9 05:12:45.701171 containerd[2018]: 2025-09-09 05:12:45.527 [INFO][5268] ipam/ipam.go 158: Attempting to load block cidr=192.168.74.128/26 host="ip-172-31-30-120" Sep 9 05:12:45.701171 containerd[2018]: 2025-09-09 05:12:45.534 [INFO][5268] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.74.128/26 host="ip-172-31-30-120" Sep 9 05:12:45.701171 containerd[2018]: 2025-09-09 05:12:45.535 [INFO][5268] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.74.128/26 handle="k8s-pod-network.90f8d77f6350a0dea9d1558d3e67cf2b34b1c3ff369ad1076d5fabd192c97d5c" host="ip-172-31-30-120" Sep 9 05:12:45.701171 containerd[2018]: 2025-09-09 05:12:45.541 [INFO][5268] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.90f8d77f6350a0dea9d1558d3e67cf2b34b1c3ff369ad1076d5fabd192c97d5c Sep 9 05:12:45.701171 containerd[2018]: 2025-09-09 05:12:45.552 [INFO][5268] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.74.128/26 handle="k8s-pod-network.90f8d77f6350a0dea9d1558d3e67cf2b34b1c3ff369ad1076d5fabd192c97d5c" host="ip-172-31-30-120" Sep 9 05:12:45.701171 containerd[2018]: 2025-09-09 05:12:45.585 [INFO][5268] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.74.135/26] block=192.168.74.128/26 
handle="k8s-pod-network.90f8d77f6350a0dea9d1558d3e67cf2b34b1c3ff369ad1076d5fabd192c97d5c" host="ip-172-31-30-120" Sep 9 05:12:45.701171 containerd[2018]: 2025-09-09 05:12:45.585 [INFO][5268] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.74.135/26] handle="k8s-pod-network.90f8d77f6350a0dea9d1558d3e67cf2b34b1c3ff369ad1076d5fabd192c97d5c" host="ip-172-31-30-120" Sep 9 05:12:45.701171 containerd[2018]: 2025-09-09 05:12:45.585 [INFO][5268] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 05:12:45.701171 containerd[2018]: 2025-09-09 05:12:45.585 [INFO][5268] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.74.135/26] IPv6=[] ContainerID="90f8d77f6350a0dea9d1558d3e67cf2b34b1c3ff369ad1076d5fabd192c97d5c" HandleID="k8s-pod-network.90f8d77f6350a0dea9d1558d3e67cf2b34b1c3ff369ad1076d5fabd192c97d5c" Workload="ip--172--31--30--120-k8s-goldmane--54d579b49d--fwmcq-eth0" Sep 9 05:12:45.703408 containerd[2018]: 2025-09-09 05:12:45.609 [INFO][5238] cni-plugin/k8s.go 418: Populated endpoint ContainerID="90f8d77f6350a0dea9d1558d3e67cf2b34b1c3ff369ad1076d5fabd192c97d5c" Namespace="calico-system" Pod="goldmane-54d579b49d-fwmcq" WorkloadEndpoint="ip--172--31--30--120-k8s-goldmane--54d579b49d--fwmcq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--120-k8s-goldmane--54d579b49d--fwmcq-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"4d67339e-bbba-44a4-b8d7-f30c051b16ee", ResourceVersion:"817", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 5, 12, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-120", ContainerID:"", Pod:"goldmane-54d579b49d-fwmcq", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.74.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali83367e61235", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 05:12:45.703408 containerd[2018]: 2025-09-09 05:12:45.609 [INFO][5238] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.74.135/32] ContainerID="90f8d77f6350a0dea9d1558d3e67cf2b34b1c3ff369ad1076d5fabd192c97d5c" Namespace="calico-system" Pod="goldmane-54d579b49d-fwmcq" WorkloadEndpoint="ip--172--31--30--120-k8s-goldmane--54d579b49d--fwmcq-eth0" Sep 9 05:12:45.703408 containerd[2018]: 2025-09-09 05:12:45.609 [INFO][5238] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali83367e61235 ContainerID="90f8d77f6350a0dea9d1558d3e67cf2b34b1c3ff369ad1076d5fabd192c97d5c" Namespace="calico-system" Pod="goldmane-54d579b49d-fwmcq" WorkloadEndpoint="ip--172--31--30--120-k8s-goldmane--54d579b49d--fwmcq-eth0" Sep 9 05:12:45.703408 containerd[2018]: 2025-09-09 05:12:45.631 [INFO][5238] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="90f8d77f6350a0dea9d1558d3e67cf2b34b1c3ff369ad1076d5fabd192c97d5c" Namespace="calico-system" Pod="goldmane-54d579b49d-fwmcq" WorkloadEndpoint="ip--172--31--30--120-k8s-goldmane--54d579b49d--fwmcq-eth0" Sep 9 05:12:45.703408 containerd[2018]: 2025-09-09 05:12:45.633 [INFO][5238] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="90f8d77f6350a0dea9d1558d3e67cf2b34b1c3ff369ad1076d5fabd192c97d5c" Namespace="calico-system" Pod="goldmane-54d579b49d-fwmcq" WorkloadEndpoint="ip--172--31--30--120-k8s-goldmane--54d579b49d--fwmcq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--120-k8s-goldmane--54d579b49d--fwmcq-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"4d67339e-bbba-44a4-b8d7-f30c051b16ee", ResourceVersion:"817", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 5, 12, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-120", ContainerID:"90f8d77f6350a0dea9d1558d3e67cf2b34b1c3ff369ad1076d5fabd192c97d5c", Pod:"goldmane-54d579b49d-fwmcq", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.74.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali83367e61235", MAC:"ee:72:bf:70:f1:7a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 05:12:45.703408 containerd[2018]: 2025-09-09 05:12:45.692 [INFO][5238] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="90f8d77f6350a0dea9d1558d3e67cf2b34b1c3ff369ad1076d5fabd192c97d5c" Namespace="calico-system" Pod="goldmane-54d579b49d-fwmcq" 
WorkloadEndpoint="ip--172--31--30--120-k8s-goldmane--54d579b49d--fwmcq-eth0" Sep 9 05:12:45.705192 kubelet[3513]: I0909 05:12:45.704330 3513 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-9wd6t" podStartSLOduration=50.704278729 podStartE2EDuration="50.704278729s" podCreationTimestamp="2025-09-09 05:11:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 05:12:45.678614365 +0000 UTC m=+56.871831691" watchObservedRunningTime="2025-09-09 05:12:45.704278729 +0000 UTC m=+56.897496103" Sep 9 05:12:45.792766 containerd[2018]: time="2025-09-09T05:12:45.792472358Z" level=info msg="connecting to shim 90f8d77f6350a0dea9d1558d3e67cf2b34b1c3ff369ad1076d5fabd192c97d5c" address="unix:///run/containerd/s/e0d837828586f7d6e0dea6f61aca15825ef02fb9cc07072780cc6c37d8144144" namespace=k8s.io protocol=ttrpc version=3 Sep 9 05:12:45.922528 systemd[1]: Started cri-containerd-90f8d77f6350a0dea9d1558d3e67cf2b34b1c3ff369ad1076d5fabd192c97d5c.scope - libcontainer container 90f8d77f6350a0dea9d1558d3e67cf2b34b1c3ff369ad1076d5fabd192c97d5c. 
Sep 9 05:12:46.033658 containerd[2018]: time="2025-09-09T05:12:46.033257291Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-vxj2z,Uid:76a55f58-17dd-4feb-9f04-7ed611b8fe4a,Namespace:calico-system,Attempt:0,}" Sep 9 05:12:46.409802 containerd[2018]: time="2025-09-09T05:12:46.409587685Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-fwmcq,Uid:4d67339e-bbba-44a4-b8d7-f30c051b16ee,Namespace:calico-system,Attempt:0,} returns sandbox id \"90f8d77f6350a0dea9d1558d3e67cf2b34b1c3ff369ad1076d5fabd192c97d5c\"" Sep 9 05:12:46.485295 systemd-networkd[1620]: cali1b4c0778deb: Gained IPv6LL Sep 9 05:12:46.516123 systemd-networkd[1620]: cali3290ae37852: Link UP Sep 9 05:12:46.517944 systemd-networkd[1620]: cali3290ae37852: Gained carrier Sep 9 05:12:46.556164 containerd[2018]: 2025-09-09 05:12:46.307 [INFO][5361] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--30--120-k8s-csi--node--driver--vxj2z-eth0 csi-node-driver- calico-system 76a55f58-17dd-4feb-9f04-7ed611b8fe4a 678 0 2025-09-09 05:12:18 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6c96d95cc7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ip-172-31-30-120 csi-node-driver-vxj2z eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali3290ae37852 [] [] }} ContainerID="52f7590363bf36682eccbde8db12b30d8fdf2fe2b6ee4f9692fdb306fc99cb24" Namespace="calico-system" Pod="csi-node-driver-vxj2z" WorkloadEndpoint="ip--172--31--30--120-k8s-csi--node--driver--vxj2z-" Sep 9 05:12:46.556164 containerd[2018]: 2025-09-09 05:12:46.308 [INFO][5361] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="52f7590363bf36682eccbde8db12b30d8fdf2fe2b6ee4f9692fdb306fc99cb24" 
Namespace="calico-system" Pod="csi-node-driver-vxj2z" WorkloadEndpoint="ip--172--31--30--120-k8s-csi--node--driver--vxj2z-eth0" Sep 9 05:12:46.556164 containerd[2018]: 2025-09-09 05:12:46.401 [INFO][5375] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="52f7590363bf36682eccbde8db12b30d8fdf2fe2b6ee4f9692fdb306fc99cb24" HandleID="k8s-pod-network.52f7590363bf36682eccbde8db12b30d8fdf2fe2b6ee4f9692fdb306fc99cb24" Workload="ip--172--31--30--120-k8s-csi--node--driver--vxj2z-eth0" Sep 9 05:12:46.556164 containerd[2018]: 2025-09-09 05:12:46.402 [INFO][5375] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="52f7590363bf36682eccbde8db12b30d8fdf2fe2b6ee4f9692fdb306fc99cb24" HandleID="k8s-pod-network.52f7590363bf36682eccbde8db12b30d8fdf2fe2b6ee4f9692fdb306fc99cb24" Workload="ip--172--31--30--120-k8s-csi--node--driver--vxj2z-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004d9d0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-30-120", "pod":"csi-node-driver-vxj2z", "timestamp":"2025-09-09 05:12:46.401826577 +0000 UTC"}, Hostname:"ip-172-31-30-120", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 9 05:12:46.556164 containerd[2018]: 2025-09-09 05:12:46.402 [INFO][5375] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 05:12:46.556164 containerd[2018]: 2025-09-09 05:12:46.403 [INFO][5375] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 9 05:12:46.556164 containerd[2018]: 2025-09-09 05:12:46.403 [INFO][5375] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-30-120' Sep 9 05:12:46.556164 containerd[2018]: 2025-09-09 05:12:46.428 [INFO][5375] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.52f7590363bf36682eccbde8db12b30d8fdf2fe2b6ee4f9692fdb306fc99cb24" host="ip-172-31-30-120" Sep 9 05:12:46.556164 containerd[2018]: 2025-09-09 05:12:46.437 [INFO][5375] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-30-120" Sep 9 05:12:46.556164 containerd[2018]: 2025-09-09 05:12:46.447 [INFO][5375] ipam/ipam.go 511: Trying affinity for 192.168.74.128/26 host="ip-172-31-30-120" Sep 9 05:12:46.556164 containerd[2018]: 2025-09-09 05:12:46.452 [INFO][5375] ipam/ipam.go 158: Attempting to load block cidr=192.168.74.128/26 host="ip-172-31-30-120" Sep 9 05:12:46.556164 containerd[2018]: 2025-09-09 05:12:46.459 [INFO][5375] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.74.128/26 host="ip-172-31-30-120" Sep 9 05:12:46.556164 containerd[2018]: 2025-09-09 05:12:46.459 [INFO][5375] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.74.128/26 handle="k8s-pod-network.52f7590363bf36682eccbde8db12b30d8fdf2fe2b6ee4f9692fdb306fc99cb24" host="ip-172-31-30-120" Sep 9 05:12:46.556164 containerd[2018]: 2025-09-09 05:12:46.463 [INFO][5375] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.52f7590363bf36682eccbde8db12b30d8fdf2fe2b6ee4f9692fdb306fc99cb24 Sep 9 05:12:46.556164 containerd[2018]: 2025-09-09 05:12:46.471 [INFO][5375] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.74.128/26 handle="k8s-pod-network.52f7590363bf36682eccbde8db12b30d8fdf2fe2b6ee4f9692fdb306fc99cb24" host="ip-172-31-30-120" Sep 9 05:12:46.556164 containerd[2018]: 2025-09-09 05:12:46.494 [INFO][5375] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.74.136/26] block=192.168.74.128/26 
handle="k8s-pod-network.52f7590363bf36682eccbde8db12b30d8fdf2fe2b6ee4f9692fdb306fc99cb24" host="ip-172-31-30-120" Sep 9 05:12:46.556164 containerd[2018]: 2025-09-09 05:12:46.494 [INFO][5375] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.74.136/26] handle="k8s-pod-network.52f7590363bf36682eccbde8db12b30d8fdf2fe2b6ee4f9692fdb306fc99cb24" host="ip-172-31-30-120" Sep 9 05:12:46.556164 containerd[2018]: 2025-09-09 05:12:46.494 [INFO][5375] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 05:12:46.556164 containerd[2018]: 2025-09-09 05:12:46.494 [INFO][5375] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.74.136/26] IPv6=[] ContainerID="52f7590363bf36682eccbde8db12b30d8fdf2fe2b6ee4f9692fdb306fc99cb24" HandleID="k8s-pod-network.52f7590363bf36682eccbde8db12b30d8fdf2fe2b6ee4f9692fdb306fc99cb24" Workload="ip--172--31--30--120-k8s-csi--node--driver--vxj2z-eth0" Sep 9 05:12:46.560542 containerd[2018]: 2025-09-09 05:12:46.503 [INFO][5361] cni-plugin/k8s.go 418: Populated endpoint ContainerID="52f7590363bf36682eccbde8db12b30d8fdf2fe2b6ee4f9692fdb306fc99cb24" Namespace="calico-system" Pod="csi-node-driver-vxj2z" WorkloadEndpoint="ip--172--31--30--120-k8s-csi--node--driver--vxj2z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--120-k8s-csi--node--driver--vxj2z-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"76a55f58-17dd-4feb-9f04-7ed611b8fe4a", ResourceVersion:"678", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 5, 12, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-120", ContainerID:"", Pod:"csi-node-driver-vxj2z", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.74.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali3290ae37852", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 05:12:46.560542 containerd[2018]: 2025-09-09 05:12:46.503 [INFO][5361] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.74.136/32] ContainerID="52f7590363bf36682eccbde8db12b30d8fdf2fe2b6ee4f9692fdb306fc99cb24" Namespace="calico-system" Pod="csi-node-driver-vxj2z" WorkloadEndpoint="ip--172--31--30--120-k8s-csi--node--driver--vxj2z-eth0" Sep 9 05:12:46.560542 containerd[2018]: 2025-09-09 05:12:46.504 [INFO][5361] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3290ae37852 ContainerID="52f7590363bf36682eccbde8db12b30d8fdf2fe2b6ee4f9692fdb306fc99cb24" Namespace="calico-system" Pod="csi-node-driver-vxj2z" WorkloadEndpoint="ip--172--31--30--120-k8s-csi--node--driver--vxj2z-eth0" Sep 9 05:12:46.560542 containerd[2018]: 2025-09-09 05:12:46.518 [INFO][5361] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="52f7590363bf36682eccbde8db12b30d8fdf2fe2b6ee4f9692fdb306fc99cb24" Namespace="calico-system" Pod="csi-node-driver-vxj2z" WorkloadEndpoint="ip--172--31--30--120-k8s-csi--node--driver--vxj2z-eth0" Sep 9 05:12:46.560542 containerd[2018]: 2025-09-09 05:12:46.519 [INFO][5361] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="52f7590363bf36682eccbde8db12b30d8fdf2fe2b6ee4f9692fdb306fc99cb24" Namespace="calico-system" Pod="csi-node-driver-vxj2z" WorkloadEndpoint="ip--172--31--30--120-k8s-csi--node--driver--vxj2z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--120-k8s-csi--node--driver--vxj2z-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"76a55f58-17dd-4feb-9f04-7ed611b8fe4a", ResourceVersion:"678", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 5, 12, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-120", ContainerID:"52f7590363bf36682eccbde8db12b30d8fdf2fe2b6ee4f9692fdb306fc99cb24", Pod:"csi-node-driver-vxj2z", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.74.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali3290ae37852", MAC:"66:6c:84:60:39:cb", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 05:12:46.560542 containerd[2018]: 2025-09-09 05:12:46.543 [INFO][5361] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="52f7590363bf36682eccbde8db12b30d8fdf2fe2b6ee4f9692fdb306fc99cb24" Namespace="calico-system" Pod="csi-node-driver-vxj2z" WorkloadEndpoint="ip--172--31--30--120-k8s-csi--node--driver--vxj2z-eth0" Sep 9 05:12:46.613231 systemd-networkd[1620]: cali807c23d02a4: Gained IPv6LL Sep 9 05:12:46.646853 containerd[2018]: time="2025-09-09T05:12:46.645315386Z" level=info msg="connecting to shim 52f7590363bf36682eccbde8db12b30d8fdf2fe2b6ee4f9692fdb306fc99cb24" address="unix:///run/containerd/s/59907dd13e37871564cf5b0f7e0d092339dda760f9d3b826681ce7bc39a2a3eb" namespace=k8s.io protocol=ttrpc version=3 Sep 9 05:12:46.724393 systemd[1]: Started cri-containerd-52f7590363bf36682eccbde8db12b30d8fdf2fe2b6ee4f9692fdb306fc99cb24.scope - libcontainer container 52f7590363bf36682eccbde8db12b30d8fdf2fe2b6ee4f9692fdb306fc99cb24. Sep 9 05:12:46.872569 containerd[2018]: time="2025-09-09T05:12:46.872517819Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-vxj2z,Uid:76a55f58-17dd-4feb-9f04-7ed611b8fe4a,Namespace:calico-system,Attempt:0,} returns sandbox id \"52f7590363bf36682eccbde8db12b30d8fdf2fe2b6ee4f9692fdb306fc99cb24\"" Sep 9 05:12:47.455422 kubelet[3513]: I0909 05:12:47.455363 3513 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 9 05:12:47.510971 systemd-networkd[1620]: cali83367e61235: Gained IPv6LL Sep 9 05:12:47.776951 containerd[2018]: time="2025-09-09T05:12:47.776263300Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:12:47.781753 containerd[2018]: time="2025-09-09T05:12:47.781684828Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.3: active requests=0, bytes read=48134957" Sep 9 05:12:47.787052 containerd[2018]: time="2025-09-09T05:12:47.785908348Z" level=info msg="ImageCreate event name:\"sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:12:47.797196 containerd[2018]: time="2025-09-09T05:12:47.797081356Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:12:47.801305 containerd[2018]: time="2025-09-09T05:12:47.801110284Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" with image id \"sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\", size \"49504166\" in 4.479290626s" Sep 9 05:12:47.801305 containerd[2018]: time="2025-09-09T05:12:47.801181708Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" returns image reference \"sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a\"" Sep 9 05:12:47.805733 containerd[2018]: time="2025-09-09T05:12:47.805282924Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 9 05:12:47.813344 containerd[2018]: time="2025-09-09T05:12:47.812955352Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8dbe4da0cef4ac8b7957d942dade62966441426ef4902926b133d71fb1249f27\" id:\"703c00954fd49fef82354a31da2531dbc77f6801195e8dd78431943bd7f59790\" pid:5457 exited_at:{seconds:1757394767 nanos:806501368}" Sep 9 05:12:47.832084 containerd[2018]: time="2025-09-09T05:12:47.831998248Z" level=info msg="CreateContainer within sandbox \"e7c41eb5be49249434876e3b18d8302b7d9934a618552a324406c3c8fe751d45\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Sep 9 05:12:47.858048 containerd[2018]: time="2025-09-09T05:12:47.857977612Z" level=info msg="Container 
04aa5969b3b2da5c3079fc9aaa80885ca01e53db6c5dd862b6c75d288586015e: CDI devices from CRI Config.CDIDevices: []" Sep 9 05:12:47.906353 containerd[2018]: time="2025-09-09T05:12:47.906251296Z" level=info msg="CreateContainer within sandbox \"e7c41eb5be49249434876e3b18d8302b7d9934a618552a324406c3c8fe751d45\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"04aa5969b3b2da5c3079fc9aaa80885ca01e53db6c5dd862b6c75d288586015e\"" Sep 9 05:12:47.909178 containerd[2018]: time="2025-09-09T05:12:47.909096712Z" level=info msg="StartContainer for \"04aa5969b3b2da5c3079fc9aaa80885ca01e53db6c5dd862b6c75d288586015e\"" Sep 9 05:12:47.912152 containerd[2018]: time="2025-09-09T05:12:47.911844856Z" level=info msg="connecting to shim 04aa5969b3b2da5c3079fc9aaa80885ca01e53db6c5dd862b6c75d288586015e" address="unix:///run/containerd/s/39bb83980eb4ddcb694840b3efff8d883362717a64b2a9ee8cc166ce751b8b86" protocol=ttrpc version=3 Sep 9 05:12:47.987694 systemd[1]: Started cri-containerd-04aa5969b3b2da5c3079fc9aaa80885ca01e53db6c5dd862b6c75d288586015e.scope - libcontainer container 04aa5969b3b2da5c3079fc9aaa80885ca01e53db6c5dd862b6c75d288586015e. 
Sep 9 05:12:48.116993 containerd[2018]: time="2025-09-09T05:12:48.116825893Z" level=info msg="StartContainer for \"04aa5969b3b2da5c3079fc9aaa80885ca01e53db6c5dd862b6c75d288586015e\" returns successfully" Sep 9 05:12:48.178482 containerd[2018]: time="2025-09-09T05:12:48.178418234Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8dbe4da0cef4ac8b7957d942dade62966441426ef4902926b133d71fb1249f27\" id:\"7de53a094a5f13b7332cae1475293eb9a1050b45dbc774f31f3abd8f72ca98c3\" pid:5485 exited_at:{seconds:1757394768 nanos:175622942}" Sep 9 05:12:48.533220 systemd-networkd[1620]: cali3290ae37852: Gained IPv6LL Sep 9 05:12:48.792870 containerd[2018]: time="2025-09-09T05:12:48.792684101Z" level=info msg="TaskExit event in podsandbox handler container_id:\"04aa5969b3b2da5c3079fc9aaa80885ca01e53db6c5dd862b6c75d288586015e\" id:\"651ae0d29935cd490fdf6cb297c4a85e6fe4f64364691a4df1f63f21bc397a7a\" pid:5555 exited_at:{seconds:1757394768 nanos:791854841}" Sep 9 05:12:48.856273 kubelet[3513]: I0909 05:12:48.856155 3513 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-6db7d4b486-nw48g" podStartSLOduration=24.827400959 podStartE2EDuration="30.856130309s" podCreationTimestamp="2025-09-09 05:12:18 +0000 UTC" firstStartedPulling="2025-09-09 05:12:41.775212922 +0000 UTC m=+52.968430188" lastFinishedPulling="2025-09-09 05:12:47.803942284 +0000 UTC m=+58.997159538" observedRunningTime="2025-09-09 05:12:48.693288388 +0000 UTC m=+59.886505702" watchObservedRunningTime="2025-09-09 05:12:48.856130309 +0000 UTC m=+60.049347959" Sep 9 05:12:51.260534 ntpd[1990]: Listen normally on 6 vxlan.calico 192.168.74.128:123 Sep 9 05:12:51.261692 ntpd[1990]: 9 Sep 05:12:51 ntpd[1990]: Listen normally on 6 vxlan.calico 192.168.74.128:123 Sep 9 05:12:51.261827 ntpd[1990]: Listen normally on 7 calia5aa73c9881 [fe80::ecee:eeff:feee:eeee%4]:123 Sep 9 05:12:51.262922 ntpd[1990]: 9 Sep 05:12:51 ntpd[1990]: Listen normally on 7 
calia5aa73c9881 [fe80::ecee:eeff:feee:eeee%4]:123 Sep 9 05:12:51.262922 ntpd[1990]: 9 Sep 05:12:51 ntpd[1990]: Listen normally on 8 cali6f8b812d575 [fe80::ecee:eeff:feee:eeee%5]:123 Sep 9 05:12:51.262922 ntpd[1990]: 9 Sep 05:12:51 ntpd[1990]: Listen normally on 9 vxlan.calico [fe80::645f:29ff:fed6:a68a%6]:123 Sep 9 05:12:51.262922 ntpd[1990]: 9 Sep 05:12:51 ntpd[1990]: Listen normally on 10 cali79ca33745a9 [fe80::ecee:eeff:feee:eeee%9]:123 Sep 9 05:12:51.262922 ntpd[1990]: 9 Sep 05:12:51 ntpd[1990]: Listen normally on 11 cali807a9020347 [fe80::ecee:eeff:feee:eeee%10]:123 Sep 9 05:12:51.262922 ntpd[1990]: 9 Sep 05:12:51 ntpd[1990]: Listen normally on 12 cali1b4c0778deb [fe80::ecee:eeff:feee:eeee%11]:123 Sep 9 05:12:51.262922 ntpd[1990]: 9 Sep 05:12:51 ntpd[1990]: Listen normally on 13 cali807c23d02a4 [fe80::ecee:eeff:feee:eeee%12]:123 Sep 9 05:12:51.262922 ntpd[1990]: 9 Sep 05:12:51 ntpd[1990]: Listen normally on 14 cali83367e61235 [fe80::ecee:eeff:feee:eeee%13]:123 Sep 9 05:12:51.262922 ntpd[1990]: 9 Sep 05:12:51 ntpd[1990]: Listen normally on 15 cali3290ae37852 [fe80::ecee:eeff:feee:eeee%14]:123 Sep 9 05:12:51.261996 ntpd[1990]: Listen normally on 8 cali6f8b812d575 [fe80::ecee:eeff:feee:eeee%5]:123 Sep 9 05:12:51.262125 ntpd[1990]: Listen normally on 9 vxlan.calico [fe80::645f:29ff:fed6:a68a%6]:123 Sep 9 05:12:51.262196 ntpd[1990]: Listen normally on 10 cali79ca33745a9 [fe80::ecee:eeff:feee:eeee%9]:123 Sep 9 05:12:51.262262 ntpd[1990]: Listen normally on 11 cali807a9020347 [fe80::ecee:eeff:feee:eeee%10]:123 Sep 9 05:12:51.262327 ntpd[1990]: Listen normally on 12 cali1b4c0778deb [fe80::ecee:eeff:feee:eeee%11]:123 Sep 9 05:12:51.262392 ntpd[1990]: Listen normally on 13 cali807c23d02a4 [fe80::ecee:eeff:feee:eeee%12]:123 Sep 9 05:12:51.262465 ntpd[1990]: Listen normally on 14 cali83367e61235 [fe80::ecee:eeff:feee:eeee%13]:123 Sep 9 05:12:51.262532 ntpd[1990]: Listen normally on 15 cali3290ae37852 [fe80::ecee:eeff:feee:eeee%14]:123 Sep 9 05:12:53.196655 systemd[1]: 
Started sshd@7-172.31.30.120:22-147.75.109.163:35540.service - OpenSSH per-connection server daemon (147.75.109.163:35540). Sep 9 05:12:53.472636 sshd[5575]: Accepted publickey for core from 147.75.109.163 port 35540 ssh2: RSA SHA256:2zofKxj4FUHLlH333Y3QwKckI2YUnW4mC4hWB8zCARI Sep 9 05:12:53.477898 sshd-session[5575]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 05:12:53.504476 systemd-logind[1998]: New session 8 of user core. Sep 9 05:12:53.509677 systemd[1]: Started session-8.scope - Session 8 of User core. Sep 9 05:12:53.923535 sshd[5585]: Connection closed by 147.75.109.163 port 35540 Sep 9 05:12:53.924287 sshd-session[5575]: pam_unix(sshd:session): session closed for user core Sep 9 05:12:53.937432 systemd[1]: session-8.scope: Deactivated successfully. Sep 9 05:12:53.939617 systemd[1]: sshd@7-172.31.30.120:22-147.75.109.163:35540.service: Deactivated successfully. Sep 9 05:12:53.957927 systemd-logind[1998]: Session 8 logged out. Waiting for processes to exit. Sep 9 05:12:53.967484 systemd-logind[1998]: Removed session 8. 
Sep 9 05:12:54.476560 containerd[2018]: time="2025-09-09T05:12:54.476450985Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:12:54.479947 containerd[2018]: time="2025-09-09T05:12:54.479785425Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=44530807" Sep 9 05:12:54.482894 containerd[2018]: time="2025-09-09T05:12:54.482509209Z" level=info msg="ImageCreate event name:\"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:12:54.490268 containerd[2018]: time="2025-09-09T05:12:54.490171377Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:12:54.493057 containerd[2018]: time="2025-09-09T05:12:54.492254037Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"45900064\" in 6.686903517s" Sep 9 05:12:54.493057 containerd[2018]: time="2025-09-09T05:12:54.492321705Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\"" Sep 9 05:12:54.498698 containerd[2018]: time="2025-09-09T05:12:54.497696901Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\"" Sep 9 05:12:54.504367 containerd[2018]: time="2025-09-09T05:12:54.504301377Z" level=info msg="CreateContainer within sandbox 
\"6a615b5def4e5e462ec032afd51c656116632ac1fb88b04ca0058f5f3bc5f44f\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 9 05:12:54.526286 containerd[2018]: time="2025-09-09T05:12:54.526216653Z" level=info msg="Container bf57b3f4e43740e21c724e5851cde702445d26d01c7a48f7d25728146cf001e3: CDI devices from CRI Config.CDIDevices: []" Sep 9 05:12:54.544883 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4002068091.mount: Deactivated successfully. Sep 9 05:12:54.569388 containerd[2018]: time="2025-09-09T05:12:54.568897869Z" level=info msg="CreateContainer within sandbox \"6a615b5def4e5e462ec032afd51c656116632ac1fb88b04ca0058f5f3bc5f44f\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"bf57b3f4e43740e21c724e5851cde702445d26d01c7a48f7d25728146cf001e3\"" Sep 9 05:12:54.577061 containerd[2018]: time="2025-09-09T05:12:54.575308930Z" level=info msg="StartContainer for \"bf57b3f4e43740e21c724e5851cde702445d26d01c7a48f7d25728146cf001e3\"" Sep 9 05:12:54.579060 containerd[2018]: time="2025-09-09T05:12:54.578966986Z" level=info msg="connecting to shim bf57b3f4e43740e21c724e5851cde702445d26d01c7a48f7d25728146cf001e3" address="unix:///run/containerd/s/9331c1f97c99cabb98e298fba00afc0fe10885c6e32743cd7a2e41ced4a4adf5" protocol=ttrpc version=3 Sep 9 05:12:54.681655 systemd[1]: Started cri-containerd-bf57b3f4e43740e21c724e5851cde702445d26d01c7a48f7d25728146cf001e3.scope - libcontainer container bf57b3f4e43740e21c724e5851cde702445d26d01c7a48f7d25728146cf001e3. 
Sep 9 05:12:54.817591 containerd[2018]: time="2025-09-09T05:12:54.817402907Z" level=info msg="StartContainer for \"bf57b3f4e43740e21c724e5851cde702445d26d01c7a48f7d25728146cf001e3\" returns successfully" Sep 9 05:12:55.743063 kubelet[3513]: I0909 05:12:55.742633 3513 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5fc8557bf8-m59gl" podStartSLOduration=36.915230136 podStartE2EDuration="48.742609007s" podCreationTimestamp="2025-09-09 05:12:07 +0000 UTC" firstStartedPulling="2025-09-09 05:12:42.669589726 +0000 UTC m=+53.862806992" lastFinishedPulling="2025-09-09 05:12:54.496968609 +0000 UTC m=+65.690185863" observedRunningTime="2025-09-09 05:12:55.740253059 +0000 UTC m=+66.933470325" watchObservedRunningTime="2025-09-09 05:12:55.742609007 +0000 UTC m=+66.935826273" Sep 9 05:12:58.970304 systemd[1]: Started sshd@8-172.31.30.120:22-147.75.109.163:35552.service - OpenSSH per-connection server daemon (147.75.109.163:35552). Sep 9 05:12:59.200867 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2633558896.mount: Deactivated successfully. 
Sep 9 05:12:59.240968 sshd[5649]: Accepted publickey for core from 147.75.109.163 port 35552 ssh2: RSA SHA256:2zofKxj4FUHLlH333Y3QwKckI2YUnW4mC4hWB8zCARI Sep 9 05:12:59.247143 containerd[2018]: time="2025-09-09T05:12:59.246282985Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:12:59.246717 sshd-session[5649]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 05:12:59.250340 containerd[2018]: time="2025-09-09T05:12:59.249759505Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.3: active requests=0, bytes read=30823700" Sep 9 05:12:59.260508 containerd[2018]: time="2025-09-09T05:12:59.258905389Z" level=info msg="ImageCreate event name:\"sha256:e210e86234bc99f018431b30477c5ca2ad6f7ecf67ef011498f7beb48fb0b21f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:12:59.268890 systemd-logind[1998]: New session 9 of user core. 
Sep 9 05:12:59.280232 containerd[2018]: time="2025-09-09T05:12:59.280125985Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:12:59.288497 containerd[2018]: time="2025-09-09T05:12:59.288430633Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" with image id \"sha256:e210e86234bc99f018431b30477c5ca2ad6f7ecf67ef011498f7beb48fb0b21f\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\", size \"30823530\" in 4.790669772s" Sep 9 05:12:59.288667 containerd[2018]: time="2025-09-09T05:12:59.288625117Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" returns image reference \"sha256:e210e86234bc99f018431b30477c5ca2ad6f7ecf67ef011498f7beb48fb0b21f\"" Sep 9 05:12:59.295318 containerd[2018]: time="2025-09-09T05:12:59.295256377Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 9 05:12:59.296531 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 9 05:12:59.316219 containerd[2018]: time="2025-09-09T05:12:59.313527361Z" level=info msg="CreateContainer within sandbox \"c162ea6156c610f76b6eb8b8078fd140f0d7314967b06ad3052eb61c821d82be\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Sep 9 05:12:59.355841 containerd[2018]: time="2025-09-09T05:12:59.355218625Z" level=info msg="Container 8f1a0de3881e057bbb7f508c219a9acf1542f8227f2eedb7d07ff7169b0a5f29: CDI devices from CRI Config.CDIDevices: []" Sep 9 05:12:59.355985 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1704442256.mount: Deactivated successfully. 
Sep 9 05:12:59.377054 containerd[2018]: time="2025-09-09T05:12:59.376948261Z" level=info msg="CreateContainer within sandbox \"c162ea6156c610f76b6eb8b8078fd140f0d7314967b06ad3052eb61c821d82be\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"8f1a0de3881e057bbb7f508c219a9acf1542f8227f2eedb7d07ff7169b0a5f29\""
Sep 9 05:12:59.379970 containerd[2018]: time="2025-09-09T05:12:59.379738669Z" level=info msg="StartContainer for \"8f1a0de3881e057bbb7f508c219a9acf1542f8227f2eedb7d07ff7169b0a5f29\""
Sep 9 05:12:59.388222 containerd[2018]: time="2025-09-09T05:12:59.388091437Z" level=info msg="connecting to shim 8f1a0de3881e057bbb7f508c219a9acf1542f8227f2eedb7d07ff7169b0a5f29" address="unix:///run/containerd/s/3c8f8f468d588d9e342a9b94b19cbd9920b790ed7399e65bafad9fcb3776c2b4" protocol=ttrpc version=3
Sep 9 05:12:59.450042 systemd[1]: Started cri-containerd-8f1a0de3881e057bbb7f508c219a9acf1542f8227f2eedb7d07ff7169b0a5f29.scope - libcontainer container 8f1a0de3881e057bbb7f508c219a9acf1542f8227f2eedb7d07ff7169b0a5f29.
Sep 9 05:12:59.700666 containerd[2018]: time="2025-09-09T05:12:59.700519095Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 05:12:59.707117 containerd[2018]: time="2025-09-09T05:12:59.706997991Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=77"
Sep 9 05:12:59.711074 sshd[5656]: Connection closed by 147.75.109.163 port 35552
Sep 9 05:12:59.710505 sshd-session[5649]: pam_unix(sshd:session): session closed for user core
Sep 9 05:12:59.722736 containerd[2018]: time="2025-09-09T05:12:59.722683947Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"45900064\" in 427.332014ms"
Sep 9 05:12:59.723113 containerd[2018]: time="2025-09-09T05:12:59.722925459Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\""
Sep 9 05:12:59.724345 systemd[1]: sshd@8-172.31.30.120:22-147.75.109.163:35552.service: Deactivated successfully.
Sep 9 05:12:59.728501 containerd[2018]: time="2025-09-09T05:12:59.728433987Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\""
Sep 9 05:12:59.734012 systemd[1]: session-9.scope: Deactivated successfully.
Sep 9 05:12:59.739271 containerd[2018]: time="2025-09-09T05:12:59.739219467Z" level=info msg="CreateContainer within sandbox \"c5b8d204186904fcac2000a9116b1cddf07a641920f3160bc44975abd5245140\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Sep 9 05:12:59.741451 systemd-logind[1998]: Session 9 logged out. Waiting for processes to exit.
Sep 9 05:12:59.749667 systemd-logind[1998]: Removed session 9.
Sep 9 05:12:59.780056 containerd[2018]: time="2025-09-09T05:12:59.779987007Z" level=info msg="Container 2ca5675abf4d988fa2bede945a9ad76f1ef2acc38b33b68b61413570a6d0fb7d: CDI devices from CRI Config.CDIDevices: []"
Sep 9 05:12:59.809429 containerd[2018]: time="2025-09-09T05:12:59.808450803Z" level=info msg="CreateContainer within sandbox \"c5b8d204186904fcac2000a9116b1cddf07a641920f3160bc44975abd5245140\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"2ca5675abf4d988fa2bede945a9ad76f1ef2acc38b33b68b61413570a6d0fb7d\""
Sep 9 05:12:59.812358 containerd[2018]: time="2025-09-09T05:12:59.812310760Z" level=info msg="StartContainer for \"2ca5675abf4d988fa2bede945a9ad76f1ef2acc38b33b68b61413570a6d0fb7d\""
Sep 9 05:12:59.816326 containerd[2018]: time="2025-09-09T05:12:59.815982340Z" level=info msg="connecting to shim 2ca5675abf4d988fa2bede945a9ad76f1ef2acc38b33b68b61413570a6d0fb7d" address="unix:///run/containerd/s/c6980ecc339e8eb1df350a084e37eb6f2fcc987fe2b03fdb1f9eb0a34cb3ace2" protocol=ttrpc version=3
Sep 9 05:12:59.859988 systemd[1]: Started cri-containerd-2ca5675abf4d988fa2bede945a9ad76f1ef2acc38b33b68b61413570a6d0fb7d.scope - libcontainer container 2ca5675abf4d988fa2bede945a9ad76f1ef2acc38b33b68b61413570a6d0fb7d.
Sep 9 05:12:59.964817 containerd[2018]: time="2025-09-09T05:12:59.964672180Z" level=info msg="StartContainer for \"8f1a0de3881e057bbb7f508c219a9acf1542f8227f2eedb7d07ff7169b0a5f29\" returns successfully"
Sep 9 05:13:00.203273 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1343932158.mount: Deactivated successfully.
Sep 9 05:13:00.342891 containerd[2018]: time="2025-09-09T05:13:00.342715838Z" level=info msg="StartContainer for \"2ca5675abf4d988fa2bede945a9ad76f1ef2acc38b33b68b61413570a6d0fb7d\" returns successfully"
Sep 9 05:13:00.936541 kubelet[3513]: I0909 05:13:00.936443 3513 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-674f66d6bb-rkcxx" podStartSLOduration=3.856651163 podStartE2EDuration="21.936420581s" podCreationTimestamp="2025-09-09 05:12:39 +0000 UTC" firstStartedPulling="2025-09-09 05:12:41.214369627 +0000 UTC m=+52.407586881" lastFinishedPulling="2025-09-09 05:12:59.294139045 +0000 UTC m=+70.487356299" observedRunningTime="2025-09-09 05:13:00.853282529 +0000 UTC m=+72.046499795" watchObservedRunningTime="2025-09-09 05:13:00.936420581 +0000 UTC m=+72.129637847"
Sep 9 05:13:01.816404 kubelet[3513]: I0909 05:13:01.816341 3513 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 9 05:13:03.423325 kubelet[3513]: I0909 05:13:03.423144 3513 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5fc8557bf8-kxfd4" podStartSLOduration=42.060831338 podStartE2EDuration="56.423094361s" podCreationTimestamp="2025-09-09 05:12:07 +0000 UTC" firstStartedPulling="2025-09-09 05:12:45.36418512 +0000 UTC m=+56.557402386" lastFinishedPulling="2025-09-09 05:12:59.726448155 +0000 UTC m=+70.919665409" observedRunningTime="2025-09-09 05:13:00.936330329 +0000 UTC m=+72.129547631" watchObservedRunningTime="2025-09-09 05:13:03.423094361 +0000 UTC m=+74.616311699"
Sep 9 05:13:04.756651 systemd[1]: Started sshd@9-172.31.30.120:22-147.75.109.163:52322.service - OpenSSH per-connection server daemon (147.75.109.163:52322).
Sep 9 05:13:05.001058 sshd[5749]: Accepted publickey for core from 147.75.109.163 port 52322 ssh2: RSA SHA256:2zofKxj4FUHLlH333Y3QwKckI2YUnW4mC4hWB8zCARI
Sep 9 05:13:05.005651 sshd-session[5749]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 05:13:05.022133 systemd-logind[1998]: New session 10 of user core.
Sep 9 05:13:05.032323 systemd[1]: Started session-10.scope - Session 10 of User core.
Sep 9 05:13:05.398232 sshd[5752]: Connection closed by 147.75.109.163 port 52322
Sep 9 05:13:05.397892 sshd-session[5749]: pam_unix(sshd:session): session closed for user core
Sep 9 05:13:05.407380 systemd[1]: sshd@9-172.31.30.120:22-147.75.109.163:52322.service: Deactivated successfully.
Sep 9 05:13:05.414003 systemd[1]: session-10.scope: Deactivated successfully.
Sep 9 05:13:05.422317 systemd-logind[1998]: Session 10 logged out. Waiting for processes to exit.
Sep 9 05:13:05.451925 systemd[1]: Started sshd@10-172.31.30.120:22-147.75.109.163:52332.service - OpenSSH per-connection server daemon (147.75.109.163:52332).
Sep 9 05:13:05.456626 systemd-logind[1998]: Removed session 10.
Sep 9 05:13:05.662626 sshd[5766]: Accepted publickey for core from 147.75.109.163 port 52332 ssh2: RSA SHA256:2zofKxj4FUHLlH333Y3QwKckI2YUnW4mC4hWB8zCARI
Sep 9 05:13:05.667615 sshd-session[5766]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 05:13:05.685999 systemd-logind[1998]: New session 11 of user core.
Sep 9 05:13:05.689308 systemd[1]: Started session-11.scope - Session 11 of User core.
Sep 9 05:13:06.252053 sshd[5769]: Connection closed by 147.75.109.163 port 52332
Sep 9 05:13:06.253614 sshd-session[5766]: pam_unix(sshd:session): session closed for user core
Sep 9 05:13:06.272320 systemd-logind[1998]: Session 11 logged out. Waiting for processes to exit.
Sep 9 05:13:06.273003 systemd[1]: sshd@10-172.31.30.120:22-147.75.109.163:52332.service: Deactivated successfully.
Sep 9 05:13:06.281415 systemd[1]: session-11.scope: Deactivated successfully.
Sep 9 05:13:06.314450 systemd[1]: Started sshd@11-172.31.30.120:22-147.75.109.163:52334.service - OpenSSH per-connection server daemon (147.75.109.163:52334).
Sep 9 05:13:06.326327 systemd-logind[1998]: Removed session 11.
Sep 9 05:13:06.558615 sshd[5783]: Accepted publickey for core from 147.75.109.163 port 52334 ssh2: RSA SHA256:2zofKxj4FUHLlH333Y3QwKckI2YUnW4mC4hWB8zCARI
Sep 9 05:13:06.563764 sshd-session[5783]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 05:13:06.581115 systemd-logind[1998]: New session 12 of user core.
Sep 9 05:13:06.587334 systemd[1]: Started session-12.scope - Session 12 of User core.
Sep 9 05:13:07.057834 sshd[5786]: Connection closed by 147.75.109.163 port 52334
Sep 9 05:13:07.059195 sshd-session[5783]: pam_unix(sshd:session): session closed for user core
Sep 9 05:13:07.072449 systemd[1]: sshd@11-172.31.30.120:22-147.75.109.163:52334.service: Deactivated successfully.
Sep 9 05:13:07.080132 systemd[1]: session-12.scope: Deactivated successfully.
Sep 9 05:13:07.092304 systemd-logind[1998]: Session 12 logged out. Waiting for processes to exit.
Sep 9 05:13:07.096873 systemd-logind[1998]: Removed session 12.
Sep 9 05:13:07.734094 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount318308093.mount: Deactivated successfully.
Sep 9 05:13:09.011671 containerd[2018]: time="2025-09-09T05:13:09.011586885Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 05:13:09.016054 containerd[2018]: time="2025-09-09T05:13:09.015658053Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.3: active requests=0, bytes read=61845332"
Sep 9 05:13:09.017903 containerd[2018]: time="2025-09-09T05:13:09.017816613Z" level=info msg="ImageCreate event name:\"sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 05:13:09.026049 containerd[2018]: time="2025-09-09T05:13:09.025870881Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 05:13:09.029157 containerd[2018]: time="2025-09-09T05:13:09.028937661Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" with image id \"sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\", size \"61845178\" in 9.30024709s"
Sep 9 05:13:09.029157 containerd[2018]: time="2025-09-09T05:13:09.029001765Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" returns image reference \"sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685\""
Sep 9 05:13:09.032052 containerd[2018]: time="2025-09-09T05:13:09.031656093Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\""
Sep 9 05:13:09.039052 containerd[2018]: time="2025-09-09T05:13:09.038273733Z" level=info msg="CreateContainer within sandbox \"90f8d77f6350a0dea9d1558d3e67cf2b34b1c3ff369ad1076d5fabd192c97d5c\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}"
Sep 9 05:13:09.059127 containerd[2018]: time="2025-09-09T05:13:09.058430241Z" level=info msg="Container c2f5b7664b682d70cc94852649a9ee89412951708307d76b96ab5b1082e3f623: CDI devices from CRI Config.CDIDevices: []"
Sep 9 05:13:09.077779 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1558069726.mount: Deactivated successfully.
Sep 9 05:13:09.100137 containerd[2018]: time="2025-09-09T05:13:09.099008386Z" level=info msg="CreateContainer within sandbox \"90f8d77f6350a0dea9d1558d3e67cf2b34b1c3ff369ad1076d5fabd192c97d5c\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"c2f5b7664b682d70cc94852649a9ee89412951708307d76b96ab5b1082e3f623\""
Sep 9 05:13:09.101191 containerd[2018]: time="2025-09-09T05:13:09.101077810Z" level=info msg="StartContainer for \"c2f5b7664b682d70cc94852649a9ee89412951708307d76b96ab5b1082e3f623\""
Sep 9 05:13:09.103873 containerd[2018]: time="2025-09-09T05:13:09.103805026Z" level=info msg="connecting to shim c2f5b7664b682d70cc94852649a9ee89412951708307d76b96ab5b1082e3f623" address="unix:///run/containerd/s/e0d837828586f7d6e0dea6f61aca15825ef02fb9cc07072780cc6c37d8144144" protocol=ttrpc version=3
Sep 9 05:13:09.156183 systemd[1]: Started cri-containerd-c2f5b7664b682d70cc94852649a9ee89412951708307d76b96ab5b1082e3f623.scope - libcontainer container c2f5b7664b682d70cc94852649a9ee89412951708307d76b96ab5b1082e3f623.
Sep 9 05:13:09.391476 containerd[2018]: time="2025-09-09T05:13:09.389511311Z" level=info msg="StartContainer for \"c2f5b7664b682d70cc94852649a9ee89412951708307d76b96ab5b1082e3f623\" returns successfully"
Sep 9 05:13:10.535808 containerd[2018]: time="2025-09-09T05:13:10.535302913Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c2f5b7664b682d70cc94852649a9ee89412951708307d76b96ab5b1082e3f623\" id:\"a862afd04f5e60385563358177f5162a4a74756ce60c532ba0d6acfcd941e11a\" pid:5854 exit_status:1 exited_at:{seconds:1757394790 nanos:523082689}"
Sep 9 05:13:10.981356 containerd[2018]: time="2025-09-09T05:13:10.981285183Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 05:13:10.983148 containerd[2018]: time="2025-09-09T05:13:10.983066007Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.3: active requests=0, bytes read=8227489"
Sep 9 05:13:10.988557 containerd[2018]: time="2025-09-09T05:13:10.988475295Z" level=info msg="ImageCreate event name:\"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 05:13:10.996049 containerd[2018]: time="2025-09-09T05:13:10.995772495Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 05:13:10.999399 containerd[2018]: time="2025-09-09T05:13:10.999314691Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.3\" with image id \"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\", size \"9596730\" in 1.967585782s"
Sep 9 05:13:10.999399 containerd[2018]: time="2025-09-09T05:13:10.999381555Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\" returns image reference \"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\""
Sep 9 05:13:11.011125 containerd[2018]: time="2025-09-09T05:13:11.011047319Z" level=info msg="CreateContainer within sandbox \"52f7590363bf36682eccbde8db12b30d8fdf2fe2b6ee4f9692fdb306fc99cb24\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}"
Sep 9 05:13:11.071434 containerd[2018]: time="2025-09-09T05:13:11.071361011Z" level=info msg="Container 74344e0c3d38406050fe10d5f558e739dea4c76e78c2d51658ec0f6ba766755a: CDI devices from CRI Config.CDIDevices: []"
Sep 9 05:13:11.100159 containerd[2018]: time="2025-09-09T05:13:11.100075416Z" level=info msg="CreateContainer within sandbox \"52f7590363bf36682eccbde8db12b30d8fdf2fe2b6ee4f9692fdb306fc99cb24\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"74344e0c3d38406050fe10d5f558e739dea4c76e78c2d51658ec0f6ba766755a\""
Sep 9 05:13:11.101570 containerd[2018]: time="2025-09-09T05:13:11.101512596Z" level=info msg="StartContainer for \"74344e0c3d38406050fe10d5f558e739dea4c76e78c2d51658ec0f6ba766755a\""
Sep 9 05:13:11.105616 containerd[2018]: time="2025-09-09T05:13:11.105489612Z" level=info msg="connecting to shim 74344e0c3d38406050fe10d5f558e739dea4c76e78c2d51658ec0f6ba766755a" address="unix:///run/containerd/s/59907dd13e37871564cf5b0f7e0d092339dda760f9d3b826681ce7bc39a2a3eb" protocol=ttrpc version=3
Sep 9 05:13:11.171468 systemd[1]: Started cri-containerd-74344e0c3d38406050fe10d5f558e739dea4c76e78c2d51658ec0f6ba766755a.scope - libcontainer container 74344e0c3d38406050fe10d5f558e739dea4c76e78c2d51658ec0f6ba766755a.
Sep 9 05:13:11.801767 containerd[2018]: time="2025-09-09T05:13:11.801597831Z" level=info msg="StartContainer for \"74344e0c3d38406050fe10d5f558e739dea4c76e78c2d51658ec0f6ba766755a\" returns successfully"
Sep 9 05:13:11.807658 containerd[2018]: time="2025-09-09T05:13:11.807308043Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\""
Sep 9 05:13:11.867204 containerd[2018]: time="2025-09-09T05:13:11.867148071Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c2f5b7664b682d70cc94852649a9ee89412951708307d76b96ab5b1082e3f623\" id:\"68bbb24c4f16f0096c986247ac486daacbb29b1f1d37e1abdeb394b43887994e\" pid:5886 exit_status:1 exited_at:{seconds:1757394791 nanos:865354047}"
Sep 9 05:13:12.102800 systemd[1]: Started sshd@12-172.31.30.120:22-147.75.109.163:55776.service - OpenSSH per-connection server daemon (147.75.109.163:55776).
Sep 9 05:13:12.346743 sshd[5931]: Accepted publickey for core from 147.75.109.163 port 55776 ssh2: RSA SHA256:2zofKxj4FUHLlH333Y3QwKckI2YUnW4mC4hWB8zCARI
Sep 9 05:13:12.350976 sshd-session[5931]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 05:13:12.362629 systemd-logind[1998]: New session 13 of user core.
Sep 9 05:13:12.372307 systemd[1]: Started session-13.scope - Session 13 of User core.
Sep 9 05:13:12.733852 sshd[5934]: Connection closed by 147.75.109.163 port 55776
Sep 9 05:13:12.734889 sshd-session[5931]: pam_unix(sshd:session): session closed for user core
Sep 9 05:13:12.750740 systemd[1]: sshd@12-172.31.30.120:22-147.75.109.163:55776.service: Deactivated successfully.
Sep 9 05:13:12.757775 systemd[1]: session-13.scope: Deactivated successfully.
Sep 9 05:13:12.763723 systemd-logind[1998]: Session 13 logged out. Waiting for processes to exit.
Sep 9 05:13:12.767376 systemd-logind[1998]: Removed session 13.
Sep 9 05:13:14.191726 containerd[2018]: time="2025-09-09T05:13:14.190494939Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 05:13:14.193998 containerd[2018]: time="2025-09-09T05:13:14.193941255Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3: active requests=0, bytes read=13761208"
Sep 9 05:13:14.195112 containerd[2018]: time="2025-09-09T05:13:14.195055851Z" level=info msg="ImageCreate event name:\"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 05:13:14.202278 containerd[2018]: time="2025-09-09T05:13:14.202219023Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 05:13:14.206526 containerd[2018]: time="2025-09-09T05:13:14.206326647Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" with image id \"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\", size \"15130401\" in 2.398953588s"
Sep 9 05:13:14.206526 containerd[2018]: time="2025-09-09T05:13:14.206387655Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" returns image reference \"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\""
Sep 9 05:13:14.214082 containerd[2018]: time="2025-09-09T05:13:14.213456447Z" level=info msg="CreateContainer within sandbox \"52f7590363bf36682eccbde8db12b30d8fdf2fe2b6ee4f9692fdb306fc99cb24\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Sep 9 05:13:14.227450 containerd[2018]: time="2025-09-09T05:13:14.227394843Z" level=info msg="Container 27537981219ec6809e310269b64459505a9aad1aa53de350b1071609c94f9f59: CDI devices from CRI Config.CDIDevices: []"
Sep 9 05:13:14.256913 containerd[2018]: time="2025-09-09T05:13:14.256858347Z" level=info msg="CreateContainer within sandbox \"52f7590363bf36682eccbde8db12b30d8fdf2fe2b6ee4f9692fdb306fc99cb24\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"27537981219ec6809e310269b64459505a9aad1aa53de350b1071609c94f9f59\""
Sep 9 05:13:14.259638 containerd[2018]: time="2025-09-09T05:13:14.259594131Z" level=info msg="StartContainer for \"27537981219ec6809e310269b64459505a9aad1aa53de350b1071609c94f9f59\""
Sep 9 05:13:14.264834 containerd[2018]: time="2025-09-09T05:13:14.264366939Z" level=info msg="connecting to shim 27537981219ec6809e310269b64459505a9aad1aa53de350b1071609c94f9f59" address="unix:///run/containerd/s/59907dd13e37871564cf5b0f7e0d092339dda760f9d3b826681ce7bc39a2a3eb" protocol=ttrpc version=3
Sep 9 05:13:14.340352 systemd[1]: Started cri-containerd-27537981219ec6809e310269b64459505a9aad1aa53de350b1071609c94f9f59.scope - libcontainer container 27537981219ec6809e310269b64459505a9aad1aa53de350b1071609c94f9f59.
Sep 9 05:13:14.582221 containerd[2018]: time="2025-09-09T05:13:14.579536621Z" level=info msg="StartContainer for \"27537981219ec6809e310269b64459505a9aad1aa53de350b1071609c94f9f59\" returns successfully"
Sep 9 05:13:14.952343 kubelet[3513]: I0909 05:13:14.952232 3513 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-vxj2z" podStartSLOduration=29.623987223 podStartE2EDuration="56.952208983s" podCreationTimestamp="2025-09-09 05:12:18 +0000 UTC" firstStartedPulling="2025-09-09 05:12:46.880404771 +0000 UTC m=+58.073622025" lastFinishedPulling="2025-09-09 05:13:14.208626519 +0000 UTC m=+85.401843785" observedRunningTime="2025-09-09 05:13:14.951848947 +0000 UTC m=+86.145066285" watchObservedRunningTime="2025-09-09 05:13:14.952208983 +0000 UTC m=+86.145426237"
Sep 9 05:13:14.954135 kubelet[3513]: I0909 05:13:14.952537 3513 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-54d579b49d-fwmcq" podStartSLOduration=34.335292847 podStartE2EDuration="56.952525363s" podCreationTimestamp="2025-09-09 05:12:18 +0000 UTC" firstStartedPulling="2025-09-09 05:12:46.413059429 +0000 UTC m=+57.606276695" lastFinishedPulling="2025-09-09 05:13:09.030291957 +0000 UTC m=+80.223509211" observedRunningTime="2025-09-09 05:13:09.957860894 +0000 UTC m=+81.151078160" watchObservedRunningTime="2025-09-09 05:13:14.952525363 +0000 UTC m=+86.145742641"
Sep 9 05:13:15.281578 kubelet[3513]: I0909 05:13:15.281424 3513 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Sep 9 05:13:15.281578 kubelet[3513]: I0909 05:13:15.281480 3513 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Sep 9 05:13:17.779245 systemd[1]: Started sshd@13-172.31.30.120:22-147.75.109.163:55778.service - OpenSSH per-connection server daemon (147.75.109.163:55778).
Sep 9 05:13:18.054368 sshd[5984]: Accepted publickey for core from 147.75.109.163 port 55778 ssh2: RSA SHA256:2zofKxj4FUHLlH333Y3QwKckI2YUnW4mC4hWB8zCARI
Sep 9 05:13:18.062864 sshd-session[5984]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 05:13:18.081648 systemd-logind[1998]: New session 14 of user core.
Sep 9 05:13:18.089410 systemd[1]: Started session-14.scope - Session 14 of User core.
Sep 9 05:13:18.419939 containerd[2018]: time="2025-09-09T05:13:18.419630360Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8dbe4da0cef4ac8b7957d942dade62966441426ef4902926b133d71fb1249f27\" id:\"ebe863c0fc0d55e5586822b5dc59f452028b5ebb27eb4528eda79dd5006539b7\" pid:6000 exited_at:{seconds:1757394798 nanos:418793432}"
Sep 9 05:13:18.450749 sshd[6010]: Connection closed by 147.75.109.163 port 55778
Sep 9 05:13:18.450419 sshd-session[5984]: pam_unix(sshd:session): session closed for user core
Sep 9 05:13:18.462397 systemd[1]: sshd@13-172.31.30.120:22-147.75.109.163:55778.service: Deactivated successfully.
Sep 9 05:13:18.471213 systemd[1]: session-14.scope: Deactivated successfully.
Sep 9 05:13:18.474323 systemd-logind[1998]: Session 14 logged out. Waiting for processes to exit.
Sep 9 05:13:18.480197 systemd-logind[1998]: Removed session 14.
Sep 9 05:13:18.758620 containerd[2018]: time="2025-09-09T05:13:18.758546206Z" level=info msg="TaskExit event in podsandbox handler container_id:\"04aa5969b3b2da5c3079fc9aaa80885ca01e53db6c5dd862b6c75d288586015e\" id:\"87a2e29b34797fb9fd8fed77f2765242e61f4698f877de8cecc447c3eeb5e819\" pid:6034 exited_at:{seconds:1757394798 nanos:757587634}"
Sep 9 05:13:23.488960 systemd[1]: Started sshd@14-172.31.30.120:22-147.75.109.163:37778.service - OpenSSH per-connection server daemon (147.75.109.163:37778).
Sep 9 05:13:23.693793 sshd[6053]: Accepted publickey for core from 147.75.109.163 port 37778 ssh2: RSA SHA256:2zofKxj4FUHLlH333Y3QwKckI2YUnW4mC4hWB8zCARI
Sep 9 05:13:23.697282 sshd-session[6053]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 05:13:23.709788 systemd-logind[1998]: New session 15 of user core.
Sep 9 05:13:23.720389 systemd[1]: Started session-15.scope - Session 15 of User core.
Sep 9 05:13:23.917498 kubelet[3513]: I0909 05:13:23.917343 3513 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 9 05:13:24.087454 sshd[6056]: Connection closed by 147.75.109.163 port 37778
Sep 9 05:13:24.090587 sshd-session[6053]: pam_unix(sshd:session): session closed for user core
Sep 9 05:13:24.103375 systemd[1]: sshd@14-172.31.30.120:22-147.75.109.163:37778.service: Deactivated successfully.
Sep 9 05:13:24.111618 systemd[1]: session-15.scope: Deactivated successfully.
Sep 9 05:13:24.115182 systemd-logind[1998]: Session 15 logged out. Waiting for processes to exit.
Sep 9 05:13:24.119132 systemd-logind[1998]: Removed session 15.
Sep 9 05:13:25.932842 containerd[2018]: time="2025-09-09T05:13:25.932475017Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c2f5b7664b682d70cc94852649a9ee89412951708307d76b96ab5b1082e3f623\" id:\"b162153bcd868d984884d47c6bd1cfdfcbdc36cdc44b2c90450264838097f910\" pid:6083 exited_at:{seconds:1757394805 nanos:932074409}"
Sep 9 05:13:29.133513 systemd[1]: Started sshd@15-172.31.30.120:22-147.75.109.163:37794.service - OpenSSH per-connection server daemon (147.75.109.163:37794).
Sep 9 05:13:29.340175 sshd[6099]: Accepted publickey for core from 147.75.109.163 port 37794 ssh2: RSA SHA256:2zofKxj4FUHLlH333Y3QwKckI2YUnW4mC4hWB8zCARI
Sep 9 05:13:29.343007 sshd-session[6099]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 05:13:29.356367 systemd-logind[1998]: New session 16 of user core.
Sep 9 05:13:29.368971 systemd[1]: Started session-16.scope - Session 16 of User core.
Sep 9 05:13:29.697777 sshd[6102]: Connection closed by 147.75.109.163 port 37794
Sep 9 05:13:29.698976 sshd-session[6099]: pam_unix(sshd:session): session closed for user core
Sep 9 05:13:29.712754 systemd-logind[1998]: Session 16 logged out. Waiting for processes to exit.
Sep 9 05:13:29.712923 systemd[1]: sshd@15-172.31.30.120:22-147.75.109.163:37794.service: Deactivated successfully.
Sep 9 05:13:29.721235 systemd[1]: session-16.scope: Deactivated successfully.
Sep 9 05:13:29.742798 systemd-logind[1998]: Removed session 16.
Sep 9 05:13:29.744502 systemd[1]: Started sshd@16-172.31.30.120:22-147.75.109.163:37808.service - OpenSSH per-connection server daemon (147.75.109.163:37808).
Sep 9 05:13:29.959983 sshd[6114]: Accepted publickey for core from 147.75.109.163 port 37808 ssh2: RSA SHA256:2zofKxj4FUHLlH333Y3QwKckI2YUnW4mC4hWB8zCARI
Sep 9 05:13:29.964513 sshd-session[6114]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 05:13:29.976811 systemd-logind[1998]: New session 17 of user core.
Sep 9 05:13:29.985301 systemd[1]: Started session-17.scope - Session 17 of User core.
Sep 9 05:13:30.708005 sshd[6117]: Connection closed by 147.75.109.163 port 37808
Sep 9 05:13:30.708647 sshd-session[6114]: pam_unix(sshd:session): session closed for user core
Sep 9 05:13:30.720960 systemd-logind[1998]: Session 17 logged out. Waiting for processes to exit.
Sep 9 05:13:30.723217 systemd[1]: sshd@16-172.31.30.120:22-147.75.109.163:37808.service: Deactivated successfully.
Sep 9 05:13:30.730361 systemd[1]: session-17.scope: Deactivated successfully.
Sep 9 05:13:30.752015 systemd-logind[1998]: Removed session 17.
Sep 9 05:13:30.757313 systemd[1]: Started sshd@17-172.31.30.120:22-147.75.109.163:45062.service - OpenSSH per-connection server daemon (147.75.109.163:45062).
Sep 9 05:13:30.984794 sshd[6127]: Accepted publickey for core from 147.75.109.163 port 45062 ssh2: RSA SHA256:2zofKxj4FUHLlH333Y3QwKckI2YUnW4mC4hWB8zCARI
Sep 9 05:13:30.988375 sshd-session[6127]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 05:13:31.001525 systemd-logind[1998]: New session 18 of user core.
Sep 9 05:13:31.008736 systemd[1]: Started session-18.scope - Session 18 of User core.
Sep 9 05:13:32.539095 sshd[6130]: Connection closed by 147.75.109.163 port 45062
Sep 9 05:13:32.539995 sshd-session[6127]: pam_unix(sshd:session): session closed for user core
Sep 9 05:13:32.551556 systemd[1]: sshd@17-172.31.30.120:22-147.75.109.163:45062.service: Deactivated successfully.
Sep 9 05:13:32.563380 systemd[1]: session-18.scope: Deactivated successfully.
Sep 9 05:13:32.571352 systemd-logind[1998]: Session 18 logged out. Waiting for processes to exit.
Sep 9 05:13:32.596470 systemd[1]: Started sshd@18-172.31.30.120:22-147.75.109.163:45070.service - OpenSSH per-connection server daemon (147.75.109.163:45070).
Sep 9 05:13:32.601701 systemd-logind[1998]: Removed session 18.
Sep 9 05:13:32.825475 sshd[6147]: Accepted publickey for core from 147.75.109.163 port 45070 ssh2: RSA SHA256:2zofKxj4FUHLlH333Y3QwKckI2YUnW4mC4hWB8zCARI
Sep 9 05:13:32.827420 sshd-session[6147]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 05:13:32.842078 systemd-logind[1998]: New session 19 of user core.
Sep 9 05:13:32.849313 systemd[1]: Started session-19.scope - Session 19 of User core.
Sep 9 05:13:33.573644 sshd[6152]: Connection closed by 147.75.109.163 port 45070
Sep 9 05:13:33.574132 sshd-session[6147]: pam_unix(sshd:session): session closed for user core
Sep 9 05:13:33.587836 systemd[1]: sshd@18-172.31.30.120:22-147.75.109.163:45070.service: Deactivated successfully.
Sep 9 05:13:33.597719 systemd[1]: session-19.scope: Deactivated successfully.
Sep 9 05:13:33.601911 systemd-logind[1998]: Session 19 logged out. Waiting for processes to exit.
Sep 9 05:13:33.621469 systemd[1]: Started sshd@19-172.31.30.120:22-147.75.109.163:45072.service - OpenSSH per-connection server daemon (147.75.109.163:45072).
Sep 9 05:13:33.628695 systemd-logind[1998]: Removed session 19.
Sep 9 05:13:33.826469 sshd[6162]: Accepted publickey for core from 147.75.109.163 port 45072 ssh2: RSA SHA256:2zofKxj4FUHLlH333Y3QwKckI2YUnW4mC4hWB8zCARI
Sep 9 05:13:33.830042 sshd-session[6162]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 05:13:33.844156 systemd-logind[1998]: New session 20 of user core.
Sep 9 05:13:33.851338 systemd[1]: Started session-20.scope - Session 20 of User core.
Sep 9 05:13:34.145579 sshd[6165]: Connection closed by 147.75.109.163 port 45072
Sep 9 05:13:34.148349 sshd-session[6162]: pam_unix(sshd:session): session closed for user core
Sep 9 05:13:34.155218 systemd-logind[1998]: Session 20 logged out. Waiting for processes to exit.
Sep 9 05:13:34.156531 systemd[1]: sshd@19-172.31.30.120:22-147.75.109.163:45072.service: Deactivated successfully.
Sep 9 05:13:34.163893 systemd[1]: session-20.scope: Deactivated successfully.
Sep 9 05:13:34.171821 systemd-logind[1998]: Removed session 20.
Sep 9 05:13:39.191789 systemd[1]: Started sshd@20-172.31.30.120:22-147.75.109.163:45080.service - OpenSSH per-connection server daemon (147.75.109.163:45080).
Sep 9 05:13:39.412803 sshd[6177]: Accepted publickey for core from 147.75.109.163 port 45080 ssh2: RSA SHA256:2zofKxj4FUHLlH333Y3QwKckI2YUnW4mC4hWB8zCARI
Sep 9 05:13:39.416296 sshd-session[6177]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 05:13:39.425764 systemd-logind[1998]: New session 21 of user core.
Sep 9 05:13:39.435672 systemd[1]: Started session-21.scope - Session 21 of User core.
Sep 9 05:13:39.714391 sshd[6180]: Connection closed by 147.75.109.163 port 45080
Sep 9 05:13:39.715294 sshd-session[6177]: pam_unix(sshd:session): session closed for user core
Sep 9 05:13:39.725198 systemd[1]: sshd@20-172.31.30.120:22-147.75.109.163:45080.service: Deactivated successfully.
Sep 9 05:13:39.734114 systemd[1]: session-21.scope: Deactivated successfully.
Sep 9 05:13:39.738123 systemd-logind[1998]: Session 21 logged out. Waiting for processes to exit.
Sep 9 05:13:39.745261 systemd-logind[1998]: Removed session 21.
Sep 9 05:13:41.236587 containerd[2018]: time="2025-09-09T05:13:41.236418305Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c2f5b7664b682d70cc94852649a9ee89412951708307d76b96ab5b1082e3f623\" id:\"8554f42e927a9954bcd45bf21df8152452d199c1abfbae97871768e64edef355\" pid:6207 exited_at:{seconds:1757394821 nanos:235805753}"
Sep 9 05:13:43.606082 containerd[2018]: time="2025-09-09T05:13:43.605138217Z" level=info msg="TaskExit event in podsandbox handler container_id:\"04aa5969b3b2da5c3079fc9aaa80885ca01e53db6c5dd862b6c75d288586015e\" id:\"1e74d2eebe5e5a3a38a2f10ff29c749e03d8303bdddea301c3c1f9c3a3ab2ee8\" pid:6235 exited_at:{seconds:1757394823 nanos:603917385}"
Sep 9 05:13:44.755966 systemd[1]: Started sshd@21-172.31.30.120:22-147.75.109.163:32860.service - OpenSSH per-connection server daemon (147.75.109.163:32860).
Sep 9 05:13:44.975426 sshd[6245]: Accepted publickey for core from 147.75.109.163 port 32860 ssh2: RSA SHA256:2zofKxj4FUHLlH333Y3QwKckI2YUnW4mC4hWB8zCARI
Sep 9 05:13:44.978996 sshd-session[6245]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 05:13:44.991957 systemd-logind[1998]: New session 22 of user core.
Sep 9 05:13:45.002428 systemd[1]: Started session-22.scope - Session 22 of User core.
Sep 9 05:13:45.298778 sshd[6248]: Connection closed by 147.75.109.163 port 32860
Sep 9 05:13:45.302328 sshd-session[6245]: pam_unix(sshd:session): session closed for user core
Sep 9 05:13:45.311761 systemd[1]: sshd@21-172.31.30.120:22-147.75.109.163:32860.service: Deactivated successfully.
Sep 9 05:13:45.318674 systemd[1]: session-22.scope: Deactivated successfully.
Sep 9 05:13:45.324621 systemd-logind[1998]: Session 22 logged out. Waiting for processes to exit.
Sep 9 05:13:45.328703 systemd-logind[1998]: Removed session 22.
Sep 9 05:13:48.025050 containerd[2018]: time="2025-09-09T05:13:48.024925211Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8dbe4da0cef4ac8b7957d942dade62966441426ef4902926b133d71fb1249f27\" id:\"a3b35c93d4312689cfa82db5a8a41c0ff635d08f9fdff8b61eb6579d601d6211\" pid:6271 exited_at:{seconds:1757394828 nanos:24543563}"
Sep 9 05:13:48.828433 containerd[2018]: time="2025-09-09T05:13:48.828319083Z" level=info msg="TaskExit event in podsandbox handler container_id:\"04aa5969b3b2da5c3079fc9aaa80885ca01e53db6c5dd862b6c75d288586015e\" id:\"977571bb53912a53ef045be039a4ff2b0c234835f1431c711eceaa53c0db71c8\" pid:6295 exited_at:{seconds:1757394828 nanos:827736459}"
Sep 9 05:13:50.345755 systemd[1]: Started sshd@22-172.31.30.120:22-147.75.109.163:46608.service - OpenSSH per-connection server daemon (147.75.109.163:46608).
Sep 9 05:13:50.561647 sshd[6307]: Accepted publickey for core from 147.75.109.163 port 46608 ssh2: RSA SHA256:2zofKxj4FUHLlH333Y3QwKckI2YUnW4mC4hWB8zCARI
Sep 9 05:13:50.565205 sshd-session[6307]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 05:13:50.578903 systemd-logind[1998]: New session 23 of user core.
Sep 9 05:13:50.585584 systemd[1]: Started session-23.scope - Session 23 of User core.
Sep 9 05:13:50.874767 sshd[6310]: Connection closed by 147.75.109.163 port 46608
Sep 9 05:13:50.876556 sshd-session[6307]: pam_unix(sshd:session): session closed for user core
Sep 9 05:13:50.889015 systemd[1]: sshd@22-172.31.30.120:22-147.75.109.163:46608.service: Deactivated successfully.
Sep 9 05:13:50.889637 systemd-logind[1998]: Session 23 logged out. Waiting for processes to exit.
Sep 9 05:13:50.897377 systemd[1]: session-23.scope: Deactivated successfully.
Sep 9 05:13:50.905135 systemd-logind[1998]: Removed session 23.
Sep 9 05:13:55.914474 systemd[1]: Started sshd@23-172.31.30.120:22-147.75.109.163:46616.service - OpenSSH per-connection server daemon (147.75.109.163:46616).
Sep 9 05:13:56.126750 sshd[6321]: Accepted publickey for core from 147.75.109.163 port 46616 ssh2: RSA SHA256:2zofKxj4FUHLlH333Y3QwKckI2YUnW4mC4hWB8zCARI
Sep 9 05:13:56.129972 sshd-session[6321]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 05:13:56.139691 systemd-logind[1998]: New session 24 of user core.
Sep 9 05:13:56.149900 systemd[1]: Started session-24.scope - Session 24 of User core.
Sep 9 05:13:56.442656 sshd[6324]: Connection closed by 147.75.109.163 port 46616
Sep 9 05:13:56.443374 sshd-session[6321]: pam_unix(sshd:session): session closed for user core
Sep 9 05:13:56.454753 systemd[1]: sshd@23-172.31.30.120:22-147.75.109.163:46616.service: Deactivated successfully.
Sep 9 05:13:56.455615 systemd-logind[1998]: Session 24 logged out. Waiting for processes to exit.
Sep 9 05:13:56.462372 systemd[1]: session-24.scope: Deactivated successfully.
Sep 9 05:13:56.467467 systemd-logind[1998]: Removed session 24.
Sep 9 05:14:01.482856 systemd[1]: Started sshd@24-172.31.30.120:22-147.75.109.163:60392.service - OpenSSH per-connection server daemon (147.75.109.163:60392).
Sep 9 05:14:01.692385 sshd[6338]: Accepted publickey for core from 147.75.109.163 port 60392 ssh2: RSA SHA256:2zofKxj4FUHLlH333Y3QwKckI2YUnW4mC4hWB8zCARI
Sep 9 05:14:01.696119 sshd-session[6338]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 05:14:01.709907 systemd-logind[1998]: New session 25 of user core.
Sep 9 05:14:01.716335 systemd[1]: Started session-25.scope - Session 25 of User core.
Sep 9 05:14:02.032995 sshd[6341]: Connection closed by 147.75.109.163 port 60392
Sep 9 05:14:02.032870 sshd-session[6338]: pam_unix(sshd:session): session closed for user core
Sep 9 05:14:02.044863 systemd-logind[1998]: Session 25 logged out. Waiting for processes to exit.
Sep 9 05:14:02.044899 systemd[1]: sshd@24-172.31.30.120:22-147.75.109.163:60392.service: Deactivated successfully.
Sep 9 05:14:02.054905 systemd[1]: session-25.scope: Deactivated successfully.
Sep 9 05:14:02.059868 systemd-logind[1998]: Removed session 25.
Sep 9 05:14:07.069910 systemd[1]: Started sshd@25-172.31.30.120:22-147.75.109.163:60398.service - OpenSSH per-connection server daemon (147.75.109.163:60398).
Sep 9 05:14:07.273054 sshd[6362]: Accepted publickey for core from 147.75.109.163 port 60398 ssh2: RSA SHA256:2zofKxj4FUHLlH333Y3QwKckI2YUnW4mC4hWB8zCARI
Sep 9 05:14:07.274720 sshd-session[6362]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 05:14:07.283698 systemd-logind[1998]: New session 26 of user core.
Sep 9 05:14:07.293292 systemd[1]: Started session-26.scope - Session 26 of User core.
Sep 9 05:14:07.583457 sshd[6365]: Connection closed by 147.75.109.163 port 60398
Sep 9 05:14:07.582482 sshd-session[6362]: pam_unix(sshd:session): session closed for user core
Sep 9 05:14:07.594446 systemd[1]: sshd@25-172.31.30.120:22-147.75.109.163:60398.service: Deactivated successfully.
Sep 9 05:14:07.605219 systemd[1]: session-26.scope: Deactivated successfully.
Sep 9 05:14:07.612528 systemd-logind[1998]: Session 26 logged out. Waiting for processes to exit.
Sep 9 05:14:07.616219 systemd-logind[1998]: Removed session 26.
Sep 9 05:14:11.107103 containerd[2018]: time="2025-09-09T05:14:11.106900174Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c2f5b7664b682d70cc94852649a9ee89412951708307d76b96ab5b1082e3f623\" id:\"fa8adac761e5ad024f4f908e55b396731dec3b58d287d5c08040da540270b801\" pid:6388 exited_at:{seconds:1757394851 nanos:104315038}"
Sep 9 05:14:17.965601 containerd[2018]: time="2025-09-09T05:14:17.965509112Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8dbe4da0cef4ac8b7957d942dade62966441426ef4902926b133d71fb1249f27\" id:\"9db09a7cd6fb23e6b89c21151d34c2757c22817e87a4baa236a075ccca7a1ea0\" pid:6431 exited_at:{seconds:1757394857 nanos:964731788}"
Sep 9 05:14:18.718796 containerd[2018]: time="2025-09-09T05:14:18.718735111Z" level=info msg="TaskExit event in podsandbox handler container_id:\"04aa5969b3b2da5c3079fc9aaa80885ca01e53db6c5dd862b6c75d288586015e\" id:\"33c4e4d5cc3e5016bb3088b22c72ccd8659a04f492fb4757a0f01cbdc8556667\" pid:6454 exited_at:{seconds:1757394858 nanos:718385467}"
Sep 9 05:14:22.007886 kubelet[3513]: E0909 05:14:22.007793 3513 controller.go:195] "Failed to update lease" err="Put \"https://172.31.30.120:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-30-120?timeout=10s\": context deadline exceeded"
Sep 9 05:14:22.325280 systemd[1]: cri-containerd-87a87a0147d98029093c7aa9d67537beed71013487574bc212d781bd524f712f.scope: Deactivated successfully.
Sep 9 05:14:22.325878 systemd[1]: cri-containerd-87a87a0147d98029093c7aa9d67537beed71013487574bc212d781bd524f712f.scope: Consumed 33.334s CPU time, 119.7M memory peak, 192K read from disk.
Sep 9 05:14:22.330900 containerd[2018]: time="2025-09-09T05:14:22.330603681Z" level=info msg="TaskExit event in podsandbox handler container_id:\"87a87a0147d98029093c7aa9d67537beed71013487574bc212d781bd524f712f\" id:\"87a87a0147d98029093c7aa9d67537beed71013487574bc212d781bd524f712f\" pid:3832 exit_status:1 exited_at:{seconds:1757394862 nanos:328758825}"
Sep 9 05:14:22.330900 containerd[2018]: time="2025-09-09T05:14:22.330726477Z" level=info msg="received exit event container_id:\"87a87a0147d98029093c7aa9d67537beed71013487574bc212d781bd524f712f\" id:\"87a87a0147d98029093c7aa9d67537beed71013487574bc212d781bd524f712f\" pid:3832 exit_status:1 exited_at:{seconds:1757394862 nanos:328758825}"
Sep 9 05:14:22.382707 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-87a87a0147d98029093c7aa9d67537beed71013487574bc212d781bd524f712f-rootfs.mount: Deactivated successfully.
Sep 9 05:14:22.672822 systemd[1]: cri-containerd-abce1a60209547f4fb30d0e89fd884827a5f368d99f2493c1a1197919c8d0d89.scope: Deactivated successfully.
Sep 9 05:14:22.674524 systemd[1]: cri-containerd-abce1a60209547f4fb30d0e89fd884827a5f368d99f2493c1a1197919c8d0d89.scope: Consumed 5.547s CPU time, 64.2M memory peak, 128K read from disk.
Sep 9 05:14:22.678983 containerd[2018]: time="2025-09-09T05:14:22.678914999Z" level=info msg="received exit event container_id:\"abce1a60209547f4fb30d0e89fd884827a5f368d99f2493c1a1197919c8d0d89\" id:\"abce1a60209547f4fb30d0e89fd884827a5f368d99f2493c1a1197919c8d0d89\" pid:3188 exit_status:1 exited_at:{seconds:1757394862 nanos:677409287}"
Sep 9 05:14:22.682784 containerd[2018]: time="2025-09-09T05:14:22.682714403Z" level=info msg="TaskExit event in podsandbox handler container_id:\"abce1a60209547f4fb30d0e89fd884827a5f368d99f2493c1a1197919c8d0d89\" id:\"abce1a60209547f4fb30d0e89fd884827a5f368d99f2493c1a1197919c8d0d89\" pid:3188 exit_status:1 exited_at:{seconds:1757394862 nanos:677409287}"
Sep 9 05:14:22.739953 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-abce1a60209547f4fb30d0e89fd884827a5f368d99f2493c1a1197919c8d0d89-rootfs.mount: Deactivated successfully.
Sep 9 05:14:23.218219 kubelet[3513]: I0909 05:14:23.217767 3513 scope.go:117] "RemoveContainer" containerID="abce1a60209547f4fb30d0e89fd884827a5f368d99f2493c1a1197919c8d0d89"
Sep 9 05:14:23.221705 kubelet[3513]: I0909 05:14:23.221138 3513 scope.go:117] "RemoveContainer" containerID="87a87a0147d98029093c7aa9d67537beed71013487574bc212d781bd524f712f"
Sep 9 05:14:23.226598 containerd[2018]: time="2025-09-09T05:14:23.226514902Z" level=info msg="CreateContainer within sandbox \"ee7b9e4c53a1e2dc64149d5a109d9c4cc43aa90ad04c0683d981a7f28bfe631d\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}"
Sep 9 05:14:23.227909 containerd[2018]: time="2025-09-09T05:14:23.227841526Z" level=info msg="CreateContainer within sandbox \"c5d8d9848216335d24c6948ac6ecaba515f298270cc6b28ff5441bf2740a1342\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Sep 9 05:14:23.245093 containerd[2018]: time="2025-09-09T05:14:23.244949902Z" level=info msg="Container 1630e18fdeba85372ac1dd2892138ce9c002d3927756a9b8681edb83bd8ccb38: CDI devices from CRI Config.CDIDevices: []"
Sep 9 05:14:23.259854 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3102592715.mount: Deactivated successfully.
Sep 9 05:14:23.271827 containerd[2018]: time="2025-09-09T05:14:23.270664450Z" level=info msg="Container ce1b6f08b89eb88fca4b34b4b9e95515c35b8c92e2e99daf2b613f72ddc485c5: CDI devices from CRI Config.CDIDevices: []"
Sep 9 05:14:23.272654 containerd[2018]: time="2025-09-09T05:14:23.272589790Z" level=info msg="CreateContainer within sandbox \"ee7b9e4c53a1e2dc64149d5a109d9c4cc43aa90ad04c0683d981a7f28bfe631d\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"1630e18fdeba85372ac1dd2892138ce9c002d3927756a9b8681edb83bd8ccb38\""
Sep 9 05:14:23.274226 containerd[2018]: time="2025-09-09T05:14:23.274162354Z" level=info msg="StartContainer for \"1630e18fdeba85372ac1dd2892138ce9c002d3927756a9b8681edb83bd8ccb38\""
Sep 9 05:14:23.280127 containerd[2018]: time="2025-09-09T05:14:23.279982822Z" level=info msg="connecting to shim 1630e18fdeba85372ac1dd2892138ce9c002d3927756a9b8681edb83bd8ccb38" address="unix:///run/containerd/s/370e936ddafab9915225570a7b4b13ce6afe24f07bd917fe5e0e7a3f66dc75cc" protocol=ttrpc version=3
Sep 9 05:14:23.295523 containerd[2018]: time="2025-09-09T05:14:23.295444606Z" level=info msg="CreateContainer within sandbox \"c5d8d9848216335d24c6948ac6ecaba515f298270cc6b28ff5441bf2740a1342\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"ce1b6f08b89eb88fca4b34b4b9e95515c35b8c92e2e99daf2b613f72ddc485c5\""
Sep 9 05:14:23.296674 containerd[2018]: time="2025-09-09T05:14:23.296358910Z" level=info msg="StartContainer for \"ce1b6f08b89eb88fca4b34b4b9e95515c35b8c92e2e99daf2b613f72ddc485c5\""
Sep 9 05:14:23.300900 containerd[2018]: time="2025-09-09T05:14:23.300744838Z" level=info msg="connecting to shim ce1b6f08b89eb88fca4b34b4b9e95515c35b8c92e2e99daf2b613f72ddc485c5" address="unix:///run/containerd/s/60a3a7c173521e9962c762fd773054e34f53e1d9a5e71be0554e4f68583f3b71" protocol=ttrpc version=3
Sep 9 05:14:23.319335 systemd[1]: Started cri-containerd-1630e18fdeba85372ac1dd2892138ce9c002d3927756a9b8681edb83bd8ccb38.scope - libcontainer container 1630e18fdeba85372ac1dd2892138ce9c002d3927756a9b8681edb83bd8ccb38.
Sep 9 05:14:23.364301 systemd[1]: Started cri-containerd-ce1b6f08b89eb88fca4b34b4b9e95515c35b8c92e2e99daf2b613f72ddc485c5.scope - libcontainer container ce1b6f08b89eb88fca4b34b4b9e95515c35b8c92e2e99daf2b613f72ddc485c5.
Sep 9 05:14:23.447307 containerd[2018]: time="2025-09-09T05:14:23.447243959Z" level=info msg="StartContainer for \"1630e18fdeba85372ac1dd2892138ce9c002d3927756a9b8681edb83bd8ccb38\" returns successfully"
Sep 9 05:14:23.495777 containerd[2018]: time="2025-09-09T05:14:23.494773763Z" level=info msg="StartContainer for \"ce1b6f08b89eb88fca4b34b4b9e95515c35b8c92e2e99daf2b613f72ddc485c5\" returns successfully"
Sep 9 05:14:25.724756 containerd[2018]: time="2025-09-09T05:14:25.724699898Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c2f5b7664b682d70cc94852649a9ee89412951708307d76b96ab5b1082e3f623\" id:\"7449ea10d9cb72f9e89f39c4f3e8a9493fbc40214741fd68c74be60cd87b6377\" pid:6565 exited_at:{seconds:1757394865 nanos:723899246}"
Sep 9 05:14:27.367121 systemd[1]: cri-containerd-06dc8c3315b2c96fa813a749e9cf0164a1c558f33e7e7427249efdbee4e61b44.scope: Deactivated successfully.
Sep 9 05:14:27.369132 systemd[1]: cri-containerd-06dc8c3315b2c96fa813a749e9cf0164a1c558f33e7e7427249efdbee4e61b44.scope: Consumed 5.041s CPU time, 20M memory peak.
Sep 9 05:14:27.373241 containerd[2018]: time="2025-09-09T05:14:27.373008326Z" level=info msg="received exit event container_id:\"06dc8c3315b2c96fa813a749e9cf0164a1c558f33e7e7427249efdbee4e61b44\" id:\"06dc8c3315b2c96fa813a749e9cf0164a1c558f33e7e7427249efdbee4e61b44\" pid:3166 exit_status:1 exited_at:{seconds:1757394867 nanos:371997302}"
Sep 9 05:14:27.374927 containerd[2018]: time="2025-09-09T05:14:27.374136374Z" level=info msg="TaskExit event in podsandbox handler container_id:\"06dc8c3315b2c96fa813a749e9cf0164a1c558f33e7e7427249efdbee4e61b44\" id:\"06dc8c3315b2c96fa813a749e9cf0164a1c558f33e7e7427249efdbee4e61b44\" pid:3166 exit_status:1 exited_at:{seconds:1757394867 nanos:371997302}"
Sep 9 05:14:27.418246 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-06dc8c3315b2c96fa813a749e9cf0164a1c558f33e7e7427249efdbee4e61b44-rootfs.mount: Deactivated successfully.
Sep 9 05:14:28.253673 kubelet[3513]: I0909 05:14:28.253606 3513 scope.go:117] "RemoveContainer" containerID="06dc8c3315b2c96fa813a749e9cf0164a1c558f33e7e7427249efdbee4e61b44"
Sep 9 05:14:28.258082 containerd[2018]: time="2025-09-09T05:14:28.257589675Z" level=info msg="CreateContainer within sandbox \"b70dc7fe218895bfe86985dd5a2526ee0d704be4ed7e7e9a6dc8beaad6ad1905\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Sep 9 05:14:28.275312 containerd[2018]: time="2025-09-09T05:14:28.275257227Z" level=info msg="Container adf5f2ad85c6cc1a0c8bc4d1ea640a372596225eeda793b88b84ecb1759cedec: CDI devices from CRI Config.CDIDevices: []"
Sep 9 05:14:28.294853 containerd[2018]: time="2025-09-09T05:14:28.294749235Z" level=info msg="CreateContainer within sandbox \"b70dc7fe218895bfe86985dd5a2526ee0d704be4ed7e7e9a6dc8beaad6ad1905\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"adf5f2ad85c6cc1a0c8bc4d1ea640a372596225eeda793b88b84ecb1759cedec\""
Sep 9 05:14:28.295566 containerd[2018]: time="2025-09-09T05:14:28.295527651Z" level=info msg="StartContainer for \"adf5f2ad85c6cc1a0c8bc4d1ea640a372596225eeda793b88b84ecb1759cedec\""
Sep 9 05:14:28.298066 containerd[2018]: time="2025-09-09T05:14:28.297966303Z" level=info msg="connecting to shim adf5f2ad85c6cc1a0c8bc4d1ea640a372596225eeda793b88b84ecb1759cedec" address="unix:///run/containerd/s/b4ca5c7fca1c5d159240496ae899a55e6011e3e0266402015e2edf16871d0421" protocol=ttrpc version=3
Sep 9 05:14:28.339331 systemd[1]: Started cri-containerd-adf5f2ad85c6cc1a0c8bc4d1ea640a372596225eeda793b88b84ecb1759cedec.scope - libcontainer container adf5f2ad85c6cc1a0c8bc4d1ea640a372596225eeda793b88b84ecb1759cedec.
Sep 9 05:14:28.422372 containerd[2018]: time="2025-09-09T05:14:28.422230036Z" level=info msg="StartContainer for \"adf5f2ad85c6cc1a0c8bc4d1ea640a372596225eeda793b88b84ecb1759cedec\" returns successfully"
Sep 9 05:14:32.009304 kubelet[3513]: E0909 05:14:32.008846 3513 controller.go:195] "Failed to update lease" err="Put \"https://172.31.30.120:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-30-120?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Sep 9 05:14:34.910948 systemd[1]: cri-containerd-1630e18fdeba85372ac1dd2892138ce9c002d3927756a9b8681edb83bd8ccb38.scope: Deactivated successfully.
Sep 9 05:14:34.912396 containerd[2018]: time="2025-09-09T05:14:34.912322896Z" level=info msg="received exit event container_id:\"1630e18fdeba85372ac1dd2892138ce9c002d3927756a9b8681edb83bd8ccb38\" id:\"1630e18fdeba85372ac1dd2892138ce9c002d3927756a9b8681edb83bd8ccb38\" pid:6509 exit_status:1 exited_at:{seconds:1757394874 nanos:910593060}"
Sep 9 05:14:34.914616 containerd[2018]: time="2025-09-09T05:14:34.914518968Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1630e18fdeba85372ac1dd2892138ce9c002d3927756a9b8681edb83bd8ccb38\" id:\"1630e18fdeba85372ac1dd2892138ce9c002d3927756a9b8681edb83bd8ccb38\" pid:6509 exit_status:1 exited_at:{seconds:1757394874 nanos:910593060}"
Sep 9 05:14:34.962744 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1630e18fdeba85372ac1dd2892138ce9c002d3927756a9b8681edb83bd8ccb38-rootfs.mount: Deactivated successfully.
Sep 9 05:14:35.287072 kubelet[3513]: I0909 05:14:35.286084 3513 scope.go:117] "RemoveContainer" containerID="87a87a0147d98029093c7aa9d67537beed71013487574bc212d781bd524f712f"
Sep 9 05:14:35.287072 kubelet[3513]: I0909 05:14:35.286690 3513 scope.go:117] "RemoveContainer" containerID="1630e18fdeba85372ac1dd2892138ce9c002d3927756a9b8681edb83bd8ccb38"
Sep 9 05:14:35.287732 kubelet[3513]: E0909 05:14:35.287287 3513 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tigera-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=tigera-operator pod=tigera-operator-755d956888-jswz5_tigera-operator(2fd5ca94-f89f-48fc-b25f-0fc758f68fb0)\"" pod="tigera-operator/tigera-operator-755d956888-jswz5" podUID="2fd5ca94-f89f-48fc-b25f-0fc758f68fb0"
Sep 9 05:14:35.289685 containerd[2018]: time="2025-09-09T05:14:35.289628026Z" level=info msg="RemoveContainer for \"87a87a0147d98029093c7aa9d67537beed71013487574bc212d781bd524f712f\""
Sep 9 05:14:35.299265 containerd[2018]: time="2025-09-09T05:14:35.299185150Z" level=info msg="RemoveContainer for \"87a87a0147d98029093c7aa9d67537beed71013487574bc212d781bd524f712f\" returns successfully"
Sep 9 05:14:41.034536 containerd[2018]: time="2025-09-09T05:14:41.034427810Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c2f5b7664b682d70cc94852649a9ee89412951708307d76b96ab5b1082e3f623\" id:\"edf0864ff69ab595b00b1748a400026c3dd0c136dff7b8807f385d87aa8ce018\" pid:6648 exited_at:{seconds:1757394881 nanos:32649902}"
Sep 9 05:14:42.009738 kubelet[3513]: E0909 05:14:42.009350 3513 controller.go:195] "Failed to update lease" err="Put \"https://172.31.30.120:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-30-120?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"